|
6c6c2852
|
2012-07-14T22:25:41
|
|
Show Khash hashrates when scrypt is in use.
|
|
54f1b808
|
2012-07-14T22:19:55
|
|
Free the scratchbuf memory allocated in scrypt and don't check if CPUs are sick since they can't be. Prepare for khash hash rates in display.
|
|
a5ebb712
|
2012-07-14T22:01:20
|
|
Add cpumining capability for scrypt.
|
|
41daf995
|
2012-07-14T09:45:55
|
|
Calculate midstate in a separate function and remove likely/unlikely macros since they're dependent on pools, not code design.
|
|
8230ab05
|
2012-07-14T01:10:50
|
|
Display in debug mode when we're making the midstate locally.
|
|
ea444d02
|
2012-07-14T00:59:38
|
|
Fix nonce submission code for scrypt.
|
|
0f43eb5e
|
2012-07-13T20:35:44
|
|
Don't test the nonce with SHA, and apply various fixes for scrypt.
|
|
dd740caa
|
2012-07-13T19:02:43
|
|
Provide initial support for the scrypt kernel to compile with and mine scrypt with the --scrypt option.
|
|
cbef2a6a
|
2012-07-12T16:40:15
|
|
Only try to shut down work cleanly if we've successfully connected and started mining.
|
|
c57c308d
|
2012-07-11T20:29:06
|
|
Bugfix: Don't declare devices SICK if they're just busy initializing.
This mainly applies to ModMiner since it takes 4-5 minutes to upload the bitstream.
|
|
5c7e0308
|
2012-07-11T22:59:58
|
|
Modify the scanhash API to use an int64_t and return -1 on error, allowing zero to be a valid return value.
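As a rough illustration of the new return convention (the function and caller below are hypothetical sketches, not cgminer's actual driver API):

    #include <stdbool.h>
    #include <stdint.h>

    /* Hypothetical scanhash: returns the number of hashes completed,
     * 0 when no hashing was done (now a valid result), or -1 on error. */
    static int64_t example_scanhash(uint32_t *nonce_out)
    {
        *nonce_out = 0;        /* drive the hardware here */
        return 0;              /* zero no longer signals failure */
    }

    static bool poll_device(int64_t *total_hashes)
    {
        uint32_t nonce;
        int64_t hashes = example_scanhash(&nonce);

        if (hashes < 0)
            return false;      /* only a negative value is an error */
        *total_hashes += hashes;
        return true;
    }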
|
|
f9d0324d
|
2012-07-11T22:47:03
|
|
Check for work restart after the hashmeter is invoked, as otherwise we lose the hashes it contributed to the count.
|
|
1d153a14
|
2012-07-11T22:43:21
|
|
Remove disabled: label from mining thread function, using a separate mt_disable function.
|
|
af809b79
|
2012-07-11T22:36:45
|
|
Style changes.
|
|
2ce7f28b
|
2012-07-08T04:24:55
|
|
Merge pull request #254 from luke-jr/work_restart2
Turn work_restart array into a bool in thr_info
|
|
ad02627e
|
2012-07-06T19:35:28
|
|
Fix --benchmark not working since the dynamic addition of pools and pool stats.
|
|
fd55fab9
|
2012-07-06T16:54:00
|
|
Make bitforce nonce range support a command line option --bfl-range since enabling it decreases hashrate by 1%.
|
|
d4af2d05
|
2012-07-06T02:39:32
|
|
Turn work_restart array into a bool in thr_info
|
|
274a4011
|
2012-07-05T16:45:05
|
|
Merge branch 'master' into mr
|
|
75eca078
|
2012-07-05T09:15:21
|
|
Restart_wait is only called with a ms value so incorporate that into the function.
|
|
8bc7d1c9
|
2012-07-05T08:59:09
|
|
Only try to adjust dev width when curses is built in.
|
|
67e92de1
|
2012-07-04T15:16:39
|
|
Adjust device width column to be consistent.
|
|
ce93c2fc
|
2012-07-04T14:40:02
|
|
Use cgpu-> not gpus[] in watchdog thread.
|
|
7ada258b
|
2012-07-03T11:04:44
|
|
Merge branch 'master' into bfl
|
|
610cf0f0
|
2012-07-03T10:48:42
|
|
Minor style changes.
|
|
aaa9f62b
|
2012-07-03T01:01:37
|
|
Made JSON error message verbose.
|
|
ce850883
|
2012-07-01T23:39:09
|
|
Merge branch 'master' of git://github.com/ckolivas/cgminer.git
Conflicts:
driver-bitforce.c
|
|
cc0ad5ea
|
2012-07-01T23:35:06
|
|
Merge branch 'master' of git://github.com/ckolivas/cgminer.git
Conflicts:
driver-bitforce.c
|
|
ac45260e
|
2012-07-02T16:12:35
|
|
Random style cleanups.
|
|
06ec47b3
|
2012-07-02T12:45:16
|
|
Must always unlock mutex after cond timedwait.
|
|
df5d196f
|
2012-07-02T12:37:15
|
|
Must unlock mutex if pthread_cond_wait succeeds.
|
|
fd7b21ed
|
2012-07-02T10:54:20
|
|
Use a pthread condition variable that is broadcast whenever work restarts are required. Create a generic wait function that waits on that condition for a specified time and returns when either the condition is met or the given time has elapsed. Use this to do smarter polling in bitforce to abort work, queue more work, and check for results, minimising time spent working needlessly.
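A minimal sketch of that pattern, with hypothetical names (restart_lock, restart_cond, restart_wait) that are not necessarily cgminer's own:

    #include <pthread.h>
    #include <stdbool.h>
    #include <sys/time.h>
    #include <time.h>

    static pthread_mutex_t restart_lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t restart_cond = PTHREAD_COND_INITIALIZER;

    /* Called by the thread that learns new work is needed. */
    static void notify_work_restart(void)
    {
        pthread_mutex_lock(&restart_lock);
        pthread_cond_broadcast(&restart_cond);
        pthread_mutex_unlock(&restart_lock);
    }

    /* Generic wait: returns true if woken by a restart, false on timeout.
     * Note the mutex is re-acquired by timedwait on both paths, so it is
     * unlocked unconditionally afterwards. */
    static bool restart_wait(unsigned int ms)
    {
        struct timeval now;
        struct timespec abstime;
        int rc;

        gettimeofday(&now, NULL);
        abstime.tv_sec = now.tv_sec + ms / 1000;
        abstime.tv_nsec = now.tv_usec * 1000 + (ms % 1000) * 1000000L;
        if (abstime.tv_nsec >= 1000000000L) {
            abstime.tv_sec++;
            abstime.tv_nsec -= 1000000000L;
        }

        pthread_mutex_lock(&restart_lock);
        rc = pthread_cond_timedwait(&restart_cond, &restart_lock, &abstime);
        pthread_mutex_unlock(&restart_lock);

        return rc == 0;
    }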
|
|
a4a2000c
|
2012-06-30T20:45:56
|
|
Merge branch 'master' of git://github.com/ckolivas/cgminer.git
|
|
830f2902
|
2012-07-01T11:09:06
|
|
Numerous style police clean ups in cgminer.c
|
|
1e942147
|
2012-07-01T10:44:23
|
|
Timersub is supported on all build platforms, so do away with the custom timerval_subtract function.
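For reference, the standard timersub() macro from <sys/time.h> that replaces a hand-rolled subtraction looks like this in use (illustrative only):

    #include <stdio.h>
    #include <sys/time.h>

    int main(void)
    {
        struct timeval start, end, diff;

        gettimeofday(&start, NULL);
        /* ... do some work ... */
        gettimeofday(&end, NULL);

        /* timersub() handles the microsecond borrow that a hand-rolled
         * timerval_subtract() would otherwise have to implement. */
        timersub(&end, &start, &diff);

        printf("elapsed: %ld.%06ld s\n",
               (long)diff.tv_sec, (long)diff.tv_usec);
        return 0;
    }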
|
|
efaa7398
|
2012-06-30T11:59:53
|
|
Tweak sick/dead logic
(remove pre-computed time calculations)
|
|
86c8bbe5
|
2012-06-29T17:19:28
|
|
Need to run Hashmeter all the time.
and not just if logging/display is enabled
|
|
75a651c1
|
2012-06-28T16:08:10
|
|
Revert "Check for submit_stale before checking for work_restart"
Makes no sense to continue working on the old block whether submit_stale is enabled or not.
|
|
baa480c1
|
2012-06-28T08:22:55
|
|
Merge branch 'master' of git://github.com/ckolivas/cgminer.git
Conflicts:
cgminer.c
|
|
f2253929
|
2012-06-28T08:20:45
|
|
Add low hash threshold in sick/dead processing
Add check for fd in comms procedures
|
|
3267b534
|
2012-06-28T10:43:52
|
|
Implement rudimentary X-Mining-Hashrate support.
|
|
4c5d41a8
|
2012-06-27T16:03:46
|
|
Merge pull request #243 from kanoi/master
define, implement and document API option --api-groups
|
|
862a362b
|
2012-06-27T14:55:44
|
|
Merge branch 'master' of git://github.com/ckolivas/cgminer.git
|
|
24316fc7
|
2012-06-28T07:27:57
|
|
Revert "Work is checked if it's stale elsewhere outside of can_roll so there is no need to check it again."
This reverts commit 5ad58f9a5ce1a6b99f3011e1811fa01040d12aa2.
|
|
62c3c66f
|
2012-06-27T08:18:12
|
|
Merge branch 'master' of git://github.com/ckolivas/cgminer.git
|
|
5ad58f9a
|
2012-06-27T23:36:48
|
|
Work is checked if it's stale elsewhere outside of can_roll so there is no need to check it again.
|
|
eddd02fe
|
2012-06-27T23:32:50
|
|
Put an upper bound of under 2 hours on how far work can be rolled into the future, since bitcoind will deem it invalid beyond that.
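A toy sketch of that kind of bound, using the 2-hour (7200-second) future-timestamp limit bitcoind enforces; the function and argument names are hypothetical:

    #include <stdbool.h>
    #include <stdint.h>

    /* bitcoind rejects blocks timestamped more than 2 hours in the future. */
    #define MAX_NTIME_ROLL 7200

    /* Hypothetical check: can this work's ntime be rolled once more? */
    static bool can_roll_ntime(uint32_t original_ntime, uint32_t rolled_ntime)
    {
        return rolled_ntime - original_ntime < MAX_NTIME_ROLL;
    }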
|
|
bcec5f51
|
2012-06-27T23:30:50
|
|
Revert "Check we don't exhaust the entire unsigned 32 bit ntime range when rolling time to cope with extremely high hashrates."
This reverts commit 522f620c89b5f152f86a2916b0dca7b71b2a5005.
Unrealistic. Limits are bitcoind related to 2 hours in the future.
|
|
383d35b2
|
2012-06-27T22:35:38
|
|
Merge branch 'master' of github.com:ckolivas/cgminer
|
|
522f620c
|
2012-06-27T22:34:46
|
|
Check we don't exhaust the entire unsigned 32 bit ntime range when rolling time to cope with extremely high hashrates.
|
|
c21fc065
|
2012-06-27T21:28:18
|
|
define API option --api-groups
|
|
794b6558
|
2012-06-27T10:55:50
|
|
Merge branch 'master' of https://github.com/ckolivas/cgminer
|
|
21a23a45
|
2012-06-27T10:15:57
|
|
Work around pools that inappropriately advertise a very low expire= time, as this leads to many false positives for stale share detection.
|
|
d3e2b62c
|
2012-06-26T14:45:48
|
|
Change sick/dead processing to use device pointer, not gpu array.
Change BFL timing to adjust only when hashing complete (not error/idle etc.).
|
|
78d5a81d
|
2012-06-26T12:32:09
|
|
Merge branch 'master' of https://github.com/ckolivas/cgminer.git
|
|
68a3a9ad
|
2012-06-26T22:37:24
|
|
There is no need for work to be a union in struct workio_cmd
|
|
b198423d
|
2012-06-26T16:01:06
|
|
Don't keep rolling work right up to the expire= cutoff. Use 2/3 of the time between the scantime and the expiry as the cutoff for reusing work.
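The 2/3 rule amounts to arithmetic like the following (names illustrative): with a scantime of 60s and an expiry of 120s, the cutoff lands at 100s.

    /* Hypothetical cutoff: reuse work only up to scantime plus two thirds
     * of the gap between scantime and the pool-advertised expiry. */
    static int work_reuse_cutoff(int scantime, int expiry)
    {
        if (expiry <= scantime)
            return scantime;
        return scantime + (expiry - scantime) * 2 / 3;
    }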
|
|
6e80b63b
|
2012-06-26T15:43:03
|
|
Revert "Increase the getwork delay factored in to determine if work vs share is stale to avoid too tight timing."
This reverts commit d8de1bbc5baa416148f50938cfde28a5261cb0e1.
Wrong fix.
|
|
d8de1bbc
|
2012-06-26T13:07:08
|
|
Increase the getwork delay factored in to determine if work vs share is stale to avoid too tight timing.
|
|
1ef52e0b
|
2012-06-25T19:23:10
|
|
Check for submit_stale before checking for work_restart
(to keep Kano happy)
|
|
df9e76bd
|
2012-06-25T10:56:04
|
|
Merge branch 'master' of https://github.com/ckolivas/cgminer.git
|
|
90d82aa6
|
2012-06-25T10:27:08
|
|
Revert to pre pool merge
|
|
c027492f
|
2012-06-25T17:06:26
|
|
Make the pools array a dynamically allocated array to allow unlimited pools to be added.
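A hedged sketch of growing such an array with realloc; struct pool's contents here are placeholders, not the real cgminer structure:

    #include <stdlib.h>

    struct pool {
        int pool_no;
        char *rpc_url;
        /* ... */
    };

    static struct pool **pools;
    static int total_pools;

    /* Grow the pools array by one and return the new slot, or NULL on OOM. */
    static struct pool *add_pool(void)
    {
        struct pool **grown;
        struct pool *pool = calloc(1, sizeof(*pool));

        if (!pool)
            return NULL;

        grown = realloc(pools, sizeof(*pools) * (total_pools + 1));
        if (!grown) {
            free(pool);
            return NULL;
        }
        pools = grown;
        pool->pool_no = total_pools;
        pools[total_pools++] = pool;
        return pool;
    }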
|
|
5cf4b7c4
|
2012-06-25T16:59:29
|
|
Make the devices array a dynamically allocated array of pointers to allow unlimited devices.
|
|
17ba2dca
|
2012-06-25T10:51:45
|
|
Logic fail on queueing multiple requests at once. Just queue one at a time.
|
|
42ea29ca
|
2012-06-25T00:58:18
|
|
Use a queueing bool set under control_lock to prevent multiple calls to queue_request racing.
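Roughly the pattern the message describes, sketched with a placeholder request function; the locking names and surrounding code are assumptions:

    #include <pthread.h>
    #include <stdbool.h>

    static pthread_mutex_t control_lock = PTHREAD_MUTEX_INITIALIZER;
    static bool queueing;

    static void issue_getwork_request(void)
    {
        /* submit a getwork request here (omitted) */
    }

    static void queue_request(void)
    {
        bool already;

        pthread_mutex_lock(&control_lock);
        already = queueing;
        queueing = true;
        pthread_mutex_unlock(&control_lock);

        if (already)
            return; /* another thread is already queueing a request */

        issue_getwork_request();

        pthread_mutex_lock(&control_lock);
        queueing = false;
        pthread_mutex_unlock(&control_lock);
    }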
|
|
63dd598e
|
2012-06-25T00:42:51
|
|
Queue multiple requests at once when levels are low.
|
|
757922e4
|
2012-06-25T00:33:47
|
|
Use the work clone flag to determine if we should subtract it from the total queued variable and provide a subtract queued function to prevent looping over locked code.
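A small sketch of a lock-wrapped subtract helper keyed off a clone flag; the struct fields and names are illustrative rather than cgminer's actual code:

    #include <pthread.h>
    #include <stdbool.h>
    #include <stdlib.h>

    struct work_sketch {
        bool clone;
        /* ... */
    };

    static pthread_mutex_t qd_lock = PTHREAD_MUTEX_INITIALIZER;
    static int total_queued;

    /* Keep the locked region tiny instead of looping inside it. */
    static void subtract_queued(int n)
    {
        pthread_mutex_lock(&qd_lock);
        total_queued -= n;
        if (total_queued < 0)
            total_queued = 0;
        pthread_mutex_unlock(&qd_lock);
    }

    static void free_work_sketch(struct work_sketch *work)
    {
        if (!work->clone)
            subtract_queued(1); /* clones were never counted as queued */
        free(work);
    }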
|
|
49dd8fb5
|
2012-06-25T00:25:38
|
|
Don't decrement staged extras count from longpoll work.
|
|
d93e5f71
|
2012-06-25T00:23:58
|
|
Count longpoll's contribution to the queue.
|
|
05bc638d
|
2012-06-25T00:08:50
|
|
Increase queued count before pushing message.
|
|
32f52721
|
2012-06-25T00:03:37
|
|
Revert "With better bounds on the amount of work cloned, there is no need to age work and ageing it was picking off master work items that could be further rolled."
This reverts commit 5d90c50fc08644c9b0c3fb7d508b2bc84e9a4163.
|
|
5d90c50f
|
2012-06-24T23:38:24
|
|
With better bounds on the amount of work cloned, there is no need to age work and ageing it was picking off master work items that could be further rolled.
|
|
47f66405
|
2012-06-24T23:10:02
|
|
Alternatively check staged work count for rolltime capable pools when deciding to queue requests.
|
|
efa9569b
|
2012-06-24T22:59:56
|
|
Test we have enough work queued for pools with and without rolltime capability.
|
|
1bbc860a
|
2012-06-24T22:47:51
|
|
Don't count longpoll work as a staged extra work.
|
|
ebaa615f
|
2012-06-24T22:16:04
|
|
Count extra cloned work in the total queued count.
|
|
74cd6548
|
2012-06-24T22:00:37
|
|
Use a static base measurement difference of how many items to clone since requests_staged may not climb while rolling.
|
|
7b57df11
|
2012-06-24T21:58:52
|
|
Allow 1/3 extra buffer of staged work when ageing it.
|
|
53269a97
|
2012-06-24T21:57:49
|
|
Revert "Simplify the total_queued count to those staged not cloned and remove the locking since it's no longer a critical value."
This reverts commit 9f811c528f6eefbca5f16c92181783f756e3a68f.
|
|
a05c8e3f
|
2012-06-24T21:57:18
|
|
Revert "Take into account total_queued as well when deciding whether to queue a fresh request or not."
This reverts commit b20089fdb70a52ec029375beecebfd47efaee218.
|
|
750474bc
|
2012-06-24T21:56:53
|
|
Revert "Further simplify the total_queued counting mechanism and do all dec_queued from the one location."
This reverts commit 790acad9f9223e4d532d8d38e00737c79b8e40fb.
|
|
d2c1a6bd
|
2012-06-24T21:56:36
|
|
Revert "Make sure to have at least one staged work item when deciding whether to queue another request or not and dec queued in free work not discard work."
This reverts commit c8601722752bcc6d3db7efd0063f7f2d7f2f7d2a.
|
|
c8601722
|
2012-06-24T21:52:07
|
|
Make sure to have at least one staged work item when deciding whether to queue another request or not and dec queued in free work not discard work.
|
|
790acad9
|
2012-06-24T21:42:34
|
|
Further simplify the total_queued counting mechanism and do all dec_queued from the one location.
|
|
b20089fd
|
2012-06-24T20:59:55
|
|
Take into account total_queued as well when deciding whether to queue a fresh request or not.
|
|
ded16838
|
2012-06-24T20:48:02
|
|
Add the getwork delay time instead of subtracting it when determining if a share is stale.
|
|
b5757d12
|
2012-06-24T20:45:47
|
|
Don't count getwork delay when determining if shares are stale.
|
|
9f811c52
|
2012-06-24T20:38:40
|
|
Simplify the total_queued count to those staged not cloned and remove the locking since it's no longer a critical value.
Clone only the anticipated difference since there will be a lag from the value returned by requests_staged().
Keep 1/3 buffer of extra work items when ageing them.
|
|
411784a9
|
2012-06-24T19:53:31
|
|
As work is sorted by age, we can discard the oldest work at regular intervals to keep only 1 of the newest work items per mining thread.
|
|
359635a8
|
2012-06-24T18:44:09
|
|
Only roll enough work to have one staged work for each mining thread.
|
|
0c970bbd
|
2012-06-24T18:22:20
|
|
Roll work again after duplicating it to prevent duplicates on return to the clone function.
|
|
610302af
|
2012-06-24T18:10:17
|
|
Abstract out work cloning and clone $mining_threads copies whenever a rollable work item is found and return a clone instead.
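A simplified sketch of "one clone per mining thread"; the struct, roll_work(), and stage_work() below are stand-ins, not cgminer's real definitions:

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdlib.h>

    struct work_sketch {
        uint32_t ntime;
        bool clone;
    };

    static int mining_threads = 4; /* placeholder */

    static void roll_work(struct work_sketch *w)
    {
        w->ntime++; /* advance the timestamp so each clone is distinct */
    }

    static void stage_work(struct work_sketch *w)
    {
        (void)w; /* append to the staged-work list (omitted here) */
    }

    /* Stage one rolled clone per mining thread and hand one clone back to
     * the caller, leaving the original work item untouched. */
    static struct work_sketch *clone_for_threads(const struct work_sketch *base)
    {
        struct work_sketch rolled = *base;
        struct work_sketch *ret = NULL;

        for (int i = 0; i < mining_threads; i++) {
            struct work_sketch *c = malloc(sizeof(*c));

            if (!c)
                break;
            *c = rolled;
            c->clone = true;
            if (!ret)
                ret = c;
            else
                stage_work(c);
            roll_work(&rolled);
        }
        return ret;
    }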
|
|
a8ae1a43
|
2012-06-24T14:38:31
|
|
Rolltime should be used as the cutoff time for primary work as well as the rolled work, if present.
|
|
c20a89d9
|
2012-06-24T14:20:29
|
|
Take into account average getwork delay as a marker of pool communications when considering work stale.
|
|
f32ffb87
|
2012-06-24T13:20:17
|
|
Work out a rolling average getwork delay stored in pool_stats.
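One way such a rolling average might be kept is an exponentially decaying blend; the decay weight and field name below are illustrative assumptions:

    /* Illustrative rolling average of getwork delay, in seconds. */
    struct pool_stats_sketch {
        double getwork_wait_rolling;
    };

    static void update_getwork_delay(struct pool_stats_sketch *ps, double sample)
    {
        /* keep roughly 2/3 of the history, blend in 1/3 of the new sample */
        ps->getwork_wait_rolling = 0.63 * ps->getwork_wait_rolling + 0.37 * sample;
    }

The commit above this one then feeds that average into the decision about whether work is stale.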
|
|
4e60a62a
|
2012-06-24T12:55:56
|
|
Getwork delay in stats should include retries for each getwork call.
|
|
6a45cbbd
|
2012-06-23T23:45:08
|
|
Merge branch 'master' of https://github.com/ckolivas/cgminer
|
|
c5a21fab
|
2012-06-23T23:43:22
|
|
Extend nrolltime to support the expiry= parameter. Do this by turning the rolltime bool into an integer set to the expiry time. If the pool supports rolltime but not expiry= then set the expiry time to the standard scantime.
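A hedged sketch of the bool-to-integer change: rolltime becomes the expiry in seconds, falling back to the scantime when the header carries no expire= value (the parsing details here are illustrative):

    #include <stdlib.h>
    #include <string.h>
    #include <strings.h>

    /* Returns 0 if ntime rolling is unsupported, otherwise the expiry time
     * in seconds (explicit expire= value, or the scantime as a default). */
    static int parse_rolltime(const char *rollntime_hdr, int opt_scantime)
    {
        const char *exp;

        if (!rollntime_hdr || !strcasecmp(rollntime_hdr, "n"))
            return 0;

        exp = strstr(rollntime_hdr, "expire=");
        if (exp)
            return atoi(exp + strlen("expire="));

        return opt_scantime;
    }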
|
|
6ce4871b
|
2012-06-20T15:49:51
|
|
Merge branch 'conf_pools'
|