Sasa Posted May 2, 2016

The idea is as follows: make requests to several URLs, and to make the process considerably faster, run them all in parallel and then combine the results. This is how far I have gotten:

import threading
import time
import logging

logging.basicConfig(level=logging.DEBUG,
                    format='(%(threadName)-9s) %(message)s')


class ThreadPool(object):
    """Tracks which threads are currently doing work."""

    def __init__(self):
        super(ThreadPool, self).__init__()
        self.active = []
        self.lock = threading.Lock()

    def make_active(self, name):
        with self.lock:
            self.active.append(name)
            logging.debug('Running: %s', self.active)

    def make_inactive(self, name):
        with self.lock:
            self.active.remove(name)
            logging.debug('Running: %s', self.active)


def threader(semaphore, pool, results):
    # Blocks here until one of the semaphore's slots frees up.
    logging.debug('Waiting to join the pool')
    with semaphore:
        name = threading.current_thread().name
        pool.make_active(name)
        # time.sleep(3)  # the actual request would happen here
        results.append(name)
        pool.make_inactive(name)


if __name__ == '__main__':
    results = []
    pool = ThreadPool()
    semaphore = threading.Semaphore(7)  # at most 7 threads work at once
    threads = []
    start_time = time.time()
    for i in range(20):
        t = threading.Thread(target=threader, name='thread_' + str(i),
                             args=(semaphore, pool, results))
        t.start()
        threads.append(t)
    for t in threads:
        t.join()
    print(results)
    print("--- %s seconds ---" % (time.time() - start_time))

It is also necessary to limit the number of simultaneous requests. So far it looks like everything works. Is there anything that could be improved?
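Since the goal is fetching several URLs in parallel with a cap on concurrent requests, the standard library's concurrent.futures offers a shorter route: the executor's max_workers plays the role of the Semaphore(7), and as_completed gathers the results. The sketch below assumes Python 3; the urls list and the fetch helper are hypothetical placeholders, not part of the original code.

from concurrent.futures import ThreadPoolExecutor, as_completed
from urllib.request import urlopen

# Hypothetical URL list; substitute the real targets.
urls = ['http://example.com/page%d' % i for i in range(20)]

def fetch(url):
    # One request per call; the timeout keeps a slow server
    # from blocking a worker thread forever.
    with urlopen(url, timeout=10) as response:
        return url, response.read()

results = []
# max_workers=7 limits simultaneous requests, like Semaphore(7) above.
with ThreadPoolExecutor(max_workers=7) as executor:
    futures = [executor.submit(fetch, url) for url in urls]
    for future in as_completed(futures):
        # result() re-raises any exception the worker hit.
        results.append(future.result())

print(results)

The executor joins its worker threads when the with block exits, so the manual start/join bookkeeping disappears, and the concurrency limit and result collection come built in.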