Show posts
Pages: [1] 2 3 4
1  Community - feedback, projects and fun / Off topic / Reply: Top 10 problems from the problem archive 2017 : April 02, 2017, 14:41:30
Invsort - The 50-point case really is a very good hint
Hallway - Great for understanding binary search.
Dsip - Because it works with SSE :D
Xormax - I actually learned tries from it; by now it's a bit of a cliché.
Radio2 - I like it when the input is random.
Sortnet - Patrascu has the most interesting problems, I think

I'm leaving room for additions; these are the ones that came to mind.
2  infoarena - contests, problems, evaluator, articles / ONIS 2016 / Reply: Feedback ACM Nationals & Round 2 : June 12, 2016, 04:11:51
Sebi, you haven't said anything concrete (just some ideas); that's why Petru is telling you to get involved.

It's very easy to say that X or Y should be done; it's much harder to actually do it in practice. It's very hard to spell out concretely how a given problem could be solved. You would have realized this naturally if you had tried to take part in organizing a contest.

For example, you said that any committee should know absolutely every problem given at any public contest so far. That would mean the committee knowing on the order of tens of thousands of problems (roughly how many SPOJ has, for example), which is about the size of a normal person's vocabulary. Doesn't that seem a bit hard?
3  Community - feedback, projects and fun / Blog / Reply: Shortest Snippet : June 13, 2015, 22:39:48
To me "very big file" sounds like "we can't store it in memory", so even O(T) memory is too much.
I'd try a simple online algorithm that goes something like this:
For each length L from 1 to P-1, keep track of the earliest index where you could find the first L characters in order. When you process a character c, just try to update all positions in your string where c occurs. This is worst case O(T * P), but on average you could get it very close to O(T), with maybe a couple of optimizations for frequent letters.
In fact, it is O(T) when all the P letters are distinct.
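A short Python sketch of one way that update can work (my own interpretation, not code from the post): `start[L]` holds the most recent index of the text at which a match of the first L pattern characters can begin, and prefixes are updated longest-first so each step reuses the shorter prefix's previous value.

```python
def shortest_snippet(text, pattern):
    """Shortest window of `text` containing `pattern` as a subsequence.

    Online sketch: start[L] = the latest index where a match of the first
    L pattern characters can begin, given everything seen so far. Updating
    longer prefixes first keeps each update consistent with the old value
    of the shorter prefix. Worst case O(T * P); close to O(T) on average,
    and exactly O(T) when all pattern letters are distinct.
    """
    P = len(pattern)
    start = [-1] * (P + 1)        # -1 means "no match of this prefix yet"
    best = None
    for i, c in enumerate(text):
        for L in range(P, 0, -1):            # longer prefixes first
            if pattern[L - 1] == c:
                start[L] = i if L == 1 else start[L - 1]
        if start[P] != -1:
            if best is None or i - start[P] < best[1] - best[0]:
                best = (start[P], i)
    return best   # (first, last) index of the shortest snippet, or None
```

For example, `shortest_snippet("aXbab", "ab")` picks the later, tighter occurrence `(3, 4)` rather than the window starting at index 0.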
4  Community - feedback, projects and fun / Blog / Reply: Probability puzzle: Cars : November 14, 2014, 19:08:07
The problem’s been solved, but I’d like to post some insight into how you can do a Fermi-type estimation here:

The slowest car will block all the cars behind it, so about half of the cars are probably stuck in the biggest cluster. Since picking the largest cluster at each step removes some fixed percentage of the total number of cars (the exact percentage doesn't even matter), the answer is on the order of O(logN), which is fine as a rough approximation when the real answer is ~lnN (the sum converges to this as N goes to infinity).

Of course it's always better to do the math, since intuition can be wrong, but this type of analysis is useful when doing the math is impractical or you just want a gut feeling for whether something is possible. If, for instance, you were removing chunks on the order of sqrt(N) cars at a time, you'd be done in O(sqrt(N)) iterations, which was the case in Alex's assumption.
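A quick Monte Carlo sanity check of the ln N estimate (an illustrative sketch; `trials` and the seed are arbitrary choices of mine): the cluster count equals the number of prefix minima of the random speed sequence, whose exact expectation is the harmonic number H_N ≈ ln N + 0.577.

```python
import random

def expected_clusters(n, trials=2000, seed=1):
    """Average number of car clusters over random speed orderings.

    A new cluster starts exactly at each prefix minimum of the speed
    sequence (a car slower than everyone ahead of it), so we count
    prefix minima.
    """
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        speeds = rng.sample(range(10 * n), n)
        lead = float("inf")      # slowest speed seen so far
        for s in speeds:
            if s < lead:         # slower than all cars ahead: new cluster
                lead = s
                total += 1
    return total / trials
```

For n = 1000 the simulated average lands close to H_1000 ≈ 7.49, in line with the ln N ≈ 6.9 order-of-magnitude estimate above.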
5  Community - feedback, projects and fun / Blog / Reply: Probability shortlist : June 18, 2014, 16:17:11
Some of these are pretty classic (I think 4 should be toughened a bit to require an O(N) algorithm), so I'll skip some I know and some I actually have to think about more to be sure.

2) Create a function rand25() from two rand5() calls (with rand5() returning 0..4): rand25() = rand5()*5 + rand5().
While rand25() returns a number greater than or equal to 21 (the largest multiple of 7 that fits in the 0..24 range), ignore it and draw again.
In the end, return this number mod 7.
This works because rand25() is uniform over 25 values, and discarding the top 4 leaves a range whose size is a multiple of 7.
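A runnable sketch of the rejection sampling above (the puzzle's rand5() primitive isn't available, so it is simulated here with the standard library):

```python
import random

def rand5():
    """Stand-in for the puzzle's primitive: uniform integer in 0..4."""
    return random.randrange(5)

def rand7():
    """Uniform integer in 0..6 built from rand5() by rejection sampling.

    rand5()*5 + rand5() is uniform over 0..24; values >= 21 are thrown
    away so the remaining 21 outcomes split evenly into 7 groups.
    """
    while True:
        x = rand5() * 5 + rand5()
        if x < 21:
            return x % 7
```

Each rejection loop succeeds with probability 21/25, so the expected number of rand5() calls per rand7() is just 2 * 25/21 ≈ 2.38.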

3) I think Bondariuc Dan's solution is almost right: you need to generate the distance from the center proportionally to the "amount" of points at that distance. There are infinitely many points at any given distance, but the probability of picking a point at distance d should be proportional to the circumference of the circle with radius d (intuitively, there's a "bigger" infinity at larger distances).
So, the solution would be:
- generate u uniformly in [0, 2*pi)
- generate rs uniformly in [0, r*r], and take d = sqrt(rs)
The point you want should be at (x + d*sin(u), y + d * cos(u))
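The two steps above as a self-contained sketch (written with the conventional cos-for-x, sin-for-y order; the post's sin/cos order gives the same distribution by symmetry):

```python
import math
import random

def random_point_in_disk(x, y, r, rng=random):
    """Uniform random point in the disk of radius r centered at (x, y).

    The angle is uniform, but the radius must be the square root of a
    uniform value in [0, r^2]: the number of points at distance d grows
    linearly with d, so the distance CDF is (d/r)^2 and we invert it.
    """
    u = rng.uniform(0, 2 * math.pi)
    d = math.sqrt(rng.uniform(0, r * r))
    return (x + d * math.cos(u), y + d * math.sin(u))
```

One quick check that the density is right: for the unit disk the average distance from the center should come out to 2/3, not 1/2 (which is what the naive uniform-radius version would give).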
6  infoarena - contests, problems, evaluator, articles / Problem archive / Reply: 895 Secv8 : April 18, 2014, 02:40:24
I took a quick look at your source, and it seems what you have there is a treap with constant priorities (you always insert with priority=2), which is therefore an unbalanced binary tree.
If you try putting rand() in there instead of 2, things might go differently.
7  infoarena - contests, problems, evaluator, articles / Educational archive / Reply: 007 Arbori de intervale : April 03, 2014, 20:16:58
Actually 3*N isn't always enough; 4*N is safer.
Concretely, level K has 2^K nodes, so in total the tree will have 2*Pow-1 nodes, where Pow is the smallest power of 2 greater than or equal to N.
When N is of the form 2^K+1, you need about 4*N elements.
In short, 4*N+10 is the safest, and you stop worrying about particular cases. Putting tight limits on arrays is never recommended unless you're really forced to.
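A small check of that count (a sketch for the usual heap-style 1-indexed layout; other layouts can get away with less):

```python
def segtree_nodes(n):
    """Number of array slots a 1-indexed segment tree over n leaves needs.

    The tree is padded to Pow leaves, the smallest power of two >= n,
    and the heap-style layout uses indices 1 .. 2*Pow - 1.
    """
    pow2 = 1
    while pow2 < n:
        pow2 *= 2
    return 2 * pow2 - 1

# Worst case relative to n: for n = 2^k + 1 the tree pads to 2^(k+1)
# leaves, so it needs 4 * 2^k - 1 slots, just under 4*n.
```

For n = 9 this gives 31 slots, which already exceeds 3*9 = 27, showing why 3*N can overflow.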
8  infoarena - contests, problems, evaluator, articles / Problem archive / Reply: 347 Badea : February 28, 2014, 10:37:19
The seventy-seventh job of the seventh year: http://www.infoarena.ro/job_detail/1129943
You have to catch the right seed, Catalin.
It's like the parable of the sower says, Mark 4:20 - "Others are like the seeds sown on good soil. They hear the word, accept it, and produce crops—30, 60, or 100 times what was sown." ;)
Well, you also have to fiddle quite a bit with the termination conditions. I think it can be done even better, but unfortunately I didn't make the code elegant.
9  infoarena - contests, problems, evaluator, articles / Algoritmiada 2013 / Reply: Rama : March 24, 2013, 16:16:32
I'm curious about the N^2 log N solution, it sounds cool.
I actually assumed the limit was meant for N^3, since it fit in the time limit; I just had a bug in the source I submitted during the contest, which didn't show up on any of the open tests :D
10  infoarena - contests, problems, evaluator, articles / Algoritmiada 2013 / Reply: Algoritmiada 2013, Round 2 : January 21, 2013, 17:38:15
Congratulations, I really liked the round.
My in-contest solution for CityLog was also a logarithm with a large base, and I'm sorry the committee's solution couldn't separate out the solutions with truly O(N) memory; the memory limits should have been tighter.
A good solution with just 2 integers per node was also possible (I kind of spammed the evaluator while testing it):

http://infoarena.ro/job_detail/861474

For each node I stored either its parent or the ancestor 2^K levels above it (where K is chosen at random between 1 and logN).
If I have to jump >= 2^K levels, I go to that ancestor; otherwise I go to the parent (which hopefully has a smaller K). The complexity is somewhere between O(logN) and O((logN)^2).
With the memory limit set at 1 - 1.1MB this would have worked, and it would have been much harder for large-base solutions to score points.
11  Community - feedback, projects and fun / Blog / Reply: Combinatorics shortlist : December 15, 2012, 04:19:03
7) This one is really easy once you rewrite it as:
for i1 = 1,n
_for i2 = i1+1,n+1
__for i3 = i2+1,n+2
____…
_____for ik = ik-1 + 1,n+k-1
______print '*'
To anyone who has implemented enough brute-force solutions, the answer is obviously N+K-1 choose K.
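A brute-force check of that claim (a sketch; the recursive `go` reproduces the nested loops, where loop j runs from the previous index plus one up to n + j - 1):

```python
from math import comb

def count_stars(n, k):
    """Count how many '*' the k nested loops print, by direct recursion.

    Level j (0-indexed) iterates from the previous index + 1 up to n + j,
    which is exactly choosing an increasing k-tuple out of n + k - 1
    values, hence C(n + k - 1, k) stars.
    """
    def go(level, lo):
        if level == k:
            return 1
        return sum(go(level + 1, i + 1) for i in range(lo, n + level + 1))
    return go(0, 1)
```

For instance, `count_stars(5, 3)` agrees with `comb(7, 3) == 35`.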

13) You can only have cycles of length 1 or 2 in your permutation. So you can generate an N-sized permutation from an (N-1)-sized one by adding N as a fixed point (a 1-length cycle), or from an (N-2)-sized one by putting N in a 2-cycle with any of the N-1 other possible elements (relabeling the elements in between).
So unless I'm missing something, Sol(N) = Sol(N-1) + (N-1)*Sol(N-2)
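The recurrence can be checked against brute force in a few lines (a sketch; permutations with only 1- and 2-cycles are exactly the involutions, i.e. p[p[i]] == i):

```python
from itertools import permutations

def involutions(n):
    """Sol(N) = Sol(N-1) + (N-1)*Sol(N-2): N is either a fixed point,
    or paired into a 2-cycle with one of the other N-1 elements."""
    a, b = 1, 1                    # Sol(0) = Sol(1) = 1
    for i in range(2, n + 1):
        a, b = b, b + (i - 1) * a
    return b

def involutions_brute(n):
    """Brute-force count of permutations p with p[p[i]] == i for all i."""
    return sum(all(p[p[i]] == i for i in range(n))
               for p in permutations(range(n)))
```

The sequence starts 1, 1, 2, 4, 10, 26, matching the brute-force count.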

15) I only have to add to Mugurel’s solution that you can get away without using large numbers if you use the logarithm of the prime numbers and try to maximize their sum to maximize the smallest common multiple.
You may run into trouble if two candidate values are closer than the floating-point precision: the value you're improving on and the new potential best can differ by less than the log-operation error (around 1e-9) times the maximum number of distinct primes below N. In that case you can compute the actual values by reconstructing the solution, though this probably doesn't happen in practice.
As a useless theoretical point, since you know that the complexity of the algorithm is polynomial, say O(N^k), if you use (k+1)-word sized floating point numbers, that will give an amortized constant time number of actual calculations with the big numbers.
By a word I mean the type in which you store N, so a 32-bit integer here.
12  Community - feedback, projects and fun / Blog / Reply: Coding Contest Byte: The Square Root Trick : July 25, 2012, 01:27:56
@Adrian
Your solution might be correct, but I'm not sure how you determine the exact new cost, since I don't understand how you keep track of which edge gets removed after a newly inserted one. From what I can tell you're removing them all together, and you can have a query edge inserted and then bumped out within the same batch. My solution was somewhat different, but based on the same splitting into sqrt(N) chunks of queries. You can get O(sqrt(N log* N)) per operation the way I described in the previous post: pick the B that minimizes M * O(B) + M/B * O(N log* N), and that is your optimal chunk size, in this case sqrt(N log* N).

What I did was see which edges don't change after the B insertions and merge the nodes they connect, to get a tree with at most B nodes (it's the number of removed edges + 1). After that, I inserted the query edges into this newly reduced graph in linear time. Your solution might work; you can try implementing it and seeing if it's fine ;)

@Andrei
I think you misunderstood me. I wasn't suggesting you should keep the entire array sorted, but that for each sqrt(N) sized bucket you should keep a sorted array with those values. Pretty much replace each binary tree you have for a section with just keeping a sorted copy of the values there.
You'll have to do a binary search in every bucket, and every time you change a value in a bucket you'll have to shift all the elements between the last value and the new one, thus making an insert cost O(B), where B is your bucket size.
I was saying this has a significant advantage over keeping binary trees because it simplifies the implementation (I don't think the STL has a data structure for "how many elements are less than k?") and also because it is very fast due to excellent cache behaviour. The update is just a memory shift by 4 bytes of a contiguous section of, on average, B/2 integers.
The binary search may seem like it would produce cache misses like the binary tree, but it's actually remarkably well behaved: there are several hot spots in the array (the middle of the sequence, the 3/4 points, etc.) which are almost always in cache, the final comparisons happen among closely packed elements, and only a few accesses in the array are truly random. Binary trees keep some of the hot-spot property, but still need more RAM accesses. One real-life benchmark result that shocked me, and that I always remember in situations like this, is that reading single bytes randomly from memory is about as slow as reading them sequentially from disk.
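A sketch of the sorted-bucket idea in Python (my own illustration, not the original code; `bisect`/`insort` play the role of the binary search and the shifting insert, and `count_less` is the "how many elements are less than k?" primitive):

```python
from bisect import bisect_left, insort

class BucketCounter:
    """Sorted sqrt-buckets for "how many of a[lo..hi) are < k?".

    Each fixed-position bucket keeps a sorted copy of its values: a point
    update shifts O(B) elements with sequential memory moves, and a query
    does one binary search per fully covered bucket.
    """
    def __init__(self, values, bucket_size):
        self.B = bucket_size
        self.a = list(values)                  # values in original order
        self.buckets = [sorted(self.a[i:i + bucket_size])
                        for i in range(0, len(self.a), bucket_size)]

    def update(self, i, v):
        b = self.buckets[i // self.B]
        b.pop(bisect_left(b, self.a[i]))       # shift out the old value
        insort(b, v)                           # shift in the new value
        self.a[i] = v

    def count_less(self, lo, hi, k):
        total, i = 0, lo
        while i < hi:
            if i % self.B == 0 and i + self.B <= hi:
                total += bisect_left(self.buckets[i // self.B], k)
                i += self.B                    # whole bucket at once
            else:
                total += self.a[i] < k         # partial bucket, elementwise
                i += 1
        return total
```

With this primitive, the range median query is just a binary search on the answer value, calling `count_less` at each step.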

Since I've rambled a bit beside the point, I have to point out a neat "bulaneala" (a dubious but effective heuristic) for the range median problem: pick a few (say 32-64) elements from your query interval and select the 4-5 values in the middle as the starting interval for your median binary search, instead of the whole range. Most of the time the median will be in this sampled range, and in the worst case you've only made one extra meta-comparison. This way you'll get much fewer median tests overall (maybe even by a factor of 4-5).
13  Community - feedback, projects and fun / Blog / Reply: Coding Contest Byte: The Square Root Trick : July 24, 2012, 13:29:19
Good ol' sqrt(N) solutions, they bring back some memories.
I wouldn't even call it a trick, more of a general balancing technique where you try to find optimal trade-offs.
The median problem, for instance, can get O(sqrt(N) * sqrt(log N) * log U) per operation if you pick your block length as sqrt(N log N).
If your block length is B, every operation takes O(B) time outside the blocks, plus O(N/B * log N) over all the blocks. Minimizing the sum of both terms gives B = sqrt(N log N).
In practice though, this is one of those problems where you just have to try multiple constant factors, and see which ones work best.
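That balance point can be checked numerically in a couple of lines (illustrative Python; the two cost terms are stand-ins with constant factors of 1, which is exactly what you'd tune in practice):

```python
import math

def best_block_size(n):
    """Find the B minimizing B + (n/B)*log(n) by brute force.

    The analytic minimum is at B = sqrt(n * log n), where the per-block
    and across-blocks cost terms are equal; real code would just try a
    few constants around that value.
    """
    cost = lambda b: b + (n / b) * math.log(n)
    return min(range(1, n + 1), key=cost)
```

For n = 10000 the brute-force minimum lands right next to sqrt(10000 * ln 10000) ≈ 303.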
Also, a much better replacement for the binary trees Andrei was thinking of for the blocks would be plain sorted arrays. Lots of binary searches and some inserts with a memmove are very well suited for modern CPUs.

A nice problem I proposed here that was meant to be solved with a sqrt(N) trick was http://infoarena.ro/problema/rutier
Since I don't think anybody has solved it entirely, and the topic is live, I thought I'd do some shameless promotion.
The task asks that given a graph where edges are inserted one by one, find the cost of the minimum spanning tree after each new edge.
Suggested complexity per operation: O(sqrt(N log* N))
14  infoarena - contests, problems, evaluator, articles / Educational archive / Reply: 007 Arbori de intervale : April 29, 2008, 23:27:53
I'd write an article about caches and optimizations in general, if someone volunteers to format it nicely. :)
15  infoarena - contests, problems, evaluator, articles / Educational archive / Reply: 007 Arbori de intervale : April 24, 2008, 21:10:27
A BIT (binary indexed tree) is much more cache-friendly than a segment tree.
Since in recent years processor speed has grown much faster than memory speed, the cache matters more and more.

I know that many people making games for the Xbox / PS3 turn advanced data structures, asymptotically faster but cache-breaking, back into plain arrays, because those consoles have very fast processors but much slower RAM.

In conclusion, use the O(sqrt(N)) variant :)

http://infoarena.ro/job_detail/154183
16  infoarena - contests, problems, evaluator, articles / Computer science / Reply: Some optimization questions : March 11, 2008, 21:41:07
If you don't want to break the cache, the struct variant should be more efficient on average, because when you need x you probably also need y, and you pay only one read from RAM if they're not in cache.

Anyway, with 100 elements it really shouldn't matter, since everything fits in cache anyway.

For passing integer/floating-point parameters it shouldn't matter; the compiler optimizes that well anyway. For large data structures, keep in mind that loading them onto the stack can take a while, so pass them by reference where it's safe.
17  Community - feedback, projects and fun / Test improvement / Reply: 042 Xor Max : March 04, 2008, 22:40:37
Good test! :thumbsup:
18  Community - feedback, projects and fun / Test improvement / Reply: 042 Xor Max : March 04, 2008, 17:41:14
The tests are still not good: http://infoarena.ro/job_detail/148650
Then again, I can't see how you'd make this variant fail.
19  infoarena - contests, problems, evaluator, articles / Problem archive / Reply: 184 Mult : January 17, 2008, 14:12:34
The way you did it initially should easily have been worth 100 points. You have to look at the warnings too: :)
"user.fpc(15,9) Warning: Mixing signed expressions and longwords gives a 64bit result"

When you mix signed and unsigned integers, you need signed 64-bit integers to hold the result in case it can overflow, and sometimes the compiler does this for you automatically.

Try working only with longword and I don't think you'll have this problem anymore.
20  infoarena - contests, problems, evaluator, articles / Problem archive / Reply: 528 Trompeta : September 25, 2007, 13:35:35
http://infoarena.ro/job_detail/86753

It's quite hard to get 100 in Pascal because of the IO operations. For C/C++ the measured time is CPU time, while for Pascal, if you read/write a lot, the time measured by the evaluator is close to wall-clock time. Roughly what had to be done to make it pass:

Code:
var
  i, k, n, m: longint;
  a, st: array[1..1 shl 20] of char;
  f: file of char;
begin
  assign(input, 'trompeta.in'); reset(input);
  assign(f, 'trompeta.out'); rewrite(f, 1);
  readln(n, m);
  readln(a);
  ...
  blockwrite(f, st, m);
  close(f);
end.

Edited by moderator: let's not be too explicit
21  infoarena - contests, problems, evaluator, articles / Computer science / Reply: Headers in C++: with or without ".h" : July 16, 2007, 21:57:48
Quote
Are there performance differences between cout/cin and printf/scanf?

Yes, there are. With printf and scanf the program already knows the data format; cin/cout don't. So printf/scanf is better. (I learned this from my own experience)

Actually, cin and cout know the exact types of their parameters, while printf and scanf have to parse the format string they're given on every call. Stream-based input is much slower in practice, even though it has more potential than classic C input.
22  Community - feedback, projects and fun / Feedback infoarena / Reply: Bug reports : April 28, 2007, 20:08:30
I think the way the default wall time limit is computed should be changed a bit.
Instead of TL+1 sec, something like TL*1.2 + 1 sec would be more appropriate.
In Pascal, for example, you get Wall time limit exceeded rather easily on problems with extremely large input, like Drept 2.
I don't even think Drept 2 can be done in Pascal without Wall TLE on a few tests, unless the input parsing is done by hand somehow.
23  infoarena - contests, problems, evaluator, articles / Problem archive / Reply: 008 Cifra : April 25, 2007, 13:07:47
Most likely your problem comes from the statement "readln(fi,n);"
When the number is too big to fit in an int64, the program crashes.
Read the number as a string and compute the number mod 100 using only its last two digits.
Then you can use n mod 20 = (n mod 100) mod 20.
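The string trick above in a couple of lines of Python (a sketch; 100 is a multiple of 20, so only the last two digits of n matter and no big-integer parsing is ever needed):

```python
def mod20_from_string(n_str):
    """Compute n mod 20 for an arbitrarily large n given as a string.

    Since 100 is a multiple of 20, n mod 20 == (n mod 100) mod 20, and
    n mod 100 is just the number formed by the last two digits. This
    sidesteps any overflow from parsing the whole number.
    """
    last_two = int(n_str[-2:])    # also works for 1-digit inputs
    return last_two % 20
```

For example, a 21-digit number ending in "23" reduces to 23 mod 20 = 3, no matter what the other digits are.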
24  infoarena - contests, problems, evaluator, articles / Problem archive / Reply: 265 Sah : March 28, 2007, 19:35:27
http://infoarena.ro/job_detail/42076

It's doable, but a bit nasty. The limit should be raised to 0.4 seconds, so that normal output in Pascal also fits.
25  infoarena - contests, problems, evaluator, articles / Problem archive / Re: 236 Biscuiti : April 13, 2006, 16:52:29
The result definitely fits, but be careful: you also have to store the individual plank lengths on 64 bits.
Powered by SMF 1.1.19 | SMF © 2006-2013, Simple Machines