[Discussion] On setting the assessment questions for the crypto group
Hello~ guys~ hi there~ Moderator lingyu has worked hard to finish the test questions and answers. I have taken a preliminary look, and they follow the categorized question types he described earlier. Many thanks to moderator lingyu and to everyone for the effort. As for the answers to the group test questions, please allow me to verify them carefully before posting them. The question now is how to run this testing: online testing or offline testing? And how do we reduce the chance of cheating to a minimum? Please take a look and share your opinions. (A small error in question 23 fixed by lingyu)

Offering high points for paper resources; points shared as soon as a paper is posted
(1) Watermarking, Tamper-Proofing, and Obfuscation-Tools for Software Protection.rar (1.11 MB)
(2) K. Holmes, Computer Software Protection, US Patent 5,287,407.pdf (182.9 KB)
For (3) and (4), please browse the crypto board; they should be there.

[Share] Design of a signed fast adder circuit with error-detection capability
I suggest deryope look through a textbook on algorithms; there is a chapter introducing distributed systems and the parallel side of things. While you're at it, also read up on synchronization and asynchronization.

[Help] 没有姓名: several SpringerLink articles
A paper like this one (note the "to appear" marking) can be in one of two states: 1) it has not been formally published yet, but an electronic copy is already in the database; or 2) it has not been formally published yet and no electronic copy has been released to the database. So either: 1) wait until it is published and then download it, or 2) write to the authors and ask for a copy; with luck, they will provide one.

[Recommendation] Cryptanalytic attacks on RSA
It seems that not every university in China has bought this publisher's database.

[Recommendation] Cryptanalytic attacks on RSA
Thanks to 没有姓名 for kindly providing these; I have merged the files into a single archive for download.

[Discussion] On cryptography journals
The reviewers are the scholars and experts who evaluate submitted articles. Review is divided into an initial review (also called the first review) and a second review (also called the external review). In the initial review, the editor-in-chief and the editorial committee examine the article; if it does not meet the formatting requirements, its content is out of scope, or its quality is insufficient, it may be rejected. If it passes the initial review, the article is sent out for external review, and once it passes that, it can be published. An original article that has already been posted on the Internet in principle counts as published and should not be submitted again unless it is suitably revised; this is a matter of respect toward authors. An author must fill in all personal details when submitting, to take responsibility for the work, but may ask, citing protection of personal information, for the article to be published under a pen name. The committee members taking part in the initial and second reviews must likewise provide their real details, but during the review stage and at publication they may also remain anonymous or appear under pen names (the names of the initial-review editorial committee are printed; the names of the second-review referees are not). Submissions are welcome. Under normal circumstances the author's details are mandatory, as explained above.

[Share] 2010 is almost here; best wishes to 看雪, to everyone, and to myself!
Happy New Year! Nouvelle année heureuse. Glückliches neues Jahr. Nuovo anno felice. 明けましておめでとう. Feliz Año Nuevo. 새해 복 많이 받으세요. Καλή χρονιά. Ano novo feliz. С новым годом.

[Share] Design of a signed fast adder circuit with error-detection capability
Parallelization - Parallel Adder

The following is a description of a basic single-processor code and its transition into a parallel code. We hope this example of a simple parallelization process serves to demonstrate what steps and calls are needed for codes whose work can be performed independently of one another. A simple "divide and conquer" approach is appropriate here. To the experienced parallel programmer, this example is a "trivially parallelizable" code, because the entire work of the code can be partitioned between the processors very simply. That is the point of this discussion: we are demonstrating how one might go about converting this kind of single-processor code into a parallel code.

This technique could apply to your situation in at least three possible ways. First, it is applicable to problems such as minimax searches for the best move in a game such as chess, "Monte Carlo" techniques, or computationally intensive searches through a large parameter space. If you are fortunate enough to have a code in this category, then you need look no further than this technique. Second, selected components of a code could also fall under this category, so this technique could be applied to those portions of a single-processor code. Finally, if this is your first look at parallel code, this example serves as an excellent starting point for learning how to write and think about parallel codes.

Before: Single-Processor adder

We begin with a code using single-processor execution. This example, adder, sums the square of each integer from 1 to 10000, inclusive. It is organized into three routines:

1) kernelroutine - performs the elemental work of squaring each integer
2) computeloop - allocates memory, loops over the integers, calls kernelroutine, and saves and sums the results
3) main - performs any required initialization, calls computeloop, saves its results into a file, prints the sum, and deallocates memory

The code is structured so as to clearly separate the pieces that perform work that is secondary yet essential to solving the problem, such as allocating the appropriate memory and saving data to an output file. Besides identifying where these functions are performed, the structure makes it obvious that a much more complicated problem could substitute for the kernelroutine shown. We also intend this explicit structure to represent corresponding portions of a much larger code. The C source is shown in Listing 1.

Listing 1 - adder.c (see also adder.f90)
--------------------------------------------------------------------------------
#include <stdio.h>   /* standard I/O routines */
#include <stdlib.h>  /* standard library routines, such as memory */
#include <math.h>    /* math libraries */

/* A routine performing elemental work.
   This can be replaced with a much larger routine. */
double kernelroutine(double input);
double kernelroutine(double input)
{
    return (input+1)*(input+1);
}

/* HighestIndex specifies the highest index to sample */
#define HighestIndex 10000L

void computeloop(double *theSum, double **theArray);
void computeloop(double *theSum, double **theArray)
{
    /* local copies of the data to be output */
    double *myArray, mySum = 0;
    /* limit of the loop */
    long loopEnd = HighestIndex;

    /* allocate an array to hold all the results */
    myArray = malloc(sizeof(double)*loopEnd);

    if (myArray) {
        long index;
        /* loop over indices */
        for (index = 0; index < loopEnd; index++) {
            /* call the kernel routine for each index, and save into the array */
            myArray[index] = kernelroutine(index);
            /* sum as we go */
            mySum += myArray[index];
        }
    }

    /* return the sum and the array */
    *theSum = mySum;
    *theArray = myArray;
}

int main(int argc, char *argv[])
{
    /* main copies of the sum and the array */
    double theSum, *theArray = NULL;

    printf("Beginning computation...\n");

    computeloop(&theSum, &theArray);

    if (theArray) { /* error checking */
        FILE *fp;
        /* save the array into a data file */
        fp = fopen("output", "w");
        if (fp) {
            printf("writing array...\n");
            fwrite(theArray, sizeof(double), HighestIndex, fp);
            fclose(fp);
        }
        printf("the sum is %f\n", theSum);
        /* clean up after myself */
        free(theArray);
    }
    else
        printf("memory allocation failure\n");

    return 0;
}
--------------------------------------------------------------------------------

When this code is run with its HighestIndex constant set to 10000, it reports that the sum is 333383335000 (matching the closed form n(n+1)(2n+1)/6 for n = 10000) and produces an 80,000-byte binary file (10000 eight-byte doubles) containing the squares of the integers 1 through 10000.

After: paralleladder

A key to parallelizing an application is choosing the appropriate partitioning of the problem. The obvious choice here is to divide the number of integers by the number of processors. However, a few details must be worked out: What if the number of processors doesn't divide evenly into the number of integers? How does each processor know which integers to work on, so that no integer is done twice or missed? Once the processors have their answers, how do they combine the partial sum on one processor with those on the others, and how do they save the data into one file?

The parallelization process here is primarily a matter of managing and delegating data and computation. Changes to main are limited to "boilerplate" code that prepares and tears down the parallel environment and passes sufficient information to computeloop so that it can organize the work, plus an if test so that only processor 0 creates an output file. The most important modifications are in computeloop, which performs the coordination and calculates the partitioning between processors. Given the identification number of the processor it is running on and the number of processors in the system, computeloop calculates how to partition the problem among the processors. To minimize bottlenecks, the executable running on each processor calculates its own assignment, without a central authority delegating assignments. Once it finishes its work, it collects the data back to processor 0 so that main can write the output in one convenient place for the user. The detailed answer is in the code: Listing 2 shows the C source of paralleladder.c, and the differences from adder.c are described in the list that follows it.

Listing 2 - paralleladder.c (see also paralleladder.f90)
--------------------------------------------------------------------------------
#include <stdio.h>   /* standard I/O routines */
#include <stdlib.h>  /* standard library routines, such as memory */
#include <math.h>    /* math libraries */
#include "mpi.h"     /* MPI library */

/* A routine performing elemental work.
   This can be replaced with a much larger routine. */
double kernelroutine(double input);
double kernelroutine(double input)
{
    return (input+1)*(input+1);
}

/* HighestIndex specifies the highest index to sample */
#define HighestIndex 10000L

void computeloop(double *theSum, double **theArray, int idproc, int nproc);
void computeloop(double *theSum, double **theArray, int idproc, int nproc)
{
    /* local copies of the data to be output */
    double *myArray, mySum = 0;
    /* just this proc's piece of the loop */
    long loopEnd = (HighestIndex+nproc-1)/nproc;
    /* this processor's index offset */
    long offset = idproc*loopEnd;

    /* allocate an array to hold this processor's results */
    myArray = malloc(sizeof(double)*loopEnd);

    if (myArray) {
        long index;
        /* loop over indices */
        for (index = 0; index < loopEnd; index++) {
            /* call the kernel routine for each index, and save into the array */
            myArray[index] = kernelroutine(index+offset);
            /* sum as we go */
            if (index+offset < HighestIndex) /* limit to the desired indices */
                mySum += myArray[index];
        }

        /* proc 0 needs to hold the entire array */
        {
            double *bigArray = idproc ? NULL : malloc(sizeof(double)*loopEnd*nproc);
            /* gathers the data from the other arrays ... to proc 0 */
            MPI_Gather(myArray, loopEnd, MPI_DOUBLE,
                       bigArray, loopEnd, MPI_DOUBLE,
                       0, MPI_COMM_WORLD);
            if (!idproc) {
                free(myArray);
                myArray = bigArray;
            }
        }

        /* performs a parallel sum across processors and saves the result at
           proc 0; a separate receive buffer is used because MPI forbids the
           send and receive buffers of MPI_Reduce from overlapping */
        {
            double globalSum = 0;
            MPI_Reduce(&mySum, &globalSum, 1, MPI_DOUBLE, MPI_SUM,
                       0, MPI_COMM_WORLD);
            if (!idproc) mySum = globalSum;
        }
    }

    /* return the sum and the array */
    *theSum = mySum;
    *theArray = myArray;
}

int ppinit(int argc, char *argv[], int *idproc, int *nproc);
void ppexit(void);

int main(int argc, char *argv[])
{
    /* main copies of the sum and the array */
    double theSum, *theArray = NULL;
    int idproc, nproc, ierr;

    /* initialize parallel processing */
    ierr = ppinit(argc, argv, &idproc, &nproc);
    if (ierr) return ierr; /* stop right there if there's a problem */

    printf("I'm processor #%d in a %d-processor cluster.\n", idproc, nproc);

    printf("Beginning computation...\n");

    computeloop(&theSum, &theArray, idproc, nproc);

    if (theArray) { /* error checking */
        if (!idproc) { /* only processor 0 */
            FILE *fp;
            /* save the array into a data file */
            fp = fopen("output", "w");
            if (fp) {
                printf("writing array...\n");
                fwrite(theArray, sizeof(double), HighestIndex, fp);
                fclose(fp);
            }
            printf("the sum is %f\n", theSum);
        }
        /* clean up after myself */
        free(theArray);
    }
    else
        printf("memory allocation failure\n");

    /* only proc 0 pauses for user exit */
    if (!idproc) {
        printf("press return to continue\n");
        getc(stdin);
    }

    ppexit();

    return 0;
}

#ifdef __MWERKS__ /* only for Metrowerks CodeWarrior */
#include <SIOUX.h>
#endif

int ppinit(int argc, char *argv[], int *idproc, int *nproc)
{
    /* this subroutine initializes parallel processing
       idproc = processor id
       nproc = number of real or virtual processors obtained */
    int ierr;
    *nproc = 0;

    /* initialize the MPI execution environment */
    ierr = MPI_Init(&argc, &argv);
    if (!ierr) {
        /* determine the rank of the calling process in the communicator */
        ierr = MPI_Comm_rank(MPI_COMM_WORLD, idproc);
        /* determine the size of the group associated with the communicator */
        ierr = MPI_Comm_size(MPI_COMM_WORLD, nproc);

#ifdef __MWERKS__ /* only for Metrowerks CodeWarrior */
        SIOUXSettings.asktosaveonclose = 0;
        SIOUXSettings.autocloseonquit = 1;
#endif
    }

    return ierr;
}

void ppexit(void)
{
    /* this subroutine terminates parallel processing */
    int ierr;

    /* terminate MPI execution environment */
    ierr = MPI_Finalize();
}
--------------------------------------------------------------------------------

The changes from adder to paralleladder:

mpi.h - the header file for the MPI library, required to access information about the parallel system and perform communication.

idproc, nproc - nproc describes how many processors are currently running this job, and idproc identifies the designation, labeled from 0 to nproc - 1, of this processor. This information is sufficient to identify exactly which part of the problem this instance of the executable should work on. Here, these variables are supplied to computeloop by main.

loopEnd=(HighestIndex+nproc-1)/nproc; - loopEnd describes how many integers this processor should pass to kernelroutine, as loopEnd did in adder. Here we choose to have each processor perform the same amount of work, while the total amount of work is at least that necessary to complete the problem. Depending on HighestIndex and nproc, the last processor might do a little too much, but it would only end up waiting for the others to catch up if it skipped the excess work, and the waste is small when HighestIndex is sufficiently large. For example, with HighestIndex = 10000 and nproc = 3, each processor takes loopEnd = 3334 indices, and the if test below keeps the last processor's two excess indices out of the sum.

offset=idproc*loopEnd; - by shifting the start of the sampling for this particular processor, offset is how each processor knows not to overlap its work with that of other processors, without skipping the needed work. Together with loopEnd, this is sufficient information to specify the partition of work assigned to this processor.

the malloc - performs the memory allocation for each processor. Except for processor 0, the allocation is just large enough to hold the results of this processor's partition of work. Processor 0 must create an array big enough to hold the results of its own work and that of every other processor, in order to save the data later.

if (index+offset<HighestIndex) - limits the summation to only the indices we want in the sum.

MPI_COMM_WORLD - MPI defines communicator "worlds", or communicators, each of which specifies a set of processors that can communicate with one another. At initialization, one communicator, MPI_COMM_WORLD, covers all the processors in the system. Other MPI calls can define arbitrary subsets of MPI_COMM_WORLD, making it possible to confine a code to a particular processor subset just by passing it the appropriate communicator. In simple cases such as this one, MPI_COMM_WORLD is sufficient.

MPI_Gather - fills bigArray of processor 0 with the data in myArray from each of the processors, in preparation for saving into one data file. myArray, loopEnd, and MPI_DOUBLE specify the first element, size, and data type of the source array. The 0 specifies where the data is to be collected. This call can be considered a "convenience" routine that simplifies the process of "gathering" regularly organized data from other processors.

MPI_Reduce - computes the sum, specified by MPI_SUM, of the mySum variable on each processor and collects the result at processor 0 (here into a separate globalSum variable, because MPI does not allow the send and receive buffers of MPI_Reduce to overlap). This call can sum an entire array of values, so mySum is passed as an array of length one. When supplied MPI_MAX or MPI_MIN, it can instead compare values across processors and retain the maximum or minimum.

ppinit - performs initialization of the parallel computing environment. Part of the boilerplate of MPI codes, this routine calls:
MPI_Init - performs the actual initialization of MPI; it returns an error code, with zero meaning no error.
MPI_Comm_size - accesses the processor count of the parallel system.
MPI_Comm_rank - accesses the identification number of this particular processor.

ppexit - terminates the parallel computing environment by calling MPI_Finalize. Also part of the boilerplate of MPI codes, a call to MPI_Finalize is needed to properly clean up and close the connections between codes running on other processors and release control.

__MWERKS__ - these #ifdef blocks merely compensate for a peculiarity of the Metrowerks CodeWarrior compiler; they allow the executables on the other processors to terminate automatically. If you are using another compiler, you don't need to worry about this code.

Note that kernelroutine did not need to be altered at all. That reflects the independence of the work performed by this routine. The code that organizes and coordinates the work between processors is in computeloop, while main's changes merely perform the boilerplate work of setting up and tearing down the parallel environment, plus an if statement so that only processor 0 writes the output file.

Something to note is that this code does not explicitly call MPI_Send or MPI_Recv or any other elemental communication calls; instead it calls MPI_Gather and MPI_Reduce. There isn't anything wrong with using elemental rather than collective calls, if either will do the job.
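As a quick illustration of the MPI_MAX variant of MPI_Reduce mentioned in the list above, here is a minimal self-contained sketch, not part of the original article: each process contributes one value (its own rank, as a stand-in for any per-processor quantity) and processor 0 receives the largest.

--------------------------------------------------------------------------------
#include <stdio.h>
#include "mpi.h"

int main(int argc, char *argv[])
{
    int idproc, nproc;
    double mine, largest;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &idproc);
    MPI_Comm_size(MPI_COMM_WORLD, &nproc);

    mine = (double)idproc;            /* stand-in for a per-processor value */
    MPI_Reduce(&mine, &largest, 1, MPI_DOUBLE, MPI_MAX,
               0, MPI_COMM_WORLD);    /* result lands on proc 0 only */

    if (!idproc)
        printf("largest value across %d processors: %f\n", nproc, largest);

    MPI_Finalize();
    return 0;
}
--------------------------------------------------------------------------------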
In fact, the MPI_Gather could just as easily be replaced with the following:

Listing 3 - an alternative to the above MPI_Gather call
--------------------------------------------------------------------------------
if (idproc)
    MPI_Send(myArray, loopEnd, MPI_DOUBLE, 0, idproc, MPI_COMM_WORLD);
else {
    long otherProc;
    MPI_Status status;
    for (index = 0; index < loopEnd; index++)
        bigArray[index] = myArray[index];
    for (otherProc = 1; otherProc < nproc; otherProc++)
        MPI_Recv(&bigArray[otherProc*loopEnd], loopEnd, MPI_DOUBLE,
                 otherProc, otherProc, MPI_COMM_WORLD, &status);
}
--------------------------------------------------------------------------------

References

Using MPI by William Gropp, Ewing Lusk, and Anthony Skjellum is a good reference for the MPI library. In addition, many of the techniques and much of the style of the above parallel code were adopted from Dr. Viktor K. Decyk. He expresses these ideas in a few references, including:

V. K. Decyk, "How to Write (Nearly) Portable Fortran Programs for Parallel Computers", Computers in Physics, 7, p. 418 (1993).
V. K. Decyk, "Skeleton PIC Codes for Parallel Computers", Computer Physics Communications, 87, p. 87 (1995).

You're welcome to read more about parallel computing via the Tutorials page.

Sample code: adder.c, adder.f90, paralleladder.c, paralleladder.f90

Source: http://daugerresearch.com/vault/paralleladder.shtml
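If you would like to try the listings, the build-and-run steps below are a hedged sketch assuming a typical MPI installation such as Open MPI or MPICH; they are not commands given in the original article:

mpicc paralleladder.c -o paralleladder
mpirun -np 4 ./paralleladder

With 4 processes, each prints its own rank line, while only processor 0 writes the output file and prints the sum.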
[Discussion] On setting the assessment questions for the crypto group
Wa~~ha~~ha~~ha~~ No problem, take your time, there's no rush. When you have a moment, I'd like to ask arab about applying for a patent (in mainland China). I want to apply for a patent on my magic square + stream cipher idea.

[Share] A study based on the frequency characteristics of the RSA encryption algorithm
That's roughly the idea. What you face here is a discrete logarithm problem, so when p is a large prime number it is very hard.
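To make that asymmetry concrete, here is a small sketch with toy parameters chosen for illustration (p, g, and x below are my assumptions, not values from the post): computing y = g^x mod p takes only about log2(x) squarings with square-and-multiply, while recovering x from y by exhaustive search takes on the order of p steps, which is hopeless when p is a large prime.

--------------------------------------------------------------------------------
#include <stdio.h>

/* square-and-multiply: computes b^e mod p in O(log e) multiplications */
static unsigned long long powmod(unsigned long long b, unsigned long long e,
                                 unsigned long long p)
{
    unsigned long long r = 1;
    b %= p;
    while (e) {
        if (e & 1) r = r * b % p;
        b = b * b % p;
        e >>= 1;
    }
    return r;
}

/* exhaustive-search discrete log: returns the smallest x with g^x = y (mod p),
   or -1 if none exists; this is the O(p) direction that becomes infeasible
   for a large prime p */
static long long dlog_bruteforce(unsigned long long g, unsigned long long y,
                                 unsigned long long p)
{
    unsigned long long acc = 1, x;
    for (x = 0; x < p; x++) {
        if (acc == y) return (long long)x;
        acc = acc * g % p;
    }
    return -1;
}

int main(void)
{
    /* toy parameters, small enough that even brute force finishes */
    unsigned long long p = 1000003, g = 2, x = 918273;
    unsigned long long y = powmod(g, x, p);   /* fast: ~20 squarings */

    printf("y = g^x mod p = %llu\n", y);
    /* the recovered exponent equals x whenever g generates the group;
       in general it is the smallest exponent mapping to y */
    printf("recovered exponent = %lld\n", dlog_bruteforce(g, y, p));
    return 0;
}
--------------------------------------------------------------------------------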