SDMA in Embedded Systems

Alansi, M., et al., Wireless Personal Communications 86 (published online 31 July; issue date February).

Abstract: Robust multi-user detection (MUD) methods based on space division multiple access (SDMA) techniques are essential to efficiently exploit the electromagnetic spectrum.

An Access Point wireless device to transmit and receive RF signals using space-time channels in a wireless network, comprising: a processor to process digital signals converted to and from the RF signals. The wireless device of claim 51, wherein the transmission of the RF signals is from one or more antennas. The wireless device of claim 51, wherein the transmission and reception of the RF signals use Spatial-Division Multiple-Access (SDMA) to allow multiple independent transmissions between the wireless device and the selected mobile stations.

The wireless device of claim 54, wherein the scheduler is configured to schedule variable length packets for transmission based on transmission times to simultaneously transmit on each of M spatial channels to mobile stations operable in the wireless network by filling the M spatial channels using data packets buffered for all stations, the scheduler being configured to buffer for a number of stations greater than the number M of the spatial channels.

The wireless device of claim 55, wherein M is a constant greater than zero and less than or equal to the number of antennas at the base station, and wherein the apparatus is configured to send multiple schedules in a protected time interval to the mobile stations. An integrated circuit (IC) operable in an Access Point wireless device to transmit and receive RF signals in a wireless network using space-time channels to selected mobile stations, the IC comprising: a fragmentor to fill the space-time channels with segmented data packets to be transmitted to the selected mobile stations.

The IC of claim 61, wherein the IC is configured to be operable with a scheduler to schedule data packets that may have differing lengths for transmission to the selected mobile stations and a Radio Frequency (RF) transceiver to receive and transmit the RF signals using the space-time channels.

The IC of claim 62, wherein the transmission and reception of the RF signals use Spatial-Division Multiple-Access (SDMA) to allow multiple independent transmissions between the wireless device and the selected mobile stations. The IC of claim 64, wherein the scheduler is configured to schedule variable length packets for transmission based on transmission times to simultaneously transmit on each of M spatial channels to mobile stations operable in the wireless network by filling the M spatial channels using data packets buffered for all stations, the scheduler being configured to buffer for a number of stations greater than the number M of the spatial channels.

The IC of claim 65, wherein M is a constant greater than zero and less than or equal to the number of antennas at the base station, and wherein the apparatus is configured to send multiple schedules in a protected time interval to the mobile stations. The method of claim 67, further comprising transmitting the RF signals from one or more antennas. The method of claim 68, further comprising using Spatial-Division Multiple-Access (SDMA) to allow multiple independent transmissions between the wireless device and the selected mobile stations.

The method of claim 70, further comprising configuring a scheduler to schedule variable length packets for transmission based on transmission times to simultaneously transmit on each of M spatial channels to mobile stations operable in the wireless network by filling the M spatial channels using data packets buffered for all stations, the scheduler being configured to buffer for a number of stations greater than the number M of the spatial channels.

The method of claim 71, wherein M is a constant greater than zero and less than or equal to the number of antennas at the wireless device, and wherein the apparatus is configured to send multiple schedules in a protected time interval to the mobile stations. An apparatus, comprising: logic, at least a portion of which is in hardware, the logic to fill the space-time channels with fragmented data packets to be transmitted to selected mobile stations.

The at least one non-transitory computer-readable storage medium of claim 75, wherein the space-time channels are completely filled with fragmented data packets.


Data Memory Methodology

The first step of our data memory methodology consists of three types of code transformations; the second step is to map the transformed algorithm to the physical memories.

Employing data-reuse transformations [2], we determine certain data sets that are heavily re-used within a short period of time. The re-used data can be stored in smaller on-chip memories, which require less power per access.

In this way, redundant accesses to large off-chip memories are transferred on chip, reducing the power consumption related to data transfers. Of course, the data-reuse exploration has to decide which data sets are appropriate to be placed in a separate memory.

Otherwise, we would need a different memory for each data set, resulting in a significant area penalty. Here, we applied 21 data-reuse transformations [7] to all target architecture models for the three ME kernels.

Another type of transformation applied was performance optimization, such as common sub-expression elimination. Of course, this kind of transformation has an impact on the instruction power budget.

The tradeoff in this case was between the increase in instructions due to the extra assignments on the one hand, and the decrease in instructions due to sub-expression elimination on the other. Sub-expressions are useful to eliminate when they have to be executed in a great number of loop iterations; when the number of iterations is small, the overhead produced by the assignment outweighs the benefits of the elimination. (Figure: Target Architecture Model.)

The third type of transformations are the instruction-level transformations, which are processor dependent. Indeed, a program written in a high-level language, e.g. C, can be re-written at this level; for example, we have found that the multiply operation in the ARM processor could be substituted with summation operations.

We first estimate the instruction power without the use of cache, together with the optimized data memory power consumption. Then, we perform instruction power optimization for the data-reuse transformations. Here we provide the derived results corresponding to the minimum and maximum instruction power consumption.

For all the data-memory architecture models, a shared background (probably off-chip) memory module is assumed. Thus, in all cases special care must be taken during the scheduling of accesses to this memory, to avoid violating data dependencies and to keep the number of memory ports as small as possible in order to keep the power consumption low. In this way all memory modules of the memory hierarchy are single ported, but an area overhead is possible in cases where a large amount of common data has to be processed by the N processors. The second data-memory architecture model assumes separate data memory levels for the N processors. Finally, SDMA is a combination of the above two models, where the data common to the N processors are placed in a shared memory hierarchy, while a separate data memory hierarchy also exists for each processor.

For each target architecture we perform three pairs of measurements, with and without cache memory: (i) the original kernel, (ii) the transformed kernel using the data-reuse transformation that corresponds to the MIN instruction power consumption, and (iii) the transformed kernel using the data-reuse transformation that corresponds to the MAX instruction power consumption. The data-reuse transformations significantly affect the instruction memory power consumption. More specifically, FS exhibits the most computational complexity and is dominated by instruction power (Fig.). The corresponding cache analysis shows that the power savings hold for almost all configurations, and the remaining two ME kernels have similar instruction power savings. In data-dominated applications, the higher the complexity, the larger the instruction power savings, as for instance in the FS kernel. On the other hand, the HS and PHODS kernels have similar complexity, and their corresponding power analysis shows similar power consumption and, eventually, similar cache optimization results.

These optimal values came after a power exploration of the cache memory. The final conclusions after the exhaustive exploration, both in data and instruction memory, are: (i) for the FS kernel the most power efficient is the SDMA model. Consequently, the power-optimized solutions depend on the chosen application and the assumed target architecture model.

3. Instruction Cache Methodology

At this point of the methodology we have reached some (in our case 21) transformed versions of our algorithm. With this in mind, we make measurements in order to evaluate the performance of the algorithms. In this way we create a pool of possible solutions for further study. From this pool we select some candidates; this is a way to reduce the huge search space and helps us reach a near-optimal solution more quickly. After having selected some of the candidate algorithms, we then run a simulation to obtain their exact instruction trace. This trace is then fed to the DineroIV cache simulator [10] for various cache parameters.


