Ten Daily Electronic Common Sense-Section 121

    What are the four major trends in next-generation programmable logic device hardware?

    The four major development trends of next-generation programmable logic device hardware can be summarized as follows:
    1. The most advanced ASIC process technology will be applied ever more widely to programmable logic devices, with the FPGA as their representative.
    2. More and more high-end FPGA products will contain processor cores such as DSPs or CPUs, so the FPGA will gradually shift from a traditional hardware design approach to a system-level design platform.
    3. FPGAs will include more and more functional hard cores (hard IP cores), converging further with traditional ASICs, and will capture part of the ASIC market more quickly through structured-ASIC technology.
    4. Low-cost FPGAs will keep growing in density while their prices become more reasonable, making them the backbone of FPGA development.
    In short, the four trends are: advanced process technology, processor cores, hard cores and structured ASICs, and low-cost devices.

    Why must a piezoelectric component be used together with diodes?

    Piezoelectric elements exhibit the piezoelectric effect: a voltage is generated when the element is mechanically deformed and again when it recovers from that deformation. Whenever the element is subjected to mechanical shock or deformation, a voltage therefore appears across its two terminals.
    Piezo elements, or assemblies containing them, should be properly secured or shrouded to keep them free from mechanical shock.
    The circuit must also be electrically protected against any voltage appearing across the element, because this voltage may add to the circuit's own voltages or present a back electromotive force (back EMF) to the circuit.
    Two diodes should therefore be provided in conjunction with the piezoelectric element, to prevent voltage summation and to block back EMF.
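    As a rough illustration (not from the original text), the two protective diodes can be modeled as clamps that limit any piezo-generated spike to one diode drop beyond the supply rails. A minimal Python sketch, assuming a 5 V rail and a 0.7 V silicon diode drop:

```python
def clamp_piezo_voltage(v_piezo, v_supply=5.0, v_diode=0.7):
    """Model two clamping diodes: one to the supply rail, one to ground.

    Any piezo-generated spike above v_supply + v_diode is conducted to the
    rail by the upper diode; anything below -v_diode is conducted to ground
    by the lower diode. Voltages in between pass through unchanged.
    """
    upper = v_supply + v_diode   # upper diode starts conducting here
    lower = -v_diode             # lower diode starts conducting here
    return max(lower, min(upper, v_piezo))

# A mechanical shock may generate tens of volts across the element,
# but the protected circuit input only ever sees the clamped range.
spikes = [-30.0, -0.2, 3.3, 48.0]
print([clamp_piezo_voltage(v) for v in spikes])  # [-0.7, -0.2, 3.3, 5.7]
```

    In a real design the diode drop, leakage, and response time depend on the chosen parts; this sketch only shows why the pair of diodes bounds both polarities of the piezo voltage.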

    What are the conditions for implementing direct data exchange communication?

    1. The slave station sends its data to the master station.
    2. The slave station acting as the consumer must be an intelligent slave with a CPU.
    3. All stations participating in direct data exchange must support the direct data exchange communication function.
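    These conditions fit the usual publisher/subscriber pattern, in which consumer slaves listen in on the data the publishing slave returns to the master. The following toy Python model is illustrative only; the class and function names are invented for this sketch:

```python
class Subscriber:
    """Intelligent slave (has a CPU) that consumes published data."""

    def __init__(self, name):
        self.name = name
        self.received = None

    def listen(self, source, data):
        # The subscriber picks the publisher's response off the bus.
        self.received = (source, data)


def publish(publisher_name, data, master_inbox, subscribers):
    """Publisher slave answers the master; DX-capable slaves hear it too."""
    master_inbox.append((publisher_name, data))   # condition 1: data to master
    for sub in subscribers:                       # conditions 2-3: consumers
        sub.listen(publisher_name, data)          # with DX capability listen in


master = []
subs = [Subscriber("drive"), Subscriber("hmi")]
publish("encoder", b"\x01\x02", master, subs)
print(master)            # [('encoder', b'\x01\x02')]
print(subs[0].received)  # ('encoder', b'\x01\x02')
```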

    What is Quartus II?

    Quartus II is Altera's comprehensive PLD development software. It supports many design entry formats, including schematics, VHDL, Verilog HDL, and AHDL (Altera Hardware Description Language).
    Its built-in synthesizer and simulator cover the complete PLD design flow from design entry to hardware configuration. Many versions of Quartus II exist; the free 9.0 version is the one mainly used here.

    What is the relationship between CPSR and SPSR?

    When a specific exception occurs, the current value of CPSR is saved into the SPSR of the corresponding exception mode, and CPSR is then set to that exception mode. When the exception handler returns, CPSR can be restored from the value held in the SPSR.
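    The save/restore relationship can be sketched in Python (a conceptual model only, not real ARM code; the mode names and status fields are simplified):

```python
class ArmCpu:
    """Toy model of ARM exception entry and return for CPSR and SPSR."""

    def __init__(self):
        self.cpsr = {"mode": "usr", "irq_disabled": False}
        # Each exception mode has its own banked SPSR register.
        self.spsr = {"irq": None, "fiq": None, "svc": None,
                     "abt": None, "und": None}

    def take_exception(self, mode):
        # 1. Save the current CPSR into the SPSR of the target exception mode.
        self.spsr[mode] = dict(self.cpsr)
        # 2. Switch CPSR to the exception mode (interrupts typically disabled).
        self.cpsr = {"mode": mode, "irq_disabled": True}

    def exception_return(self):
        # On return, CPSR is restored from the SPSR of the current mode.
        self.cpsr = self.spsr[self.cpsr["mode"]]


cpu = ArmCpu()
cpu.take_exception("irq")
print(cpu.cpsr["mode"])   # irq
cpu.exception_return()
print(cpu.cpsr["mode"])   # usr
```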

    What should a dynamic frame satisfy?

    • A communication cycle may contain zero or more dynamic time slots;
    • The message length in the dynamic part may vary at runtime.

    What basic problem do popular portable consumer electronics, such as mobile phones, portable media players (PMPs), MP3 players, digital cameras, portable video game consoles, and personal navigation assistants (PNAs), face?

    Their functions are becoming richer while their form factors become more compact.
    However, the growth rate of battery energy density lags far behind the power-consumption demands of increasingly complex portable devices.
    Yet users want to run these light, small portable devices for ever longer intervals between charges. The trend toward converged product functions makes this worse: integrating photography, video, personal digital assistant, GPS, and MP3 playback into a single smartphone further exacerbates the problem.

    The difference between non-linear editing and post-production compositing:

    Post-production compositing systems depend on the computer's high-speed, large-scale data-processing capability and storage capacity.
    Since non-linear editing and compositing technologies developed on a common electronic platform, the two have borrowed from and merged into each other, so that many people can no longer tell them apart.
    In fact, the two differ in many respects: user orientation, hardware requirements, the electronic technology applied, and the final result.
    (1) Editing versus compositing. Editing cuts the source material, then rearranges and combines it into a new clip sequence; the source clips include both video and audio.
    The most typical editing workloads are TV series production and news editing. Non-linear editing systems are used mainly for this work, for example the well-known Avid Xpress and Discreet Logic edit systems, and domestic products such as the DY-3000 and Creative 21.
    Compositing, as the name suggests, combines different layers, replacing parts of one with another to make a new picture. It is like retouching a photo in Photoshop, except that the picture here is moving: it is video.
    The techniques used by the two are essentially the same: key signals separate regions by chrominance and luminance, and various kinds of channels are established.
    A channel acts like a pane of transparent glass, making two layers of picture look like one. The most representative compositing products are Quantel's Infinity, Editbox, and Hal, but they are expensive and run on dedicated workstations. In the low-end market, software such as Digital Fusion and After Effects relies on the hardware acceleration of a non-linear editing board to perform compositing. Although less efficient than the high-end products, it is more widely used because of its lower price and large body of plug-in support; some of these plug-ins, such as Ultimatte and 5D, are also used by high-end equipment.
    (2) Trick effects versus special effects. Trick effects usually refer to processing of the video signal itself, such as resizing, rotation, flipping, trimming, and filtering; examples are the wipes and picture-in-picture used in sequence editing.
    Many trick effects originate from the trick switching of conventional equipment. Special effects, by contrast, artificially simulate natural properties. They deal more with the environment in which the video was produced, such as the position of the camera, the weather conditions, and the strength of the light source; reverse tracking and reverse operations, commonly used in special-effects work, recover the original variables and parameters.
    Obviously, special effects are far more complicated than trick effects and depend more heavily on computing power. General non-linear editing devices can therefore run on ordinary PC-compatible machines, while special-effects-oriented compositing products mostly run on SGI platforms or high-end graphics workstations. With computers involved, the traditional workflow has changed into the steps of shooting, compositing, and post-processing.
    At present many products include features of both of the last two steps. For example Jaleo, a Spanish product that runs on the SGI platform and is mainly a non-linear editor, integrates compositing technology, while Quantel's Editbox and Hal also offer some editing functions.
    Trick effects are evolving toward special effects, and non-linear editing and compositing keep merging; it is computer technology that makes their differences ever smaller. Even so, compositing equipment cannot simply be replaced by ordinary non-linear editing equipment. Editing products should develop in a more popular, simpler, lower-cost direction, while compositing products demand more professional users and target not long or simple programs but high-quality film and TV segments of a few seconds to a dozen seconds, such as title packaging and commercial production. In short, the hallmarks of a non-linear editing system are cutting and transitions, while the hallmarks of a video compositing system are key-based layering and special-effect processing.
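    The "transparent glass" analogy of a key channel corresponds to ordinary alpha compositing: the key (alpha) signal decides, pixel by pixel, how much of the foreground layer shows over the background. A minimal pure-Python sketch (illustrative only; grayscale pixel values in 0..255):

```python
def composite(foreground, background, key):
    """Alpha-composite two layers using a key (alpha) channel.

    foreground, background: lists of pixel values (0..255)
    key: list of alpha values in 0.0..1.0; 1.0 means the foreground
    fully covers the background, 0.0 lets the background show through.
    """
    return [
        int(a * fg + (1.0 - a) * bg + 0.5)  # +0.5 rounds to nearest integer
        for fg, bg, a in zip(foreground, background, key)
    ]


fg = [255, 255, 255, 255]      # white title layer
bg = [0, 50, 100, 150]         # background video frame
alpha = [1.0, 0.5, 0.5, 0.0]   # key signal: opaque, half, half, transparent
print(composite(fg, bg, alpha))  # [255, 153, 178, 150]
```

    Real compositors apply the same blend per color channel, often with separate chroma and luma keys, which is exactly the channel separation described above.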

    On a wake-up request, the transceiver drives the INH pin high to activate the external supply-voltage regulator. Which events set pin INH high?

    1. Power-on of VBAT (cold start).
    2. A rising or falling edge on pin WAKE.
    3. Pin EN or pin STB is low and a message frame has a dominant phase lasting at least the maximum of tCANH or tCANL.
    4. Pin STB goes high while VCC is activated. If VCC is present, the wake-up request can be read at the ERR or RxD output; the external microcontroller can then switch the transceiver to normal operating mode via pins STB and EN.
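    The first three conditions amount to a logical OR of wake events. A hypothetical Python sketch of that logic (the pin and timing names follow the text above, not any specific transceiver datasheet):

```python
def inh_goes_high(vbat_power_on, wake_edge, en, stb,
                  dominant_time, t_canh_max, t_canl_max):
    """Return True if any of the listed wake-up events sets pin INH high.

    vbat_power_on: VBAT has just powered up (cold start)
    wake_edge:     a rising or falling edge was seen on pin WAKE
    en, stb:       logic levels of pins EN and STB (True = high)
    dominant_time: duration of the dominant phase in the bus frame
    t_canh_max, t_canl_max: maximum values of tCANH and tCANL
    """
    # Condition 3: EN or STB low, plus a sufficiently long dominant phase.
    bus_wake = ((not en or not stb)
                and dominant_time >= max(t_canh_max, t_canl_max))
    return vbat_power_on or wake_edge or bus_wake


# Example: a dominant bus phase while STB is low triggers the wake-up.
print(inh_goes_high(False, False, en=True, stb=False,
                    dominant_time=5e-6,
                    t_canh_max=3e-6, t_canl_max=4e-6))  # True
```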

    What are the main technical forms of MEMS?

    • Sensing MEMS technology
    • Biological MEMS technology
    • Optical MEMS technology
    • RF MEMS technology

