Minutes of Weekly Meeting, 2008-01-07


Brad Van Treuren
Heiko Ehrenberg
Ian McIntosh
Peter Horwood
Adam Ley
Carl Walker
Guoqing Li
Jim Webster

Email Proxy on the Structural Test Use Case Discussion:
Carl Nielsen
Anthony Sparks
Guoqing Li

Meeting was called to order at 8:22am EST

1. Roll Call (See list above)

2. Review of meeting minutes for 12/17/2007

Approved as is (moved by Ian, second by Heiko)

3. Review old action items

  • Adam proposed we cover the following at the next meeting:
    • Establish consensus on goals and constraints
    • What are we trying to achieve?
    • What restrictions are we faced with?
  • Establish whether TRST needs to be addressed as requirements in the ATCA specification if it is not going to be managed globally (All)
  • Provide feedback of more use cases not yet identified to Brad (All)
  • Review tables (Goals vs. use case matrix) on slides 38-41; (All)
  • Register on new SJTAG web site (http://www.mcintoshuk.plus.com/sjtag/) (All)
  • All need to check and add any missing documents to the site (All)
  • Respond to Brad and Ian with suggestions for static web site structure (Brad suggests we model the site after an existing IEEE web site to ease migration of tooling later) (All)
  • Look at proposed scope and purpose from ITC 2006 presentation (attached slides) and propose scope and purpose for ATCA activity group (All)
  • Look at use cases and capture alternatives used to perform similar functions to better capture value add for SJTAG (All)
  • Contact Guoqing regarding alternate meeting time process (Brad) [Done]
  • Set up Use Case categories for forum discussions (Ian) [Done]
  • Volunteers needed for Use Case Forum ownership (All) [Only Fault Injection, Structural Test and POST covered]
  • Send Ian list of volunteers for Use Case champions (Brad) [FI-Ian, ST-Heiko, POST-Brad so far. Need more!]
  • Continue Fault Injection/Insertion discussion on SJTAG Forum page (All)

4. Overview of SJTAG email reflector (Carl Walker)

  • Established as a Cisco external reflector
  • Mail may be sent from any domain
  • A test message was sent on the morning of Jan 7, 2008 to all past SJTAG meeting participants
  • [Editorial note: The email address of the reflector has been removed from the minutes for security, but can be found in the private section of the forums under "Working Group Tools"]
  • Carl will distribute the list of members of this reflector and requests updates if needed.

5. Discussion Topics

  1. SJTAG Value Proposition - Structural Testing [We may need more than one meeting on this subject!]
    • Brad: Since Heiko is the champion of this use case, I turn the meeting over to him for the discussion.
    • Heiko:
      • "Structural Test" is arguably the most common and most mature use of the JTAG interface;
      • "Structural Test" refers to the verification of interconnects between digital I/O pins (in the case of IEEE 1149.1) and/or analog or mixed-signal I/O pins (in the case of IEEE 1149.4) on a printed circuit board (PCB) or within a system;
      • purpose: detection and diagnosis of DC-type faults, such as stuck-at-0/1, shorted signals, or open pins;
      • ATPG tools are widely available for the generation of test vectors (between Boundary Scan I/O and even including non-Boundary Scan devices);
      • possibility to use Boundary Scan access for mixed-signal cluster tests (e.g. DAC/ADC cluster test);
      • dynamic test: IEEE 1149.6 (verification (dynamic) of AC-coupled/differential networks)
      • major benefits of using IEEE 1149.x / Boundary Scan for structural test:
      • elimination of the need for external test access (beyond the JTAG test bus signals required to transmit the test pattern); this allows structural tests to be applied to boards/modules plugged into a system chassis, and even verification of connections between system modules, provided a suitable test bus infrastructure is implemented (it also enables execution of structural tests during HALT/HASS testing).
      • System level test/testability
        • board to board test: DFT ...
        • board-only: ensure no contention
      • I am a tool vendor and not a direct user. Usually, our customers identify issues for testing that our tools need to provide.
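The stuck-at and short (bridging) fault detection Heiko describes can be sketched with a simple counting-sequence interconnect test, a common ATPG approach. This is a minimal illustration only: the net names, code widths, and fault classification below are invented for the example, not taken from any vendor's tool.

```python
def counting_patterns(nets):
    """Assign each net a unique binary code, driven bit-by-bit across
    successive test vectors. The width is chosen so that neither the
    all-zeros nor the all-ones code is used, since those would alias
    stuck-at-0 / stuck-at-1 on the faulty net itself."""
    width = (len(nets) + 1).bit_length()
    return {net: [(i + 1) >> b & 1 for b in range(width)]
            for i, net in enumerate(nets)}

def diagnose(expected, captured):
    """Compare captured codes against driven codes and classify faults."""
    faults = []
    for net, code in expected.items():
        got = captured[net]
        if got == code:
            continue  # net read back its own code: no DC fault detected
        if all(b == 0 for b in got):
            faults.append((net, "stuck-at-0"))
        elif all(b == 1 for b in got):
            faults.append((net, "stuck-at-1"))
        else:
            # A captured code equal to another net's code suggests a bridge.
            shorted = [n for n, c in expected.items() if c == got and n != net]
            faults.append((net, "possible short with " + ",".join(shorted)))
    return faults
```

For example, with nets A, B, and C, a board where B is stuck low and C is bridged to A would report `("B", "stuck-at-0")` and `("C", "possible short with A")`.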
    • Brad: constraints become important in a system (board edge connector may bring signals to the system backplane that need to be constrained);
      • board level manufacturing tests don't have to be concerned with signals propagated to the edge
      • system level, signals propagated to edge can result in hazards if not constrained
    • AS- I agree; one of the first DFT recommendations for cards to be used in a system should be the ability to place drivers going off-board in high-Z, allowing for individual board test without disturbing the rest of the system or causing destructive contention.
      However, ideally you will be able to perform board-to-board tests while in-system. Understandably, this can be difficult due to cards from multiple vendors and variable configurations/combinations of cards.
    • Ian: signals at the edge can actually provide a benefit as well!
      • they may provide necessary stimulus required for the test
    • Ian: the system may provide resources that are beneficial for board level test;
    • Jim: system clock can sometimes be problematic, if it cannot be "turned off" as part of the test;
      • system clocks tend to be real-time speed in nature
      • typically we disable clocks at manufacturing board test but might not get away with that at the system level (real-time operating mode)
    • AS- If the ability to disable clocks is available for manufacturing board-level test, then often this ability can be reused at system test. Not being intimately familiar with ATCA, is the system clock typically buffered or conditioned on the card with some type of clock distribution device? If so, another DFT recommendation to provide means to disable this device at the card level should be defined.
    • Jim: Speed of clocks could cause you problems at the system level.
      • Could miss SAMPLE cases when monitoring signals.
    • Heiko: Do we count SAMPLE as structural test or as another use case?
    • Ian: For structural test, system clocks must not be a factor.
    • Jim: Then we need to make sure that requirements are designed into the system design to support this.
    • Brad: I am hearing that this is a need as a DFT Guideline required for system clocks.
    • Heiko: Also, DFT for memories is required for system clocks.
      • memory cluster test is also impacted by clock control;
    • Brad: Are these different DFT guidelines than what are required for manufacturing test?
      [No response]
    • AS- The ability to control clocks for memory test should also be carried over from board test. It seems the DFT rules for card level will resolve many of the concerns at system level. The task is ensuring proper card level DFT.
    • Jim: Boundary Scan based memory test may not be a good thing in a system environment;
      • At the system level there are fault logs, configurations, and history that boundary-scan must not touch.
      • We need to be cautious about which areas of memory get tested
    • AS- Perhaps a matter of semantics; when I hear memory test, I first think of volatile memory such as SRAM, SDRAM, etc.
    • Jim: Generally, manufacturing test uses unprogrammed devices for testing, whereas at system level, programmed FPGAs are required.
      • In the field, FPGAs are also reprogrammed in-system.
      • How do you know the right version that must be reprogrammed into an FPGA if you erase it in the system for performing testing?
    • Jim: FPGA reconfiguration at the system level may also require an updated BSDL file and updated structural test vectors
    • AS- This poses an interesting challenge. It is becoming increasingly common that FPGAs must be used post-configuration due to logic levels and signaling techniques. If there is a change in FPGA code, there is the possibility (although unlikely) that the boundary register will change (output cells may become internal, etc.) and new test vectors will be required. In that case, remote access is necessary to load the new vectors into local memory. That said, a change in the FPGA load will not usually change the behavior of the I/O.
    • Brad: FPGAs are volatile devices that need to be programmed on power-up or at least during board reset operations.
    • Brad: Since a boundary-scan structural test corrupts the internal core logic state of most devices, a board must be reset before going back into service following a structural test application.
    • Brad: We perform a reset operation on the board to ensure sanity following boundary-scan testing. This action forces the latest version of the FPGA programming to be reloaded into the FPGA whether it was erased for testing or not. This is standard recovery practice for structural testing at the system level.
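The recovery sequence Brad describes can be sketched as follows. The `Board` class and all of its method names are hypothetical placeholders standing in for a real test-management interface, which the minutes do not specify:

```python
class Board:
    """Minimal stand-in for a field-replaceable card; records actions taken."""
    def __init__(self):
        self.log = []
    def take_out_of_service(self): self.log.append("out-of-service")
    def apply_boundary_scan_test(self):
        self.log.append("bscan-test")
        return "PASS"                       # placeholder result
    def reset(self): self.log.append("reset")
    def reload_fpga_image(self, version): self.log.append("reload:" + version)
    def latest_image_version(self): return "v2.1"  # invented version string
    def return_to_service(self): self.log.append("in-service")

def run_structural_test_with_recovery(board):
    """Apply an intrusive boundary-scan test, then always recover."""
    board.take_out_of_service()             # quiesce before the intrusive test
    result = board.apply_boundary_scan_test()
    # The test corrupts core logic state, so always reset and reload the
    # latest FPGA image, whether or not the FPGA was erased for testing:
    board.reset()
    board.reload_fpga_image(board.latest_image_version())
    board.return_to_service()
    return result
```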
    • Heiko: What are the biggest benefits you see in structural test at system-level?
    • AS- Performing the same structural test at the repair depot as was executed in a system helps alleviate the No Fault Found syndrome commonly encountered.
    • Brad: deterministic testing; we know what the test coverage is; functional test is not as deterministic in defining test coverage;
    • Brad: ATPG keeps development cost low; but you'll also need some functional test, because Boundary Scan is not an at-speed test
    • Heiko: does ATCA have many I/O signals going between cards (slots)?
    • Brad: most of those signals are SERDES type components, few with 1149.6 so far
    • AS- I envision 1149.6 becoming much more prevalent in the near future. Thus my concern of not having a multi-drop architecture or the ability to control the scan chains of multiple boards at the same time. To use 1149.6 to test signals between cards, the ability to simultaneously Update and Capture the 1149.6 devices on the involved cards is required.
    • Brad: Concerning Jim's point (re: FPGA programming requirements), FPGA interfaces HAVE to be programmed to enable structural testing in many of the newer designs because the FPGA must be configured via the programming in order to match signal levels and terminations required for the designed circuit operation (voltage levels, drift, pre-emphasis, etc.)
    • Guoqing: sometimes use other methods than Boundary Scan to configure FPGAs;
      • Many times the programming resides in system FLASH memory instead of configuration PROMS
    • Brad: This is especially true for the newer FPGAs that are so large that multiple configuration PROMS are required to store the image.
    • Brad: Many of the configuration PROM architectures do not support multiple images, which would allow version roll-back if a new image is found to have a problem after it is installed. So system FLASH becomes the preferred route in those cases.
    • Guoqing: since ATCA is an open standard, the same card can have different functions and the same card can be available from different vendors;
    • Brad: In response to Heiko's question, boundary-scan tests are automatically generated for the most part and functional tests are manually generated. However, boundary-scan testing is unable to fully replace functional testing since boundary-scan testing is not an at-speed test.
    • Brad: Guoqing's comment raises another issue: we don't know the run-time system configuration at test development time; makes test development more complicated and may even prohibit system level connectivity test; test/vector management becomes really important; I gave a paper at ITC 2005 on the subject of vector management for system test to point out this important issue.
    • Brad: What complicates things more is the use of AMCs in designs where the carrier board is the same but the I/O modules to the outside world differ, or the programming of the modules supports different filtering operations in processing the messages passing through the board. Thus, the same core design may be used in several different ways and specialized to a specific application by its AMC modules at the I/O points.
    • Ian: We have similar problems with our systems. We have reusable designs with different programming targeted for different systems. 90% of the system is the same; 10% differs at the I/O points depending on what type of aircraft it is installed in. Each system also runs very different software on the same hardware. In fact, part numbers for the boards differ because different customers required different numbers for contractual reasons, as well as for differences in the software installed on the board.
    • Brad: This is again a test management problem.
    • Ian: We have a cross reference matrix for what test is for what board.
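The cross-reference matrix Ian mentions can be sketched as a simple lookup keyed by board part number and FPGA load version. The part numbers, version strings, and vector file paths below are invented purely for illustration:

```python
# Hypothetical cross-reference matrix: (part number, FPGA load) -> vector set.
TEST_MATRIX = {
    ("PN-1001-A", "fpga-1.0"): "vectors/pn1001a_v10.svf",
    ("PN-1001-B", "fpga-1.0"): "vectors/pn1001b_v10.svf",
    ("PN-1001-B", "fpga-2.0"): "vectors/pn1001b_v20.svf",
}

def lookup_vectors(part_number, fpga_version):
    """Return the qualified vector set for a board/load pair, or raise
    if no structural test has been qualified for that combination."""
    try:
        return TEST_MATRIX[(part_number, fpga_version)]
    except KeyError:
        raise LookupError(
            "no structural test qualified for "
            + part_number + " / " + fpga_version)
```

In a multi-vendor ATCA environment this lookup would be driven by the board identity read at run time (as in Guoqing's "Plug-n-Play" list below: identify board, identify supplier, locate vectors).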
    • Guoqing: I would like to see more of an ATCA type of "Plug-n-Play"
      • Identify board
      • Identify board supplier
      • Locate vectors
      • Where and how do we store test results?
      • What is the level of diagnostics required?
      • Vendors may not want to expose testability/diagnosability details in a multi-vendor environment; boards may be designed by multiple vendors. This is especially true with the use of AMC boards.
    • Guoqing: Where and how do we store test results? Running the test is one thing; diagnostics is more complicated, especially if you have modules from different vendors and they don't give you all the details you'd need for fault location/diagnostics;
    • AS- The collection of test results and diagnosis of a failure is an achievable task. The raw test results can be retrieved and formatted for a boundary-scan vendor's diagnostic S/W. The problem is that the test vectors are also required to perform the diagnostics. This could be problematic if individual board-level tests were developed with different vendors' boundary-scan tools.
    • Brad: We are out of time and need to continue this discussion next week.
    • Brad: The critical players who were not present in today's call have indicated they will post messages via proxy to be added to the meeting minutes. These include Anthony Sparks and Carl Nielsen. Guoqing indicated he would like to respond via email with possible additional comments, since his written English is better than his spoken English, and we all agreed to let him converse using the best means possible for him. These will be included with the meeting minutes.
    • By Proxy:
    • Guoqing:
      1. The concept of 'system structural test' should be a more important focus of the discussion. It is different from the board level test people are very familiar with; after all, extra components are brought into the system hardware design, and everyone will be concerned with ROI. I feel conventional technology and application issues from board level test may not be the main topics.
      2. Discussing the SJTAG architecture might be more important than discussing details. Structural test or sampling is only one use case, and it depends on the architecture. I am not sure whether we have mainly been discussing ATCA-based SJTAG so far, or SJTAG in general. ATCA is an open system, unlike proprietary systems, so its SJTAG model, hardware or software, is relatively easier to abstract than others, even though it has its own restrictions. If we knew the model, the various use cases would be much clearer to discuss.
    • Carl Nielsen:
      • FPGAs can play a significant role in system level structural test, beyond the basic capabilities of the boundary register or potential 1149.x structures. Custom IP and designs targeted at testing specific connections on a board or system (at-speed memory test, at-speed SERDES test) can be loaded into the FPGA(s) at test time, and test results retrieved, over the same 1149.1 bus.
      • Fault coverage for tests can potentially be quantified and added to the overall fault coverage analysis/report for boundary scan based structural test versus functional test.
      • Agreed that once a boundary scan based structural test is performed in-system, the test management system needs to "reset" the functional system to a known state. The two need to work hand-in-hand in that respect.
      • FLASH memory seems like a good choice for storing failure information found during in-system or embedded structural test, to be extracted later for diagnostics. FLASH densities are getting larger and the cost is going down. Many systems share FLASH for different purposes (partitioned). A non-volatile place for storage is needed, so as not to lose the information gathered at the time, since reproducing the condition under which a failure occurred may be difficult or impossible.
    • Anthony Sparks:
      • Embedded in the transcript of the meeting using AS- as the key.

6. Next meeting scheduled for 1/14/2008 at 8:15am EST

  • Adam unable to attend 1/14 meeting
  • Heiko was unable to attend a 1/16 alternate meeting time so the meeting was scheduled for 1/14 to accommodate Heiko since he is the champion for the discussion topic. Adam will make follow-up email comments.

7. Review new action items

  • Send additional sub topics to Heiko for the continued Structural Test use case discussion for 1/14/08. (All)
    • Guoqing's list is a very good start
  • We will need to begin writing a white paper for the System JTAG use cases to provide to the ATCA working group (All)
    • Most likely, champions will own their subject section and draft the section with help from others.
    • This paper will be based on the paper Gunnar Carlsson started in 2005.

Meeting adjourned at 9:29am EST

Many thanks to Heiko and Adam for supplying their notes to assist in recording the meeting.

Respectfully submitted,