May 14 18:10:19.925463 kernel: Linux version 6.12.20-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Wed May 14 16:37:27 -00 2025
May 14 18:10:19.925491 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=adf4ab3cd3fc72d424aa1ba920dfa0e67212fa35eadab2c698966b09b9e294b0
May 14 18:10:19.925500 kernel: BIOS-provided physical RAM map:
May 14 18:10:19.925507 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
May 14 18:10:19.925514 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved
May 14 18:10:19.925521 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable
May 14 18:10:19.925533 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ffc4fff] reserved
May 14 18:10:19.925540 kernel: BIOS-e820: [mem 0x000000003ffc5000-0x000000003ffd1fff] usable
May 14 18:10:19.925547 kernel: BIOS-e820: [mem 0x000000003ffd2000-0x000000003fffafff] ACPI data
May 14 18:10:19.925554 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS
May 14 18:10:19.925561 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable
May 14 18:10:19.925567 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable
May 14 18:10:19.925573 kernel: printk: legacy bootconsole [earlyser0] enabled
May 14 18:10:19.925580 kernel: NX (Execute Disable) protection: active
May 14 18:10:19.925591 kernel: APIC: Static calls initialized
May 14 18:10:19.925599 kernel: efi: EFI v2.7 by Microsoft
May 14 18:10:19.925606 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3ebb9a98 RNG=0x3ffd2018
May 14 18:10:19.925612 kernel: random: crng init done
May 14 18:10:19.925619 kernel: secureboot: Secure boot disabled
May 14 18:10:19.925626 kernel: SMBIOS 3.1.0 present.
May 14 18:10:19.925633 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 11/21/2024
May 14 18:10:19.925641 kernel: DMI: Memory slots populated: 2/2
May 14 18:10:19.925650 kernel: Hypervisor detected: Microsoft Hyper-V
May 14 18:10:19.925658 kernel: Hyper-V: privilege flags low 0xae7f, high 0x3b8030, hints 0x9e4e24, misc 0xe0bed7b2
May 14 18:10:19.925665 kernel: Hyper-V: Nested features: 0x3e0101
May 14 18:10:19.925674 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40
May 14 18:10:19.925681 kernel: Hyper-V: Using hypercall for remote TLB flush
May 14 18:10:19.925688 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
May 14 18:10:19.925695 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
May 14 18:10:19.925704 kernel: tsc: Detected 2299.999 MHz processor
May 14 18:10:19.925711 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
May 14 18:10:19.925718 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
May 14 18:10:19.925725 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x10000000000
May 14 18:10:19.925737 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
May 14 18:10:19.925744 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
May 14 18:10:19.925752 kernel: e820: update [mem 0x48000000-0xffffffff] usable ==> reserved
May 14 18:10:19.925758 kernel: last_pfn = 0x40000 max_arch_pfn = 0x10000000000
May 14 18:10:19.925765 kernel: Using GB pages for direct mapping
May 14 18:10:19.925773 kernel: ACPI: Early table checksum verification disabled
May 14 18:10:19.925780 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL)
May 14 18:10:19.925791 kernel: ACPI: XSDT 0x000000003FFF90E8 00005C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
May 14 18:10:19.925802 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
May 14 18:10:19.925810 kernel: ACPI: DSDT 0x000000003FFD6000 01E11C (v02 MSFTVM DSDT01 00000001 INTL 20230628)
May 14 18:10:19.925818 kernel: ACPI: FACS 0x000000003FFFE000 000040
May 14 18:10:19.925827 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
May 14 18:10:19.925834 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
May 14 18:10:19.925842 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
May 14 18:10:19.925848 kernel: ACPI: APIC 0x000000003FFD5000 000052 (v05 HVLITE HVLITETB 00000000 MSHV 00000000)
May 14 18:10:19.925855 kernel: ACPI: SRAT 0x000000003FFD4000 0000A0 (v03 HVLITE HVLITETB 00000000 MSHV 00000000)
May 14 18:10:19.925862 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
May 14 18:10:19.925869 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113]
May 14 18:10:19.925876 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff411b]
May 14 18:10:19.925883 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f]
May 14 18:10:19.925889 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063]
May 14 18:10:19.925896 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f]
May 14 18:10:19.925905 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027]
May 14 18:10:19.925911 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5051]
May 14 18:10:19.925918 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd409f]
May 14 18:10:19.925925 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037]
May 14 18:10:19.925932 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff]
May 14 18:10:19.925939 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff]
May 14 18:10:19.925946 kernel: NUMA: Node 0 [mem 0x00001000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00001000-0x2bfffffff]
May 14 18:10:19.925953 kernel: NODE_DATA(0) allocated [mem 0x2bfff8dc0-0x2bfffffff]
May 14 18:10:19.925959 kernel: Zone ranges:
May 14 18:10:19.925967 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
May 14 18:10:19.925973 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
May 14 18:10:19.925980 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff]
May 14 18:10:19.925986 kernel: Device empty
May 14 18:10:19.925993 kernel: Movable zone start for each node
May 14 18:10:19.926000 kernel: Early memory node ranges
May 14 18:10:19.926006 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
May 14 18:10:19.926013 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff]
May 14 18:10:19.926019 kernel: node 0: [mem 0x000000003ffc5000-0x000000003ffd1fff]
May 14 18:10:19.926027 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff]
May 14 18:10:19.926034 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff]
May 14 18:10:19.926041 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff]
May 14 18:10:19.926047 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 14 18:10:19.926054 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
May 14 18:10:19.926088 kernel: On node 0, zone DMA32: 132 pages in unavailable ranges
May 14 18:10:19.926095 kernel: On node 0, zone DMA32: 45 pages in unavailable ranges
May 14 18:10:19.926102 kernel: ACPI: PM-Timer IO Port: 0x408
May 14 18:10:19.926109 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
May 14 18:10:19.926118 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
May 14 18:10:19.926124 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
May 14 18:10:19.926131 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200
May 14 18:10:19.926138 kernel: TSC deadline timer available
May 14 18:10:19.926145 kernel: CPU topo: Max. logical packages: 1
May 14 18:10:19.926152 kernel: CPU topo: Max. logical dies: 1
May 14 18:10:19.926159 kernel: CPU topo: Max. dies per package: 1
May 14 18:10:19.926165 kernel: CPU topo: Max. threads per core: 2
May 14 18:10:19.926172 kernel: CPU topo: Num. cores per package: 1
May 14 18:10:19.926180 kernel: CPU topo: Num. threads per package: 2
May 14 18:10:19.926187 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
May 14 18:10:19.926193 kernel: [mem 0x40000000-0xffffffff] available for PCI devices
May 14 18:10:19.926200 kernel: Booting paravirtualized kernel on Hyper-V
May 14 18:10:19.926207 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
May 14 18:10:19.926214 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
May 14 18:10:19.926221 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
May 14 18:10:19.926228 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
May 14 18:10:19.926235 kernel: pcpu-alloc: [0] 0 1
May 14 18:10:19.926243 kernel: Hyper-V: PV spinlocks enabled
May 14 18:10:19.926251 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
May 14 18:10:19.926259 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=adf4ab3cd3fc72d424aa1ba920dfa0e67212fa35eadab2c698966b09b9e294b0
May 14 18:10:19.926267 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 14 18:10:19.926274 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
May 14 18:10:19.926282 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 14 18:10:19.926289 kernel: Fallback order for Node 0: 0
May 14 18:10:19.926296 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2096878
May 14 18:10:19.926304 kernel: Policy zone: Normal
May 14 18:10:19.926312 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 14 18:10:19.926319 kernel: software IO TLB: area num 2.
May 14 18:10:19.926326 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
May 14 18:10:19.926333 kernel: ftrace: allocating 40065 entries in 157 pages
May 14 18:10:19.926340 kernel: ftrace: allocated 157 pages with 5 groups
May 14 18:10:19.926348 kernel: Dynamic Preempt: voluntary
May 14 18:10:19.926355 kernel: rcu: Preemptible hierarchical RCU implementation.
May 14 18:10:19.926363 kernel: rcu: RCU event tracing is enabled.
May 14 18:10:19.926371 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
May 14 18:10:19.926384 kernel: Trampoline variant of Tasks RCU enabled.
May 14 18:10:19.926392 kernel: Rude variant of Tasks RCU enabled.
May 14 18:10:19.926401 kernel: Tracing variant of Tasks RCU enabled.
May 14 18:10:19.926409 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 14 18:10:19.926416 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
May 14 18:10:19.926424 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 14 18:10:19.926432 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 14 18:10:19.926440 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 14 18:10:19.926448 kernel: Using NULL legacy PIC
May 14 18:10:19.926455 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0
May 14 18:10:19.926464 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 14 18:10:19.926472 kernel: Console: colour dummy device 80x25
May 14 18:10:19.926480 kernel: printk: legacy console [tty1] enabled
May 14 18:10:19.926488 kernel: printk: legacy console [ttyS0] enabled
May 14 18:10:19.926496 kernel: printk: legacy bootconsole [earlyser0] disabled
May 14 18:10:19.926503 kernel: ACPI: Core revision 20240827
May 14 18:10:19.926512 kernel: Failed to register legacy timer interrupt
May 14 18:10:19.926520 kernel: APIC: Switch to symmetric I/O mode setup
May 14 18:10:19.926528 kernel: x2apic enabled
May 14 18:10:19.926535 kernel: APIC: Switched APIC routing to: physical x2apic
May 14 18:10:19.926543 kernel: Hyper-V: Host Build 10.0.26100.1221-1-0
May 14 18:10:19.926551 kernel: Hyper-V: enabling crash_kexec_post_notifiers
May 14 18:10:19.926559 kernel: Hyper-V: Disabling IBT because of Hyper-V bug
May 14 18:10:19.926567 kernel: Hyper-V: Using IPI hypercalls
May 14 18:10:19.926575 kernel: APIC: send_IPI() replaced with hv_send_ipi()
May 14 18:10:19.926583 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask()
May 14 18:10:19.926591 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself()
May 14 18:10:19.926599 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself()
May 14 18:10:19.926607 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all()
May 14 18:10:19.926615 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self()
May 14 18:10:19.926623 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2127345424d, max_idle_ns: 440795318347 ns
May 14 18:10:19.926631 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 4599.99 BogoMIPS (lpj=2299999)
May 14 18:10:19.926639 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
May 14 18:10:19.926647 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
May 14 18:10:19.926656 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
May 14 18:10:19.926663 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
May 14 18:10:19.926671 kernel: Spectre V2 : Mitigation: Retpolines
May 14 18:10:19.926678 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
May 14 18:10:19.926686 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
May 14 18:10:19.926694 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
May 14 18:10:19.926701 kernel: RETBleed: Vulnerable
May 14 18:10:19.926708 kernel: Speculative Store Bypass: Vulnerable
May 14 18:10:19.926715 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
May 14 18:10:19.926723 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
May 14 18:10:19.926730 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
May 14 18:10:19.926739 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
May 14 18:10:19.926746 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
May 14 18:10:19.926753 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
May 14 18:10:19.926761 kernel: x86/fpu: Supporting XSAVE feature 0x800: 'Control-flow User registers'
May 14 18:10:19.926768 kernel: x86/fpu: Supporting XSAVE feature 0x20000: 'AMX Tile config'
May 14 18:10:19.926775 kernel: x86/fpu: Supporting XSAVE feature 0x40000: 'AMX Tile data'
May 14 18:10:19.926782 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
May 14 18:10:19.926789 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
May 14 18:10:19.926797 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
May 14 18:10:19.926804 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
May 14 18:10:19.926812 kernel: x86/fpu: xstate_offset[11]: 2432, xstate_sizes[11]: 16
May 14 18:10:19.926819 kernel: x86/fpu: xstate_offset[17]: 2496, xstate_sizes[17]: 64
May 14 18:10:19.926826 kernel: x86/fpu: xstate_offset[18]: 2560, xstate_sizes[18]: 8192
May 14 18:10:19.926833 kernel: x86/fpu: Enabled xstate features 0x608e7, context size is 10752 bytes, using 'compacted' format.
May 14 18:10:19.926841 kernel: Freeing SMP alternatives memory: 32K
May 14 18:10:19.926848 kernel: pid_max: default: 32768 minimum: 301
May 14 18:10:19.926855 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
May 14 18:10:19.926862 kernel: landlock: Up and running.
May 14 18:10:19.926869 kernel: SELinux: Initializing.
May 14 18:10:19.926877 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
May 14 18:10:19.926885 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
May 14 18:10:19.926892 kernel: smpboot: CPU0: Intel INTEL(R) XEON(R) PLATINUM 8573C (family: 0x6, model: 0xcf, stepping: 0x2)
May 14 18:10:19.926901 kernel: Performance Events: unsupported p6 CPU model 207 no PMU driver, software events only.
May 14 18:10:19.926909 kernel: signal: max sigframe size: 11952
May 14 18:10:19.926917 kernel: rcu: Hierarchical SRCU implementation.
May 14 18:10:19.926925 kernel: rcu: Max phase no-delay instances is 400.
May 14 18:10:19.926933 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
May 14 18:10:19.926940 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
May 14 18:10:19.926948 kernel: smp: Bringing up secondary CPUs ...
May 14 18:10:19.926956 kernel: smpboot: x86: Booting SMP configuration:
May 14 18:10:19.926964 kernel: .... node #0, CPUs: #1
May 14 18:10:19.926973 kernel: smp: Brought up 1 node, 2 CPUs
May 14 18:10:19.926980 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS)
May 14 18:10:19.926989 kernel: Memory: 8082316K/8387512K available (14336K kernel code, 2438K rwdata, 9944K rodata, 54424K init, 2536K bss, 299988K reserved, 0K cma-reserved)
May 14 18:10:19.926996 kernel: devtmpfs: initialized
May 14 18:10:19.927004 kernel: x86/mm: Memory block size: 128MB
May 14 18:10:19.927012 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes)
May 14 18:10:19.927020 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 14 18:10:19.927028 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
May 14 18:10:19.927036 kernel: pinctrl core: initialized pinctrl subsystem
May 14 18:10:19.927045 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 14 18:10:19.927052 kernel: audit: initializing netlink subsys (disabled)
May 14 18:10:19.927070 kernel: audit: type=2000 audit(1747246216.029:1): state=initialized audit_enabled=0 res=1
May 14 18:10:19.927084 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 14 18:10:19.927092 kernel: thermal_sys: Registered thermal governor 'user_space'
May 14 18:10:19.927099 kernel: cpuidle: using governor menu
May 14 18:10:19.927107 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 14 18:10:19.927115 kernel: dca service started, version 1.12.1
May 14 18:10:19.927122 kernel: e820: reserve RAM buffer [mem 0x3ff41000-0x3fffffff]
May 14 18:10:19.927131 kernel: e820: reserve RAM buffer [mem 0x3ffd2000-0x3fffffff]
May 14 18:10:19.927139 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
May 14 18:10:19.927147 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 14 18:10:19.927155 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
May 14 18:10:19.927162 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 14 18:10:19.927169 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
May 14 18:10:19.927177 kernel: ACPI: Added _OSI(Module Device)
May 14 18:10:19.927185 kernel: ACPI: Added _OSI(Processor Device)
May 14 18:10:19.927193 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 14 18:10:19.927201 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 14 18:10:19.927209 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 14 18:10:19.927217 kernel: ACPI: Interpreter enabled
May 14 18:10:19.927224 kernel: ACPI: PM: (supports S0 S5)
May 14 18:10:19.927231 kernel: ACPI: Using IOAPIC for interrupt routing
May 14 18:10:19.927238 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
May 14 18:10:19.927246 kernel: PCI: Ignoring E820 reservations for host bridge windows
May 14 18:10:19.927253 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F
May 14 18:10:19.927259 kernel: iommu: Default domain type: Translated
May 14 18:10:19.927268 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
May 14 18:10:19.927275 kernel: efivars: Registered efivars operations
May 14 18:10:19.927282 kernel: PCI: Using ACPI for IRQ routing
May 14 18:10:19.927289 kernel: PCI: System does not support PCI
May 14 18:10:19.927296 kernel: vgaarb: loaded
May 14 18:10:19.927303 kernel: clocksource: Switched to clocksource tsc-early
May 14 18:10:19.927310 kernel: VFS: Disk quotas dquot_6.6.0
May 14 18:10:19.927317 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 14 18:10:19.927324 kernel: pnp: PnP ACPI init
May 14 18:10:19.927333 kernel: pnp: PnP ACPI: found 3 devices
May 14 18:10:19.927340 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
May 14 18:10:19.927347 kernel: NET: Registered PF_INET protocol family
May 14 18:10:19.927354 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
May 14 18:10:19.927361 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
May 14 18:10:19.927369 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 14 18:10:19.927376 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 14 18:10:19.927383 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
May 14 18:10:19.927390 kernel: TCP: Hash tables configured (established 65536 bind 65536)
May 14 18:10:19.927399 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
May 14 18:10:19.927406 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
May 14 18:10:19.927413 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 14 18:10:19.927420 kernel: NET: Registered PF_XDP protocol family
May 14 18:10:19.927427 kernel: PCI: CLS 0 bytes, default 64
May 14 18:10:19.927435 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
May 14 18:10:19.927442 kernel: software IO TLB: mapped [mem 0x000000003aa59000-0x000000003ea59000] (64MB)
May 14 18:10:19.927448 kernel: RAPL PMU: API unit is 2^-32 Joules, 1 fixed counters, 10737418240 ms ovfl timer
May 14 18:10:19.927455 kernel: RAPL PMU: hw unit of domain psys 2^-0 Joules
May 14 18:10:19.927463 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2127345424d, max_idle_ns: 440795318347 ns
May 14 18:10:19.927470 kernel: clocksource: Switched to clocksource tsc
May 14 18:10:19.927477 kernel: Initialise system trusted keyrings
May 14 18:10:19.927483 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
May 14 18:10:19.927490 kernel: Key type asymmetric registered
May 14 18:10:19.927497 kernel: Asymmetric key parser 'x509' registered
May 14 18:10:19.927503 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
May 14 18:10:19.927510 kernel: io scheduler mq-deadline registered
May 14 18:10:19.927517 kernel: io scheduler kyber registered
May 14 18:10:19.927525 kernel: io scheduler bfq registered
May 14 18:10:19.927532 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
May 14 18:10:19.927539 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 14 18:10:19.927545 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
May 14 18:10:19.927552 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
May 14 18:10:19.927559 kernel: serial8250: ttyS2 at I/O 0x3e8 (irq = 4, base_baud = 115200) is a 16550A
May 14 18:10:19.927566 kernel: i8042: PNP: No PS/2 controller found.
May 14 18:10:19.927673 kernel: rtc_cmos 00:02: registered as rtc0
May 14 18:10:19.927737 kernel: rtc_cmos 00:02: setting system clock to 2025-05-14T18:10:19 UTC (1747246219)
May 14 18:10:19.927793 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram
May 14 18:10:19.927802 kernel: intel_pstate: Intel P-state driver initializing
May 14 18:10:19.927810 kernel: efifb: probing for efifb
May 14 18:10:19.927817 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
May 14 18:10:19.927825 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
May 14 18:10:19.927832 kernel: efifb: scrolling: redraw
May 14 18:10:19.927840 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
May 14 18:10:19.927848 kernel: Console: switching to colour frame buffer device 128x48
May 14 18:10:19.927855 kernel: fb0: EFI VGA frame buffer device
May 14 18:10:19.927862 kernel: pstore: Using crash dump compression: deflate
May 14 18:10:19.927870 kernel: pstore: Registered efi_pstore as persistent store backend
May 14 18:10:19.927877 kernel: NET: Registered PF_INET6 protocol family
May 14 18:10:19.927885 kernel: Segment Routing with IPv6
May 14 18:10:19.927892 kernel: In-situ OAM (IOAM) with IPv6
May 14 18:10:19.927899 kernel: NET: Registered PF_PACKET protocol family
May 14 18:10:19.927907 kernel: Key type dns_resolver registered
May 14 18:10:19.927915 kernel: IPI shorthand broadcast: enabled
May 14 18:10:19.927922 kernel: sched_clock: Marking stable (2738325910, 84186651)->(3109479110, -286966549)
May 14 18:10:19.927929 kernel: registered taskstats version 1
May 14 18:10:19.927936 kernel: Loading compiled-in X.509 certificates
May 14 18:10:19.927943 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.20-flatcar: 41e2a150aa08ec2528be2394819b3db677e5f4ef'
May 14 18:10:19.927951 kernel: Demotion targets for Node 0: null
May 14 18:10:19.927958 kernel: Key type .fscrypt registered
May 14 18:10:19.927965 kernel: Key type fscrypt-provisioning registered
May 14 18:10:19.927973 kernel: ima: No TPM chip found, activating TPM-bypass!
May 14 18:10:19.927981 kernel: ima: Allocated hash algorithm: sha1
May 14 18:10:19.927988 kernel: ima: No architecture policies found
May 14 18:10:19.927995 kernel: clk: Disabling unused clocks
May 14 18:10:19.928002 kernel: Warning: unable to open an initial console.
May 14 18:10:19.928010 kernel: Freeing unused kernel image (initmem) memory: 54424K
May 14 18:10:19.928017 kernel: Write protecting the kernel read-only data: 24576k
May 14 18:10:19.928025 kernel: Freeing unused kernel image (rodata/data gap) memory: 296K
May 14 18:10:19.928032 kernel: Run /init as init process
May 14 18:10:19.928040 kernel: with arguments:
May 14 18:10:19.928048 kernel: /init
May 14 18:10:19.928055 kernel: with environment:
May 14 18:10:19.928082 kernel: HOME=/
May 14 18:10:19.928089 kernel: TERM=linux
May 14 18:10:19.928097 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 14 18:10:19.928106 systemd[1]: Successfully made /usr/ read-only.
May 14 18:10:19.928116 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 14 18:10:19.928124 systemd[1]: Detected virtualization microsoft.
May 14 18:10:19.928133 systemd[1]: Detected architecture x86-64.
May 14 18:10:19.928141 systemd[1]: Running in initrd.
May 14 18:10:19.928148 systemd[1]: No hostname configured, using default hostname.
May 14 18:10:19.928156 systemd[1]: Hostname set to .
May 14 18:10:19.928164 systemd[1]: Initializing machine ID from random generator.
May 14 18:10:19.928171 systemd[1]: Queued start job for default target initrd.target.
May 14 18:10:19.928178 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 14 18:10:19.928186 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 14 18:10:19.928196 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 14 18:10:19.928204 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 14 18:10:19.928211 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 14 18:10:19.928220 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 14 18:10:19.928229 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 14 18:10:19.928237 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 14 18:10:19.928246 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 14 18:10:19.928253 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 14 18:10:19.928261 systemd[1]: Reached target paths.target - Path Units.
May 14 18:10:19.928269 systemd[1]: Reached target slices.target - Slice Units.
May 14 18:10:19.928277 systemd[1]: Reached target swap.target - Swaps.
May 14 18:10:19.928284 systemd[1]: Reached target timers.target - Timer Units.
May 14 18:10:19.928292 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 14 18:10:19.928300 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 14 18:10:19.928308 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 14 18:10:19.928317 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
May 14 18:10:19.928324 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 14 18:10:19.928332 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 14 18:10:19.928340 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 14 18:10:19.928348 systemd[1]: Reached target sockets.target - Socket Units.
May 14 18:10:19.928355 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 14 18:10:19.928363 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 14 18:10:19.928371 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 14 18:10:19.928379 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
May 14 18:10:19.928388 systemd[1]: Starting systemd-fsck-usr.service...
May 14 18:10:19.928395 systemd[1]: Starting systemd-journald.service - Journal Service...
May 14 18:10:19.928403 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 14 18:10:19.928432 systemd-journald[205]: Collecting audit messages is disabled.
May 14 18:10:19.928454 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 14 18:10:19.928463 systemd-journald[205]: Journal started
May 14 18:10:19.928485 systemd-journald[205]: Runtime Journal (/run/log/journal/062f39e694e44974a90e8897b41abe30) is 8M, max 159M, 151M free.
May 14 18:10:19.931094 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 14 18:10:19.941005 systemd-modules-load[207]: Inserted module 'overlay'
May 14 18:10:19.949838 systemd[1]: Started systemd-journald.service - Journal Service.
May 14 18:10:19.951125 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 14 18:10:19.956153 systemd[1]: Finished systemd-fsck-usr.service.
May 14 18:10:19.961142 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 14 18:10:19.966200 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 14 18:10:19.970792 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 14 18:10:19.981075 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 14 18:10:19.982502 systemd-modules-load[207]: Inserted module 'br_netfilter'
May 14 18:10:19.984569 kernel: Bridge firewalling registered
May 14 18:10:19.986171 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 14 18:10:19.988557 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 14 18:10:19.991728 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 14 18:10:19.994588 systemd-tmpfiles[219]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. May 14 18:10:20.003161 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 14 18:10:20.007165 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 14 18:10:20.009154 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 14 18:10:20.020452 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 14 18:10:20.024377 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 14 18:10:20.028907 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 14 18:10:20.033159 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 14 18:10:20.034541 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... May 14 18:10:20.055985 dracut-cmdline[245]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=adf4ab3cd3fc72d424aa1ba920dfa0e67212fa35eadab2c698966b09b9e294b0 May 14 18:10:20.061333 systemd-resolved[236]: Positive Trust Anchors: May 14 18:10:20.061340 systemd-resolved[236]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 14 18:10:20.061619 systemd-resolved[236]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 14 18:10:20.064069 systemd-resolved[236]: Defaulting to hostname 'linux'. May 14 18:10:20.064740 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 14 18:10:20.074022 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 14 18:10:20.125078 kernel: SCSI subsystem initialized May 14 18:10:20.132075 kernel: Loading iSCSI transport class v2.0-870. May 14 18:10:20.140082 kernel: iscsi: registered transport (tcp) May 14 18:10:20.155497 kernel: iscsi: registered transport (qla4xxx) May 14 18:10:20.155531 kernel: QLogic iSCSI HBA Driver May 14 18:10:20.166799 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 14 18:10:20.175735 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 14 18:10:20.176710 systemd[1]: Reached target network-pre.target - Preparation for Network. May 14 18:10:20.204463 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. May 14 18:10:20.207158 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... 
May 14 18:10:20.251076 kernel: raid6: avx512x4 gen() 48164 MB/s May 14 18:10:20.268078 kernel: raid6: avx512x2 gen() 47179 MB/s May 14 18:10:20.285071 kernel: raid6: avx512x1 gen() 30178 MB/s May 14 18:10:20.303070 kernel: raid6: avx2x4 gen() 42265 MB/s May 14 18:10:20.320077 kernel: raid6: avx2x2 gen() 44365 MB/s May 14 18:10:20.337557 kernel: raid6: avx2x1 gen() 32844 MB/s May 14 18:10:20.337573 kernel: raid6: using algorithm avx512x4 gen() 48164 MB/s May 14 18:10:20.355424 kernel: raid6: .... xor() 7781 MB/s, rmw enabled May 14 18:10:20.355443 kernel: raid6: using avx512x2 recovery algorithm May 14 18:10:20.372077 kernel: xor: automatically using best checksumming function avx May 14 18:10:20.473075 kernel: Btrfs loaded, zoned=no, fsverity=no May 14 18:10:20.476664 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. May 14 18:10:20.479996 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 14 18:10:20.501972 systemd-udevd[454]: Using default interface naming scheme 'v255'. May 14 18:10:20.506210 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 14 18:10:20.512110 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... May 14 18:10:20.531006 dracut-pre-trigger[462]: rd.md=0: removing MD RAID activation May 14 18:10:20.546343 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. May 14 18:10:20.549166 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 14 18:10:20.578138 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 14 18:10:20.584235 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... May 14 18:10:20.621077 kernel: cryptd: max_cpu_qlen set to 1000 May 14 18:10:20.628076 kernel: AES CTR mode by8 optimization enabled May 14 18:10:20.651375 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
May 14 18:10:20.656838 kernel: hv_vmbus: Vmbus version:5.3 May 14 18:10:20.651471 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 14 18:10:20.654346 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... May 14 18:10:20.659550 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 14 18:10:20.669120 kernel: hv_vmbus: registering driver hyperv_keyboard May 14 18:10:20.677861 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 14 18:10:20.679835 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 14 18:10:20.692849 kernel: pps_core: LinuxPPS API ver. 1 registered May 14 18:10:20.692871 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti May 14 18:10:20.692882 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 May 14 18:10:20.695246 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
May 14 18:10:20.710488 kernel: hid: raw HID events driver (C) Jiri Kosina May 14 18:10:20.710519 kernel: PTP clock support registered May 14 18:10:20.723079 kernel: hv_vmbus: registering driver hv_pci May 14 18:10:20.727204 kernel: hv_vmbus: registering driver hv_netvsc May 14 18:10:20.727461 kernel: hv_utils: Registering HyperV Utility Driver May 14 18:10:20.729347 kernel: hv_pci 7ad35d50-c05b-47ab-b3a0-56a9a845852b: PCI VMBus probing: Using version 0x10004 May 14 18:10:20.478302 kernel: hv_vmbus: registering driver hv_utils May 14 18:10:20.481623 kernel: hv_utils: Heartbeat IC version 3.0 May 14 18:10:20.481638 kernel: hv_utils: Shutdown IC version 3.2 May 14 18:10:20.481647 kernel: hv_utils: TimeSync IC version 4.0 May 14 18:10:20.481655 kernel: hv_vmbus: registering driver hid_hyperv May 14 18:10:20.481664 kernel: hv_netvsc f8615163-0000-1000-2000-7c1e52778e03 (unnamed net_device) (uninitialized): VF slot 1 added May 14 18:10:20.481791 kernel: hv_pci 7ad35d50-c05b-47ab-b3a0-56a9a845852b: PCI host bridge to bus c05b:00 May 14 18:10:20.481880 kernel: pci_bus c05b:00: root bus resource [mem 0xfc0000000-0xfc007ffff window] May 14 18:10:20.481977 kernel: pci_bus c05b:00: No busn resource found for root bus, will use [bus 00-ff] May 14 18:10:20.482057 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 May 14 18:10:20.482065 kernel: hv_vmbus: registering driver hv_storvsc May 14 18:10:20.482074 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on May 14 18:10:20.482149 kernel: pci c05b:00:00.0: [1414:00a9] type 00 class 0x010802 PCIe Endpoint May 14 18:10:20.482255 kernel: pci c05b:00:00.0: BAR 0 [mem 0xfc0000000-0xfc007ffff 64bit] May 14 18:10:20.482351 kernel: scsi host0: storvsc_host_t May 14 18:10:20.482432 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5 May 14 18:10:20.482521 kernel: pci c05b:00:00.0: 32.000 Gb/s available PCIe bandwidth, 
limited by 2.5 GT/s PCIe x16 link at c05b:00:00.0 (capable of 1024.000 Gb/s with 64.0 GT/s PCIe x16 link) May 14 18:10:20.482615 kernel: pci_bus c05b:00: busn_res: [bus 00-ff] end is updated to 00 May 14 18:10:20.483929 kernel: pci c05b:00:00.0: BAR 0 [mem 0xfc0000000-0xfc007ffff 64bit]: assigned May 14 18:10:20.484039 systemd-journald[205]: Time jumped backwards, rotating. May 14 18:10:20.780289 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 14 18:10:20.427592 systemd-resolved[236]: Clock change detected. Flushing caches. May 14 18:10:20.493157 kernel: sr 0:0:0:2: [sr0] scsi-1 drive May 14 18:10:20.494337 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 May 14 18:10:20.494349 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 May 14 18:10:20.497813 kernel: nvme nvme0: pci function c05b:00:00.0 May 14 18:10:20.500131 kernel: nvme c05b:00:00.0: enabling device (0000 -> 0002) May 14 18:10:20.780641 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#136 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 May 14 18:10:20.780769 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#171 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 May 14 18:10:20.780853 kernel: nvme nvme0: 2/0/0 default/read/poll queues May 14 18:10:20.780947 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 May 14 18:10:21.154696 kernel: nvme nvme0: using unchecked data buffer May 14 18:10:21.452410 kernel: hv_pci 00000001-7870-47b5-b203-907d12ca697e: PCI VMBus probing: Using version 0x10004 May 14 18:11:23.529573 kernel: hv_pci 00000001-7870-47b5-b203-907d12ca697e: PCI host bridge to bus 7870:00 May 14 18:11:23.529650 kernel: pci_bus 7870:00: root bus resource [mem 0xfc2000000-0xfc4007fff window] May 14 18:11:23.529669 kernel: pci_bus 7870:00: No busn resource found for root bus, will use [bus 00-ff] May 14 18:11:23.529781 kernel: pci 7870:00:00.0: [1414:00ba] type 00 class 0x020000 PCIe Endpoint May 14 18:11:23.529870 kernel: pci 7870:00:00.0: BAR 0 [mem 
0xfc2000000-0xfc3ffffff 64bit pref] May 14 18:11:23.529883 kernel: pci 7870:00:00.0: BAR 4 [mem 0xfc4000000-0xfc4007fff 64bit pref] May 14 18:11:23.529896 kernel: pci 7870:00:00.0: enabling Extended Tags May 14 18:11:23.529912 kernel: pci_bus 7870:00: busn_res: [bus 00-ff] end is updated to 00 May 14 18:11:23.529924 kernel: pci 7870:00:00.0: BAR 0 [mem 0xfc2000000-0xfc3ffffff 64bit pref]: assigned May 14 18:11:23.529937 kernel: pci 7870:00:00.0: BAR 4 [mem 0xfc4000000-0xfc4007fff 64bit pref]: assigned May 14 18:11:23.529953 kernel: mana 7870:00:00.0: enabling device (0000 -> 0002) May 14 18:11:23.529967 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 May 14 18:11:23.529975 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 May 14 18:11:23.529983 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 May 14 18:11:23.529990 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. May 14 18:11:23.529999 kernel: device-mapper: uevent: version 1.0.3 May 14 18:11:23.530009 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev May 14 18:11:23.530017 kernel: device-mapper: verity: sha256 using shash "sha256-ni" May 14 18:11:23.530026 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' May 14 18:11:23.530034 kernel: BTRFS: device fsid dedcf745-d4ff-44ac-b61c-5ec1bad114c7 devid 1 transid 39 /dev/mapper/usr (254:0) scanned by mount (721) May 14 18:11:23.530043 kernel: BTRFS info (device dm-0): first mount of filesystem dedcf745-d4ff-44ac-b61c-5ec1bad114c7 May 14 18:11:23.530051 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm May 14 18:11:23.530059 kernel: BTRFS info (device dm-0): using free-space-tree May 14 18:11:23.530067 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 (259:5) scanned by mount (752) May 14 18:11:23.530075 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 
9b1e3c61-417b-43c0-b064-c7db19a42998 May 14 18:11:23.530085 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm May 14 18:11:23.530096 kernel: BTRFS info (device nvme0n1p6): using free-space-tree May 14 18:11:23.530105 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 9b1e3c61-417b-43c0-b064-c7db19a42998 May 14 18:11:23.530114 kernel: EXT4-fs (nvme0n1p9): mounted filesystem d6072e19-4548-4806-a012-87bb17c59f4c r/w with ordered data mode. Quota mode: none. May 14 18:11:23.530124 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 (259:5) scanned by mount (922) May 14 18:11:23.530133 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 9b1e3c61-417b-43c0-b064-c7db19a42998 May 14 18:11:23.530209 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm May 14 18:11:23.530222 kernel: BTRFS info (device nvme0n1p6): using free-space-tree May 14 18:11:23.530235 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 9b1e3c61-417b-43c0-b064-c7db19a42998 May 14 18:11:23.530245 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 (259:5) scanned by mount (1046) May 14 18:11:23.530256 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 9b1e3c61-417b-43c0-b064-c7db19a42998 May 14 18:11:23.530266 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm May 14 18:11:23.530273 kernel: BTRFS info (device nvme0n1p6): using free-space-tree May 14 18:11:23.530281 kernel: SELinux: policy capability network_peer_controls=1 May 14 18:11:23.530289 kernel: SELinux: policy capability open_perms=1 May 14 18:11:23.530297 kernel: SELinux: policy capability extended_socket_class=1 May 14 18:11:23.530305 kernel: SELinux: policy capability always_check_network=0 May 14 18:11:23.530316 kernel: SELinux: policy capability cgroup_seclabel=1 May 14 18:11:23.530324 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 14 
18:11:23.530332 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 14 18:11:23.530339 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 14 18:11:23.530348 kernel: SELinux: policy capability userspace_initial_context=0 May 14 18:11:23.530357 kernel: audit: type=1403 audit(1747246240.431:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 May 14 18:11:23.530366 systemd[1]: Successfully loaded SELinux policy in 67.891ms. May 14 18:11:23.530378 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 6.627ms. May 14 18:11:23.530387 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 14 18:11:23.530400 systemd[1]: Detected virtualization microsoft. May 14 18:11:23.530410 systemd[1]: Detected architecture x86-64. May 14 18:11:23.530419 systemd[1]: Detected first boot. May 14 18:11:23.530429 systemd[1]: Hostname set to . May 14 18:11:23.530438 systemd[1]: Initializing machine ID from random generator. May 14 18:11:23.530447 zram_generator::config[1142]: No configuration found. May 14 18:11:23.530458 kernel: Guest personality initialized and is inactive May 14 18:11:23.530468 kernel: VMCI host device registered (name=vmci, major=10, minor=124) May 14 18:11:23.530477 kernel: Initialized host personality May 14 18:11:23.530485 kernel: NET: Registered PF_VSOCK protocol family May 14 18:11:23.530494 systemd[1]: Populated /etc with preset unit settings. 
May 14 18:11:23.530503 kernel: fuse: init (API version 7.41) May 14 18:11:23.530512 kernel: loop: module loaded May 14 18:11:23.530522 kernel: ACPI: bus type drm_connector registered May 14 18:11:23.530530 kernel: loop0: detected capacity change from 0 to 28536 May 14 18:11:23.530539 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 14 18:11:23.530549 kernel: loop1: detected capacity change from 0 to 218376 May 14 18:11:23.530558 kernel: loop2: detected capacity change from 0 to 113872 May 14 18:11:23.530566 kernel: loop3: detected capacity change from 0 to 146240 May 14 18:11:23.530574 kernel: loop4: detected capacity change from 0 to 28536 May 14 18:11:23.530582 kernel: loop5: detected capacity change from 0 to 218376 May 14 18:11:23.530589 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#155 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 May 14 18:11:23.530605 kernel: hv_vmbus: registering driver hyperv_fb May 14 18:11:23.530614 kernel: hyperv_fb: Synthvid Version major 3, minor 5 May 14 18:11:23.530623 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 May 14 18:11:23.530633 kernel: Console: switching to colour dummy device 80x25 May 14 18:11:23.530643 kernel: Console: switching to colour frame buffer device 128x48 May 14 18:11:23.530651 kernel: mousedev: PS/2 mouse device common for all mice May 14 18:11:23.530659 kernel: hv_vmbus: registering driver hv_balloon May 14 18:11:23.530667 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 May 14 18:11:23.530676 kernel: kvm_intel: Using Hyper-V Enlightened VMCS May 14 18:11:23.530735 kernel: loop6: detected capacity change from 0 to 113872 May 14 18:11:23.530744 kernel: loop7: detected capacity change from 0 to 146240 May 14 18:11:23.530753 zram_generator::config[1387]: No configuration found. May 14 18:11:23.530768 zram_generator::config[1464]: No configuration found. 
May 14 18:11:23.530780 kernel: mana 7870:00:00.0: Failed to establish HWC: -110 May 14 18:11:23.530796 kernel: mana 7870:00:00.0: gdma probe failed: err = -110 May 14 18:11:23.530811 kernel: mana 7870:00:00.0: probe with driver mana failed with error -110 May 14 18:11:23.530825 systemd-journald[205]: Received SIGTERM from PID 1 (n/a). May 14 18:10:25.879550 ignition[889]: Ignition 2.21.0 May 14 18:10:21.487596 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - MSFT NVMe Accelerator v1.0 OEM. May 14 18:10:25.879555 ignition[889]: Stage: fetch-offline May 14 18:10:21.537090 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - MSFT NVMe Accelerator v1.0 EFI-SYSTEM. May 14 18:10:25.879624 ignition[889]: no configs at "/usr/lib/ignition/base.d" May 14 18:10:21.735661 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - MSFT NVMe Accelerator v1.0 ROOT. May 14 18:11:23.540643 disk-uuid[675]: The operation has completed successfully. May 14 18:11:23.540718 systemd-journald[205]: Received client request to flush runtime journal. May 14 18:10:25.879629 ignition[889]: no config dir at "/usr/lib/ignition/base.platform.d/azure" May 14 18:10:21.763621 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - MSFT NVMe Accelerator v1.0 USR-A. May 14 18:10:25.879730 ignition[889]: parsed url from cmdline: "" May 14 18:10:21.764930 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - MSFT NVMe Accelerator v1.0 USR-A. May 14 18:11:23.540945 sh[709]: Success May 14 18:10:25.879732 ignition[889]: no config URL provided May 14 18:10:21.766260 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. May 14 18:10:25.879735 ignition[889]: reading system config file "/usr/lib/ignition/user.ign" May 14 18:10:21.767988 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. 
May 14 18:10:25.879740 ignition[889]: no config at "/usr/lib/ignition/user.ign" May 14 18:10:21.770776 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 14 18:10:25.879743 ignition[889]: failed to fetch config: resource requires networking May 14 18:10:21.773727 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 14 18:10:25.879877 ignition[889]: Ignition finished successfully May 14 18:10:21.777229 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... May 14 18:10:25.903032 ignition[898]: Ignition 2.21.0 May 14 18:10:21.780808 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... May 14 18:10:25.903036 ignition[898]: Stage: fetch May 14 18:10:21.791961 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. May 14 18:10:25.903160 ignition[898]: no configs at "/usr/lib/ignition/base.d" May 14 18:10:22.853660 systemd[1]: disk-uuid.service: Deactivated successfully. May 14 18:10:25.903167 ignition[898]: no config dir at "/usr/lib/ignition/base.platform.d/azure" May 14 18:10:22.853810 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. May 14 18:10:25.903211 ignition[898]: parsed url from cmdline: "" May 14 18:10:22.884388 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... May 14 18:11:23.543625 systemd-fsck[915]: ROOT: clean, 15/7326000 files, 477845/7359488 blocks May 14 18:10:25.903213 ignition[898]: no config URL provided May 14 18:10:23.364839 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. May 14 18:10:25.903216 ignition[898]: reading system config file "/usr/lib/ignition/user.ign" May 14 18:10:23.369766 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... May 14 18:10:25.903219 ignition[898]: no config at "/usr/lib/ignition/user.ign" May 14 18:10:23.380133 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
May 14 18:11:23.543949 coreos-metadata[924]: May 14 18:10:27.750 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 May 14 18:11:23.543949 coreos-metadata[924]: May 14 18:10:27.757 INFO Fetch successful May 14 18:11:23.543949 coreos-metadata[924]: May 14 18:10:27.757 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 May 14 18:11:23.543949 coreos-metadata[924]: May 14 18:10:27.765 INFO Fetch successful May 14 18:11:23.543949 coreos-metadata[924]: May 14 18:10:27.768 INFO wrote hostname ci-4334.0.0-a-ef358d086b to /sysroot/etc/hostname May 14 18:10:25.903241 ignition[898]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 May 14 18:10:24.048494 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. May 14 18:10:25.989199 ignition[898]: GET result: OK May 14 18:10:24.050903 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. May 14 18:11:23.544217 initrd-setup-root[949]: cut: /sysroot/etc/passwd: No such file or directory May 14 18:10:25.989270 ignition[898]: config has been read from IMDS userdata May 14 18:10:24.053746 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. May 14 18:11:23.544345 initrd-setup-root[956]: cut: /sysroot/etc/group: No such file or directory May 14 18:10:25.989296 ignition[898]: parsing config with SHA512: 8672a6ec3a3d00d6ac95faee1aa9c519945e0300ba8ef65b3c71e4337889b274855a18d93927e0c49adebe9dea4e297a0126b744c04aae7df2041352d93e36bf May 14 18:10:24.054309 systemd[1]: Starting ignition-setup.service - Ignition (setup)... 
May 14 18:11:23.544475 initrd-setup-root[963]: cut: /sysroot/etc/shadow: No such file or directory May 14 18:11:23.544526 ignition[1037]: INFO : Ignition 2.21.0 May 14 18:11:23.544526 ignition[1037]: INFO : Stage: mount May 14 18:11:23.544526 ignition[1037]: INFO : no configs at "/usr/lib/ignition/base.d" May 14 18:11:23.544526 ignition[1037]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" May 14 18:11:23.544526 ignition[1037]: INFO : mount: mount passed May 14 18:11:23.544526 ignition[1037]: INFO : Ignition finished successfully May 14 18:10:25.994280 ignition[898]: fetch: fetch complete May 14 18:10:24.056796 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... May 14 18:11:23.544801 initrd-setup-root[970]: cut: /sysroot/etc/gshadow: No such file or directory May 14 18:10:25.994284 ignition[898]: fetch: fetch passed May 14 18:10:24.117044 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 14 18:10:25.994316 ignition[898]: Ignition finished successfully May 14 18:10:24.120601 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
May 14 18:11:23.545028 ignition[1062]: INFO : Ignition 2.21.0 May 14 18:11:23.545028 ignition[1062]: INFO : Stage: files May 14 18:11:23.545028 ignition[1062]: INFO : no configs at "/usr/lib/ignition/base.d" May 14 18:11:23.545028 ignition[1062]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" May 14 18:11:23.545028 ignition[1062]: DEBUG : files: compiled without relabeling support, skipping May 14 18:11:23.545028 ignition[1062]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 14 18:11:23.545028 ignition[1062]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 14 18:11:23.545028 ignition[1062]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 14 18:11:23.545028 ignition[1062]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 14 18:11:23.545028 ignition[1062]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 14 18:11:23.545028 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" May 14 18:11:23.545028 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 May 14 18:11:23.545028 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK May 14 18:11:23.545028 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" May 14 18:11:23.545028 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" May 14 18:11:23.545028 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" May 14 18:11:23.545028 ignition[1062]: INFO : files: createFilesystemsFiles: 
createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" May 14 18:11:23.545028 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" May 14 18:11:23.545028 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 14 18:11:23.545028 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 14 18:11:23.545028 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 14 18:10:26.018695 ignition[903]: Ignition 2.21.0 May 14 18:10:24.151305 systemd-networkd[868]: lo: Link UP May 14 18:11:23.546053 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 14 18:11:23.546053 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" May 14 18:11:23.546053 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 14 18:11:23.546053 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" May 14 18:11:23.546053 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" May 14 18:11:23.546053 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" May 14 18:11:23.546053 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(a): 
GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-x86-64.raw: attempt #1 May 14 18:11:23.546053 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK May 14 18:11:23.546053 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" May 14 18:11:23.546053 ignition[1062]: INFO : files: op(b): [started] processing unit "prepare-helm.service" May 14 18:11:23.546053 ignition[1062]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 14 18:11:23.546053 ignition[1062]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 14 18:11:23.546053 ignition[1062]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" May 14 18:11:23.546053 ignition[1062]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" May 14 18:11:23.546053 ignition[1062]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" May 14 18:11:23.546053 ignition[1062]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" May 14 18:10:26.018703 ignition[903]: Stage: kargs May 14 18:10:24.151307 systemd-networkd[868]: lo: Gained carrier May 14 18:11:23.546367 initrd-setup-root-after-ignition[1089]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 14 18:11:23.546367 initrd-setup-root-after-ignition[1089]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory May 14 18:11:23.546503 ignition[1062]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" May 14 18:11:23.546503 ignition[1062]: INFO : files: files passed May 14 
18:11:23.546503 ignition[1062]: INFO : Ignition finished successfully May 14 18:10:26.019058 ignition[903]: no configs at "/usr/lib/ignition/base.d" May 14 18:10:24.151866 systemd-networkd[868]: Enumeration completed May 14 18:11:23.546933 initrd-setup-root-after-ignition[1093]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 14 18:10:26.019067 ignition[903]: no config dir at "/usr/lib/ignition/base.platform.d/azure" May 14 18:10:24.152077 systemd[1]: Started systemd-networkd.service - Network Configuration. May 14 18:10:26.019992 ignition[903]: kargs: kargs passed May 14 18:10:24.152103 systemd-networkd[868]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 14 18:10:26.020030 ignition[903]: Ignition finished successfully May 14 18:10:24.152106 systemd-networkd[868]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 14 18:10:26.039941 ignition[908]: Ignition 2.21.0 May 14 18:10:24.152799 systemd-networkd[868]: eth0: Link UP May 14 18:10:26.039945 ignition[908]: Stage: disks May 14 18:10:24.152870 systemd-networkd[868]: eth0: Gained carrier May 14 18:10:26.040075 ignition[908]: no configs at "/usr/lib/ignition/base.d" May 14 18:10:24.152879 systemd-networkd[868]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
May 14 18:11:23.548050 ignition[1107]: INFO : Ignition 2.21.0 May 14 18:11:23.548050 ignition[1107]: INFO : Stage: umount May 14 18:11:23.548050 ignition[1107]: INFO : no configs at "/usr/lib/ignition/base.d" May 14 18:11:23.548050 ignition[1107]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" May 14 18:11:23.548050 ignition[1107]: INFO : umount: umount passed May 14 18:11:23.548050 ignition[1107]: INFO : Ignition finished successfully May 14 18:10:26.040079 ignition[908]: no config dir at "/usr/lib/ignition/base.platform.d/azure" May 14 18:10:24.155857 systemd[1]: Reached target network.target - Network. May 14 18:10:26.040809 ignition[908]: disks: disks passed May 14 18:10:24.167721 systemd-networkd[868]: eth0: DHCPv4 address 10.200.8.47/24, gateway 10.200.8.1 acquired from 168.63.129.16 May 14 18:10:26.040849 ignition[908]: Ignition finished successfully May 14 18:10:24.510174 systemd[1]: Finished ignition-setup.service - Ignition (setup). May 14 18:10:24.513319 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... May 14 18:10:25.467904 systemd-networkd[868]: eth0: Gained IPv6LL May 14 18:10:25.880672 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). May 14 18:10:25.882737 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... May 14 18:10:25.994013 unknown[898]: fetched base config from "system" May 14 18:10:25.994019 unknown[898]: fetched base config from "system" May 14 18:10:25.994023 unknown[898]: fetched user config from "azure" May 14 18:10:25.995659 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). May 14 18:10:25.998303 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... May 14 18:10:26.020766 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). May 14 18:10:26.023037 systemd[1]: Starting ignition-disks.service - Ignition (disks)... May 14 18:10:26.041362 systemd[1]: Finished ignition-disks.service - Ignition (disks). 
May 14 18:10:26.042367 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. May 14 18:10:26.043721 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. May 14 18:10:26.046716 systemd[1]: Reached target local-fs.target - Local File Systems. May 14 18:10:26.047865 systemd[1]: Reached target sysinit.target - System Initialization. May 14 18:10:26.050771 systemd[1]: Reached target basic.target - Basic System. May 14 18:10:26.053258 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... May 14 18:10:26.179144 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. May 14 18:10:26.184314 systemd[1]: Mounting sysroot.mount - /sysroot... May 14 18:10:26.615819 systemd[1]: Mounted sysroot.mount - /sysroot. May 14 18:10:26.617132 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. May 14 18:10:26.650884 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 14 18:10:26.653904 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... May 14 18:11:23.550366 ldconfig[1257]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 14 18:10:26.658421 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... May 14 18:10:26.660539 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 14 18:10:26.660632 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. May 14 18:10:26.663487 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. May 14 18:10:26.665066 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... May 14 18:10:26.683185 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
May 14 18:10:27.770004 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. May 14 18:10:29.573357 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. May 14 18:10:29.575575 systemd[1]: Starting ignition-mount.service - Ignition (mount)... May 14 18:10:29.577828 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... May 14 18:10:29.587705 systemd[1]: sysroot-oem.mount: Deactivated successfully. May 14 18:10:29.606845 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. May 14 18:10:29.613029 systemd[1]: Finished ignition-mount.service - Ignition (mount). May 14 18:10:29.614615 systemd[1]: Starting ignition-files.service - Ignition (files)... May 14 18:10:29.627962 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 14 18:10:29.656709 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 14 18:10:29.731654 unknown[1062]: wrote ssh authorized keys file for user: core May 14 18:10:32.189202 systemd[1]: Finished ignition-files.service - Ignition (files). May 14 18:10:32.191292 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... May 14 18:10:32.194348 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... May 14 18:10:32.209456 systemd[1]: ignition-quench.service: Deactivated successfully. May 14 18:10:32.209536 systemd[1]: Finished ignition-quench.service - Ignition (record completion). May 14 18:10:32.215870 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. May 14 18:10:32.219819 systemd[1]: Reached target ignition-complete.target - Ignition Complete. May 14 18:10:32.221647 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... May 14 18:10:32.256842 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 14 18:10:32.256904 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. 
May 14 18:10:32.259776 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. May 14 18:10:32.262751 systemd[1]: Reached target initrd.target - Initrd Default Target. May 14 18:10:32.264037 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. May 14 18:10:32.264509 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... May 14 18:10:32.281613 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 14 18:10:32.292282 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... May 14 18:10:32.308257 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. May 14 18:10:32.309923 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. May 14 18:10:32.311432 systemd[1]: Stopped target timers.target - Timer Units. May 14 18:10:32.312608 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 14 18:10:32.312723 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 14 18:10:32.314271 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 14 18:10:32.315621 systemd[1]: Stopped target basic.target - Basic System. May 14 18:10:32.316797 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. May 14 18:10:32.318296 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. May 14 18:10:32.321918 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. May 14 18:10:32.323571 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. May 14 18:10:32.326779 systemd[1]: Stopped target remote-fs.target - Remote File Systems. May 14 18:10:32.329776 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. May 14 18:10:32.331273 systemd[1]: Stopped target sysinit.target - System Initialization. 
May 14 18:10:32.333786 systemd[1]: Stopped target local-fs.target - Local File Systems. May 14 18:10:32.335050 systemd[1]: Stopped target swap.target - Swaps. May 14 18:10:32.337749 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 14 18:10:32.337839 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. May 14 18:10:32.339413 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. May 14 18:10:32.341765 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 14 18:10:32.343238 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. May 14 18:10:32.343330 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 14 18:10:32.344841 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 14 18:10:32.344938 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. May 14 18:10:32.346368 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 14 18:10:32.346467 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 14 18:10:32.348007 systemd[1]: ignition-files.service: Deactivated successfully. May 14 18:10:32.348079 systemd[1]: Stopped ignition-files.service - Ignition (files). May 14 18:10:32.350793 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. May 14 18:10:32.350874 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. May 14 18:10:32.354353 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 14 18:10:32.357269 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 14 18:10:32.359780 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 14 18:10:32.359949 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. May 14 18:10:32.362544 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. 
May 14 18:10:32.362659 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 14 18:10:32.369134 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 14 18:10:32.369206 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. May 14 18:10:32.379405 systemd[1]: ignition-mount.service: Deactivated successfully. May 14 18:10:32.379485 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 14 18:10:32.384390 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 14 18:10:32.384627 systemd[1]: ignition-disks.service: Deactivated successfully. May 14 18:10:32.384650 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 14 18:10:32.385981 systemd[1]: ignition-kargs.service: Deactivated successfully. May 14 18:10:32.386010 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 14 18:10:32.387289 systemd[1]: ignition-fetch.service: Deactivated successfully. May 14 18:10:32.387318 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). May 14 18:10:32.389734 systemd[1]: Stopped target network.target - Network. May 14 18:10:32.390673 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 14 18:10:32.390724 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 14 18:10:32.393723 systemd[1]: Stopped target paths.target - Path Units. May 14 18:10:32.394733 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 14 18:10:32.400708 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 14 18:10:32.402234 systemd[1]: Stopped target slices.target - Slice Units. May 14 18:10:32.404709 systemd[1]: Stopped target sockets.target - Socket Units. May 14 18:10:32.407728 systemd[1]: iscsid.socket: Deactivated successfully. May 14 18:10:32.407750 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. 
May 14 18:10:32.410721 systemd[1]: iscsiuio.socket: Deactivated successfully. May 14 18:10:32.410739 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 14 18:10:32.413713 systemd[1]: ignition-setup.service: Deactivated successfully. May 14 18:10:32.413743 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 14 18:10:32.416729 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 14 18:10:32.416753 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 14 18:10:32.418043 systemd[1]: Stopping systemd-networkd.service - Network Configuration... May 14 18:10:32.420758 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 14 18:10:32.426430 systemd[1]: systemd-networkd.service: Deactivated successfully. May 14 18:10:32.426498 systemd[1]: Stopped systemd-networkd.service - Network Configuration. May 14 18:10:32.430427 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. May 14 18:10:32.430558 systemd[1]: systemd-resolved.service: Deactivated successfully. May 14 18:10:32.430620 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 14 18:10:32.435057 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. May 14 18:10:32.435390 systemd[1]: Stopped target network-pre.target - Preparation for Network. May 14 18:10:32.436616 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 14 18:10:32.436642 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 14 18:10:32.440243 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 14 18:10:32.441879 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 14 18:10:32.441934 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 14 18:10:32.443697 systemd[1]: systemd-sysctl.service: Deactivated successfully. 
May 14 18:10:32.443743 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 14 18:10:32.445134 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 14 18:10:32.445164 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. May 14 18:10:32.446443 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. May 14 18:10:32.446479 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. May 14 18:10:32.448069 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... May 14 18:10:32.451413 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 14 18:10:32.451467 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. May 14 18:10:32.462930 systemd[1]: systemd-udevd.service: Deactivated successfully. May 14 18:10:32.463036 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. May 14 18:10:32.465288 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 14 18:10:32.465330 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. May 14 18:10:32.467758 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 14 18:10:32.467787 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. May 14 18:10:32.469150 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 14 18:10:32.469182 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. May 14 18:10:32.470486 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 14 18:10:32.470517 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. May 14 18:10:32.473809 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 14 18:10:32.473840 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
May 14 18:10:32.475870 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... May 14 18:10:32.478129 systemd[1]: systemd-network-generator.service: Deactivated successfully. May 14 18:10:32.478183 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. May 14 18:10:32.480039 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 14 18:10:32.480079 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 14 18:10:32.481981 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. May 14 18:10:32.482012 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 14 18:10:32.483241 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 14 18:10:32.483273 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. May 14 18:10:32.484666 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 14 18:10:32.484710 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 14 18:10:32.487610 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. May 14 18:10:32.487658 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully. May 14 18:10:32.487709 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. May 14 18:10:32.487742 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. May 14 18:10:32.487977 systemd[1]: network-cleanup.service: Deactivated successfully. May 14 18:10:32.488059 systemd[1]: Stopped network-cleanup.service - Network Cleanup. May 14 18:10:32.490232 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. 
May 14 18:10:32.490288 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. May 14 18:10:36.332171 systemd[1]: sysroot-boot.service: Deactivated successfully. May 14 18:10:36.332267 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. May 14 18:10:36.336867 systemd[1]: Reached target initrd-switch-root.target - Switch Root. May 14 18:10:36.340735 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 14 18:10:36.340783 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. May 14 18:10:36.344325 systemd[1]: Starting initrd-switch-root.service - Switch Root... May 14 18:10:36.358056 systemd[1]: Switching root. May 14 18:10:41.671904 systemd[1]: Queued start job for default target multi-user.target. May 14 18:10:41.675920 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. May 14 18:10:41.676163 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 14 18:10:41.676271 systemd[1]: Stopped initrd-switch-root.service - Switch Root. May 14 18:10:41.679246 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. May 14 18:10:41.680831 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. May 14 18:10:41.683984 systemd[1]: Created slice system-getty.slice - Slice /system/getty. May 14 18:10:41.686974 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. May 14 18:10:41.688568 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. May 14 18:10:41.690331 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. May 14 18:10:41.693971 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. May 14 18:10:41.696925 systemd[1]: Created slice user.slice - User and Session Slice. May 14 18:10:41.699796 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
May 14 18:10:41.701351 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 14 18:10:41.704738 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. May 14 18:10:41.706728 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. May 14 18:10:41.709832 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. May 14 18:10:41.711323 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 14 18:10:41.712640 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... May 14 18:10:41.714729 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 14 18:10:41.716141 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 14 18:10:41.717373 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. May 14 18:10:41.719719 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. May 14 18:10:41.722718 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. May 14 18:10:41.724072 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. May 14 18:10:41.725535 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 14 18:10:41.728704 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 14 18:10:41.731709 systemd[1]: Reached target slices.target - Slice Units. May 14 18:10:41.732699 systemd[1]: Reached target swap.target - Swaps. May 14 18:10:41.733637 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. May 14 18:10:41.737960 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. May 14 18:10:41.741220 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. 
May 14 18:10:41.745215 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 14 18:10:41.747277 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 14 18:10:41.749796 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 14 18:10:41.751629 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. May 14 18:10:41.889319 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... May 14 18:10:41.892309 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... May 14 18:10:41.895757 systemd[1]: Mounting media.mount - External Media Directory... May 14 18:10:41.898772 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 14 18:10:41.900072 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... May 14 18:10:41.906476 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... May 14 18:10:41.910252 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... May 14 18:10:41.912784 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 14 18:10:41.912813 systemd[1]: Reached target machines.target - Containers. May 14 18:10:41.914787 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... May 14 18:10:41.918485 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 14 18:10:41.920789 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 14 18:10:41.923797 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... May 14 18:10:41.932856 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... 
May 14 18:10:41.936781 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 14 18:10:41.940848 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 14 18:10:41.943908 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... May 14 18:10:41.954513 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 14 18:10:41.957820 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 14 18:10:41.957862 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 14 18:10:41.957912 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. May 14 18:10:41.961796 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 14 18:10:41.961837 systemd[1]: Stopped systemd-fsck-usr.service. May 14 18:10:41.965718 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 14 18:10:41.973225 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 14 18:10:41.975434 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 14 18:10:41.978813 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... May 14 18:10:41.983784 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... May 14 18:10:41.989175 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 14 18:10:41.992763 systemd[1]: verity-setup.service: Deactivated successfully. May 14 18:10:41.992812 systemd[1]: Stopped verity-setup.service. 
May 14 18:10:41.995798 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 14 18:10:41.998244 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. May 14 18:10:42.000856 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. May 14 18:10:42.003878 systemd[1]: Mounted media.mount - External Media Directory. May 14 18:10:42.006924 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. May 14 18:10:42.010601 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. May 14 18:10:42.016047 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. May 14 18:10:42.017730 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 14 18:10:42.020871 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 14 18:10:42.021493 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. May 14 18:10:42.025186 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 14 18:10:42.025314 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 14 18:10:42.029028 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 14 18:10:42.029742 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 14 18:10:42.032979 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 14 18:10:42.033744 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. May 14 18:10:42.037029 systemd[1]: modprobe@loop.service: Deactivated successfully. May 14 18:10:42.037149 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 14 18:10:42.039721 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 14 18:10:42.042979 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. 
May 14 18:10:42.046064 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. May 14 18:10:42.047942 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. May 14 18:10:42.049999 systemd[1]: Reached target network-pre.target - Preparation for Network. May 14 18:10:42.053753 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... May 14 18:10:42.058790 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... May 14 18:10:42.061767 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 14 18:10:42.061791 systemd[1]: Reached target local-fs.target - Local File Systems. May 14 18:10:42.067015 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. May 14 18:10:42.073456 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... May 14 18:10:42.074698 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 14 18:10:42.075430 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... May 14 18:10:42.079512 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... May 14 18:10:42.081207 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 14 18:10:42.093827 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... May 14 18:10:42.096327 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 14 18:10:42.097117 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
May 14 18:10:42.100914 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... May 14 18:10:42.103818 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 14 18:10:42.107724 systemd[1]: modprobe@drm.service: Deactivated successfully. May 14 18:10:42.109813 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 14 18:10:42.111994 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 14 18:10:42.116430 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. May 14 18:10:42.142294 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. May 14 18:10:42.289449 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. May 14 18:10:42.292634 systemd-tmpfiles[1263]: ACLs are not supported, ignoring. May 14 18:10:42.292644 systemd-tmpfiles[1263]: ACLs are not supported, ignoring. May 14 18:10:42.293966 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 14 18:10:42.298021 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 14 18:10:42.301399 systemd[1]: Starting systemd-sysusers.service - Create System Users... May 14 18:10:42.331375 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. May 14 18:10:42.341788 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. May 14 18:10:42.345496 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... May 14 18:10:42.368698 systemd[1]: Finished systemd-sysusers.service - Create System Users. May 14 18:10:42.371801 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 14 18:10:42.381434 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. 
May 14 18:10:42.391940 systemd-tmpfiles[1270]: ACLs are not supported, ignoring. May 14 18:10:42.391947 systemd-tmpfiles[1270]: ACLs are not supported, ignoring. May 14 18:10:42.394672 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 14 18:10:42.996823 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 14 18:10:45.312222 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. May 14 18:10:45.315003 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 14 18:10:45.342784 systemd-udevd[1276]: Using default interface naming scheme 'v255'. May 14 18:10:46.239123 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 14 18:10:46.244122 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 14 18:10:46.273766 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. May 14 18:10:46.382826 systemd[1]: Starting systemd-userdbd.service - User Database Manager... May 14 18:10:46.590252 systemd[1]: Started systemd-userdbd.service - User Database Manager. May 14 18:10:46.819973 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 14 18:10:46.845405 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 14 18:10:46.845653 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 14 18:10:46.850639 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 14 18:10:47.587903 systemd-networkd[1285]: lo: Link UP May 14 18:10:47.587907 systemd-networkd[1285]: lo: Gained carrier May 14 18:10:47.588850 systemd-networkd[1285]: Enumeration completed May 14 18:10:47.588926 systemd[1]: Started systemd-networkd.service - Network Configuration. 
May 14 18:10:47.589344 systemd-networkd[1285]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 14 18:10:47.589347 systemd-networkd[1285]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 14 18:10:47.589913 systemd-networkd[1285]: eth0: Link UP May 14 18:10:47.589917 systemd-networkd[1285]: eth0: Gained carrier May 14 18:10:47.589928 systemd-networkd[1285]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 14 18:10:47.592249 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... May 14 18:10:47.598247 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... May 14 18:10:47.606734 systemd-networkd[1285]: eth0: DHCPv4 address 10.200.8.47/24, gateway 10.200.8.1 acquired from 168.63.129.16 May 14 18:10:47.709494 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. May 14 18:10:47.804242 (sd-merge)[1278]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. May 14 18:10:47.804587 (sd-merge)[1278]: Merged extensions into '/usr'. May 14 18:10:47.816036 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - MSFT NVMe Accelerator v1.0 OEM. May 14 18:10:47.817972 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... May 14 18:10:47.882846 systemd[1]: Reload requested from client PID 1262 ('systemd-sysext') (unit systemd-sysext.service)... May 14 18:10:47.882854 systemd[1]: Reloading... May 14 18:10:48.076968 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
May 14 18:10:48.166102 systemd[1]: Reloading finished in 283 ms. May 14 18:10:48.200080 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. May 14 18:10:48.201619 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. May 14 18:10:48.213444 systemd[1]: Starting ensure-sysext.service... May 14 18:10:48.221098 systemd[1]: Reload requested from client PID 1445 ('systemctl') (unit ensure-sysext.service)... May 14 18:10:48.221105 systemd[1]: Reloading... May 14 18:10:48.339018 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 14 18:10:48.413694 systemd[1]: Reloading finished in 192 ms. May 14 18:10:48.437952 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 14 18:10:48.457024 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 14 18:10:48.457158 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 14 18:10:48.457970 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 14 18:10:48.461796 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 14 18:10:48.465751 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 14 18:10:48.467593 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 14 18:10:48.467711 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). 
May 14 18:10:48.467799 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 14 18:10:48.469402 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 14 18:10:48.469526 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 14 18:10:48.472133 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 14 18:10:48.477813 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 14 18:10:48.481009 systemd[1]: modprobe@loop.service: Deactivated successfully. May 14 18:10:48.481107 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 14 18:10:48.482799 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 14 18:10:48.482912 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 14 18:10:48.483623 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 14 18:10:48.486787 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 14 18:10:48.489453 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 14 18:10:48.489622 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 14 18:10:48.489749 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 14 18:10:48.493334 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). 
May 14 18:10:48.493558 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 14 18:10:48.494339 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 14 18:10:48.496379 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 14 18:10:48.499813 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 14 18:10:48.499919 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 14 18:10:48.500060 systemd[1]: Reached target time-set.target - System Time Set. May 14 18:10:48.502142 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 14 18:10:48.503058 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 14 18:10:48.503157 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 14 18:10:48.504874 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 14 18:10:48.504966 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 14 18:10:48.507993 systemd[1]: modprobe@drm.service: Deactivated successfully. May 14 18:10:48.508085 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 14 18:10:48.509505 systemd[1]: modprobe@loop.service: Deactivated successfully. May 14 18:10:48.509601 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 14 18:10:48.512989 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
May 14 18:10:48.513066 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 14 18:10:48.514358 systemd[1]: Finished ensure-sysext.service. May 14 18:10:48.635798 systemd-networkd[1285]: eth0: Gained IPv6LL May 14 18:10:48.641445 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 14 18:10:52.841322 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. May 14 18:11:23.540953 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. May 14 18:11:23.564085 systemd-journald[205]: Journal stopped May 14 18:11:23.638332 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. May 14 18:11:23.638351 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 14 18:11:23.638360 systemd[1]: Starting systemd-journald.service - Journal Service... May 14 18:11:23.638374 systemd-journald[1622]: 3 unknown file descriptors passed, closing. May 14 18:11:23.638392 systemd-journald[1622]: Collecting audit messages is disabled. May 14 18:11:23.638403 systemd-journald[1622]: Journal started May 14 18:11:23.638417 systemd-journald[1622]: Runtime Journal (/run/log/journal/1ce1a23182fc46af8a12e13a4ef26baf) is 8M, max 159M, 151M free. May 14 18:11:23.569476 systemd[1]: systemd-journald.service: Deactivated successfully. May 14 18:11:23.639847 systemd[1]: Started systemd-journald.service - Journal Service. May 14 18:11:23.641595 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 14 18:11:23.666935 systemd-tmpfiles[1624]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. May 14 18:11:23.666957 systemd-tmpfiles[1624]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. 
May 14 18:11:23.667130 systemd-tmpfiles[1624]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 14 18:11:23.667277 systemd-tmpfiles[1624]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. May 14 18:11:23.667728 systemd-tmpfiles[1624]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 14 18:11:23.667890 systemd-tmpfiles[1624]: ACLs are not supported, ignoring. May 14 18:11:23.667943 systemd-tmpfiles[1624]: ACLs are not supported, ignoring. May 14 18:11:23.670507 systemd-tmpfiles[1624]: Detected autofs mount point /boot during canonicalization of boot. May 14 18:11:23.670515 systemd-tmpfiles[1624]: Skipping /boot May 14 18:11:23.675210 systemd-tmpfiles[1624]: Detected autofs mount point /boot during canonicalization of boot. May 14 18:11:23.675220 systemd-tmpfiles[1624]: Skipping /boot May 14 18:11:23.686348 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 14 18:11:23.688760 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 14 18:11:23.691032 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... May 14 18:11:23.694269 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... May 14 18:11:23.700847 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 14 18:11:23.704234 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... May 14 18:11:23.729865 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. May 14 18:11:23.740624 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. May 14 18:11:23.744780 systemd[1]: Starting systemd-update-done.service - Update is Completed... May 14 18:11:23.760260 systemd[1]: Finished systemd-update-done.service - Update is Completed. 
May 14 18:11:23.982301 systemd-resolved[1630]: Positive Trust Anchors: May 14 18:11:23.982310 systemd-resolved[1630]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 14 18:11:23.982342 systemd-resolved[1630]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 14 18:11:23.986443 systemd-resolved[1630]: Using system hostname 'ci-4334.0.0-a-ef358d086b'. May 14 18:11:23.988145 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 14 18:11:23.989395 systemd[1]: Reached target network.target - Network. May 14 18:11:23.990270 systemd[1]: Reached target network-online.target - Network is Online. May 14 18:11:23.991457 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 14 18:11:23.993453 augenrules[1656]: No rules May 14 18:11:23.993912 systemd[1]: audit-rules.service: Deactivated successfully. May 14 18:11:23.994058 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 14 18:11:24.071471 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. May 14 18:11:24.074865 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 14 18:11:24.074897 systemd[1]: Reached target sysinit.target - System Initialization. 
May 14 18:11:24.076346 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 14 18:11:24.077834 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 14 18:11:24.079304 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. May 14 18:11:24.081819 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 14 18:11:24.084792 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 14 18:11:24.087751 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 14 18:11:24.088877 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 14 18:11:24.088903 systemd[1]: Reached target paths.target - Path Units. May 14 18:11:24.092742 systemd[1]: Reached target timers.target - Timer Units. May 14 18:11:24.094325 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 14 18:11:24.096471 systemd[1]: Starting docker.socket - Docker Socket for the API... May 14 18:11:24.099436 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). May 14 18:11:24.101159 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). May 14 18:11:24.104748 systemd[1]: Reached target ssh-access.target - SSH Access Available. May 14 18:11:24.114068 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 14 18:11:24.116099 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. May 14 18:11:24.118253 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 14 18:11:24.121355 systemd[1]: Reached target sockets.target - Socket Units. May 14 18:11:24.122560 systemd[1]: Reached target basic.target - Basic System. 
May 14 18:11:24.124762 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 14 18:11:24.124783 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 14 18:11:24.126439 systemd[1]: Starting chronyd.service - NTP client/server... May 14 18:11:24.129783 systemd[1]: Starting containerd.service - containerd container runtime... May 14 18:11:24.133974 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... May 14 18:11:24.137539 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 14 18:11:24.142768 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... May 14 18:11:24.147759 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 14 18:11:24.151554 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 14 18:11:24.153993 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 14 18:11:24.155173 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... May 14 18:11:24.159855 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 18:11:24.165520 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 14 18:11:24.168219 jq[1673]: false May 14 18:11:24.173494 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 14 18:11:24.177760 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 14 18:11:24.181950 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 14 18:11:24.191772 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 14 18:11:24.199295 systemd[1]: Starting systemd-logind.service - User Login Management... 
May 14 18:11:24.201572 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 14 18:11:24.203824 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 14 18:11:24.205785 systemd[1]: Starting update-engine.service - Update Engine... May 14 18:11:24.209811 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 14 18:11:24.212358 (chronyd)[1665]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS May 14 18:11:24.215929 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 14 18:11:24.222852 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 14 18:11:24.225988 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 14 18:11:24.226758 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. May 14 18:11:24.235004 jq[1690]: true May 14 18:11:24.243423 jq[1701]: true May 14 18:11:24.281380 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. 
May 14 18:11:24.288898 extend-filesystems[1674]: Found loop4 May 14 18:11:24.288898 extend-filesystems[1674]: Found loop5 May 14 18:11:24.288898 extend-filesystems[1674]: Found loop6 May 14 18:11:24.288898 extend-filesystems[1674]: Found loop7 May 14 18:11:24.295828 extend-filesystems[1674]: Found sr0 May 14 18:11:24.295828 extend-filesystems[1674]: Found nvme0n1 May 14 18:11:24.295828 extend-filesystems[1674]: Found nvme0n1p1 May 14 18:11:24.295828 extend-filesystems[1674]: Found nvme0n1p2 May 14 18:11:24.295828 extend-filesystems[1674]: Found nvme0n1p3 May 14 18:11:24.295828 extend-filesystems[1674]: Found usr May 14 18:11:24.295828 extend-filesystems[1674]: Found nvme0n1p4 May 14 18:11:24.295828 extend-filesystems[1674]: Found nvme0n1p6 May 14 18:11:24.295828 extend-filesystems[1674]: Found nvme0n1p7 May 14 18:11:24.295828 extend-filesystems[1674]: Found nvme0n1p9 May 14 18:11:24.295828 extend-filesystems[1674]: Checking size of /dev/nvme0n1p9 May 14 18:11:24.353655 chronyd[1729]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG) May 14 18:11:24.354633 systemd[1]: motdgen.service: Deactivated successfully. May 14 18:11:24.356113 chronyd[1729]: Timezone right/UTC failed leap second check, ignoring May 14 18:11:24.356241 chronyd[1729]: Loaded seccomp filter (level 2) May 14 18:11:24.357907 (ntainerd)[1724]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 14 18:11:24.358048 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 14 18:11:24.360131 systemd[1]: Started chronyd.service - NTP client/server. 
May 14 18:11:24.394938 google_oslogin_nss_cache[1675]: oslogin_cache_refresh[1675]: Refreshing passwd entry cache May 14 18:11:24.394788 oslogin_cache_refresh[1675]: Refreshing passwd entry cache May 14 18:11:24.402585 google_oslogin_nss_cache[1675]: oslogin_cache_refresh[1675]: Failure getting users, quitting May 14 18:11:24.402585 google_oslogin_nss_cache[1675]: oslogin_cache_refresh[1675]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. May 14 18:11:24.402585 google_oslogin_nss_cache[1675]: oslogin_cache_refresh[1675]: Refreshing group entry cache May 14 18:11:24.402450 oslogin_cache_refresh[1675]: Failure getting users, quitting May 14 18:11:24.402465 oslogin_cache_refresh[1675]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. May 14 18:11:24.402500 oslogin_cache_refresh[1675]: Refreshing group entry cache May 14 18:11:24.420098 google_oslogin_nss_cache[1675]: oslogin_cache_refresh[1675]: Failure getting groups, quitting May 14 18:11:24.420098 google_oslogin_nss_cache[1675]: oslogin_cache_refresh[1675]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. May 14 18:11:24.419878 oslogin_cache_refresh[1675]: Failure getting groups, quitting May 14 18:11:24.419887 oslogin_cache_refresh[1675]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. May 14 18:11:24.421202 systemd[1]: google-oslogin-cache.service: Deactivated successfully. May 14 18:11:24.421364 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. May 14 18:11:24.446996 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 14 18:11:24.542910 extend-filesystems[1674]: Old size kept for /dev/nvme0n1p9 May 14 18:11:24.544004 systemd-logind[1686]: New seat seat0. May 14 18:11:24.545839 systemd[1]: extend-filesystems.service: Deactivated successfully. May 14 18:11:24.546026 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
May 14 18:11:24.551798 systemd-logind[1686]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) May 14 18:11:24.551928 systemd[1]: Started systemd-logind.service - User Login Management. May 14 18:11:24.590725 tar[1694]: linux-amd64/LICENSE May 14 18:11:24.634049 update_engine[1688]: I20250514 18:11:24.633993 1688 main.cc:92] Flatcar Update Engine starting May 14 18:11:24.930323 tar[1694]: linux-amd64/helm May 14 18:11:24.934358 dbus-daemon[1668]: [system] SELinux support is enabled May 14 18:11:24.934499 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 14 18:11:24.940727 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 14 18:11:24.940751 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 14 18:11:24.944858 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 14 18:11:24.944877 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 14 18:11:24.951276 dbus-daemon[1668]: [system] Successfully activated service 'org.freedesktop.systemd1' May 14 18:11:24.951609 systemd[1]: Started update-engine.service - Update Engine. May 14 18:11:24.956830 update_engine[1688]: I20250514 18:11:24.954656 1688 update_check_scheduler.cc:74] Next update check in 4m7s May 14 18:11:24.960760 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
May 14 18:11:25.105762 coreos-metadata[1667]: May 14 18:11:25.105 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 May 14 18:11:25.107724 coreos-metadata[1667]: May 14 18:11:25.107 INFO Fetch successful May 14 18:11:25.107803 coreos-metadata[1667]: May 14 18:11:25.107 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 May 14 18:11:25.110705 coreos-metadata[1667]: May 14 18:11:25.110 INFO Fetch successful May 14 18:11:25.111764 coreos-metadata[1667]: May 14 18:11:25.111 INFO Fetching http://168.63.129.16/machine/3001f5f1-a91f-411c-89ac-92c0a29f00eb/b88da392%2D5fc8%2D4caf%2D8def%2D612d694c1d19.%5Fci%2D4334.0.0%2Da%2Def358d086b?comp=config&type=sharedConfig&incarnation=1: Attempt #1 May 14 18:11:25.112885 coreos-metadata[1667]: May 14 18:11:25.112 INFO Fetch successful May 14 18:11:25.113128 coreos-metadata[1667]: May 14 18:11:25.113 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 May 14 18:11:25.124788 coreos-metadata[1667]: May 14 18:11:25.124 INFO Fetch successful May 14 18:11:25.165015 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. May 14 18:11:25.169028 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 14 18:11:25.434695 bash[1717]: Updated "/home/core/.ssh/authorized_keys" May 14 18:11:25.436300 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 14 18:11:25.440268 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. May 14 18:11:25.665816 tar[1694]: linux-amd64/README.md May 14 18:11:25.678378 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 14 18:11:25.795954 sshd_keygen[1692]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 14 18:11:25.810939 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. 
May 14 18:11:25.814916 systemd[1]: Starting issuegen.service - Generate /run/issue... May 14 18:11:25.818083 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... May 14 18:11:25.827935 systemd[1]: issuegen.service: Deactivated successfully. May 14 18:11:25.828087 systemd[1]: Finished issuegen.service - Generate /run/issue. May 14 18:11:25.837883 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 14 18:11:25.842776 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. May 14 18:11:25.859263 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 14 18:11:25.861927 systemd[1]: Started getty@tty1.service - Getty on tty1. May 14 18:11:25.865259 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. May 14 18:11:25.868918 systemd[1]: Reached target getty.target - Login Prompts. May 14 18:11:25.912291 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 18:11:25.917062 (kubelet)[1802]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 14 18:11:26.044618 locksmithd[1763]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 14 18:11:26.363625 kubelet[1802]: E0514 18:11:26.363577 1802 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 14 18:11:26.365119 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 14 18:11:26.365238 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 14 18:11:26.365594 systemd[1]: kubelet.service: Consumed 805ms CPU time, 252.1M memory peak. 
May 14 18:11:27.090371 containerd[1724]: time="2025-05-14T18:11:27Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
May 14 18:11:27.090870 containerd[1724]: time="2025-05-14T18:11:27.090842521Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4
May 14 18:11:27.096675 containerd[1724]: time="2025-05-14T18:11:27.096646322Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="7.321µs"
May 14 18:11:27.096675 containerd[1724]: time="2025-05-14T18:11:27.096669216Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
May 14 18:11:27.096763 containerd[1724]: time="2025-05-14T18:11:27.096702246Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
May 14 18:11:27.096845 containerd[1724]: time="2025-05-14T18:11:27.096832326Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
May 14 18:11:27.096872 containerd[1724]: time="2025-05-14T18:11:27.096845714Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
May 14 18:11:27.096872 containerd[1724]: time="2025-05-14T18:11:27.096863678Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
May 14 18:11:27.096934 containerd[1724]: time="2025-05-14T18:11:27.096906191Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
May 14 18:11:27.096934 containerd[1724]: time="2025-05-14T18:11:27.096929698Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
May 14 18:11:27.097111 containerd[1724]: time="2025-05-14T18:11:27.097096276Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
May 14 18:11:27.097111 containerd[1724]: time="2025-05-14T18:11:27.097106732Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
May 14 18:11:27.097156 containerd[1724]: time="2025-05-14T18:11:27.097114857Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
May 14 18:11:27.097156 containerd[1724]: time="2025-05-14T18:11:27.097121625Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
May 14 18:11:27.097194 containerd[1724]: time="2025-05-14T18:11:27.097168166Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
May 14 18:11:27.097316 containerd[1724]: time="2025-05-14T18:11:27.097303008Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
May 14 18:11:27.097338 containerd[1724]: time="2025-05-14T18:11:27.097321901Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
May 14 18:11:27.097338 containerd[1724]: time="2025-05-14T18:11:27.097330029Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
May 14 18:11:27.097383 containerd[1724]: time="2025-05-14T18:11:27.097354152Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
May 14 18:11:27.097591 containerd[1724]: time="2025-05-14T18:11:27.097579310Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
May 14 18:11:27.097624 containerd[1724]: time="2025-05-14T18:11:27.097618433Z" level=info msg="metadata content store policy set" policy=shared
May 14 18:11:27.293693 containerd[1724]: time="2025-05-14T18:11:27.293593840Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
May 14 18:11:27.293693 containerd[1724]: time="2025-05-14T18:11:27.293663414Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
May 14 18:11:27.293867 containerd[1724]: time="2025-05-14T18:11:27.293814102Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
May 14 18:11:27.293867 containerd[1724]: time="2025-05-14T18:11:27.293831232Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
May 14 18:11:27.293867 containerd[1724]: time="2025-05-14T18:11:27.293842500Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
May 14 18:11:27.293867 containerd[1724]: time="2025-05-14T18:11:27.293852355Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
May 14 18:11:27.294378 containerd[1724]: time="2025-05-14T18:11:27.293969948Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
May 14 18:11:27.294378 containerd[1724]: time="2025-05-14T18:11:27.293985306Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
May 14 18:11:27.294378 containerd[1724]: time="2025-05-14T18:11:27.293999707Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
May 14 18:11:27.294378 containerd[1724]: time="2025-05-14T18:11:27.294010689Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
May 14 18:11:27.294378 containerd[1724]: time="2025-05-14T18:11:27.294019199Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
May 14 18:11:27.294378 containerd[1724]: time="2025-05-14T18:11:27.294030212Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
May 14 18:11:27.294378 containerd[1724]: time="2025-05-14T18:11:27.294141447Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
May 14 18:11:27.294378 containerd[1724]: time="2025-05-14T18:11:27.294156353Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
May 14 18:11:27.294378 containerd[1724]: time="2025-05-14T18:11:27.294170815Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
May 14 18:11:27.294378 containerd[1724]: time="2025-05-14T18:11:27.294190083Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
May 14 18:11:27.294378 containerd[1724]: time="2025-05-14T18:11:27.294201756Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
May 14 18:11:27.294378 containerd[1724]: time="2025-05-14T18:11:27.294213250Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
May 14 18:11:27.294378 containerd[1724]: time="2025-05-14T18:11:27.294226760Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
May 14 18:11:27.294378 containerd[1724]: time="2025-05-14T18:11:27.294238166Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
May 14 18:11:27.294378 containerd[1724]: time="2025-05-14T18:11:27.294249886Z" level=info msg="loading plugin"
id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 May 14 18:11:27.294642 containerd[1724]: time="2025-05-14T18:11:27.294261016Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 May 14 18:11:27.294642 containerd[1724]: time="2025-05-14T18:11:27.294272883Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 May 14 18:11:27.294642 containerd[1724]: time="2025-05-14T18:11:27.294344917Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" May 14 18:11:27.294642 containerd[1724]: time="2025-05-14T18:11:27.294381212Z" level=info msg="Start snapshots syncer" May 14 18:11:27.294642 containerd[1724]: time="2025-05-14T18:11:27.294405905Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 May 14 18:11:27.294751 containerd[1724]: time="2025-05-14T18:11:27.294644332Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" May 14 18:11:27.294849 containerd[1724]: time="2025-05-14T18:11:27.294756484Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 May 14 18:11:27.296218 containerd[1724]: time="2025-05-14T18:11:27.295723407Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 May 14 18:11:27.296218 containerd[1724]: time="2025-05-14T18:11:27.295851844Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 May 14 18:11:27.296218 containerd[1724]: time="2025-05-14T18:11:27.295881445Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 May 14 18:11:27.296218 containerd[1724]: time="2025-05-14T18:11:27.295896508Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 May 14 18:11:27.296218 containerd[1724]: time="2025-05-14T18:11:27.295909014Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 May 14 18:11:27.296218 containerd[1724]: time="2025-05-14T18:11:27.295923473Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 May 14 18:11:27.296218 containerd[1724]: time="2025-05-14T18:11:27.295934002Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 May 14 18:11:27.296218 containerd[1724]: time="2025-05-14T18:11:27.295947427Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 May 14 18:11:27.296218 containerd[1724]: time="2025-05-14T18:11:27.295975979Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 May 14 18:11:27.296218 containerd[1724]: time="2025-05-14T18:11:27.295989655Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 May 14 18:11:27.296218 containerd[1724]: time="2025-05-14T18:11:27.296003549Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 May 14 18:11:27.296218 containerd[1724]: time="2025-05-14T18:11:27.296033902Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 14 18:11:27.296218 containerd[1724]: time="2025-05-14T18:11:27.296049638Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 14 18:11:27.296218 containerd[1724]: time="2025-05-14T18:11:27.296060785Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 14 18:11:27.296482 containerd[1724]: time="2025-05-14T18:11:27.296069392Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 14 18:11:27.296482 containerd[1724]: time="2025-05-14T18:11:27.296078662Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 May 14 18:11:27.296482 containerd[1724]: time="2025-05-14T18:11:27.296090263Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 May 14 18:11:27.296482 containerd[1724]: time="2025-05-14T18:11:27.296103668Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 May 14 18:11:27.296482 containerd[1724]: time="2025-05-14T18:11:27.296119635Z" level=info msg="runtime interface created" May 14 18:11:27.296482 containerd[1724]: time="2025-05-14T18:11:27.296125345Z" level=info msg="created NRI interface" May 14 18:11:27.296482 containerd[1724]: time="2025-05-14T18:11:27.296136333Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 May 14 18:11:27.296482 containerd[1724]: time="2025-05-14T18:11:27.296148623Z" level=info msg="Connect containerd service" May 14 18:11:27.296482 containerd[1724]: time="2025-05-14T18:11:27.296181083Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 14 18:11:27.297489 
containerd[1724]: time="2025-05-14T18:11:27.297462941Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 14 18:11:28.341511 containerd[1724]: time="2025-05-14T18:11:28.341465713Z" level=info msg="Start subscribing containerd event" May 14 18:11:28.341819 containerd[1724]: time="2025-05-14T18:11:28.341726839Z" level=info msg="Start recovering state" May 14 18:11:28.341841 containerd[1724]: time="2025-05-14T18:11:28.341833072Z" level=info msg="Start event monitor" May 14 18:11:28.341857 containerd[1724]: time="2025-05-14T18:11:28.341845711Z" level=info msg="Start cni network conf syncer for default" May 14 18:11:28.341880 containerd[1724]: time="2025-05-14T18:11:28.341858280Z" level=info msg="Start streaming server" May 14 18:11:28.341880 containerd[1724]: time="2025-05-14T18:11:28.341867166Z" level=info msg="Registered namespace \"k8s.io\" with NRI" May 14 18:11:28.341880 containerd[1724]: time="2025-05-14T18:11:28.341874367Z" level=info msg="runtime interface starting up..." May 14 18:11:28.341923 containerd[1724]: time="2025-05-14T18:11:28.341880250Z" level=info msg="starting plugins..." May 14 18:11:28.341923 containerd[1724]: time="2025-05-14T18:11:28.341891620Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" May 14 18:11:28.342056 containerd[1724]: time="2025-05-14T18:11:28.341661986Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 14 18:11:28.342056 containerd[1724]: time="2025-05-14T18:11:28.341996003Z" level=info msg=serving... address=/run/containerd/containerd.sock May 14 18:11:28.342056 containerd[1724]: time="2025-05-14T18:11:28.342037974Z" level=info msg="containerd successfully booted in 1.251939s" May 14 18:11:28.342215 systemd[1]: Started containerd.service - containerd container runtime. 
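The `failed to load cni during init` error above is expected on first boot: containerd found no network config in /etc/cni/net.d, so the CRI plugin defers pod networking until one appears. As a sketch only, a minimal CNI conflist of the kind that would satisfy this check looks like the following; the network name, bridge name, and 10.244.0.0/16 subnet are illustrative placeholders, not values from this host.

```python
import json

# Hypothetical minimal CNI conflist; on a real node this JSON would be
# written to a file such as /etc/cni/net.d/10-example.conflist, at which
# point the "cni network conf syncer" seen later in this log picks it up.
conflist = {
    "cniVersion": "1.0.0",
    "name": "example-net",          # illustrative name
    "plugins": [
        {
            "type": "bridge",       # standard CNI bridge plugin
            "bridge": "cni0",
            "isGateway": True,
            "ipMasq": True,
            "ipam": {"type": "host-local", "subnet": "10.244.0.0/16"},
        },
        {"type": "portmap", "capabilities": {"portMappings": True}},
    ],
}

print(json.dumps(conflist, indent=2))
```

On Kubernetes nodes this file is normally installed by the cluster's network add-on rather than written by hand, which is why containerd logs the error at this stage without failing.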
May 14 18:11:28.344534 systemd[1]: Reached target multi-user.target - Multi-User System. May 14 18:11:28.346847 systemd[1]: Startup finished in 2.870s (kernel) + 21.014s (initrd) + 47.982s (userspace) = 1min 11.867s. May 14 18:11:28.691519 login[1796]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying May 14 18:11:28.692030 login[1795]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) May 14 18:11:28.697853 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 14 18:11:28.698717 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 14 18:11:28.705117 systemd-logind[1686]: New session 2 of user core. May 14 18:11:28.713469 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 14 18:11:28.715399 systemd[1]: Starting user@500.service - User Manager for UID 500... May 14 18:11:28.726522 (systemd)[1838]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 14 18:11:28.728123 systemd-logind[1686]: New session c1 of user core. May 14 18:11:28.850736 systemd[1838]: Queued start job for default target default.target. May 14 18:11:28.861303 systemd[1838]: Created slice app.slice - User Application Slice. May 14 18:11:28.861329 systemd[1838]: Reached target paths.target - Paths. May 14 18:11:28.861358 systemd[1838]: Reached target timers.target - Timers. May 14 18:11:28.862111 systemd[1838]: Starting dbus.socket - D-Bus User Message Bus Socket... May 14 18:11:28.868648 systemd[1838]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 14 18:11:28.868708 systemd[1838]: Reached target sockets.target - Sockets. May 14 18:11:28.868745 systemd[1838]: Reached target basic.target - Basic System. May 14 18:11:28.868770 systemd[1838]: Reached target default.target - Main User Target. May 14 18:11:28.868789 systemd[1838]: Startup finished in 136ms. 
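The `Startup finished` line above is systemd's phase breakdown; the three phases sum to the printed total up to sub-millisecond rounding, which can be checked directly:

```python
# Boot phases as reported by systemd in the log entry above.
kernel, initrd, userspace = 2.870, 21.014, 47.982
total = kernel + initrd + userspace
# systemd prints 1min 11.867s; the raw sum is 71.866s, the 1 ms gap
# being rounding of the unprinted sub-millisecond digits.
print(f"{total:.3f}s")
```

Userspace dominates here (~48 s of ~72 s), which on Azure is typical of first-boot provisioning work rather than a systemd issue.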
May 14 18:11:28.868844 systemd[1]: Started user@500.service - User Manager for UID 500. May 14 18:11:28.870021 systemd[1]: Started session-2.scope - Session 2 of User core. May 14 18:11:29.200650 waagent[1792]: 2025-05-14T18:11:29.200589Z INFO Daemon Daemon Azure Linux Agent Version: 2.12.0.4 May 14 18:11:29.202037 waagent[1792]: 2025-05-14T18:11:29.202002Z INFO Daemon Daemon OS: flatcar 4334.0.0 May 14 18:11:29.203016 waagent[1792]: 2025-05-14T18:11:29.202990Z INFO Daemon Daemon Python: 3.11.12 May 14 18:11:29.204008 waagent[1792]: 2025-05-14T18:11:29.203970Z INFO Daemon Daemon Run daemon May 14 18:11:29.204942 waagent[1792]: 2025-05-14T18:11:29.204917Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4334.0.0' May 14 18:11:29.206967 waagent[1792]: 2025-05-14T18:11:29.206517Z INFO Daemon Daemon Using waagent for provisioning May 14 18:11:29.208252 waagent[1792]: 2025-05-14T18:11:29.208230Z INFO Daemon Daemon Activate resource disk May 14 18:11:29.209547 waagent[1792]: 2025-05-14T18:11:29.209493Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb May 14 18:11:29.212324 waagent[1792]: 2025-05-14T18:11:29.212294Z INFO Daemon Daemon Found device: None May 14 18:11:29.213425 waagent[1792]: 2025-05-14T18:11:29.213363Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology May 14 18:11:29.215306 waagent[1792]: 2025-05-14T18:11:29.214517Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 May 14 18:11:29.218172 waagent[1792]: 2025-05-14T18:11:29.218139Z INFO Daemon Daemon Clean protocol and wireserver endpoint May 14 18:11:29.219584 waagent[1792]: 2025-05-14T18:11:29.219558Z INFO Daemon Daemon Running default provisioning handler May 14 18:11:29.225143 waagent[1792]: 2025-05-14T18:11:29.224932Z INFO Daemon Daemon Unable to get cloud-init 
enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. May 14 18:11:29.226697 waagent[1792]: 2025-05-14T18:11:29.225923Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' May 14 18:11:29.226697 waagent[1792]: 2025-05-14T18:11:29.226417Z INFO Daemon Daemon cloud-init is enabled: False May 14 18:11:29.226697 waagent[1792]: 2025-05-14T18:11:29.226467Z INFO Daemon Daemon Copying ovf-env.xml May 14 18:11:29.354205 waagent[1792]: 2025-05-14T18:11:29.354168Z INFO Daemon Daemon Successfully mounted dvd May 14 18:11:29.363613 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. May 14 18:11:29.365282 waagent[1792]: 2025-05-14T18:11:29.365237Z INFO Daemon Daemon Detect protocol endpoint May 14 18:11:29.371530 waagent[1792]: 2025-05-14T18:11:29.365768Z INFO Daemon Daemon Clean protocol and wireserver endpoint May 14 18:11:29.371530 waagent[1792]: 2025-05-14T18:11:29.366300Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler May 14 18:11:29.371530 waagent[1792]: 2025-05-14T18:11:29.367020Z INFO Daemon Daemon Test for route to 168.63.129.16 May 14 18:11:29.371530 waagent[1792]: 2025-05-14T18:11:29.367145Z INFO Daemon Daemon Route to 168.63.129.16 exists May 14 18:11:29.371530 waagent[1792]: 2025-05-14T18:11:29.367583Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 May 14 18:11:29.389285 waagent[1792]: 2025-05-14T18:11:29.389253Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 May 14 18:11:29.391051 waagent[1792]: 2025-05-14T18:11:29.389843Z INFO Daemon Daemon Wire protocol version:2012-11-30 May 14 18:11:29.391051 waagent[1792]: 2025-05-14T18:11:29.390041Z INFO Daemon Daemon Server preferred version:2015-04-05 May 14 18:11:29.483850 waagent[1792]: 2025-05-14T18:11:29.483790Z INFO Daemon Daemon Initializing goal state during protocol detection May 14 18:11:29.484046 waagent[1792]: 2025-05-14T18:11:29.484020Z INFO Daemon Daemon Forcing an update of the goal state. May 14 18:11:29.489231 waagent[1792]: 2025-05-14T18:11:29.489197Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] May 14 18:11:29.499652 waagent[1792]: 2025-05-14T18:11:29.499626Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.164 May 14 18:11:29.500936 waagent[1792]: 2025-05-14T18:11:29.500907Z INFO Daemon May 14 18:11:29.501627 waagent[1792]: 2025-05-14T18:11:29.501568Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 54a2873d-a316-4e49-ba2b-e9071a07fd74 eTag: 12033840969357633103 source: Fabric] May 14 18:11:29.504089 waagent[1792]: 2025-05-14T18:11:29.504061Z INFO Daemon The vmSettings originated via Fabric; will ignore them. 
May 14 18:11:29.505659 waagent[1792]: 2025-05-14T18:11:29.505638Z INFO Daemon May 14 18:11:29.506368 waagent[1792]: 2025-05-14T18:11:29.506310Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] May 14 18:11:29.512439 waagent[1792]: 2025-05-14T18:11:29.512412Z INFO Daemon Daemon Downloading artifacts profile blob May 14 18:11:29.609092 waagent[1792]: 2025-05-14T18:11:29.609051Z INFO Daemon Downloaded certificate {'thumbprint': '7EC3E2626E1E2DCBEF8DA46C72756944EFFF98FB', 'hasPrivateKey': True} May 14 18:11:29.610065 waagent[1792]: 2025-05-14T18:11:29.609734Z INFO Daemon Downloaded certificate {'thumbprint': '16EE0F0A01665052211CBF05246B30E89D158BAA', 'hasPrivateKey': False} May 14 18:11:29.612731 waagent[1792]: 2025-05-14T18:11:29.610236Z INFO Daemon Fetch goal state completed May 14 18:11:29.618176 waagent[1792]: 2025-05-14T18:11:29.618104Z INFO Daemon Daemon Starting provisioning May 14 18:11:29.619409 waagent[1792]: 2025-05-14T18:11:29.618456Z INFO Daemon Daemon Handle ovf-env.xml. May 14 18:11:29.619409 waagent[1792]: 2025-05-14T18:11:29.618671Z INFO Daemon Daemon Set hostname [ci-4334.0.0-a-ef358d086b] May 14 18:11:29.692866 login[1796]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) May 14 18:11:29.696477 systemd-logind[1686]: New session 1 of user core. May 14 18:11:29.705808 systemd[1]: Started session-1.scope - Session 1 of User core. May 14 18:11:29.788411 waagent[1792]: 2025-05-14T18:11:29.788377Z INFO Daemon Daemon Publish hostname [ci-4334.0.0-a-ef358d086b] May 14 18:11:29.789765 waagent[1792]: 2025-05-14T18:11:29.789734Z INFO Daemon Daemon Examine /proc/net/route for primary interface May 14 18:11:29.790982 waagent[1792]: 2025-05-14T18:11:29.790958Z INFO Daemon Daemon Primary interface is [eth0] May 14 18:11:29.797257 systemd-networkd[1285]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
May 14 18:11:29.797440 systemd-networkd[1285]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 14 18:11:29.797497 systemd-networkd[1285]: eth0: DHCP lease lost May 14 18:11:29.798029 waagent[1792]: 2025-05-14T18:11:29.797987Z INFO Daemon Daemon Create user account if not exists May 14 18:11:29.799440 waagent[1792]: 2025-05-14T18:11:29.799409Z INFO Daemon Daemon User core already exists, skip useradd May 14 18:11:29.800224 waagent[1792]: 2025-05-14T18:11:29.799838Z INFO Daemon Daemon Configure sudoer May 14 18:11:29.827703 systemd-networkd[1285]: eth0: DHCPv4 address 10.200.8.47/24, gateway 10.200.8.1 acquired from 168.63.129.16 May 14 18:11:30.483667 waagent[1792]: 2025-05-14T18:11:30.483578Z INFO Daemon Daemon Configure sshd May 14 18:11:31.482287 waagent[1792]: 2025-05-14T18:11:31.482187Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. May 14 18:11:31.487165 waagent[1792]: 2025-05-14T18:11:31.484888Z INFO Daemon Daemon Deploy ssh public key. May 14 18:11:32.425662 waagent[1792]: 2025-05-14T18:11:32.425565Z INFO Daemon Daemon Provisioning complete May 14 18:11:32.435268 waagent[1792]: 2025-05-14T18:11:32.435232Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping May 14 18:11:32.435507 waagent[1792]: 2025-05-14T18:11:32.435483Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. 
May 14 18:11:32.436470 waagent[1792]: 2025-05-14T18:11:32.435581Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.12.0.4 is the most current agent May 14 18:11:32.523410 waagent[1893]: 2025-05-14T18:11:32.523350Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.12.0.4) May 14 18:11:32.523639 waagent[1893]: 2025-05-14T18:11:32.523430Z INFO ExtHandler ExtHandler OS: flatcar 4334.0.0 May 14 18:11:32.523639 waagent[1893]: 2025-05-14T18:11:32.523464Z INFO ExtHandler ExtHandler Python: 3.11.12 May 14 18:11:32.523639 waagent[1893]: 2025-05-14T18:11:32.523496Z INFO ExtHandler ExtHandler CPU Arch: x86_64 May 14 18:11:32.582504 waagent[1893]: 2025-05-14T18:11:32.582458Z INFO ExtHandler ExtHandler Distro: flatcar-4334.0.0; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.12; Arch: x86_64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.22.0; May 14 18:11:32.582630 waagent[1893]: 2025-05-14T18:11:32.582607Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file May 14 18:11:32.582702 waagent[1893]: 2025-05-14T18:11:32.582656Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 May 14 18:11:32.589167 waagent[1893]: 2025-05-14T18:11:32.589123Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] May 14 18:11:32.594132 waagent[1893]: 2025-05-14T18:11:32.594106Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.164 May 14 18:11:32.594428 waagent[1893]: 2025-05-14T18:11:32.594404Z INFO ExtHandler May 14 18:11:32.594465 waagent[1893]: 2025-05-14T18:11:32.594451Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 75c8b487-8a5e-4d96-acc5-e28e97c88abc eTag: 12033840969357633103 source: Fabric] May 14 18:11:32.594645 waagent[1893]: 2025-05-14T18:11:32.594627Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
May 14 18:11:32.594937 waagent[1893]: 2025-05-14T18:11:32.594918Z INFO ExtHandler May 14 18:11:32.594973 waagent[1893]: 2025-05-14T18:11:32.594954Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] May 14 18:11:32.602507 waagent[1893]: 2025-05-14T18:11:32.602479Z INFO ExtHandler ExtHandler Downloading artifacts profile blob May 14 18:11:32.980284 waagent[1893]: 2025-05-14T18:11:32.979893Z INFO ExtHandler Downloaded certificate {'thumbprint': '7EC3E2626E1E2DCBEF8DA46C72756944EFFF98FB', 'hasPrivateKey': True} May 14 18:11:32.980454 waagent[1893]: 2025-05-14T18:11:32.980400Z INFO ExtHandler Downloaded certificate {'thumbprint': '16EE0F0A01665052211CBF05246B30E89D158BAA', 'hasPrivateKey': False} May 14 18:11:32.980854 waagent[1893]: 2025-05-14T18:11:32.980824Z INFO ExtHandler Fetch goal state completed May 14 18:11:32.995500 waagent[1893]: 2025-05-14T18:11:32.995454Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.3.3 11 Feb 2025 (Library: OpenSSL 3.3.3 11 Feb 2025) May 14 18:11:32.999444 waagent[1893]: 2025-05-14T18:11:32.999402Z INFO ExtHandler ExtHandler WALinuxAgent-2.12.0.4 running as process 1893 May 14 18:11:32.999545 waagent[1893]: 2025-05-14T18:11:32.999509Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** May 14 18:11:32.999798 waagent[1893]: 2025-05-14T18:11:32.999777Z INFO ExtHandler ExtHandler ******** AutoUpdate.UpdateToLatestVersion is set to False, not processing the operation ******** May 14 18:11:33.000755 waagent[1893]: 2025-05-14T18:11:33.000667Z INFO ExtHandler ExtHandler [CGI] Cgroup monitoring is not supported on ['flatcar', '4334.0.0', '', 'Flatcar Container Linux by Kinvolk'] May 14 18:11:33.000994 waagent[1893]: 2025-05-14T18:11:33.000969Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case distro: ['flatcar', '4334.0.0', '', 'Flatcar Container Linux by Kinvolk'] went from supported to unsupported May 14 18:11:33.001098 waagent[1893]: 
2025-05-14T18:11:33.001081Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False May 14 18:11:33.001451 waagent[1893]: 2025-05-14T18:11:33.001431Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules May 14 18:11:33.292004 waagent[1893]: 2025-05-14T18:11:33.291979Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service May 14 18:11:33.292140 waagent[1893]: 2025-05-14T18:11:33.292120Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup May 14 18:11:33.296944 waagent[1893]: 2025-05-14T18:11:33.296879Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now May 14 18:11:33.301529 systemd[1]: Reload requested from client PID 1910 ('systemctl') (unit waagent.service)... May 14 18:11:33.301540 systemd[1]: Reloading... May 14 18:11:33.363745 zram_generator::config[1943]: No configuration found. May 14 18:11:33.441838 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 14 18:11:33.518599 systemd[1]: Reloading finished in 216 ms. May 14 18:11:33.533627 waagent[1893]: 2025-05-14T18:11:33.531578Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service May 14 18:11:33.533627 waagent[1893]: 2025-05-14T18:11:33.531660Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully May 14 18:11:33.697016 waagent[1893]: 2025-05-14T18:11:33.696957Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. May 14 18:11:33.697216 waagent[1893]: 2025-05-14T18:11:33.697194Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. 
configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True] May 14 18:11:33.697792 waagent[1893]: 2025-05-14T18:11:33.697746Z INFO ExtHandler ExtHandler Starting env monitor service. May 14 18:11:33.698160 waagent[1893]: 2025-05-14T18:11:33.698139Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file May 14 18:11:33.698214 waagent[1893]: 2025-05-14T18:11:33.698187Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. May 14 18:11:33.698381 waagent[1893]: 2025-05-14T18:11:33.698363Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 May 14 18:11:33.698461 waagent[1893]: 2025-05-14T18:11:33.698423Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread May 14 18:11:33.698784 waagent[1893]: 2025-05-14T18:11:33.698751Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file May 14 18:11:33.698830 waagent[1893]: 2025-05-14T18:11:33.698814Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 May 14 18:11:33.698919 waagent[1893]: 2025-05-14T18:11:33.698903Z INFO ExtHandler ExtHandler Start Extension Telemetry service. May 14 18:11:33.699233 waagent[1893]: 2025-05-14T18:11:33.699216Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. May 14 18:11:33.699287 waagent[1893]: 2025-05-14T18:11:33.699071Z INFO EnvHandler ExtHandler Configure routes May 14 18:11:33.699353 waagent[1893]: 2025-05-14T18:11:33.699332Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True May 14 18:11:33.699435 waagent[1893]: 2025-05-14T18:11:33.699385Z INFO EnvHandler ExtHandler Gateway:None May 14 18:11:33.699489 waagent[1893]: 2025-05-14T18:11:33.699459Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
May 14 18:11:33.699921 waagent[1893]: 2025-05-14T18:11:33.699904Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread May 14 18:11:33.700102 waagent[1893]: 2025-05-14T18:11:33.700074Z INFO EnvHandler ExtHandler Routes:None May 14 18:11:33.700389 waagent[1893]: 2025-05-14T18:11:33.700368Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: May 14 18:11:33.700389 waagent[1893]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT May 14 18:11:33.700389 waagent[1893]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0 May 14 18:11:33.700389 waagent[1893]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 May 14 18:11:33.700389 waagent[1893]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 May 14 18:11:33.700389 waagent[1893]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 May 14 18:11:33.700389 waagent[1893]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 May 14 18:11:33.705006 waagent[1893]: 2025-05-14T18:11:33.704853Z INFO ExtHandler ExtHandler May 14 18:11:33.705006 waagent[1893]: 2025-05-14T18:11:33.704903Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 542dde70-e169-4ca0-b325-658b8b10d1be correlation b019a710-509f-4494-867c-03d97c2cb308 created: 2025-05-14T18:09:47.877686Z] May 14 18:11:33.705190 waagent[1893]: 2025-05-14T18:11:33.705164Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. 
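The routing table the MonitorHandler dumps above comes straight from /proc/net/route, where addresses are little-endian hex. A small decoder makes the entries legible; the values below are taken from the logged table itself and line up with the DHCP lease (10.200.8.47/24, gateway 10.200.8.1) and wire server (168.63.129.16) reported earlier in this log.

```python
import socket
import struct

def decode_route_addr(hex_addr: str) -> str:
    """Decode a little-endian hex address from /proc/net/route
    into dotted-quad notation."""
    return socket.inet_ntoa(struct.pack("<I", int(hex_addr, 16)))

# Entries from the routing table logged above:
print(decode_route_addr("0108C80A"))  # 10.200.8.1    - default gateway
print(decode_route_addr("0008C80A"))  # 10.200.8.0    - local /24 network
print(decode_route_addr("10813FA8"))  # 168.63.129.16 - Azure wire server
print(decode_route_addr("FEA9FEA9"))  # 169.254.169.254 - link-local / IMDS
```

The host routes to 168.63.129.16 and 169.254.169.254 are what the agent's earlier "Test for route to 168.63.129.16 ... Route ... exists" check depends on.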
May 14 18:11:33.707890 waagent[1893]: 2025-05-14T18:11:33.707854Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 2 ms]
May 14 18:11:33.713187 waagent[1893]: 2025-05-14T18:11:33.712900Z INFO MonitorHandler ExtHandler Network interfaces:
May 14 18:11:33.713187 waagent[1893]: Executing ['ip', '-a', '-o', 'link']:
May 14 18:11:33.713187 waagent[1893]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
May 14 18:11:33.713187 waagent[1893]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 7c:1e:52:77:8e:03 brd ff:ff:ff:ff:ff:ff\ alias Network Device
May 14 18:11:33.713187 waagent[1893]: Executing ['ip', '-4', '-a', '-o', 'address']:
May 14 18:11:33.713187 waagent[1893]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever
May 14 18:11:33.713187 waagent[1893]: 2: eth0 inet 10.200.8.47/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever
May 14 18:11:33.713187 waagent[1893]: Executing ['ip', '-6', '-a', '-o', 'address']:
May 14 18:11:33.713187 waagent[1893]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever
May 14 18:11:33.713187 waagent[1893]: 2: eth0 inet6 fe80::7e1e:52ff:fe77:8e03/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever
May 14 18:11:33.734803 waagent[1893]: 2025-05-14T18:11:33.734766Z WARNING ExtHandler ExtHandler Failed to get firewall packets: 'iptables -w -t security -L OUTPUT --zero OUTPUT -nxv' failed: 2 (iptables v1.8.11 (nf_tables): Illegal option `--numeric' with this command
May 14 18:11:33.734803 waagent[1893]: Try `iptables -h' or 'iptables --help' for more information.)
May 14 18:11:33.735043 waagent[1893]: 2025-05-14T18:11:33.735022Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.12.0.4 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 21B98582-8C8C-4B13-A337-792FE84FF7F3;DroppedPackets: -1;UpdateGSErrors: 0;AutoUpdate: 0;UpdateMode: SelfUpdate;]
May 14 18:11:33.751961 waagent[1893]: 2025-05-14T18:11:33.751924Z INFO EnvHandler ExtHandler Created firewall rules for the Azure Fabric:
May 14 18:11:33.751961 waagent[1893]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
May 14 18:11:33.751961 waagent[1893]: pkts bytes target prot opt in out source destination
May 14 18:11:33.751961 waagent[1893]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
May 14 18:11:33.751961 waagent[1893]: pkts bytes target prot opt in out source destination
May 14 18:11:33.751961 waagent[1893]: Chain OUTPUT (policy ACCEPT 1 packets, 52 bytes)
May 14 18:11:33.751961 waagent[1893]: pkts bytes target prot opt in out source destination
May 14 18:11:33.751961 waagent[1893]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
May 14 18:11:33.751961 waagent[1893]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
May 14 18:11:33.751961 waagent[1893]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
May 14 18:11:33.754427 waagent[1893]: 2025-05-14T18:11:33.754387Z INFO EnvHandler ExtHandler Current Firewall rules:
May 14 18:11:33.754427 waagent[1893]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
May 14 18:11:33.754427 waagent[1893]: pkts bytes target prot opt in out source destination
May 14 18:11:33.754427 waagent[1893]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
May 14 18:11:33.754427 waagent[1893]: pkts bytes target prot opt in out source destination
May 14 18:11:33.754427 waagent[1893]: Chain OUTPUT (policy ACCEPT 2 packets, 104 bytes)
May 14 18:11:33.754427 waagent[1893]: pkts bytes target prot opt in out source destination
May 14 18:11:33.754427 waagent[1893]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
May 14 18:11:33.754427 waagent[1893]: 4 595 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
May 14 18:11:33.754427 waagent[1893]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
May 14 18:11:34.501984 kernel: hv_balloon: Max. dynamic memory size: 8192 MB
May 14 18:11:36.517139 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
May 14 18:11:36.518825 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 14 18:11:37.844478 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
May 14 18:11:37.845519 systemd[1]: Started sshd@0-10.200.8.47:22-10.200.16.10:57496.service - OpenSSH per-connection server daemon (10.200.16.10:57496).
May 14 18:11:39.099532 sshd[2044]: Accepted publickey for core from 10.200.16.10 port 57496 ssh2: RSA SHA256:Hd5PQzbog4WemQvfyzJrpsOqYyUj2ZNV4jY0cJYqkm4
May 14 18:11:39.100613 sshd-session[2044]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 18:11:39.104576 systemd-logind[1686]: New session 3 of user core.
May 14 18:11:39.110809 systemd[1]: Started session-3.scope - Session 3 of User core.
May 14 18:11:39.652489 systemd[1]: Started sshd@1-10.200.8.47:22-10.200.16.10:44492.service - OpenSSH per-connection server daemon (10.200.16.10:44492).
May 14 18:11:40.286197 sshd[2049]: Accepted publickey for core from 10.200.16.10 port 44492 ssh2: RSA SHA256:Hd5PQzbog4WemQvfyzJrpsOqYyUj2ZNV4jY0cJYqkm4
May 14 18:11:40.287501 sshd-session[2049]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 18:11:40.291639 systemd-logind[1686]: New session 4 of user core.
May 14 18:11:40.296792 systemd[1]: Started session-4.scope - Session 4 of User core.
May 14 18:11:40.740849 sshd[2051]: Connection closed by 10.200.16.10 port 44492
May 14 18:11:40.741641 sshd-session[2049]: pam_unix(sshd:session): session closed for user core
May 14 18:11:40.744372 systemd[1]: sshd@1-10.200.8.47:22-10.200.16.10:44492.service: Deactivated successfully.
May 14 18:11:40.745899 systemd[1]: session-4.scope: Deactivated successfully.
May 14 18:11:40.747360 systemd-logind[1686]: Session 4 logged out. Waiting for processes to exit.
May 14 18:11:40.748001 systemd-logind[1686]: Removed session 4.
May 14 18:11:40.856230 systemd[1]: Started sshd@2-10.200.8.47:22-10.200.16.10:44494.service - OpenSSH per-connection server daemon (10.200.16.10:44494).
May 14 18:11:41.494160 sshd[2057]: Accepted publickey for core from 10.200.16.10 port 44494 ssh2: RSA SHA256:Hd5PQzbog4WemQvfyzJrpsOqYyUj2ZNV4jY0cJYqkm4
May 14 18:11:41.495465 sshd-session[2057]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 18:11:41.499552 systemd-logind[1686]: New session 5 of user core.
May 14 18:11:41.505804 systemd[1]: Started session-5.scope - Session 5 of User core.
May 14 18:11:41.938927 sshd[2059]: Connection closed by 10.200.16.10 port 44494
May 14 18:11:42.051143 systemd[1]: Started sshd@3-10.200.8.47:22-10.200.16.10:44502.service - OpenSSH per-connection server daemon (10.200.16.10:44502).
May 14 18:11:42.389317 sshd-session[2057]: pam_unix(sshd:session): session closed for user core
May 14 18:11:42.392535 systemd-logind[1686]: Session 5 logged out. Waiting for processes to exit.
May 14 18:11:42.393546 systemd[1]: sshd@2-10.200.8.47:22-10.200.16.10:44494.service: Deactivated successfully.
May 14 18:11:42.396380 systemd[1]: session-5.scope: Deactivated successfully.
May 14 18:11:42.398676 systemd-logind[1686]: Removed session 5.
May 14 18:11:42.686384 sshd[2062]: Accepted publickey for core from 10.200.16.10 port 44502 ssh2: RSA SHA256:Hd5PQzbog4WemQvfyzJrpsOqYyUj2ZNV4jY0cJYqkm4
May 14 18:11:42.687970 sshd-session[2062]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 18:11:42.692110 systemd-logind[1686]: New session 6 of user core.
May 14 18:11:42.693821 systemd[1]: Started session-6.scope - Session 6 of User core.
May 14 18:11:43.138850 sshd[2067]: Connection closed by 10.200.16.10 port 44502
May 14 18:11:43.139387 sshd-session[2062]: pam_unix(sshd:session): session closed for user core
May 14 18:11:43.142093 systemd[1]: sshd@3-10.200.8.47:22-10.200.16.10:44502.service: Deactivated successfully.
May 14 18:11:43.143598 systemd[1]: session-6.scope: Deactivated successfully.
May 14 18:11:43.144648 systemd-logind[1686]: Session 6 logged out. Waiting for processes to exit.
May 14 18:11:43.145571 systemd-logind[1686]: Removed session 6.
May 14 18:11:43.250785 systemd[1]: Started sshd@4-10.200.8.47:22-10.200.16.10:44508.service - OpenSSH per-connection server daemon (10.200.16.10:44508).
May 14 18:11:43.361475 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 14 18:11:43.366903 (kubelet)[2080]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 14 18:11:43.399953 kubelet[2080]: E0514 18:11:43.399896 2080 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 14 18:11:43.402607 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 14 18:11:43.402738 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 14 18:11:43.403009 systemd[1]: kubelet.service: Consumed 122ms CPU time, 103.8M memory peak.
May 14 18:11:43.883597 sshd[2073]: Accepted publickey for core from 10.200.16.10 port 44508 ssh2: RSA SHA256:Hd5PQzbog4WemQvfyzJrpsOqYyUj2ZNV4jY0cJYqkm4
May 14 18:11:43.884955 sshd-session[2073]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 18:11:43.888747 systemd-logind[1686]: New session 7 of user core.
May 14 18:11:43.894815 systemd[1]: Started session-7.scope - Session 7 of User core.
May 14 18:11:44.356015 sudo[2088]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
May 14 18:11:44.356208 sudo[2088]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 14 18:11:44.392475 sudo[2088]: pam_unix(sudo:session): session closed for user root
May 14 18:11:44.494016 sshd[2087]: Connection closed by 10.200.16.10 port 44508
May 14 18:11:44.494744 sshd-session[2073]: pam_unix(sshd:session): session closed for user core
May 14 18:11:44.497920 systemd[1]: sshd@4-10.200.8.47:22-10.200.16.10:44508.service: Deactivated successfully.
May 14 18:11:44.499316 systemd[1]: session-7.scope: Deactivated successfully.
May 14 18:11:44.500407 systemd-logind[1686]: Session 7 logged out. Waiting for processes to exit.
May 14 18:11:44.501339 systemd-logind[1686]: Removed session 7.
May 14 18:11:44.604550 systemd[1]: Started sshd@5-10.200.8.47:22-10.200.16.10:44522.service - OpenSSH per-connection server daemon (10.200.16.10:44522).
May 14 18:11:45.237961 sshd[2094]: Accepted publickey for core from 10.200.16.10 port 44522 ssh2: RSA SHA256:Hd5PQzbog4WemQvfyzJrpsOqYyUj2ZNV4jY0cJYqkm4
May 14 18:11:45.239385 sshd-session[2094]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 18:11:45.243467 systemd-logind[1686]: New session 8 of user core.
May 14 18:11:45.249807 systemd[1]: Started session-8.scope - Session 8 of User core.
May 14 18:11:45.583779 sudo[2098]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
May 14 18:11:45.583976 sudo[2098]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 14 18:11:45.632642 sudo[2098]: pam_unix(sudo:session): session closed for user root
May 14 18:11:45.636024 sudo[2097]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
May 14 18:11:45.636202 sudo[2097]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 14 18:11:45.642736 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 14 18:11:45.669028 augenrules[2120]: No rules
May 14 18:11:45.669427 systemd[1]: audit-rules.service: Deactivated successfully.
May 14 18:11:45.669571 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 14 18:11:45.670492 sudo[2097]: pam_unix(sudo:session): session closed for user root
May 14 18:11:45.772817 sshd[2096]: Connection closed by 10.200.16.10 port 44522
May 14 18:11:45.773229 sshd-session[2094]: pam_unix(sshd:session): session closed for user core
May 14 18:11:45.775860 systemd[1]: sshd@5-10.200.8.47:22-10.200.16.10:44522.service: Deactivated successfully.
May 14 18:11:45.777539 systemd-logind[1686]: Session 8 logged out. Waiting for processes to exit.
May 14 18:11:45.777635 systemd[1]: session-8.scope: Deactivated successfully.
May 14 18:11:45.778859 systemd-logind[1686]: Removed session 8.
May 14 18:11:45.887473 systemd[1]: Started sshd@6-10.200.8.47:22-10.200.16.10:44536.service - OpenSSH per-connection server daemon (10.200.16.10:44536).
May 14 18:11:46.524221 sshd[2129]: Accepted publickey for core from 10.200.16.10 port 44536 ssh2: RSA SHA256:Hd5PQzbog4WemQvfyzJrpsOqYyUj2ZNV4jY0cJYqkm4
May 14 18:11:46.525503 sshd-session[2129]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 18:11:46.529747 systemd-logind[1686]: New session 9 of user core.
May 14 18:11:46.535816 systemd[1]: Started session-9.scope - Session 9 of User core.
May 14 18:11:46.869444 sudo[2132]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
May 14 18:11:46.869638 sudo[2132]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 14 18:11:48.138082 chronyd[1729]: Selected source PHC0
May 14 18:11:49.792473 systemd[1]: Starting docker.service - Docker Application Container Engine...
May 14 18:11:49.800961 (dockerd)[2151]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
May 14 18:11:51.327967 dockerd[2151]: time="2025-05-14T18:11:51.327921950Z" level=info msg="Starting up"
May 14 18:11:51.328599 dockerd[2151]: time="2025-05-14T18:11:51.328577921Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
May 14 18:11:51.431737 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1165117201-merged.mount: Deactivated successfully.
May 14 18:11:52.439865 dockerd[2151]: time="2025-05-14T18:11:52.439601483Z" level=info msg="Loading containers: start."
May 14 18:11:52.456706 kernel: Initializing XFRM netlink socket
May 14 18:11:53.009629 systemd-networkd[1285]: docker0: Link UP
May 14 18:11:53.392452 dockerd[2151]: time="2025-05-14T18:11:53.392392082Z" level=info msg="Loading containers: done."
May 14 18:11:53.517048 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
May 14 18:11:53.518772 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 14 18:11:54.830423 dockerd[2151]: time="2025-05-14T18:11:54.830014564Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
May 14 18:11:54.830423 dockerd[2151]: time="2025-05-14T18:11:54.830195658Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1
May 14 18:11:54.830423 dockerd[2151]: time="2025-05-14T18:11:54.830326607Z" level=info msg="Initializing buildkit"
May 14 18:11:58.754588 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 14 18:11:58.762911 (kubelet)[2325]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 14 18:11:58.795881 kubelet[2325]: E0514 18:11:58.795846 2325 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 14 18:11:58.797324 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 14 18:11:58.797446 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 14 18:11:58.797734 systemd[1]: kubelet.service: Consumed 115ms CPU time, 104.2M memory peak.
May 14 18:11:59.526983 dockerd[2151]: time="2025-05-14T18:11:59.526922205Z" level=info msg="Completed buildkit initialization"
May 14 18:11:59.532969 dockerd[2151]: time="2025-05-14T18:11:59.532927596Z" level=info msg="Daemon has completed initialization"
May 14 18:11:59.533525 dockerd[2151]: time="2025-05-14T18:11:59.532985087Z" level=info msg="API listen on /run/docker.sock"
May 14 18:11:59.533139 systemd[1]: Started docker.service - Docker Application Container Engine.
May 14 18:12:00.573577 containerd[1724]: time="2025-05-14T18:12:00.573531113Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\""
May 14 18:12:04.703648 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3033298800.mount: Deactivated successfully.
May 14 18:12:09.017132 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
May 14 18:12:09.018774 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 14 18:12:10.497926 update_engine[1688]: I20250514 18:12:10.497836 1688 update_attempter.cc:509] Updating boot flags...
May 14 18:12:12.759626 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 14 18:12:12.764889 (kubelet)[2444]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 14 18:12:12.793941 kubelet[2444]: E0514 18:12:12.793911 2444 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 14 18:12:12.795322 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 14 18:12:12.795441 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 14 18:12:12.795742 systemd[1]: kubelet.service: Consumed 117ms CPU time, 100.9M memory peak.
May 14 18:12:15.235879 containerd[1724]: time="2025-05-14T18:12:15.235818517Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 18:12:15.282064 containerd[1724]: time="2025-05-14T18:12:15.282018287Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.4: active requests=0, bytes read=28682887"
May 14 18:12:15.284823 containerd[1724]: time="2025-05-14T18:12:15.284761591Z" level=info msg="ImageCreate event name:\"sha256:1c20c8797e48698afa3380793df2f1fb260e3209df72d8e864e1bc73af8336e5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 18:12:15.329138 containerd[1724]: time="2025-05-14T18:12:15.329083252Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 18:12:15.330265 containerd[1724]: time="2025-05-14T18:12:15.330183385Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.4\" with image id \"sha256:1c20c8797e48698afa3380793df2f1fb260e3209df72d8e864e1bc73af8336e5\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f\", size \"28679679\" in 14.75660218s"
May 14 18:12:15.330265 containerd[1724]: time="2025-05-14T18:12:15.330222638Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\" returns image reference \"sha256:1c20c8797e48698afa3380793df2f1fb260e3209df72d8e864e1bc73af8336e5\""
May 14 18:12:15.332461 containerd[1724]: time="2025-05-14T18:12:15.332269285Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\""
May 14 18:12:18.689017 containerd[1724]: time="2025-05-14T18:12:18.688956015Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 18:12:18.691264 containerd[1724]: time="2025-05-14T18:12:18.691217949Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.4: active requests=0, bytes read=24779597"
May 14 18:12:18.736434 containerd[1724]: time="2025-05-14T18:12:18.736378171Z" level=info msg="ImageCreate event name:\"sha256:4db5364cd5509e0fc8e9f821fbc4b31ed79d4c9ae21809d22030ad67d530a61a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 18:12:18.782010 containerd[1724]: time="2025-05-14T18:12:18.781937481Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 18:12:18.783157 containerd[1724]: time="2025-05-14T18:12:18.783020611Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.4\" with image id \"sha256:4db5364cd5509e0fc8e9f821fbc4b31ed79d4c9ae21809d22030ad67d530a61a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb\", size \"26267962\" in 3.450719362s"
May 14 18:12:18.783157 containerd[1724]: time="2025-05-14T18:12:18.783057586Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\" returns image reference \"sha256:4db5364cd5509e0fc8e9f821fbc4b31ed79d4c9ae21809d22030ad67d530a61a\""
May 14 18:12:18.783861 containerd[1724]: time="2025-05-14T18:12:18.783842150Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\""
May 14 18:12:21.832207 containerd[1724]: time="2025-05-14T18:12:21.832157228Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 18:12:21.878652 containerd[1724]: time="2025-05-14T18:12:21.878604082Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.4: active requests=0, bytes read=19169946"
May 14 18:12:21.881632 containerd[1724]: time="2025-05-14T18:12:21.881574789Z" level=info msg="ImageCreate event name:\"sha256:70a252485ed1f2e8332b6f0a5f8f57443bfbc3c480228f8dcd82ad5ab5cc4000\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 18:12:21.927974 containerd[1724]: time="2025-05-14T18:12:21.927902181Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 18:12:21.929327 containerd[1724]: time="2025-05-14T18:12:21.928964305Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.4\" with image id \"sha256:70a252485ed1f2e8332b6f0a5f8f57443bfbc3c480228f8dcd82ad5ab5cc4000\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2\", size \"20658329\" in 3.145091322s"
May 14 18:12:21.929327 containerd[1724]: time="2025-05-14T18:12:21.928999952Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\" returns image reference \"sha256:70a252485ed1f2e8332b6f0a5f8f57443bfbc3c480228f8dcd82ad5ab5cc4000\""
May 14 18:12:21.929869 containerd[1724]: time="2025-05-14T18:12:21.929842426Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\""
May 14 18:12:23.017190 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
May 14 18:12:23.019003 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 14 18:12:27.455658 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 14 18:12:27.460920 (kubelet)[2495]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 14 18:12:27.490854 kubelet[2495]: E0514 18:12:27.490812 2495 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 14 18:12:27.491749 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 14 18:12:27.491869 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 14 18:12:27.492156 systemd[1]: kubelet.service: Consumed 120ms CPU time, 103.7M memory peak.
May 14 18:12:29.137985 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4265765125.mount: Deactivated successfully.
May 14 18:12:29.872161 containerd[1724]: time="2025-05-14T18:12:29.872115753Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 18:12:29.874263 containerd[1724]: time="2025-05-14T18:12:29.874235771Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.4: active requests=0, bytes read=30917864"
May 14 18:12:29.935742 containerd[1724]: time="2025-05-14T18:12:29.935709295Z" level=info msg="ImageCreate event name:\"sha256:608f0c8bf7f9651ca79f170235ea5eefb978a0c1da132e7477a88ad37d171ad3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 18:12:29.983109 containerd[1724]: time="2025-05-14T18:12:29.983062181Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 18:12:29.983777 containerd[1724]: time="2025-05-14T18:12:29.983738334Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.4\" with image id \"sha256:608f0c8bf7f9651ca79f170235ea5eefb978a0c1da132e7477a88ad37d171ad3\", repo tag \"registry.k8s.io/kube-proxy:v1.32.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\", size \"30916875\" in 8.053860048s"
May 14 18:12:29.983827 containerd[1724]: time="2025-05-14T18:12:29.983778185Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\" returns image reference \"sha256:608f0c8bf7f9651ca79f170235ea5eefb978a0c1da132e7477a88ad37d171ad3\""
May 14 18:12:29.984767 containerd[1724]: time="2025-05-14T18:12:29.984703856Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
May 14 18:12:31.688913 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3228246829.mount: Deactivated successfully.
May 14 18:12:37.517218 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
May 14 18:12:37.518955 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 14 18:12:40.426533 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 14 18:12:40.431935 (kubelet)[2562]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 14 18:12:40.462043 kubelet[2562]: E0514 18:12:40.462003 2562 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 14 18:12:40.463252 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 14 18:12:40.463372 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 14 18:12:40.463716 systemd[1]: kubelet.service: Consumed 115ms CPU time, 104M memory peak.
May 14 18:12:42.032326 containerd[1724]: time="2025-05-14T18:12:42.031471001Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 18:12:42.079260 containerd[1724]: time="2025-05-14T18:12:42.079218371Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565249"
May 14 18:12:42.125697 containerd[1724]: time="2025-05-14T18:12:42.125622795Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 18:12:42.131676 containerd[1724]: time="2025-05-14T18:12:42.131639961Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 18:12:42.132557 containerd[1724]: time="2025-05-14T18:12:42.132418103Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 12.147663464s"
May 14 18:12:42.132557 containerd[1724]: time="2025-05-14T18:12:42.132447640Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
May 14 18:12:42.133102 containerd[1724]: time="2025-05-14T18:12:42.133059106Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
May 14 18:12:44.592909 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3890169512.mount: Deactivated successfully.
May 14 18:12:44.774481 containerd[1724]: time="2025-05-14T18:12:44.774416697Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 14 18:12:44.838130 containerd[1724]: time="2025-05-14T18:12:44.838076483Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146"
May 14 18:12:44.884112 containerd[1724]: time="2025-05-14T18:12:44.884003735Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 14 18:12:44.888077 containerd[1724]: time="2025-05-14T18:12:44.888051271Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 14 18:12:44.888551 containerd[1724]: time="2025-05-14T18:12:44.888527968Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 2.755430956s"
May 14 18:12:44.888587 containerd[1724]: time="2025-05-14T18:12:44.888559304Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
May 14 18:12:44.889072 containerd[1724]: time="2025-05-14T18:12:44.889054614Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
May 14 18:12:46.734524 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1909067118.mount: Deactivated successfully.
May 14 18:12:50.517215 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6.
May 14 18:12:50.519246 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 14 18:12:55.107608 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 14 18:12:55.110389 (kubelet)[2601]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 14 18:12:55.141106 kubelet[2601]: E0514 18:12:55.141080 2601 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 14 18:12:55.142377 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 14 18:12:55.142498 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 14 18:12:55.142806 systemd[1]: kubelet.service: Consumed 129ms CPU time, 103.7M memory peak.
May 14 18:13:00.585230 containerd[1724]: time="2025-05-14T18:13:00.585178688Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:13:00.587850 containerd[1724]: time="2025-05-14T18:13:00.587809092Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551368" May 14 18:13:00.633414 containerd[1724]: time="2025-05-14T18:13:00.633354903Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:13:00.681177 containerd[1724]: time="2025-05-14T18:13:00.681103863Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:13:00.682569 containerd[1724]: time="2025-05-14T18:13:00.682402259Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 15.793318455s" May 14 18:13:00.682569 containerd[1724]: time="2025-05-14T18:13:00.682438495Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" May 14 18:13:02.695291 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 14 18:13:02.695674 systemd[1]: kubelet.service: Consumed 129ms CPU time, 103.7M memory peak. May 14 18:13:02.697444 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 18:13:02.718230 systemd[1]: Reload requested from client PID 2677 ('systemctl') (unit session-9.scope)... 
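The pull records above report both the image size and the wall-clock pull time (coredns: 18562039 bytes in 12.147663464s; pause: 320368 bytes in 2.755430956s; etcd: 57680541 bytes in 15.793318455s), which allows a quick sanity check of effective registry throughput. The numbers below are copied from the log; the helper itself is just back-of-the-envelope arithmetic.

```python
# (size in bytes, pull duration in seconds) as reported by containerd above.
PULLS = {
    "registry.k8s.io/coredns/coredns:v1.11.3": (18562039, 12.147663464),
    "registry.k8s.io/pause:3.10":              (320368,   2.755430956),
    "registry.k8s.io/etcd:3.5.16-0":           (57680541, 15.793318455),
}

def throughput_mib_s(size_bytes: int, seconds: float) -> float:
    """Average pull throughput in MiB/s."""
    return size_bytes / seconds / (1024 * 1024)

for image, (size, secs) in PULLS.items():
    print(f"{image}: {throughput_mib_s(size, secs):.2f} MiB/s")
```

All three pulls land in the single-digit MiB/s range, so the 15s etcd pull is consistent with its ~57 MB size rather than a registry stall.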
May 14 18:13:02.718243 systemd[1]: Reloading... May 14 18:13:02.792704 zram_generator::config[2725]: No configuration found. May 14 18:13:02.905235 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 14 18:13:02.985028 systemd[1]: Reloading finished in 266 ms. May 14 18:13:03.924495 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM May 14 18:13:03.924618 systemd[1]: kubelet.service: Failed with result 'signal'. May 14 18:13:03.924968 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 14 18:13:03.926984 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 18:13:09.557740 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 18:13:09.563166 (kubelet)[2789]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 14 18:13:09.597228 kubelet[2789]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 14 18:13:09.597228 kubelet[2789]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 14 18:13:09.597228 kubelet[2789]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
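The kubelet warns above that `--container-runtime-endpoint`, `--pod-infra-container-image`, and `--volume-plugin-dir` are deprecated in favour of the `--config` file. A minimal KubeletConfiguration carrying the first and third might look like the sketch below (the sandbox image is deliberately omitted, since the warning says the image garbage collector now gets it from the CRI runtime). The containerd socket path is an assumption, not read from this host; the Flexvolume directory matches the one the kubelet probes later in this log.

```python
from textwrap import dedent

# Field names per the KubeletConfiguration v1beta1 API; values are illustrative.
KUBELET_CONFIG = dedent("""\
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
    volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/
    """)

print(KUBELET_CONFIG)
```

With these fields in `/var/lib/kubelet/config.yaml`, the corresponding flags can be dropped from `KUBELET_EXTRA_ARGS`, which would also quiet the "referenced but unset environment variable" notice above.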
May 14 18:13:09.597448 kubelet[2789]: I0514 18:13:09.597281 2789 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 14 18:13:09.977374 kubelet[2789]: I0514 18:13:09.977149 2789 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" May 14 18:13:09.977374 kubelet[2789]: I0514 18:13:09.977168 2789 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 14 18:13:09.977453 kubelet[2789]: I0514 18:13:09.977391 2789 server.go:954] "Client rotation is on, will bootstrap in background" May 14 18:13:10.000765 kubelet[2789]: E0514 18:13:10.000729 2789 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.8.47:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.8.47:6443: connect: connection refused" logger="UnhandledError" May 14 18:13:10.002628 kubelet[2789]: I0514 18:13:10.002004 2789 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 14 18:13:10.011503 kubelet[2789]: I0514 18:13:10.011487 2789 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 14 18:13:10.013237 kubelet[2789]: I0514 18:13:10.013217 2789 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 14 18:13:10.014253 kubelet[2789]: I0514 18:13:10.014222 2789 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 14 18:13:10.014385 kubelet[2789]: I0514 18:13:10.014251 2789 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4334.0.0-a-ef358d086b","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 14 18:13:10.014484 kubelet[2789]: I0514 18:13:10.014391 2789 topology_manager.go:138] "Creating topology manager 
with none policy" May 14 18:13:10.014484 kubelet[2789]: I0514 18:13:10.014399 2789 container_manager_linux.go:304] "Creating device plugin manager" May 14 18:13:10.014519 kubelet[2789]: I0514 18:13:10.014496 2789 state_mem.go:36] "Initialized new in-memory state store" May 14 18:13:10.017024 kubelet[2789]: I0514 18:13:10.017011 2789 kubelet.go:446] "Attempting to sync node with API server" May 14 18:13:10.017074 kubelet[2789]: I0514 18:13:10.017027 2789 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 14 18:13:10.017074 kubelet[2789]: I0514 18:13:10.017046 2789 kubelet.go:352] "Adding apiserver pod source" May 14 18:13:10.017074 kubelet[2789]: I0514 18:13:10.017056 2789 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 14 18:13:10.024014 kubelet[2789]: W0514 18:13:10.023982 2789 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.8.47:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.8.47:6443: connect: connection refused May 14 18:13:10.024120 kubelet[2789]: E0514 18:13:10.024106 2789 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.8.47:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.8.47:6443: connect: connection refused" logger="UnhandledError" May 14 18:13:10.024224 kubelet[2789]: W0514 18:13:10.024206 2789 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.8.47:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4334.0.0-a-ef358d086b&limit=500&resourceVersion=0": dial tcp 10.200.8.47:6443: connect: connection refused May 14 18:13:10.024274 kubelet[2789]: E0514 18:13:10.024264 2789 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to 
watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.8.47:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4334.0.0-a-ef358d086b&limit=500&resourceVersion=0\": dial tcp 10.200.8.47:6443: connect: connection refused" logger="UnhandledError" May 14 18:13:10.025738 kubelet[2789]: I0514 18:13:10.024959 2789 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" May 14 18:13:10.025738 kubelet[2789]: I0514 18:13:10.025424 2789 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 14 18:13:10.027250 kubelet[2789]: W0514 18:13:10.026522 2789 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 14 18:13:10.029248 kubelet[2789]: I0514 18:13:10.029233 2789 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 14 18:13:10.029313 kubelet[2789]: I0514 18:13:10.029306 2789 server.go:1287] "Started kubelet" May 14 18:13:10.029375 kubelet[2789]: I0514 18:13:10.029358 2789 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 14 18:13:10.030115 kubelet[2789]: I0514 18:13:10.030101 2789 server.go:490] "Adding debug handlers to kubelet server" May 14 18:13:10.030418 kubelet[2789]: I0514 18:13:10.030407 2789 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 14 18:13:10.030823 kubelet[2789]: I0514 18:13:10.030785 2789 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 14 18:13:10.030955 kubelet[2789]: I0514 18:13:10.030943 2789 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 14 18:13:10.034474 kubelet[2789]: I0514 18:13:10.034447 2789 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 14 18:13:10.034820 kubelet[2789]: I0514 
18:13:10.034810 2789 volume_manager.go:297] "Starting Kubelet Volume Manager" May 14 18:13:10.035027 kubelet[2789]: E0514 18:13:10.035016 2789 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4334.0.0-a-ef358d086b\" not found" May 14 18:13:10.035775 kubelet[2789]: E0514 18:13:10.035653 2789 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.47:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4334.0.0-a-ef358d086b?timeout=10s\": dial tcp 10.200.8.47:6443: connect: connection refused" interval="200ms" May 14 18:13:10.037428 kubelet[2789]: I0514 18:13:10.037416 2789 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 14 18:13:10.037526 kubelet[2789]: I0514 18:13:10.037521 2789 reconciler.go:26] "Reconciler: start to sync state" May 14 18:13:10.038257 kubelet[2789]: E0514 18:13:10.036750 2789 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.8.47:6443/api/v1/namespaces/default/events\": dial tcp 10.200.8.47:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4334.0.0-a-ef358d086b.183f77559cc77157 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4334.0.0-a-ef358d086b,UID:ci-4334.0.0-a-ef358d086b,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4334.0.0-a-ef358d086b,},FirstTimestamp:2025-05-14 18:13:10.029283671 +0000 UTC m=+0.462915084,LastTimestamp:2025-05-14 18:13:10.029283671 +0000 UTC m=+0.462915084,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4334.0.0-a-ef358d086b,}" May 14 18:13:10.038623 kubelet[2789]: I0514 18:13:10.038605 2789 factory.go:221] Registration of the systemd container factory successfully May 14 18:13:10.038710 
kubelet[2789]: I0514 18:13:10.038696 2789 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 14 18:13:10.039928 kubelet[2789]: W0514 18:13:10.039900 2789 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.8.47:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.47:6443: connect: connection refused May 14 18:13:10.039981 kubelet[2789]: E0514 18:13:10.039933 2789 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.8.47:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.8.47:6443: connect: connection refused" logger="UnhandledError" May 14 18:13:10.040034 kubelet[2789]: I0514 18:13:10.040022 2789 factory.go:221] Registration of the containerd container factory successfully May 14 18:13:10.052426 kubelet[2789]: I0514 18:13:10.052411 2789 cpu_manager.go:221] "Starting CPU manager" policy="none" May 14 18:13:10.052426 kubelet[2789]: I0514 18:13:10.052421 2789 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 14 18:13:10.052496 kubelet[2789]: I0514 18:13:10.052432 2789 state_mem.go:36] "Initialized new in-memory state store" May 14 18:13:10.135582 kubelet[2789]: E0514 18:13:10.135548 2789 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4334.0.0-a-ef358d086b\" not found" May 14 18:13:10.235971 kubelet[2789]: E0514 18:13:10.235900 2789 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4334.0.0-a-ef358d086b\" not found" May 14 18:13:10.236246 kubelet[2789]: E0514 18:13:10.236214 2789 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.200.8.47:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4334.0.0-a-ef358d086b?timeout=10s\": dial tcp 10.200.8.47:6443: connect: connection refused" interval="400ms" May 14 18:13:10.337039 kubelet[2789]: E0514 18:13:10.336667 2789 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4334.0.0-a-ef358d086b\" not found" May 14 18:13:10.437253 kubelet[2789]: E0514 18:13:10.437209 2789 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4334.0.0-a-ef358d086b\" not found" May 14 18:13:10.481144 kubelet[2789]: I0514 18:13:10.481090 2789 policy_none.go:49] "None policy: Start" May 14 18:13:10.481144 kubelet[2789]: I0514 18:13:10.481118 2789 memory_manager.go:186] "Starting memorymanager" policy="None" May 14 18:13:10.481144 kubelet[2789]: I0514 18:13:10.481131 2789 state_mem.go:35] "Initializing new in-memory state store" May 14 18:13:10.484631 kubelet[2789]: I0514 18:13:10.484605 2789 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 14 18:13:10.486725 kubelet[2789]: I0514 18:13:10.486543 2789 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 14 18:13:10.486725 kubelet[2789]: I0514 18:13:10.486561 2789 status_manager.go:227] "Starting to sync pod status with apiserver" May 14 18:13:10.486725 kubelet[2789]: I0514 18:13:10.486580 2789 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
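Earlier in this log the kubelet registered `/etc/kubernetes/manifests` as its static pod path, and the three static pods whose slices and sandboxes appear below (apiserver, controller-manager, scheduler) come from manifests in that directory. A small sketch of enumerating them, assuming the conventional kubeadm layout of one `*.yaml` file per static pod:

```python
from pathlib import Path

def static_pod_manifests(manifest_dir: str = "/etc/kubernetes/manifests") -> list[str]:
    """List static pod manifest filenames, or [] if the directory is absent."""
    p = Path(manifest_dir)
    return sorted(f.name for f in p.glob("*.yaml")) if p.is_dir() else []
```

On a kubeadm control plane this would typically return `etcd.yaml`, `kube-apiserver.yaml`, `kube-controller-manager.yaml`, and `kube-scheduler.yaml`, though this log only shows the latter three being sandboxed.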
May 14 18:13:10.486725 kubelet[2789]: I0514 18:13:10.486587 2789 kubelet.go:2388] "Starting kubelet main sync loop" May 14 18:13:10.486725 kubelet[2789]: E0514 18:13:10.486620 2789 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 14 18:13:10.487390 kubelet[2789]: W0514 18:13:10.487360 2789 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.8.47:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.47:6443: connect: connection refused May 14 18:13:10.487489 kubelet[2789]: E0514 18:13:10.487475 2789 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.8.47:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.8.47:6443: connect: connection refused" logger="UnhandledError" May 14 18:13:10.528335 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 14 18:13:10.538129 kubelet[2789]: E0514 18:13:10.538111 2789 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4334.0.0-a-ef358d086b\" not found" May 14 18:13:10.541783 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. May 14 18:13:10.553590 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
May 14 18:13:10.554723 kubelet[2789]: I0514 18:13:10.554709 2789 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 14 18:13:10.555030 kubelet[2789]: I0514 18:13:10.554844 2789 eviction_manager.go:189] "Eviction manager: starting control loop" May 14 18:13:10.555030 kubelet[2789]: I0514 18:13:10.554852 2789 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 14 18:13:10.555030 kubelet[2789]: I0514 18:13:10.554992 2789 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 14 18:13:10.556051 kubelet[2789]: E0514 18:13:10.556039 2789 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" May 14 18:13:10.556231 kubelet[2789]: E0514 18:13:10.556221 2789 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4334.0.0-a-ef358d086b\" not found" May 14 18:13:10.594462 systemd[1]: Created slice kubepods-burstable-podd5ac36f00ff91ddc396313e56645caec.slice - libcontainer container kubepods-burstable-podd5ac36f00ff91ddc396313e56645caec.slice. May 14 18:13:10.601195 kubelet[2789]: E0514 18:13:10.601172 2789 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4334.0.0-a-ef358d086b\" not found" node="ci-4334.0.0-a-ef358d086b" May 14 18:13:10.602984 systemd[1]: Created slice kubepods-burstable-pod74211868b73da6d16222a2e13eebe58d.slice - libcontainer container kubepods-burstable-pod74211868b73da6d16222a2e13eebe58d.slice. 
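The slice names created above follow the kubelet's systemd cgroup-driver convention: a QoS-class parent (`kubepods.slice`, `kubepods-besteffort.slice`, `kubepods-burstable.slice`) and then one `kubepods-<qos>-pod<uid>.slice` per pod. The helper below reconstructs those names; the guaranteed-pod case and the dash-to-underscore UID escaping are my understanding of the driver's behaviour (none of the UIDs in this log contain a dash, so the escaping is untested here).

```python
def pod_slice(qos: str, pod_uid: str) -> str:
    """Build the systemd slice name for a pod, matching the names logged above."""
    # Guaranteed pods sit directly under kubepods.slice (assumption).
    base = "kubepods" if qos == "guaranteed" else f"kubepods-{qos}"
    # systemd unit names cannot nest '-' freely, so UID dashes become '_'.
    return f"{base}-pod{pod_uid.replace('-', '_')}.slice"

print(pod_slice("burstable", "d5ac36f00ff91ddc396313e56645caec"))
# kubepods-burstable-podd5ac36f00ff91ddc396313e56645caec.slice
```

The output matches the slice created for the kube-apiserver pod in the log entry above.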
May 14 18:13:10.611528 kubelet[2789]: E0514 18:13:10.611403 2789 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4334.0.0-a-ef358d086b\" not found" node="ci-4334.0.0-a-ef358d086b" May 14 18:13:10.613375 systemd[1]: Created slice kubepods-burstable-pod0bd67dc2f33d6f7ce8838f90cefabf78.slice - libcontainer container kubepods-burstable-pod0bd67dc2f33d6f7ce8838f90cefabf78.slice. May 14 18:13:10.614781 kubelet[2789]: E0514 18:13:10.614765 2789 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4334.0.0-a-ef358d086b\" not found" node="ci-4334.0.0-a-ef358d086b" May 14 18:13:10.637192 kubelet[2789]: E0514 18:13:10.637156 2789 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.47:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4334.0.0-a-ef358d086b?timeout=10s\": dial tcp 10.200.8.47:6443: connect: connection refused" interval="800ms" May 14 18:13:10.640355 kubelet[2789]: I0514 18:13:10.640344 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d5ac36f00ff91ddc396313e56645caec-ca-certs\") pod \"kube-apiserver-ci-4334.0.0-a-ef358d086b\" (UID: \"d5ac36f00ff91ddc396313e56645caec\") " pod="kube-system/kube-apiserver-ci-4334.0.0-a-ef358d086b" May 14 18:13:10.640390 kubelet[2789]: I0514 18:13:10.640364 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d5ac36f00ff91ddc396313e56645caec-k8s-certs\") pod \"kube-apiserver-ci-4334.0.0-a-ef358d086b\" (UID: \"d5ac36f00ff91ddc396313e56645caec\") " pod="kube-system/kube-apiserver-ci-4334.0.0-a-ef358d086b" May 14 18:13:10.640390 kubelet[2789]: I0514 18:13:10.640382 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/74211868b73da6d16222a2e13eebe58d-ca-certs\") pod \"kube-controller-manager-ci-4334.0.0-a-ef358d086b\" (UID: \"74211868b73da6d16222a2e13eebe58d\") " pod="kube-system/kube-controller-manager-ci-4334.0.0-a-ef358d086b" May 14 18:13:10.640472 kubelet[2789]: I0514 18:13:10.640400 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/74211868b73da6d16222a2e13eebe58d-k8s-certs\") pod \"kube-controller-manager-ci-4334.0.0-a-ef358d086b\" (UID: \"74211868b73da6d16222a2e13eebe58d\") " pod="kube-system/kube-controller-manager-ci-4334.0.0-a-ef358d086b" May 14 18:13:10.640472 kubelet[2789]: I0514 18:13:10.640418 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/74211868b73da6d16222a2e13eebe58d-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4334.0.0-a-ef358d086b\" (UID: \"74211868b73da6d16222a2e13eebe58d\") " pod="kube-system/kube-controller-manager-ci-4334.0.0-a-ef358d086b" May 14 18:13:10.640472 kubelet[2789]: I0514 18:13:10.640454 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d5ac36f00ff91ddc396313e56645caec-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4334.0.0-a-ef358d086b\" (UID: \"d5ac36f00ff91ddc396313e56645caec\") " pod="kube-system/kube-apiserver-ci-4334.0.0-a-ef358d086b" May 14 18:13:10.640566 kubelet[2789]: I0514 18:13:10.640492 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/74211868b73da6d16222a2e13eebe58d-flexvolume-dir\") pod \"kube-controller-manager-ci-4334.0.0-a-ef358d086b\" (UID: \"74211868b73da6d16222a2e13eebe58d\") " 
pod="kube-system/kube-controller-manager-ci-4334.0.0-a-ef358d086b" May 14 18:13:10.640566 kubelet[2789]: I0514 18:13:10.640508 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/74211868b73da6d16222a2e13eebe58d-kubeconfig\") pod \"kube-controller-manager-ci-4334.0.0-a-ef358d086b\" (UID: \"74211868b73da6d16222a2e13eebe58d\") " pod="kube-system/kube-controller-manager-ci-4334.0.0-a-ef358d086b" May 14 18:13:10.640566 kubelet[2789]: I0514 18:13:10.640526 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0bd67dc2f33d6f7ce8838f90cefabf78-kubeconfig\") pod \"kube-scheduler-ci-4334.0.0-a-ef358d086b\" (UID: \"0bd67dc2f33d6f7ce8838f90cefabf78\") " pod="kube-system/kube-scheduler-ci-4334.0.0-a-ef358d086b" May 14 18:13:10.656457 kubelet[2789]: I0514 18:13:10.656442 2789 kubelet_node_status.go:76] "Attempting to register node" node="ci-4334.0.0-a-ef358d086b" May 14 18:13:10.656743 kubelet[2789]: E0514 18:13:10.656725 2789 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.200.8.47:6443/api/v1/nodes\": dial tcp 10.200.8.47:6443: connect: connection refused" node="ci-4334.0.0-a-ef358d086b" May 14 18:13:10.858322 kubelet[2789]: I0514 18:13:10.858292 2789 kubelet_node_status.go:76] "Attempting to register node" node="ci-4334.0.0-a-ef358d086b" May 14 18:13:10.858647 kubelet[2789]: E0514 18:13:10.858624 2789 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.200.8.47:6443/api/v1/nodes\": dial tcp 10.200.8.47:6443: connect: connection refused" node="ci-4334.0.0-a-ef358d086b" May 14 18:13:10.902562 containerd[1724]: time="2025-05-14T18:13:10.902519051Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-ci-4334.0.0-a-ef358d086b,Uid:d5ac36f00ff91ddc396313e56645caec,Namespace:kube-system,Attempt:0,}" May 14 18:13:10.913002 containerd[1724]: time="2025-05-14T18:13:10.912977485Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4334.0.0-a-ef358d086b,Uid:74211868b73da6d16222a2e13eebe58d,Namespace:kube-system,Attempt:0,}" May 14 18:13:10.915614 containerd[1724]: time="2025-05-14T18:13:10.915594846Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4334.0.0-a-ef358d086b,Uid:0bd67dc2f33d6f7ce8838f90cefabf78,Namespace:kube-system,Attempt:0,}" May 14 18:13:11.067109 kubelet[2789]: W0514 18:13:11.067047 2789 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.8.47:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4334.0.0-a-ef358d086b&limit=500&resourceVersion=0": dial tcp 10.200.8.47:6443: connect: connection refused May 14 18:13:11.067202 kubelet[2789]: E0514 18:13:11.067122 2789 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.8.47:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4334.0.0-a-ef358d086b&limit=500&resourceVersion=0\": dial tcp 10.200.8.47:6443: connect: connection refused" logger="UnhandledError" May 14 18:13:11.260886 kubelet[2789]: I0514 18:13:11.260823 2789 kubelet_node_status.go:76] "Attempting to register node" node="ci-4334.0.0-a-ef358d086b" May 14 18:13:11.261745 kubelet[2789]: E0514 18:13:11.261716 2789 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.200.8.47:6443/api/v1/nodes\": dial tcp 10.200.8.47:6443: connect: connection refused" node="ci-4334.0.0-a-ef358d086b" May 14 18:13:11.296039 kubelet[2789]: W0514 18:13:11.295979 2789 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get 
"https://10.200.8.47:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.47:6443: connect: connection refused May 14 18:13:11.296118 kubelet[2789]: E0514 18:13:11.296051 2789 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.8.47:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.8.47:6443: connect: connection refused" logger="UnhandledError" May 14 18:13:11.342764 kubelet[2789]: W0514 18:13:11.342745 2789 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.8.47:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.47:6443: connect: connection refused May 14 18:13:11.342829 kubelet[2789]: E0514 18:13:11.342773 2789 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.8.47:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.8.47:6443: connect: connection refused" logger="UnhandledError" May 14 18:13:11.438461 kubelet[2789]: E0514 18:13:11.438414 2789 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.47:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4334.0.0-a-ef358d086b?timeout=10s\": dial tcp 10.200.8.47:6443: connect: connection refused" interval="1.6s" May 14 18:13:11.457938 kubelet[2789]: W0514 18:13:11.457876 2789 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.8.47:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.8.47:6443: connect: connection refused May 14 18:13:11.458001 kubelet[2789]: E0514 18:13:11.457949 2789 reflector.go:166] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.8.47:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.8.47:6443: connect: connection refused" logger="UnhandledError" May 14 18:13:11.690704 containerd[1724]: time="2025-05-14T18:13:11.690599700Z" level=info msg="connecting to shim 94dfd200458e1ca703a10b7561d15d84474e286e3b2fc6b3156be887df8f6ab4" address="unix:///run/containerd/s/3078cd9d1461bb6db58c68ec01fde20ce05cb394221953b22d3165287ef22001" namespace=k8s.io protocol=ttrpc version=3 May 14 18:13:11.714840 systemd[1]: Started cri-containerd-94dfd200458e1ca703a10b7561d15d84474e286e3b2fc6b3156be887df8f6ab4.scope - libcontainer container 94dfd200458e1ca703a10b7561d15d84474e286e3b2fc6b3156be887df8f6ab4. May 14 18:13:11.828363 containerd[1724]: time="2025-05-14T18:13:11.828298527Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4334.0.0-a-ef358d086b,Uid:d5ac36f00ff91ddc396313e56645caec,Namespace:kube-system,Attempt:0,} returns sandbox id \"94dfd200458e1ca703a10b7561d15d84474e286e3b2fc6b3156be887df8f6ab4\"" May 14 18:13:11.831620 containerd[1724]: time="2025-05-14T18:13:11.831600584Z" level=info msg="CreateContainer within sandbox \"94dfd200458e1ca703a10b7561d15d84474e286e3b2fc6b3156be887df8f6ab4\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 14 18:13:11.831939 containerd[1724]: time="2025-05-14T18:13:11.831793855Z" level=info msg="connecting to shim 384c6081e5d58d4d9155e455b7dd424cb900d3287a1d0a80e3fde098a92d7c24" address="unix:///run/containerd/s/e6b08f6cbea466903a201dd120200469e0e4714182ed93862e2cb1a72dc82fac" namespace=k8s.io protocol=ttrpc version=3 May 14 18:13:11.849797 systemd[1]: Started cri-containerd-384c6081e5d58d4d9155e455b7dd424cb900d3287a1d0a80e3fde098a92d7c24.scope - libcontainer container 384c6081e5d58d4d9155e455b7dd424cb900d3287a1d0a80e3fde098a92d7c24. 
May 14 18:13:11.982706 containerd[1724]: time="2025-05-14T18:13:11.982459898Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4334.0.0-a-ef358d086b,Uid:74211868b73da6d16222a2e13eebe58d,Namespace:kube-system,Attempt:0,} returns sandbox id \"384c6081e5d58d4d9155e455b7dd424cb900d3287a1d0a80e3fde098a92d7c24\""
May 14 18:13:11.984652 containerd[1724]: time="2025-05-14T18:13:11.984628603Z" level=info msg="CreateContainer within sandbox \"384c6081e5d58d4d9155e455b7dd424cb900d3287a1d0a80e3fde098a92d7c24\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
May 14 18:13:12.036752 containerd[1724]: time="2025-05-14T18:13:12.036700273Z" level=info msg="connecting to shim 01e181feca02571df8984539a6d61b64c4c90d6d99d32a9f8940b67b0d79acb0" address="unix:///run/containerd/s/659c7cc7e99063cf14eeda546ace52c133de5d734f901444701701cea2d46e83" namespace=k8s.io protocol=ttrpc version=3
May 14 18:13:12.050887 kubelet[2789]: E0514 18:13:12.050850 2789 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.8.47:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.8.47:6443: connect: connection refused" logger="UnhandledError"
May 14 18:13:12.053798 systemd[1]: Started cri-containerd-01e181feca02571df8984539a6d61b64c4c90d6d99d32a9f8940b67b0d79acb0.scope - libcontainer container 01e181feca02571df8984539a6d61b64c4c90d6d99d32a9f8940b67b0d79acb0.
May 14 18:13:12.063533 kubelet[2789]: I0514 18:13:12.063520 2789 kubelet_node_status.go:76] "Attempting to register node" node="ci-4334.0.0-a-ef358d086b"
May 14 18:13:12.063940 kubelet[2789]: E0514 18:13:12.063918 2789 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.200.8.47:6443/api/v1/nodes\": dial tcp 10.200.8.47:6443: connect: connection refused" node="ci-4334.0.0-a-ef358d086b"
May 14 18:13:12.234582 containerd[1724]: time="2025-05-14T18:13:12.234466485Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4334.0.0-a-ef358d086b,Uid:0bd67dc2f33d6f7ce8838f90cefabf78,Namespace:kube-system,Attempt:0,} returns sandbox id \"01e181feca02571df8984539a6d61b64c4c90d6d99d32a9f8940b67b0d79acb0\""
May 14 18:13:12.236868 containerd[1724]: time="2025-05-14T18:13:12.236769975Z" level=info msg="CreateContainer within sandbox \"01e181feca02571df8984539a6d61b64c4c90d6d99d32a9f8940b67b0d79acb0\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
May 14 18:13:12.332570 containerd[1724]: time="2025-05-14T18:13:12.332540917Z" level=info msg="Container ff99e5cc01a517e3df8a53d91eb8fa6e4ea740aaaedb642704c49f55f3bc6055: CDI devices from CRI Config.CDIDevices: []"
May 14 18:13:12.427993 containerd[1724]: time="2025-05-14T18:13:12.427955190Z" level=info msg="Container 5aff4a2f79283263230b14076692f2feb0a57b77d962e43cdd7975c01ea1747d: CDI devices from CRI Config.CDIDevices: []"
May 14 18:13:12.772280 containerd[1724]: time="2025-05-14T18:13:12.772251508Z" level=info msg="CreateContainer within sandbox \"94dfd200458e1ca703a10b7561d15d84474e286e3b2fc6b3156be887df8f6ab4\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"ff99e5cc01a517e3df8a53d91eb8fa6e4ea740aaaedb642704c49f55f3bc6055\""
May 14 18:13:12.772823 containerd[1724]: time="2025-05-14T18:13:12.772789955Z" level=info msg="StartContainer for \"ff99e5cc01a517e3df8a53d91eb8fa6e4ea740aaaedb642704c49f55f3bc6055\""
May 14 18:13:12.835828 containerd[1724]: time="2025-05-14T18:13:12.835792172Z" level=info msg="connecting to shim ff99e5cc01a517e3df8a53d91eb8fa6e4ea740aaaedb642704c49f55f3bc6055" address="unix:///run/containerd/s/3078cd9d1461bb6db58c68ec01fde20ce05cb394221953b22d3165287ef22001" protocol=ttrpc version=3
May 14 18:13:12.842485 containerd[1724]: time="2025-05-14T18:13:12.842439873Z" level=info msg="CreateContainer within sandbox \"384c6081e5d58d4d9155e455b7dd424cb900d3287a1d0a80e3fde098a92d7c24\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"5aff4a2f79283263230b14076692f2feb0a57b77d962e43cdd7975c01ea1747d\""
May 14 18:13:12.842930 containerd[1724]: time="2025-05-14T18:13:12.842913047Z" level=info msg="StartContainer for \"5aff4a2f79283263230b14076692f2feb0a57b77d962e43cdd7975c01ea1747d\""
May 14 18:13:12.849182 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4023073025.mount: Deactivated successfully.
May 14 18:13:12.852231 containerd[1724]: time="2025-05-14T18:13:12.852034658Z" level=info msg="connecting to shim 5aff4a2f79283263230b14076692f2feb0a57b77d962e43cdd7975c01ea1747d" address="unix:///run/containerd/s/e6b08f6cbea466903a201dd120200469e0e4714182ed93862e2cb1a72dc82fac" protocol=ttrpc version=3
May 14 18:13:12.852321 containerd[1724]: time="2025-05-14T18:13:12.852307199Z" level=info msg="Container 9ae98f1ba950c7eb270ecbad1aa93fd09e29af9601b920ebd25a285e9fb1670b: CDI devices from CRI Config.CDIDevices: []"
May 14 18:13:12.864848 systemd[1]: Started cri-containerd-ff99e5cc01a517e3df8a53d91eb8fa6e4ea740aaaedb642704c49f55f3bc6055.scope - libcontainer container ff99e5cc01a517e3df8a53d91eb8fa6e4ea740aaaedb642704c49f55f3bc6055.
May 14 18:13:12.871860 systemd[1]: Started cri-containerd-5aff4a2f79283263230b14076692f2feb0a57b77d962e43cdd7975c01ea1747d.scope - libcontainer container 5aff4a2f79283263230b14076692f2feb0a57b77d962e43cdd7975c01ea1747d.
May 14 18:13:12.979449 containerd[1724]: time="2025-05-14T18:13:12.979365857Z" level=info msg="StartContainer for \"ff99e5cc01a517e3df8a53d91eb8fa6e4ea740aaaedb642704c49f55f3bc6055\" returns successfully"
May 14 18:13:12.980290 containerd[1724]: time="2025-05-14T18:13:12.980269285Z" level=info msg="StartContainer for \"5aff4a2f79283263230b14076692f2feb0a57b77d962e43cdd7975c01ea1747d\" returns successfully"
May 14 18:13:13.029357 containerd[1724]: time="2025-05-14T18:13:13.029311502Z" level=info msg="CreateContainer within sandbox \"01e181feca02571df8984539a6d61b64c4c90d6d99d32a9f8940b67b0d79acb0\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"9ae98f1ba950c7eb270ecbad1aa93fd09e29af9601b920ebd25a285e9fb1670b\""
May 14 18:13:13.029947 containerd[1724]: time="2025-05-14T18:13:13.029786734Z" level=info msg="StartContainer for \"9ae98f1ba950c7eb270ecbad1aa93fd09e29af9601b920ebd25a285e9fb1670b\""
May 14 18:13:13.031625 containerd[1724]: time="2025-05-14T18:13:13.031573227Z" level=info msg="connecting to shim 9ae98f1ba950c7eb270ecbad1aa93fd09e29af9601b920ebd25a285e9fb1670b" address="unix:///run/containerd/s/659c7cc7e99063cf14eeda546ace52c133de5d734f901444701701cea2d46e83" protocol=ttrpc version=3
May 14 18:13:13.054789 systemd[1]: Started cri-containerd-9ae98f1ba950c7eb270ecbad1aa93fd09e29af9601b920ebd25a285e9fb1670b.scope - libcontainer container 9ae98f1ba950c7eb270ecbad1aa93fd09e29af9601b920ebd25a285e9fb1670b.
May 14 18:13:13.166754 containerd[1724]: time="2025-05-14T18:13:13.166723132Z" level=info msg="StartContainer for \"9ae98f1ba950c7eb270ecbad1aa93fd09e29af9601b920ebd25a285e9fb1670b\" returns successfully"
May 14 18:13:13.500204 kubelet[2789]: E0514 18:13:13.500150 2789 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4334.0.0-a-ef358d086b\" not found" node="ci-4334.0.0-a-ef358d086b"
May 14 18:13:13.504779 kubelet[2789]: E0514 18:13:13.504759 2789 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4334.0.0-a-ef358d086b\" not found" node="ci-4334.0.0-a-ef358d086b"
May 14 18:13:13.506617 kubelet[2789]: E0514 18:13:13.505849 2789 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4334.0.0-a-ef358d086b\" not found" node="ci-4334.0.0-a-ef358d086b"
May 14 18:13:13.665617 kubelet[2789]: I0514 18:13:13.665598 2789 kubelet_node_status.go:76] "Attempting to register node" node="ci-4334.0.0-a-ef358d086b"
May 14 18:13:14.337514 kubelet[2789]: E0514 18:13:14.337480 2789 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4334.0.0-a-ef358d086b\" not found" node="ci-4334.0.0-a-ef358d086b"
May 14 18:13:14.433624 kubelet[2789]: I0514 18:13:14.433596 2789 kubelet_node_status.go:79] "Successfully registered node" node="ci-4334.0.0-a-ef358d086b"
May 14 18:13:14.433624 kubelet[2789]: E0514 18:13:14.433628 2789 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"ci-4334.0.0-a-ef358d086b\": node \"ci-4334.0.0-a-ef358d086b\" not found"
May 14 18:13:14.506129 kubelet[2789]: I0514 18:13:14.506084 2789 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4334.0.0-a-ef358d086b"
May 14 18:13:14.506129 kubelet[2789]: I0514 18:13:14.506101 2789 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4334.0.0-a-ef358d086b"
May 14 18:13:14.506794 kubelet[2789]: I0514 18:13:14.506368 2789 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4334.0.0-a-ef358d086b"
May 14 18:13:14.535741 kubelet[2789]: I0514 18:13:14.535712 2789 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4334.0.0-a-ef358d086b"
May 14 18:13:14.574712 kubelet[2789]: E0514 18:13:14.574510 2789 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4334.0.0-a-ef358d086b\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4334.0.0-a-ef358d086b"
May 14 18:13:14.574712 kubelet[2789]: E0514 18:13:14.574529 2789 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4334.0.0-a-ef358d086b\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4334.0.0-a-ef358d086b"
May 14 18:13:14.574712 kubelet[2789]: I0514 18:13:14.574566 2789 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4334.0.0-a-ef358d086b"
May 14 18:13:14.574848 kubelet[2789]: E0514 18:13:14.574756 2789 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4334.0.0-a-ef358d086b\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4334.0.0-a-ef358d086b"
May 14 18:13:14.576435 kubelet[2789]: E0514 18:13:14.576329 2789 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4334.0.0-a-ef358d086b\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4334.0.0-a-ef358d086b"
May 14 18:13:14.576435 kubelet[2789]: I0514 18:13:14.576349 2789 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4334.0.0-a-ef358d086b"
May 14 18:13:14.577950 kubelet[2789]: E0514 18:13:14.577928 2789 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4334.0.0-a-ef358d086b\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4334.0.0-a-ef358d086b"
May 14 18:13:15.020767 kubelet[2789]: I0514 18:13:15.020750 2789 apiserver.go:52] "Watching apiserver"
May 14 18:13:15.037782 kubelet[2789]: I0514 18:13:15.037765 2789 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
May 14 18:13:15.507775 kubelet[2789]: I0514 18:13:15.507539 2789 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4334.0.0-a-ef358d086b"
May 14 18:13:15.507775 kubelet[2789]: I0514 18:13:15.507601 2789 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4334.0.0-a-ef358d086b"
May 14 18:13:15.522734 kubelet[2789]: W0514 18:13:15.522514 2789 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
May 14 18:13:15.522914 kubelet[2789]: W0514 18:13:15.522902 2789 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
May 14 18:13:16.509091 kubelet[2789]: I0514 18:13:16.509015 2789 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4334.0.0-a-ef358d086b"
May 14 18:13:16.520946 kubelet[2789]: W0514 18:13:16.520904 2789 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
May 14 18:13:16.521084 kubelet[2789]: E0514 18:13:16.521068 2789 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4334.0.0-a-ef358d086b\" already exists" pod="kube-system/kube-scheduler-ci-4334.0.0-a-ef358d086b"
May 14 18:13:18.005298 systemd[1]: Reload requested from client PID 3059 ('systemctl') (unit session-9.scope)...
May 14 18:13:18.005311 systemd[1]: Reloading...
May 14 18:13:18.080739 zram_generator::config[3100]: No configuration found.
May 14 18:13:18.158277 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 14 18:13:18.245645 systemd[1]: Reloading finished in 240 ms.
May 14 18:13:18.264240 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
May 14 18:13:18.280007 systemd[1]: kubelet.service: Deactivated successfully.
May 14 18:13:18.280205 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 14 18:13:18.280241 systemd[1]: kubelet.service: Consumed 752ms CPU time, 124.2M memory peak.
May 14 18:13:18.281839 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 14 18:13:21.111409 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 14 18:13:21.115048 (kubelet)[3171]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
May 14 18:13:21.150972 kubelet[3171]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 14 18:13:21.150972 kubelet[3171]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
May 14 18:13:21.150972 kubelet[3171]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 14 18:13:21.151188 kubelet[3171]: I0514 18:13:21.151048 3171 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
May 14 18:13:21.154777 kubelet[3171]: I0514 18:13:21.154753 3171 server.go:520] "Kubelet version" kubeletVersion="v1.32.0"
May 14 18:13:21.154777 kubelet[3171]: I0514 18:13:21.154773 3171 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
May 14 18:13:21.154969 kubelet[3171]: I0514 18:13:21.154959 3171 server.go:954] "Client rotation is on, will bootstrap in background"
May 14 18:13:21.156701 kubelet[3171]: I0514 18:13:21.156526 3171 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
May 14 18:13:21.158812 kubelet[3171]: I0514 18:13:21.158791 3171 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 14 18:13:21.161446 kubelet[3171]: I0514 18:13:21.161433 3171 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
May 14 18:13:21.163303 kubelet[3171]: I0514 18:13:21.163288 3171 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
May 14 18:13:21.163435 kubelet[3171]: I0514 18:13:21.163416 3171 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
May 14 18:13:21.163555 kubelet[3171]: I0514 18:13:21.163439 3171 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4334.0.0-a-ef358d086b","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
May 14 18:13:21.163649 kubelet[3171]: I0514 18:13:21.163555 3171 topology_manager.go:138] "Creating topology manager with none policy"
May 14 18:13:21.163649 kubelet[3171]: I0514 18:13:21.163564 3171 container_manager_linux.go:304] "Creating device plugin manager"
May 14 18:13:21.163649 kubelet[3171]: I0514 18:13:21.163600 3171 state_mem.go:36] "Initialized new in-memory state store"
May 14 18:13:21.163715 kubelet[3171]: I0514 18:13:21.163709 3171 kubelet.go:446] "Attempting to sync node with API server"
May 14 18:13:21.163733 kubelet[3171]: I0514 18:13:21.163718 3171 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
May 14 18:13:21.163751 kubelet[3171]: I0514 18:13:21.163735 3171 kubelet.go:352] "Adding apiserver pod source"
May 14 18:13:21.163751 kubelet[3171]: I0514 18:13:21.163743 3171 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
May 14 18:13:21.169009 kubelet[3171]: I0514 18:13:21.168937 3171 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1"
May 14 18:13:21.169471 kubelet[3171]: I0514 18:13:21.169444 3171 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
May 14 18:13:21.171702 kubelet[3171]: I0514 18:13:21.170722 3171 watchdog_linux.go:99] "Systemd watchdog is not enabled"
May 14 18:13:21.171702 kubelet[3171]: I0514 18:13:21.170756 3171 server.go:1287] "Started kubelet"
May 14 18:13:21.171702 kubelet[3171]: I0514 18:13:21.171108 3171 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
May 14 18:13:21.171702 kubelet[3171]: I0514 18:13:21.171328 3171 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
May 14 18:13:21.171702 kubelet[3171]: I0514 18:13:21.171462 3171 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
May 14 18:13:21.174588 kubelet[3171]: I0514 18:13:21.174576 3171 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
May 14 18:13:21.176934 kubelet[3171]: I0514 18:13:21.176922 3171 server.go:490] "Adding debug handlers to kubelet server"
May 14 18:13:21.178560 kubelet[3171]: I0514 18:13:21.178540 3171 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
May 14 18:13:21.180532 kubelet[3171]: I0514 18:13:21.180521 3171 volume_manager.go:297] "Starting Kubelet Volume Manager"
May 14 18:13:21.180967 kubelet[3171]: I0514 18:13:21.180955 3171 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
May 14 18:13:21.181125 kubelet[3171]: I0514 18:13:21.181119 3171 reconciler.go:26] "Reconciler: start to sync state"
May 14 18:13:21.183066 kubelet[3171]: I0514 18:13:21.183044 3171 factory.go:221] Registration of the systemd container factory successfully
May 14 18:13:21.183127 kubelet[3171]: I0514 18:13:21.183112 3171 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
May 14 18:13:21.184276 kubelet[3171]: I0514 18:13:21.184266 3171 factory.go:221] Registration of the containerd container factory successfully
May 14 18:13:21.186305 kubelet[3171]: I0514 18:13:21.186280 3171 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
May 14 18:13:21.187131 kubelet[3171]: I0514 18:13:21.187111 3171 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
May 14 18:13:21.187177 kubelet[3171]: I0514 18:13:21.187136 3171 status_manager.go:227] "Starting to sync pod status with apiserver"
May 14 18:13:21.187177 kubelet[3171]: I0514 18:13:21.187148 3171 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
May 14 18:13:21.187177 kubelet[3171]: I0514 18:13:21.187154 3171 kubelet.go:2388] "Starting kubelet main sync loop"
May 14 18:13:21.187234 kubelet[3171]: E0514 18:13:21.187182 3171 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
May 14 18:13:21.192311 kubelet[3171]: E0514 18:13:21.192291 3171 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
May 14 18:13:21.227718 kubelet[3171]: I0514 18:13:21.227703 3171 cpu_manager.go:221] "Starting CPU manager" policy="none"
May 14 18:13:21.227718 kubelet[3171]: I0514 18:13:21.227713 3171 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
May 14 18:13:21.227790 kubelet[3171]: I0514 18:13:21.227727 3171 state_mem.go:36] "Initialized new in-memory state store"
May 14 18:13:21.227887 kubelet[3171]: I0514 18:13:21.227873 3171 state_mem.go:88] "Updated default CPUSet" cpuSet=""
May 14 18:13:21.227916 kubelet[3171]: I0514 18:13:21.227884 3171 state_mem.go:96] "Updated CPUSet assignments" assignments={}
May 14 18:13:21.227916 kubelet[3171]: I0514 18:13:21.227914 3171 policy_none.go:49] "None policy: Start"
May 14 18:13:21.227952 kubelet[3171]: I0514 18:13:21.227923 3171 memory_manager.go:186] "Starting memorymanager" policy="None"
May 14 18:13:21.227952 kubelet[3171]: I0514 18:13:21.227931 3171 state_mem.go:35] "Initializing new in-memory state store"
May 14 18:13:21.228030 kubelet[3171]: I0514 18:13:21.228021 3171 state_mem.go:75] "Updated machine memory state"
May 14 18:13:21.235556 kubelet[3171]: I0514 18:13:21.235538 3171 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
May 14 18:13:21.235716 kubelet[3171]: I0514 18:13:21.235690 3171 eviction_manager.go:189] "Eviction manager: starting control loop"
May 14 18:13:21.235748 kubelet[3171]: I0514 18:13:21.235708 3171 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
May 14 18:13:21.235852 kubelet[3171]: I0514 18:13:21.235844 3171 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
May 14 18:13:21.236820 kubelet[3171]: E0514 18:13:21.236799 3171 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
May 14 18:13:21.288429 kubelet[3171]: I0514 18:13:21.288380 3171 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4334.0.0-a-ef358d086b"
May 14 18:13:21.288429 kubelet[3171]: I0514 18:13:21.288425 3171 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4334.0.0-a-ef358d086b"
May 14 18:13:21.288544 kubelet[3171]: I0514 18:13:21.288385 3171 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4334.0.0-a-ef358d086b"
May 14 18:13:21.318989 kubelet[3171]: W0514 18:13:21.318896 3171 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
May 14 18:13:21.340163 kubelet[3171]: I0514 18:13:21.340135 3171 kubelet_node_status.go:76] "Attempting to register node" node="ci-4334.0.0-a-ef358d086b"
May 14 18:13:21.383328 kubelet[3171]: I0514 18:13:21.382401 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d5ac36f00ff91ddc396313e56645caec-k8s-certs\") pod \"kube-apiserver-ci-4334.0.0-a-ef358d086b\" (UID: \"d5ac36f00ff91ddc396313e56645caec\") " pod="kube-system/kube-apiserver-ci-4334.0.0-a-ef358d086b"
May 14 18:13:21.383328 kubelet[3171]: I0514 18:13:21.382438 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d5ac36f00ff91ddc396313e56645caec-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4334.0.0-a-ef358d086b\" (UID: \"d5ac36f00ff91ddc396313e56645caec\") " pod="kube-system/kube-apiserver-ci-4334.0.0-a-ef358d086b"
May 14 18:13:21.383328 kubelet[3171]: I0514 18:13:21.382456 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/74211868b73da6d16222a2e13eebe58d-kubeconfig\") pod \"kube-controller-manager-ci-4334.0.0-a-ef358d086b\" (UID: \"74211868b73da6d16222a2e13eebe58d\") " pod="kube-system/kube-controller-manager-ci-4334.0.0-a-ef358d086b"
May 14 18:13:21.383328 kubelet[3171]: I0514 18:13:21.382472 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/74211868b73da6d16222a2e13eebe58d-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4334.0.0-a-ef358d086b\" (UID: \"74211868b73da6d16222a2e13eebe58d\") " pod="kube-system/kube-controller-manager-ci-4334.0.0-a-ef358d086b"
May 14 18:13:21.383328 kubelet[3171]: I0514 18:13:21.382485 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0bd67dc2f33d6f7ce8838f90cefabf78-kubeconfig\") pod \"kube-scheduler-ci-4334.0.0-a-ef358d086b\" (UID: \"0bd67dc2f33d6f7ce8838f90cefabf78\") " pod="kube-system/kube-scheduler-ci-4334.0.0-a-ef358d086b"
May 14 18:13:21.383596 kubelet[3171]: I0514 18:13:21.382590 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d5ac36f00ff91ddc396313e56645caec-ca-certs\") pod \"kube-apiserver-ci-4334.0.0-a-ef358d086b\" (UID: \"d5ac36f00ff91ddc396313e56645caec\") " pod="kube-system/kube-apiserver-ci-4334.0.0-a-ef358d086b"
May 14 18:13:21.383596 kubelet[3171]: I0514 18:13:21.382606 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/74211868b73da6d16222a2e13eebe58d-ca-certs\") pod \"kube-controller-manager-ci-4334.0.0-a-ef358d086b\" (UID: \"74211868b73da6d16222a2e13eebe58d\") " pod="kube-system/kube-controller-manager-ci-4334.0.0-a-ef358d086b"
May 14 18:13:21.383596 kubelet[3171]: I0514 18:13:21.382621 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/74211868b73da6d16222a2e13eebe58d-flexvolume-dir\") pod \"kube-controller-manager-ci-4334.0.0-a-ef358d086b\" (UID: \"74211868b73da6d16222a2e13eebe58d\") " pod="kube-system/kube-controller-manager-ci-4334.0.0-a-ef358d086b"
May 14 18:13:21.383596 kubelet[3171]: I0514 18:13:21.382635 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/74211868b73da6d16222a2e13eebe58d-k8s-certs\") pod \"kube-controller-manager-ci-4334.0.0-a-ef358d086b\" (UID: \"74211868b73da6d16222a2e13eebe58d\") " pod="kube-system/kube-controller-manager-ci-4334.0.0-a-ef358d086b"
May 14 18:13:21.383596 kubelet[3171]: W0514 18:13:21.383421 3171 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
May 14 18:13:21.383596 kubelet[3171]: E0514 18:13:21.383456 3171 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4334.0.0-a-ef358d086b\" already exists" pod="kube-system/kube-scheduler-ci-4334.0.0-a-ef358d086b"
May 14 18:13:21.385154 kubelet[3171]: W0514 18:13:21.384875 3171 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
May 14 18:13:21.385154 kubelet[3171]: E0514 18:13:21.384900 3171 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4334.0.0-a-ef358d086b\" already exists" pod="kube-system/kube-apiserver-ci-4334.0.0-a-ef358d086b"
May 14 18:13:21.473341 kubelet[3171]: I0514 18:13:21.473319 3171 kubelet_node_status.go:125] "Node was previously registered" node="ci-4334.0.0-a-ef358d086b"
May 14 18:13:21.473399 kubelet[3171]: I0514 18:13:21.473382 3171 kubelet_node_status.go:79] "Successfully registered node" node="ci-4334.0.0-a-ef358d086b"
May 14 18:13:22.170014 kubelet[3171]: I0514 18:13:22.169820 3171 apiserver.go:52] "Watching apiserver"
May 14 18:13:22.181480 kubelet[3171]: I0514 18:13:22.181462 3171 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
May 14 18:13:22.215323 kubelet[3171]: I0514 18:13:22.214981 3171 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4334.0.0-a-ef358d086b"
May 14 18:13:22.223416 kubelet[3171]: W0514 18:13:22.223395 3171 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
May 14 18:13:22.223470 kubelet[3171]: E0514 18:13:22.223440 3171 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4334.0.0-a-ef358d086b\" already exists" pod="kube-system/kube-apiserver-ci-4334.0.0-a-ef358d086b"
May 14 18:13:22.280901 kubelet[3171]: I0514 18:13:22.280862 3171 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4334.0.0-a-ef358d086b" podStartSLOduration=1.28084991 podStartE2EDuration="1.28084991s" podCreationTimestamp="2025-05-14 18:13:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 18:13:22.279798171 +0000 UTC m=+1.160913607" watchObservedRunningTime="2025-05-14 18:13:22.28084991 +0000 UTC m=+1.161965344"
May 14 18:13:22.374075 kubelet[3171]: I0514 18:13:22.373971 3171 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4334.0.0-a-ef358d086b" podStartSLOduration=7.373959595 podStartE2EDuration="7.373959595s" podCreationTimestamp="2025-05-14 18:13:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 18:13:22.330501429 +0000 UTC m=+1.211616862" watchObservedRunningTime="2025-05-14 18:13:22.373959595 +0000 UTC m=+1.255075027"
May 14 18:13:22.374293 kubelet[3171]: I0514 18:13:22.374220 3171 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4334.0.0-a-ef358d086b" podStartSLOduration=7.374210909 podStartE2EDuration="7.374210909s" podCreationTimestamp="2025-05-14 18:13:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 18:13:22.37393524 +0000 UTC m=+1.255050673" watchObservedRunningTime="2025-05-14 18:13:22.374210909 +0000 UTC m=+1.255326343"
May 14 18:13:23.255882 kubelet[3171]: I0514 18:13:23.255801 3171 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
May 14 18:13:23.256557 containerd[1724]: time="2025-05-14T18:13:23.256508622Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
May 14 18:13:23.257080 kubelet[3171]: I0514 18:13:23.256916 3171 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
May 14 18:13:24.135530 systemd[1]: Created slice kubepods-besteffort-podc400b6a3_2eab_4061_9e6d_a6b71d78c6d0.slice - libcontainer container kubepods-besteffort-podc400b6a3_2eab_4061_9e6d_a6b71d78c6d0.slice.
May 14 18:13:24.199472 kubelet[3171]: I0514 18:13:24.199363 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljstm\" (UniqueName: \"kubernetes.io/projected/c400b6a3-2eab-4061-9e6d-a6b71d78c6d0-kube-api-access-ljstm\") pod \"kube-proxy-zffsv\" (UID: \"c400b6a3-2eab-4061-9e6d-a6b71d78c6d0\") " pod="kube-system/kube-proxy-zffsv" May 14 18:13:24.200911 kubelet[3171]: I0514 18:13:24.200886 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c400b6a3-2eab-4061-9e6d-a6b71d78c6d0-kube-proxy\") pod \"kube-proxy-zffsv\" (UID: \"c400b6a3-2eab-4061-9e6d-a6b71d78c6d0\") " pod="kube-system/kube-proxy-zffsv" May 14 18:13:24.201041 kubelet[3171]: I0514 18:13:24.201032 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c400b6a3-2eab-4061-9e6d-a6b71d78c6d0-xtables-lock\") pod \"kube-proxy-zffsv\" (UID: \"c400b6a3-2eab-4061-9e6d-a6b71d78c6d0\") " pod="kube-system/kube-proxy-zffsv" May 14 18:13:24.201133 kubelet[3171]: I0514 18:13:24.201124 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c400b6a3-2eab-4061-9e6d-a6b71d78c6d0-lib-modules\") pod \"kube-proxy-zffsv\" (UID: \"c400b6a3-2eab-4061-9e6d-a6b71d78c6d0\") " pod="kube-system/kube-proxy-zffsv" May 14 18:13:24.443091 containerd[1724]: time="2025-05-14T18:13:24.443012178Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zffsv,Uid:c400b6a3-2eab-4061-9e6d-a6b71d78c6d0,Namespace:kube-system,Attempt:0,}" May 14 18:13:24.745100 containerd[1724]: time="2025-05-14T18:13:24.745003151Z" level=info msg="connecting to shim 4824c80ec767c45161bb54d0c897fa6639bde09b461a36da96b1b58a87447034" 
address="unix:///run/containerd/s/6b9f4f7f08c86b9c9422d37c34075958310f1c58cb19b3f371a67fc64c5124ca" namespace=k8s.io protocol=ttrpc version=3 May 14 18:13:24.775889 systemd[1]: Started cri-containerd-4824c80ec767c45161bb54d0c897fa6639bde09b461a36da96b1b58a87447034.scope - libcontainer container 4824c80ec767c45161bb54d0c897fa6639bde09b461a36da96b1b58a87447034. May 14 18:13:24.803701 containerd[1724]: time="2025-05-14T18:13:24.803203110Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zffsv,Uid:c400b6a3-2eab-4061-9e6d-a6b71d78c6d0,Namespace:kube-system,Attempt:0,} returns sandbox id \"4824c80ec767c45161bb54d0c897fa6639bde09b461a36da96b1b58a87447034\"" May 14 18:13:24.806193 containerd[1724]: time="2025-05-14T18:13:24.806153210Z" level=info msg="CreateContainer within sandbox \"4824c80ec767c45161bb54d0c897fa6639bde09b461a36da96b1b58a87447034\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 14 18:13:24.886018 systemd[1]: Created slice kubepods-besteffort-pod3afb4f7f_64a2_4a36_a016_dccae8a56d4a.slice - libcontainer container kubepods-besteffort-pod3afb4f7f_64a2_4a36_a016_dccae8a56d4a.slice. 
May 14 18:13:24.905730 kubelet[3171]: I0514 18:13:24.905707 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/3afb4f7f-64a2-4a36-a016-dccae8a56d4a-var-lib-calico\") pod \"tigera-operator-789496d6f5-fxbdf\" (UID: \"3afb4f7f-64a2-4a36-a016-dccae8a56d4a\") " pod="tigera-operator/tigera-operator-789496d6f5-fxbdf" May 14 18:13:24.905945 kubelet[3171]: I0514 18:13:24.905739 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9ksd7\" (UniqueName: \"kubernetes.io/projected/3afb4f7f-64a2-4a36-a016-dccae8a56d4a-kube-api-access-9ksd7\") pod \"tigera-operator-789496d6f5-fxbdf\" (UID: \"3afb4f7f-64a2-4a36-a016-dccae8a56d4a\") " pod="tigera-operator/tigera-operator-789496d6f5-fxbdf" May 14 18:13:25.086277 containerd[1724]: time="2025-05-14T18:13:25.085624175Z" level=info msg="Container a97c9ed9fa254d9feaf0009a179256e87ff50e3e7a3b67ca4e3172667e10b89e: CDI devices from CRI Config.CDIDevices: []" May 14 18:13:25.189170 containerd[1724]: time="2025-05-14T18:13:25.189142050Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-789496d6f5-fxbdf,Uid:3afb4f7f-64a2-4a36-a016-dccae8a56d4a,Namespace:tigera-operator,Attempt:0,}" May 14 18:13:25.285240 containerd[1724]: time="2025-05-14T18:13:25.285212673Z" level=info msg="CreateContainer within sandbox \"4824c80ec767c45161bb54d0c897fa6639bde09b461a36da96b1b58a87447034\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"a97c9ed9fa254d9feaf0009a179256e87ff50e3e7a3b67ca4e3172667e10b89e\"" May 14 18:13:25.286851 containerd[1724]: time="2025-05-14T18:13:25.286743656Z" level=info msg="StartContainer for \"a97c9ed9fa254d9feaf0009a179256e87ff50e3e7a3b67ca4e3172667e10b89e\"" May 14 18:13:25.288927 containerd[1724]: time="2025-05-14T18:13:25.288845635Z" level=info msg="connecting to shim 
a97c9ed9fa254d9feaf0009a179256e87ff50e3e7a3b67ca4e3172667e10b89e" address="unix:///run/containerd/s/6b9f4f7f08c86b9c9422d37c34075958310f1c58cb19b3f371a67fc64c5124ca" protocol=ttrpc version=3 May 14 18:13:25.306908 systemd[1]: Started cri-containerd-a97c9ed9fa254d9feaf0009a179256e87ff50e3e7a3b67ca4e3172667e10b89e.scope - libcontainer container a97c9ed9fa254d9feaf0009a179256e87ff50e3e7a3b67ca4e3172667e10b89e. May 14 18:13:25.586544 containerd[1724]: time="2025-05-14T18:13:25.586520496Z" level=info msg="StartContainer for \"a97c9ed9fa254d9feaf0009a179256e87ff50e3e7a3b67ca4e3172667e10b89e\" returns successfully" May 14 18:13:26.238370 containerd[1724]: time="2025-05-14T18:13:26.238272361Z" level=info msg="connecting to shim 940ece7ba8f24d26882df59a666462c63a4291ce0ff8d1c3bac7acd7f85700c5" address="unix:///run/containerd/s/93252dfe795aec21ec59bd381428304ec97a1224c4a11813eb52215bed4dcf25" namespace=k8s.io protocol=ttrpc version=3 May 14 18:13:26.258827 systemd[1]: Started cri-containerd-940ece7ba8f24d26882df59a666462c63a4291ce0ff8d1c3bac7acd7f85700c5.scope - libcontainer container 940ece7ba8f24d26882df59a666462c63a4291ce0ff8d1c3bac7acd7f85700c5. 
May 14 18:13:26.292018 containerd[1724]: time="2025-05-14T18:13:26.291992566Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-789496d6f5-fxbdf,Uid:3afb4f7f-64a2-4a36-a016-dccae8a56d4a,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"940ece7ba8f24d26882df59a666462c63a4291ce0ff8d1c3bac7acd7f85700c5\"" May 14 18:13:26.293157 containerd[1724]: time="2025-05-14T18:13:26.293137981Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\"" May 14 18:13:27.436805 sudo[2132]: pam_unix(sudo:session): session closed for user root May 14 18:13:27.538566 sshd[2131]: Connection closed by 10.200.16.10 port 44536 May 14 18:13:27.538959 sshd-session[2129]: pam_unix(sshd:session): session closed for user core May 14 18:13:27.541833 systemd[1]: sshd@6-10.200.8.47:22-10.200.16.10:44536.service: Deactivated successfully. May 14 18:13:27.543622 systemd[1]: session-9.scope: Deactivated successfully. May 14 18:13:27.543817 systemd[1]: session-9.scope: Consumed 2.837s CPU time, 227.9M memory peak. May 14 18:13:27.544857 systemd-logind[1686]: Session 9 logged out. Waiting for processes to exit. May 14 18:13:27.545941 systemd-logind[1686]: Removed session 9. May 14 18:13:29.477212 kubelet[3171]: I0514 18:13:29.476809 3171 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-zffsv" podStartSLOduration=6.4767890040000005 podStartE2EDuration="6.476789004s" podCreationTimestamp="2025-05-14 18:13:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 18:13:26.231443071 +0000 UTC m=+5.112558505" watchObservedRunningTime="2025-05-14 18:13:29.476789004 +0000 UTC m=+8.357904456" May 14 18:13:30.151036 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3989763347.mount: Deactivated successfully. 
May 14 18:13:33.977974 containerd[1724]: time="2025-05-14T18:13:33.977933954Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:13:34.072184 containerd[1724]: time="2025-05-14T18:13:34.072117860Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.7: active requests=0, bytes read=22002662" May 14 18:13:34.084708 containerd[1724]: time="2025-05-14T18:13:34.084647629Z" level=info msg="ImageCreate event name:\"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:13:34.135141 containerd[1724]: time="2025-05-14T18:13:34.135069348Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:13:34.135876 containerd[1724]: time="2025-05-14T18:13:34.135836404Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.7\" with image id \"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\", repo tag \"quay.io/tigera/operator:v1.36.7\", repo digest \"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\", size \"21998657\" in 7.842668313s" May 14 18:13:34.135930 containerd[1724]: time="2025-05-14T18:13:34.135883643Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\" returns image reference \"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\"" May 14 18:13:34.138222 containerd[1724]: time="2025-05-14T18:13:34.137889572Z" level=info msg="CreateContainer within sandbox \"940ece7ba8f24d26882df59a666462c63a4291ce0ff8d1c3bac7acd7f85700c5\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" May 14 18:13:34.285863 containerd[1724]: time="2025-05-14T18:13:34.285370171Z" level=info msg="Container 
1f3497f42c184ee5a04d6885e37a60e7af7df576114f860914307ae1167ef8d8: CDI devices from CRI Config.CDIDevices: []" May 14 18:13:34.435270 containerd[1724]: time="2025-05-14T18:13:34.435246904Z" level=info msg="CreateContainer within sandbox \"940ece7ba8f24d26882df59a666462c63a4291ce0ff8d1c3bac7acd7f85700c5\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"1f3497f42c184ee5a04d6885e37a60e7af7df576114f860914307ae1167ef8d8\"" May 14 18:13:34.435755 containerd[1724]: time="2025-05-14T18:13:34.435660431Z" level=info msg="StartContainer for \"1f3497f42c184ee5a04d6885e37a60e7af7df576114f860914307ae1167ef8d8\"" May 14 18:13:34.436613 containerd[1724]: time="2025-05-14T18:13:34.436535564Z" level=info msg="connecting to shim 1f3497f42c184ee5a04d6885e37a60e7af7df576114f860914307ae1167ef8d8" address="unix:///run/containerd/s/93252dfe795aec21ec59bd381428304ec97a1224c4a11813eb52215bed4dcf25" protocol=ttrpc version=3 May 14 18:13:34.457798 systemd[1]: Started cri-containerd-1f3497f42c184ee5a04d6885e37a60e7af7df576114f860914307ae1167ef8d8.scope - libcontainer container 1f3497f42c184ee5a04d6885e37a60e7af7df576114f860914307ae1167ef8d8. 
May 14 18:13:34.482943 containerd[1724]: time="2025-05-14T18:13:34.482917539Z" level=info msg="StartContainer for \"1f3497f42c184ee5a04d6885e37a60e7af7df576114f860914307ae1167ef8d8\" returns successfully" May 14 18:13:38.479317 kubelet[3171]: I0514 18:13:38.479261 3171 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-789496d6f5-fxbdf" podStartSLOduration=6.635263929 podStartE2EDuration="14.47924255s" podCreationTimestamp="2025-05-14 18:13:24 +0000 UTC" firstStartedPulling="2025-05-14 18:13:26.292765032 +0000 UTC m=+5.173880460" lastFinishedPulling="2025-05-14 18:13:34.136743636 +0000 UTC m=+13.017859081" observedRunningTime="2025-05-14 18:13:35.245729625 +0000 UTC m=+14.126845057" watchObservedRunningTime="2025-05-14 18:13:38.47924255 +0000 UTC m=+17.360357990" May 14 18:13:38.489259 systemd[1]: Created slice kubepods-besteffort-pode4ae8b70_a163_4b5d_90b8_72f5a85abfbc.slice - libcontainer container kubepods-besteffort-pode4ae8b70_a163_4b5d_90b8_72f5a85abfbc.slice. 
May 14 18:13:38.595867 kubelet[3171]: I0514 18:13:38.595820 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e4ae8b70-a163-4b5d-90b8-72f5a85abfbc-tigera-ca-bundle\") pod \"calico-typha-6cbb5997bd-5rz9x\" (UID: \"e4ae8b70-a163-4b5d-90b8-72f5a85abfbc\") " pod="calico-system/calico-typha-6cbb5997bd-5rz9x" May 14 18:13:38.595943 kubelet[3171]: I0514 18:13:38.595872 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zb7q6\" (UniqueName: \"kubernetes.io/projected/e4ae8b70-a163-4b5d-90b8-72f5a85abfbc-kube-api-access-zb7q6\") pod \"calico-typha-6cbb5997bd-5rz9x\" (UID: \"e4ae8b70-a163-4b5d-90b8-72f5a85abfbc\") " pod="calico-system/calico-typha-6cbb5997bd-5rz9x" May 14 18:13:38.595943 kubelet[3171]: I0514 18:13:38.595889 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/e4ae8b70-a163-4b5d-90b8-72f5a85abfbc-typha-certs\") pod \"calico-typha-6cbb5997bd-5rz9x\" (UID: \"e4ae8b70-a163-4b5d-90b8-72f5a85abfbc\") " pod="calico-system/calico-typha-6cbb5997bd-5rz9x" May 14 18:13:38.792534 containerd[1724]: time="2025-05-14T18:13:38.792500981Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6cbb5997bd-5rz9x,Uid:e4ae8b70-a163-4b5d-90b8-72f5a85abfbc,Namespace:calico-system,Attempt:0,}" May 14 18:13:38.840840 systemd[1]: Created slice kubepods-besteffort-pod5a141316_5dcb_47ca_9c32_2e73ecdeb347.slice - libcontainer container kubepods-besteffort-pod5a141316_5dcb_47ca_9c32_2e73ecdeb347.slice. 
May 14 18:13:38.897585 kubelet[3171]: I0514 18:13:38.897554 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5a141316-5dcb-47ca-9c32-2e73ecdeb347-lib-modules\") pod \"calico-node-wztbc\" (UID: \"5a141316-5dcb-47ca-9c32-2e73ecdeb347\") " pod="calico-system/calico-node-wztbc" May 14 18:13:38.897585 kubelet[3171]: I0514 18:13:38.897583 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5a141316-5dcb-47ca-9c32-2e73ecdeb347-xtables-lock\") pod \"calico-node-wztbc\" (UID: \"5a141316-5dcb-47ca-9c32-2e73ecdeb347\") " pod="calico-system/calico-node-wztbc" May 14 18:13:38.897672 kubelet[3171]: I0514 18:13:38.897598 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/5a141316-5dcb-47ca-9c32-2e73ecdeb347-cni-bin-dir\") pod \"calico-node-wztbc\" (UID: \"5a141316-5dcb-47ca-9c32-2e73ecdeb347\") " pod="calico-system/calico-node-wztbc" May 14 18:13:38.897672 kubelet[3171]: I0514 18:13:38.897611 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/5a141316-5dcb-47ca-9c32-2e73ecdeb347-cni-net-dir\") pod \"calico-node-wztbc\" (UID: \"5a141316-5dcb-47ca-9c32-2e73ecdeb347\") " pod="calico-system/calico-node-wztbc" May 14 18:13:38.897672 kubelet[3171]: I0514 18:13:38.897626 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/5a141316-5dcb-47ca-9c32-2e73ecdeb347-var-lib-calico\") pod \"calico-node-wztbc\" (UID: \"5a141316-5dcb-47ca-9c32-2e73ecdeb347\") " pod="calico-system/calico-node-wztbc" May 14 18:13:38.897672 kubelet[3171]: I0514 18:13:38.897639 3171 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/5a141316-5dcb-47ca-9c32-2e73ecdeb347-cni-log-dir\") pod \"calico-node-wztbc\" (UID: \"5a141316-5dcb-47ca-9c32-2e73ecdeb347\") " pod="calico-system/calico-node-wztbc" May 14 18:13:38.897672 kubelet[3171]: I0514 18:13:38.897654 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/5a141316-5dcb-47ca-9c32-2e73ecdeb347-flexvol-driver-host\") pod \"calico-node-wztbc\" (UID: \"5a141316-5dcb-47ca-9c32-2e73ecdeb347\") " pod="calico-system/calico-node-wztbc" May 14 18:13:38.897774 kubelet[3171]: I0514 18:13:38.897671 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5a141316-5dcb-47ca-9c32-2e73ecdeb347-tigera-ca-bundle\") pod \"calico-node-wztbc\" (UID: \"5a141316-5dcb-47ca-9c32-2e73ecdeb347\") " pod="calico-system/calico-node-wztbc" May 14 18:13:38.897774 kubelet[3171]: I0514 18:13:38.897697 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/5a141316-5dcb-47ca-9c32-2e73ecdeb347-node-certs\") pod \"calico-node-wztbc\" (UID: \"5a141316-5dcb-47ca-9c32-2e73ecdeb347\") " pod="calico-system/calico-node-wztbc" May 14 18:13:38.897774 kubelet[3171]: I0514 18:13:38.897712 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/5a141316-5dcb-47ca-9c32-2e73ecdeb347-var-run-calico\") pod \"calico-node-wztbc\" (UID: \"5a141316-5dcb-47ca-9c32-2e73ecdeb347\") " pod="calico-system/calico-node-wztbc" May 14 18:13:38.897774 kubelet[3171]: I0514 18:13:38.897729 3171 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvvfv\" (UniqueName: \"kubernetes.io/projected/5a141316-5dcb-47ca-9c32-2e73ecdeb347-kube-api-access-nvvfv\") pod \"calico-node-wztbc\" (UID: \"5a141316-5dcb-47ca-9c32-2e73ecdeb347\") " pod="calico-system/calico-node-wztbc" May 14 18:13:38.897774 kubelet[3171]: I0514 18:13:38.897745 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/5a141316-5dcb-47ca-9c32-2e73ecdeb347-policysync\") pod \"calico-node-wztbc\" (UID: \"5a141316-5dcb-47ca-9c32-2e73ecdeb347\") " pod="calico-system/calico-node-wztbc" May 14 18:13:38.999884 kubelet[3171]: E0514 18:13:38.999850 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:13:38.999884 kubelet[3171]: W0514 18:13:38.999867 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:13:39.000028 kubelet[3171]: E0514 18:13:38.999993 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 18:13:39.000190 kubelet[3171]: E0514 18:13:39.000170 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:13:39.000190 kubelet[3171]: W0514 18:13:39.000179 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:13:39.000360 kubelet[3171]: E0514 18:13:39.000350 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:13:39.000647 kubelet[3171]: E0514 18:13:39.000623 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:13:39.000647 kubelet[3171]: W0514 18:13:39.000635 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:13:39.000945 kubelet[3171]: E0514 18:13:39.000927 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 18:13:39.001296 kubelet[3171]: E0514 18:13:39.001261 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:13:39.001296 kubelet[3171]: W0514 18:13:39.001283 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:13:39.001401 kubelet[3171]: E0514 18:13:39.001369 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:13:39.001804 kubelet[3171]: E0514 18:13:39.001796 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:13:39.003635 kubelet[3171]: W0514 18:13:39.001846 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:13:39.003760 kubelet[3171]: E0514 18:13:39.003712 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 18:13:39.003899 kubelet[3171]: E0514 18:13:39.003890 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:13:39.003941 kubelet[3171]: W0514 18:13:39.003933 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:13:39.004048 kubelet[3171]: E0514 18:13:39.004033 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:13:39.004120 kubelet[3171]: E0514 18:13:39.004115 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:13:39.004153 kubelet[3171]: W0514 18:13:39.004148 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:13:39.004193 kubelet[3171]: E0514 18:13:39.004187 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 18:13:39.004325 kubelet[3171]: E0514 18:13:39.004319 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:13:39.004359 kubelet[3171]: W0514 18:13:39.004354 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:13:39.004400 kubelet[3171]: E0514 18:13:39.004395 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:13:39.004669 kubelet[3171]: E0514 18:13:39.004640 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:13:39.004669 kubelet[3171]: W0514 18:13:39.004649 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:13:39.004669 kubelet[3171]: E0514 18:13:39.004657 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 18:13:39.042998 kubelet[3171]: E0514 18:13:39.042951 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:13:39.043251 kubelet[3171]: W0514 18:13:39.043046 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:13:39.043251 kubelet[3171]: E0514 18:13:39.043061 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:13:39.085690 containerd[1724]: time="2025-05-14T18:13:39.085636379Z" level=info msg="connecting to shim 9d7973c7f7cfbe39540bf02371ff63a2f4f693ee730fcbe680b8a7f42e92e449" address="unix:///run/containerd/s/13f88f6cba4b3b7737b54f80f4f97e5af83b75db8b2276c3e2918ea061d99309" namespace=k8s.io protocol=ttrpc version=3 May 14 18:13:39.105835 systemd[1]: Started cri-containerd-9d7973c7f7cfbe39540bf02371ff63a2f4f693ee730fcbe680b8a7f42e92e449.scope - libcontainer container 9d7973c7f7cfbe39540bf02371ff63a2f4f693ee730fcbe680b8a7f42e92e449. 
May 14 18:13:39.143352 containerd[1724]: time="2025-05-14T18:13:39.143324800Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-wztbc,Uid:5a141316-5dcb-47ca-9c32-2e73ecdeb347,Namespace:calico-system,Attempt:0,}" May 14 18:13:39.145153 containerd[1724]: time="2025-05-14T18:13:39.145110471Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6cbb5997bd-5rz9x,Uid:e4ae8b70-a163-4b5d-90b8-72f5a85abfbc,Namespace:calico-system,Attempt:0,} returns sandbox id \"9d7973c7f7cfbe39540bf02371ff63a2f4f693ee730fcbe680b8a7f42e92e449\"" May 14 18:13:39.146248 containerd[1724]: time="2025-05-14T18:13:39.146143166Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\"" May 14 18:13:39.172136 kubelet[3171]: E0514 18:13:39.172087 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-p469r" podUID="5e55bc54-125f-45b4-a2a9-22003bd5a2c5" May 14 18:13:39.178331 kubelet[3171]: E0514 18:13:39.178314 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:13:39.178331 kubelet[3171]: W0514 18:13:39.178331 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:13:39.178412 kubelet[3171]: E0514 18:13:39.178344 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 18:13:39.178704 kubelet[3171]: E0514 18:13:39.178671 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:13:39.178755 kubelet[3171]: W0514 18:13:39.178708 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:13:39.178755 kubelet[3171]: E0514 18:13:39.178721 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:13:39.178938 kubelet[3171]: E0514 18:13:39.178920 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:13:39.178938 kubelet[3171]: W0514 18:13:39.178933 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:13:39.178985 kubelet[3171]: E0514 18:13:39.178942 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 18:13:39.200193 kubelet[3171]: I0514 18:13:39.200146 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/5e55bc54-125f-45b4-a2a9-22003bd5a2c5-varrun\") pod \"csi-node-driver-p469r\" (UID: \"5e55bc54-125f-45b4-a2a9-22003bd5a2c5\") " pod="calico-system/csi-node-driver-p469r" May 14 18:13:39.200284 kubelet[3171]: E0514 18:13:39.200273 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:13:39.200284 kubelet[3171]: W0514 18:13:39.200282 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:13:39.200335 kubelet[3171]: E0514 18:13:39.200294 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 18:13:39.200335 kubelet[3171]: I0514 18:13:39.200309 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/5e55bc54-125f-45b4-a2a9-22003bd5a2c5-registration-dir\") pod \"csi-node-driver-p469r\" (UID: \"5e55bc54-125f-45b4-a2a9-22003bd5a2c5\") " pod="calico-system/csi-node-driver-p469r" May 14 18:13:39.200435 kubelet[3171]: E0514 18:13:39.200426 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:13:39.200435 kubelet[3171]: W0514 18:13:39.200434 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:13:39.200482 kubelet[3171]: E0514 18:13:39.200446 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:13:39.200582 kubelet[3171]: E0514 18:13:39.200563 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:13:39.200582 kubelet[3171]: W0514 18:13:39.200571 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:13:39.200663 kubelet[3171]: E0514 18:13:39.200586 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 18:13:39.200713 kubelet[3171]: E0514 18:13:39.200704 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:13:39.200713 kubelet[3171]: W0514 18:13:39.200709 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:13:39.200785 kubelet[3171]: E0514 18:13:39.200718 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:13:39.200785 kubelet[3171]: I0514 18:13:39.200732 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kn9tw\" (UniqueName: \"kubernetes.io/projected/5e55bc54-125f-45b4-a2a9-22003bd5a2c5-kube-api-access-kn9tw\") pod \"csi-node-driver-p469r\" (UID: \"5e55bc54-125f-45b4-a2a9-22003bd5a2c5\") " pod="calico-system/csi-node-driver-p469r" May 14 18:13:39.200902 kubelet[3171]: E0514 18:13:39.200825 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:13:39.200902 kubelet[3171]: W0514 18:13:39.200831 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:13:39.200902 kubelet[3171]: E0514 18:13:39.200847 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 18:13:39.200902 kubelet[3171]: I0514 18:13:39.200860 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5e55bc54-125f-45b4-a2a9-22003bd5a2c5-kubelet-dir\") pod \"csi-node-driver-p469r\" (UID: \"5e55bc54-125f-45b4-a2a9-22003bd5a2c5\") " pod="calico-system/csi-node-driver-p469r" May 14 18:13:39.201028 kubelet[3171]: E0514 18:13:39.200929 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:13:39.201028 kubelet[3171]: W0514 18:13:39.200934 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:13:39.201028 kubelet[3171]: E0514 18:13:39.200944 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:13:39.201098 kubelet[3171]: E0514 18:13:39.201046 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:13:39.201098 kubelet[3171]: W0514 18:13:39.201052 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:13:39.201098 kubelet[3171]: E0514 18:13:39.201061 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 18:13:39.201269 kubelet[3171]: E0514 18:13:39.201154 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:13:39.201269 kubelet[3171]: W0514 18:13:39.201158 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:13:39.201269 kubelet[3171]: E0514 18:13:39.201180 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:13:39.201756 kubelet[3171]: E0514 18:13:39.201741 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:13:39.201756 kubelet[3171]: W0514 18:13:39.201757 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:13:39.201828 kubelet[3171]: E0514 18:13:39.201775 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 18:13:39.201944 kubelet[3171]: E0514 18:13:39.201901 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:13:39.201944 kubelet[3171]: W0514 18:13:39.201907 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:13:39.201944 kubelet[3171]: E0514 18:13:39.201921 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:13:39.201944 kubelet[3171]: I0514 18:13:39.201938 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/5e55bc54-125f-45b4-a2a9-22003bd5a2c5-socket-dir\") pod \"csi-node-driver-p469r\" (UID: \"5e55bc54-125f-45b4-a2a9-22003bd5a2c5\") " pod="calico-system/csi-node-driver-p469r" May 14 18:13:39.202182 kubelet[3171]: E0514 18:13:39.202052 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:13:39.202182 kubelet[3171]: W0514 18:13:39.202058 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:13:39.202272 kubelet[3171]: E0514 18:13:39.202203 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Error: unexpected end of JSON input" May 14 18:13:39.307242 kubelet[3171]: E0514 18:13:39.306975 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:13:39.307242 kubelet[3171]: E0514 18:13:39.307006 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:13:39.307453 kubelet[3171]: W0514 18:13:39.307009 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:13:39.307453 kubelet[3171]: E0514 18:13:39.307020 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:13:39.307453 kubelet[3171]: E0514 18:13:39.307245 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:13:39.307453 kubelet[3171]: W0514 18:13:39.307250 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:13:39.307453 kubelet[3171]: E0514 18:13:39.307258 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 18:13:39.307453 kubelet[3171]: E0514 18:13:39.307366 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:13:39.307453 kubelet[3171]: W0514 18:13:39.307370 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:13:39.307453 kubelet[3171]: E0514 18:13:39.307382 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:13:39.307598 kubelet[3171]: E0514 18:13:39.307468 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:13:39.307598 kubelet[3171]: W0514 18:13:39.307473 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:13:39.307598 kubelet[3171]: E0514 18:13:39.307479 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 18:13:39.307598 kubelet[3171]: E0514 18:13:39.307578 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:13:39.307598 kubelet[3171]: W0514 18:13:39.307583 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:13:39.307598 kubelet[3171]: E0514 18:13:39.307589 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:13:39.343607 kubelet[3171]: E0514 18:13:39.343483 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:13:39.343607 kubelet[3171]: W0514 18:13:39.343496 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:13:39.343607 kubelet[3171]: E0514 18:13:39.343507 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:13:39.545789 containerd[1724]: time="2025-05-14T18:13:39.545726349Z" level=info msg="connecting to shim c8945f6b802fcce8dd85f2adb749a80618cabf38fe46e7a67245b793da436e8c" address="unix:///run/containerd/s/7eb71ceb47e07959424c885d7813668b4a828b621f4f3061a34177a448c3272b" namespace=k8s.io protocol=ttrpc version=3 May 14 18:13:39.599887 systemd[1]: Started cri-containerd-c8945f6b802fcce8dd85f2adb749a80618cabf38fe46e7a67245b793da436e8c.scope - libcontainer container c8945f6b802fcce8dd85f2adb749a80618cabf38fe46e7a67245b793da436e8c. 
May 14 18:13:39.682122 containerd[1724]: time="2025-05-14T18:13:39.682065123Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-wztbc,Uid:5a141316-5dcb-47ca-9c32-2e73ecdeb347,Namespace:calico-system,Attempt:0,} returns sandbox id \"c8945f6b802fcce8dd85f2adb749a80618cabf38fe46e7a67245b793da436e8c\"" May 14 18:13:40.187565 kubelet[3171]: E0514 18:13:40.187526 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-p469r" podUID="5e55bc54-125f-45b4-a2a9-22003bd5a2c5" May 14 18:13:42.187602 kubelet[3171]: E0514 18:13:42.187562 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-p469r" podUID="5e55bc54-125f-45b4-a2a9-22003bd5a2c5" May 14 18:13:43.575280 containerd[1724]: time="2025-05-14T18:13:43.575194797Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:13:43.624133 containerd[1724]: time="2025-05-14T18:13:43.624086080Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.3: active requests=0, bytes read=30426870" May 14 18:13:43.627310 containerd[1724]: time="2025-05-14T18:13:43.627275136Z" level=info msg="ImageCreate event name:\"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:13:43.686786 containerd[1724]: time="2025-05-14T18:13:43.686740657Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:13:43.687483 containerd[1724]: time="2025-05-14T18:13:43.687196309Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.3\" with image id \"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\", size \"31919484\" in 4.541029813s" May 14 18:13:43.687483 containerd[1724]: time="2025-05-14T18:13:43.687245659Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\" returns image reference \"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\"" May 14 18:13:43.688045 containerd[1724]: time="2025-05-14T18:13:43.688025703Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\"" May 14 18:13:43.694079 containerd[1724]: time="2025-05-14T18:13:43.693276868Z" level=info msg="CreateContainer within sandbox \"9d7973c7f7cfbe39540bf02371ff63a2f4f693ee730fcbe680b8a7f42e92e449\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" May 14 18:13:43.882863 containerd[1724]: time="2025-05-14T18:13:43.882812339Z" level=info msg="Container 252d800dede1ec3fba24cb37036ebcbd2e4fdc01f94446b7376fbbe8ecb489f2: CDI devices from CRI Config.CDIDevices: []" May 14 18:13:43.888403 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount6889109.mount: Deactivated successfully. 
May 14 18:13:44.084220 containerd[1724]: time="2025-05-14T18:13:44.084120461Z" level=info msg="CreateContainer within sandbox \"9d7973c7f7cfbe39540bf02371ff63a2f4f693ee730fcbe680b8a7f42e92e449\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"252d800dede1ec3fba24cb37036ebcbd2e4fdc01f94446b7376fbbe8ecb489f2\"" May 14 18:13:44.085709 containerd[1724]: time="2025-05-14T18:13:44.084931572Z" level=info msg="StartContainer for \"252d800dede1ec3fba24cb37036ebcbd2e4fdc01f94446b7376fbbe8ecb489f2\"" May 14 18:13:44.086337 containerd[1724]: time="2025-05-14T18:13:44.086305101Z" level=info msg="connecting to shim 252d800dede1ec3fba24cb37036ebcbd2e4fdc01f94446b7376fbbe8ecb489f2" address="unix:///run/containerd/s/13f88f6cba4b3b7737b54f80f4f97e5af83b75db8b2276c3e2918ea061d99309" protocol=ttrpc version=3 May 14 18:13:44.107817 systemd[1]: Started cri-containerd-252d800dede1ec3fba24cb37036ebcbd2e4fdc01f94446b7376fbbe8ecb489f2.scope - libcontainer container 252d800dede1ec3fba24cb37036ebcbd2e4fdc01f94446b7376fbbe8ecb489f2. 
May 14 18:13:44.178915 containerd[1724]: time="2025-05-14T18:13:44.178823214Z" level=info msg="StartContainer for \"252d800dede1ec3fba24cb37036ebcbd2e4fdc01f94446b7376fbbe8ecb489f2\" returns successfully" May 14 18:13:44.187951 kubelet[3171]: E0514 18:13:44.187911 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-p469r" podUID="5e55bc54-125f-45b4-a2a9-22003bd5a2c5" May 14 18:13:44.311084 kubelet[3171]: E0514 18:13:44.311066 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:13:44.311084 kubelet[3171]: W0514 18:13:44.311082 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:13:44.311202 kubelet[3171]: E0514 18:13:44.311098 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:13:44.311202 kubelet[3171]: E0514 18:13:44.311194 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:13:44.311202 kubelet[3171]: W0514 18:13:44.311199 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:13:44.311289 kubelet[3171]: E0514 18:13:44.311205 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 18:13:44.311311 kubelet[3171]: E0514 18:13:44.311307 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:13:44.311334 kubelet[3171]: W0514 18:13:44.311314 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:13:44.311334 kubelet[3171]: E0514 18:13:44.311322 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:13:44.311464 kubelet[3171]: E0514 18:13:44.311450 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:13:44.311464 kubelet[3171]: W0514 18:13:44.311457 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:13:44.311506 kubelet[3171]: E0514 18:13:44.311463 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 18:13:44.311562 kubelet[3171]: E0514 18:13:44.311551 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:13:44.311562 kubelet[3171]: W0514 18:13:44.311559 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:13:44.311612 kubelet[3171]: E0514 18:13:44.311566 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:13:44.311650 kubelet[3171]: E0514 18:13:44.311644 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:13:44.311672 kubelet[3171]: W0514 18:13:44.311651 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:13:44.311672 kubelet[3171]: E0514 18:13:44.311656 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 18:13:44.311824 kubelet[3171]: E0514 18:13:44.311807 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:13:44.311824 kubelet[3171]: W0514 18:13:44.311815 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:13:44.311893 kubelet[3171]: E0514 18:13:44.311827 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:13:44.311923 kubelet[3171]: E0514 18:13:44.311921 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:13:44.311941 kubelet[3171]: W0514 18:13:44.311926 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:13:44.311941 kubelet[3171]: E0514 18:13:44.311932 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 18:13:44.312028 kubelet[3171]: E0514 18:13:44.312020 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:13:44.312028 kubelet[3171]: W0514 18:13:44.312026 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:13:44.312073 kubelet[3171]: E0514 18:13:44.312032 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:13:44.312117 kubelet[3171]: E0514 18:13:44.312110 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:13:44.312117 kubelet[3171]: W0514 18:13:44.312116 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:13:44.312156 kubelet[3171]: E0514 18:13:44.312121 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 18:13:44.312238 kubelet[3171]: E0514 18:13:44.312229 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:13:44.312238 kubelet[3171]: W0514 18:13:44.312236 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:13:44.312291 kubelet[3171]: E0514 18:13:44.312241 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:13:44.312330 kubelet[3171]: E0514 18:13:44.312322 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:13:44.312330 kubelet[3171]: W0514 18:13:44.312328 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:13:44.312395 kubelet[3171]: E0514 18:13:44.312333 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 18:13:44.312437 kubelet[3171]: E0514 18:13:44.312428 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:13:44.312437 kubelet[3171]: W0514 18:13:44.312434 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:13:44.312477 kubelet[3171]: E0514 18:13:44.312440 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:13:44.312521 kubelet[3171]: E0514 18:13:44.312514 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:13:44.312521 kubelet[3171]: W0514 18:13:44.312519 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:13:44.312560 kubelet[3171]: E0514 18:13:44.312525 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 18:13:44.312608 kubelet[3171]: E0514 18:13:44.312601 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:13:44.312608 kubelet[3171]: W0514 18:13:44.312607 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:13:44.312654 kubelet[3171]: E0514 18:13:44.312612 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:13:44.334894 kubelet[3171]: E0514 18:13:44.334877 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:13:44.334894 kubelet[3171]: W0514 18:13:44.334890 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:13:44.334994 kubelet[3171]: E0514 18:13:44.334909 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 18:13:44.335093 kubelet[3171]: E0514 18:13:44.335020 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:13:44.335093 kubelet[3171]: W0514 18:13:44.335025 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:13:44.335093 kubelet[3171]: E0514 18:13:44.335031 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:13:44.335169 kubelet[3171]: E0514 18:13:44.335128 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:13:44.335169 kubelet[3171]: W0514 18:13:44.335137 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:13:44.335169 kubelet[3171]: E0514 18:13:44.335153 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 18:13:44.335249 kubelet[3171]: E0514 18:13:44.335247 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:13:44.335269 kubelet[3171]: W0514 18:13:44.335254 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:13:44.335269 kubelet[3171]: E0514 18:13:44.335266 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:13:44.335359 kubelet[3171]: E0514 18:13:44.335345 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:13:44.335359 kubelet[3171]: W0514 18:13:44.335352 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:13:44.335402 kubelet[3171]: E0514 18:13:44.335363 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 18:13:44.335485 kubelet[3171]: E0514 18:13:44.335477 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:13:44.335485 kubelet[3171]: W0514 18:13:44.335484 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:13:44.335528 kubelet[3171]: E0514 18:13:44.335496 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:13:44.335625 kubelet[3171]: E0514 18:13:44.335617 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:13:44.335625 kubelet[3171]: W0514 18:13:44.335624 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:13:44.335664 kubelet[3171]: E0514 18:13:44.335632 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 18:13:44.335752 kubelet[3171]: E0514 18:13:44.335744 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:13:44.335752 kubelet[3171]: W0514 18:13:44.335751 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:13:44.335799 kubelet[3171]: E0514 18:13:44.335763 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:13:44.335850 kubelet[3171]: E0514 18:13:44.335843 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:13:44.335850 kubelet[3171]: W0514 18:13:44.335849 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:13:44.335889 kubelet[3171]: E0514 18:13:44.335860 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 18:13:44.335946 kubelet[3171]: E0514 18:13:44.335938 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:13:44.335946 kubelet[3171]: W0514 18:13:44.335944 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:13:44.335984 kubelet[3171]: E0514 18:13:44.335955 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:13:44.336069 kubelet[3171]: E0514 18:13:44.336059 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:13:44.336069 kubelet[3171]: W0514 18:13:44.336067 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:13:44.336114 kubelet[3171]: E0514 18:13:44.336076 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 18:13:46.174360 containerd[1724]: time="2025-05-14T18:13:46.174325909Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:13:46.187861 kubelet[3171]: E0514 18:13:46.187815 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-p469r" podUID="5e55bc54-125f-45b4-a2a9-22003bd5a2c5" May 14 18:13:46.220239 containerd[1724]: time="2025-05-14T18:13:46.220198793Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3: active requests=0, bytes read=5366937" May 14 18:13:46.222754 containerd[1724]: time="2025-05-14T18:13:46.222710279Z" level=info msg="ImageCreate event name:\"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:13:46.283519 containerd[1724]: time="2025-05-14T18:13:46.283294690Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:13:46.284927 containerd[1724]: time="2025-05-14T18:13:46.284801494Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" with image id \"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\", size \"6859519\" in 2.596748738s" May 14 18:13:46.284927 containerd[1724]: time="2025-05-14T18:13:46.284833964Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" returns image reference \"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\"" May 14 18:13:46.286620 containerd[1724]: time="2025-05-14T18:13:46.286590795Z" level=info msg="CreateContainer within sandbox \"c8945f6b802fcce8dd85f2adb749a80618cabf38fe46e7a67245b793da436e8c\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" May 14 18:13:46.476728 containerd[1724]: time="2025-05-14T18:13:46.476273126Z" level=info msg="Container 1c8cdb5a9bd956103ceb7bf5974ad3f8d93efa018fd08e94d6845f23bb6531bd: CDI devices from CRI Config.CDIDevices: []" May 14 18:13:46.633049 containerd[1724]: time="2025-05-14T18:13:46.633021461Z" level=info msg="CreateContainer within sandbox \"c8945f6b802fcce8dd85f2adb749a80618cabf38fe46e7a67245b793da436e8c\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"1c8cdb5a9bd956103ceb7bf5974ad3f8d93efa018fd08e94d6845f23bb6531bd\"" May 14 18:13:46.633458 containerd[1724]: time="2025-05-14T18:13:46.633368458Z" level=info msg="StartContainer for \"1c8cdb5a9bd956103ceb7bf5974ad3f8d93efa018fd08e94d6845f23bb6531bd\"" May 14 18:13:46.634906 containerd[1724]: time="2025-05-14T18:13:46.634871701Z" level=info msg="connecting to shim 1c8cdb5a9bd956103ceb7bf5974ad3f8d93efa018fd08e94d6845f23bb6531bd" address="unix:///run/containerd/s/7eb71ceb47e07959424c885d7813668b4a828b621f4f3061a34177a448c3272b" protocol=ttrpc version=3 May 14 18:13:46.662818 systemd[1]: Started cri-containerd-1c8cdb5a9bd956103ceb7bf5974ad3f8d93efa018fd08e94d6845f23bb6531bd.scope - libcontainer container 1c8cdb5a9bd956103ceb7bf5974ad3f8d93efa018fd08e94d6845f23bb6531bd. 
May 14 18:13:46.704051 containerd[1724]: time="2025-05-14T18:13:46.704030999Z" level=info msg="StartContainer for \"1c8cdb5a9bd956103ceb7bf5974ad3f8d93efa018fd08e94d6845f23bb6531bd\" returns successfully" May 14 18:13:46.711949 systemd[1]: cri-containerd-1c8cdb5a9bd956103ceb7bf5974ad3f8d93efa018fd08e94d6845f23bb6531bd.scope: Deactivated successfully. May 14 18:13:46.715273 containerd[1724]: time="2025-05-14T18:13:46.715236900Z" level=info msg="received exit event container_id:\"1c8cdb5a9bd956103ceb7bf5974ad3f8d93efa018fd08e94d6845f23bb6531bd\" id:\"1c8cdb5a9bd956103ceb7bf5974ad3f8d93efa018fd08e94d6845f23bb6531bd\" pid:3855 exited_at:{seconds:1747246426 nanos:714458730}" May 14 18:13:46.715630 containerd[1724]: time="2025-05-14T18:13:46.715605034Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1c8cdb5a9bd956103ceb7bf5974ad3f8d93efa018fd08e94d6845f23bb6531bd\" id:\"1c8cdb5a9bd956103ceb7bf5974ad3f8d93efa018fd08e94d6845f23bb6531bd\" pid:3855 exited_at:{seconds:1747246426 nanos:714458730}" May 14 18:13:46.738940 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1c8cdb5a9bd956103ceb7bf5974ad3f8d93efa018fd08e94d6845f23bb6531bd-rootfs.mount: Deactivated successfully. 
May 14 18:13:47.268910 kubelet[3171]: I0514 18:13:47.268770 3171 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6cbb5997bd-5rz9x" podStartSLOduration=4.726641659 podStartE2EDuration="9.268662278s" podCreationTimestamp="2025-05-14 18:13:38 +0000 UTC" firstStartedPulling="2025-05-14 18:13:39.145905171 +0000 UTC m=+18.027020601" lastFinishedPulling="2025-05-14 18:13:43.687925794 +0000 UTC m=+22.569041220" observedRunningTime="2025-05-14 18:13:44.285403047 +0000 UTC m=+23.166518480" watchObservedRunningTime="2025-05-14 18:13:47.268662278 +0000 UTC m=+26.149777713" May 14 18:13:48.188303 kubelet[3171]: E0514 18:13:48.188269 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-p469r" podUID="5e55bc54-125f-45b4-a2a9-22003bd5a2c5" May 14 18:13:50.187769 kubelet[3171]: E0514 18:13:50.187701 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-p469r" podUID="5e55bc54-125f-45b4-a2a9-22003bd5a2c5" May 14 18:13:51.056636 kubelet[3171]: I0514 18:13:51.056446 3171 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 14 18:13:52.188029 kubelet[3171]: E0514 18:13:52.187976 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-p469r" podUID="5e55bc54-125f-45b4-a2a9-22003bd5a2c5" May 14 18:13:54.187501 kubelet[3171]: E0514 18:13:54.187471 3171 pod_workers.go:1301] "Error 
syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-p469r" podUID="5e55bc54-125f-45b4-a2a9-22003bd5a2c5" May 14 18:13:56.187692 kubelet[3171]: E0514 18:13:56.187621 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-p469r" podUID="5e55bc54-125f-45b4-a2a9-22003bd5a2c5" May 14 18:13:56.715743 containerd[1724]: time="2025-05-14T18:13:56.715709329Z" level=error msg="failed to handle container TaskExit event container_id:\"1c8cdb5a9bd956103ceb7bf5974ad3f8d93efa018fd08e94d6845f23bb6531bd\" id:\"1c8cdb5a9bd956103ceb7bf5974ad3f8d93efa018fd08e94d6845f23bb6531bd\" pid:3855 exited_at:{seconds:1747246426 nanos:714458730}" error="failed to stop container: failed to delete task: context deadline exceeded" May 14 18:13:58.187666 kubelet[3171]: E0514 18:13:58.187606 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-p469r" podUID="5e55bc54-125f-45b4-a2a9-22003bd5a2c5" May 14 18:13:58.342980 containerd[1724]: time="2025-05-14T18:13:58.342931916Z" level=info msg="TaskExit event container_id:\"1c8cdb5a9bd956103ceb7bf5974ad3f8d93efa018fd08e94d6845f23bb6531bd\" id:\"1c8cdb5a9bd956103ceb7bf5974ad3f8d93efa018fd08e94d6845f23bb6531bd\" pid:3855 exited_at:{seconds:1747246426 nanos:714458730}" May 14 18:14:00.187618 kubelet[3171]: E0514 18:14:00.187557 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-p469r" podUID="5e55bc54-125f-45b4-a2a9-22003bd5a2c5" May 14 18:14:00.343692 containerd[1724]: time="2025-05-14T18:14:00.343637931Z" level=error msg="get state for 1c8cdb5a9bd956103ceb7bf5974ad3f8d93efa018fd08e94d6845f23bb6531bd" error="context deadline exceeded" May 14 18:14:00.343692 containerd[1724]: time="2025-05-14T18:14:00.343694955Z" level=warning msg="unknown status" status=0 May 14 18:14:01.574132 containerd[1724]: time="2025-05-14T18:14:01.574044974Z" level=error msg="ttrpc: received message on inactive stream" stream=35 May 14 18:14:01.574627 containerd[1724]: time="2025-05-14T18:14:01.574159593Z" level=error msg="ttrpc: received message on inactive stream" stream=31 May 14 18:14:01.576134 containerd[1724]: time="2025-05-14T18:14:01.576092641Z" level=info msg="Ensure that container 1c8cdb5a9bd956103ceb7bf5974ad3f8d93efa018fd08e94d6845f23bb6531bd in task-service has been cleanup successfully" May 14 18:14:02.187891 kubelet[3171]: E0514 18:14:02.187860 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-p469r" podUID="5e55bc54-125f-45b4-a2a9-22003bd5a2c5" May 14 18:14:02.279434 containerd[1724]: time="2025-05-14T18:14:02.279403050Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\"" May 14 18:14:04.187790 kubelet[3171]: E0514 18:14:04.187734 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-p469r" podUID="5e55bc54-125f-45b4-a2a9-22003bd5a2c5" May 14 18:14:06.187610 
kubelet[3171]: E0514 18:14:06.187536 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-p469r" podUID="5e55bc54-125f-45b4-a2a9-22003bd5a2c5" May 14 18:14:08.187623 kubelet[3171]: E0514 18:14:08.187582 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-p469r" podUID="5e55bc54-125f-45b4-a2a9-22003bd5a2c5" May 14 18:14:10.026485 containerd[1724]: time="2025-05-14T18:14:10.026454835Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:14:10.083732 containerd[1724]: time="2025-05-14T18:14:10.083696653Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.3: active requests=0, bytes read=97793683" May 14 18:14:10.130153 containerd[1724]: time="2025-05-14T18:14:10.130031810Z" level=info msg="ImageCreate event name:\"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:14:10.135726 containerd[1724]: time="2025-05-14T18:14:10.135666957Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:14:10.136335 containerd[1724]: time="2025-05-14T18:14:10.136214742Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.3\" with image id \"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.3\", repo digest 
\"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\", size \"99286305\" in 7.856693696s" May 14 18:14:10.136335 containerd[1724]: time="2025-05-14T18:14:10.136238319Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\" returns image reference \"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\"" May 14 18:14:10.138369 containerd[1724]: time="2025-05-14T18:14:10.138345838Z" level=info msg="CreateContainer within sandbox \"c8945f6b802fcce8dd85f2adb749a80618cabf38fe46e7a67245b793da436e8c\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 14 18:14:10.188373 kubelet[3171]: E0514 18:14:10.188345 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-p469r" podUID="5e55bc54-125f-45b4-a2a9-22003bd5a2c5" May 14 18:14:10.338065 containerd[1724]: time="2025-05-14T18:14:10.336789914Z" level=info msg="Container ad7ac5baccd8fc41efb0719f07028465ce6e77021d566827a041ba0170557328: CDI devices from CRI Config.CDIDevices: []" May 14 18:14:10.435542 containerd[1724]: time="2025-05-14T18:14:10.435520155Z" level=info msg="CreateContainer within sandbox \"c8945f6b802fcce8dd85f2adb749a80618cabf38fe46e7a67245b793da436e8c\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"ad7ac5baccd8fc41efb0719f07028465ce6e77021d566827a041ba0170557328\"" May 14 18:14:10.435854 containerd[1724]: time="2025-05-14T18:14:10.435809100Z" level=info msg="StartContainer for \"ad7ac5baccd8fc41efb0719f07028465ce6e77021d566827a041ba0170557328\"" May 14 18:14:10.437184 containerd[1724]: time="2025-05-14T18:14:10.437159002Z" level=info msg="connecting to shim ad7ac5baccd8fc41efb0719f07028465ce6e77021d566827a041ba0170557328" 
address="unix:///run/containerd/s/7eb71ceb47e07959424c885d7813668b4a828b621f4f3061a34177a448c3272b" protocol=ttrpc version=3 May 14 18:14:10.455828 systemd[1]: Started cri-containerd-ad7ac5baccd8fc41efb0719f07028465ce6e77021d566827a041ba0170557328.scope - libcontainer container ad7ac5baccd8fc41efb0719f07028465ce6e77021d566827a041ba0170557328. May 14 18:14:10.484710 containerd[1724]: time="2025-05-14T18:14:10.484686911Z" level=info msg="StartContainer for \"ad7ac5baccd8fc41efb0719f07028465ce6e77021d566827a041ba0170557328\" returns successfully" May 14 18:14:12.188355 kubelet[3171]: E0514 18:14:12.188303 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-p469r" podUID="5e55bc54-125f-45b4-a2a9-22003bd5a2c5" May 14 18:14:14.187992 kubelet[3171]: E0514 18:14:14.187961 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-p469r" podUID="5e55bc54-125f-45b4-a2a9-22003bd5a2c5" May 14 18:14:16.187772 kubelet[3171]: E0514 18:14:16.187666 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-p469r" podUID="5e55bc54-125f-45b4-a2a9-22003bd5a2c5" May 14 18:14:18.188369 kubelet[3171]: E0514 18:14:18.188328 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" 
pod="calico-system/csi-node-driver-p469r" podUID="5e55bc54-125f-45b4-a2a9-22003bd5a2c5" May 14 18:14:18.354952 containerd[1724]: time="2025-05-14T18:14:18.354911400Z" level=info msg="received exit event container_id:\"ad7ac5baccd8fc41efb0719f07028465ce6e77021d566827a041ba0170557328\" id:\"ad7ac5baccd8fc41efb0719f07028465ce6e77021d566827a041ba0170557328\" pid:3916 exited_at:{seconds:1747246458 nanos:354696535}" May 14 18:14:18.355067 systemd[1]: cri-containerd-ad7ac5baccd8fc41efb0719f07028465ce6e77021d566827a041ba0170557328.scope: Deactivated successfully. May 14 18:14:18.355323 systemd[1]: cri-containerd-ad7ac5baccd8fc41efb0719f07028465ce6e77021d566827a041ba0170557328.scope: Consumed 335ms CPU time, 173M memory peak, 154M written to disk. May 14 18:14:18.355798 containerd[1724]: time="2025-05-14T18:14:18.354919170Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ad7ac5baccd8fc41efb0719f07028465ce6e77021d566827a041ba0170557328\" id:\"ad7ac5baccd8fc41efb0719f07028465ce6e77021d566827a041ba0170557328\" pid:3916 exited_at:{seconds:1747246458 nanos:354696535}" May 14 18:14:18.363101 kubelet[3171]: I0514 18:14:18.363074 3171 kubelet_node_status.go:502] "Fast updating node status as it just became ready" May 14 18:14:18.382719 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ad7ac5baccd8fc41efb0719f07028465ce6e77021d566827a041ba0170557328-rootfs.mount: Deactivated successfully. May 14 18:14:18.440858 systemd[1]: Created slice kubepods-besteffort-podab7f552a_1fe4_440b_ae6d_7b449d960ef9.slice - libcontainer container kubepods-besteffort-podab7f552a_1fe4_440b_ae6d_7b449d960ef9.slice. May 14 18:14:18.532869 systemd[1]: Created slice kubepods-besteffort-pod64dc643b_9fc0_4c8d_af95_cff808284f39.slice - libcontainer container kubepods-besteffort-pod64dc643b_9fc0_4c8d_af95_cff808284f39.slice. 
May 14 18:14:18.537481 systemd[1]: Created slice kubepods-burstable-pod7d98339e_8cf1_447f_bf29_3e0a7594a179.slice - libcontainer container kubepods-burstable-pod7d98339e_8cf1_447f_bf29_3e0a7594a179.slice. May 14 18:14:18.541884 systemd[1]: Created slice kubepods-burstable-pod117ed482_5925_4d0e_a8c6_540f64673e00.slice - libcontainer container kubepods-burstable-pod117ed482_5925_4d0e_a8c6_540f64673e00.slice. May 14 18:14:18.545949 systemd[1]: Created slice kubepods-besteffort-pod42f0af52_9025_4d2d_8452_e97d6dcfc33e.slice - libcontainer container kubepods-besteffort-pod42f0af52_9025_4d2d_8452_e97d6dcfc33e.slice. May 14 18:14:18.562917 kubelet[3171]: I0514 18:14:18.562880 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2jqs\" (UniqueName: \"kubernetes.io/projected/64dc643b-9fc0-4c8d-af95-cff808284f39-kube-api-access-l2jqs\") pod \"calico-kube-controllers-6f9774cf66-vzhj6\" (UID: \"64dc643b-9fc0-4c8d-af95-cff808284f39\") " pod="calico-system/calico-kube-controllers-6f9774cf66-vzhj6" May 14 18:14:18.563048 kubelet[3171]: I0514 18:14:18.562925 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/ab7f552a-1fe4-440b-ae6d-7b449d960ef9-calico-apiserver-certs\") pod \"calico-apiserver-76c49d767b-77j92\" (UID: \"ab7f552a-1fe4-440b-ae6d-7b449d960ef9\") " pod="calico-apiserver/calico-apiserver-76c49d767b-77j92" May 14 18:14:18.563048 kubelet[3171]: I0514 18:14:18.562943 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7d98339e-8cf1-447f-bf29-3e0a7594a179-config-volume\") pod \"coredns-668d6bf9bc-ffz6l\" (UID: \"7d98339e-8cf1-447f-bf29-3e0a7594a179\") " pod="kube-system/coredns-668d6bf9bc-ffz6l" May 14 18:14:18.563048 kubelet[3171]: I0514 18:14:18.562959 3171 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qkvn8\" (UniqueName: \"kubernetes.io/projected/ab7f552a-1fe4-440b-ae6d-7b449d960ef9-kube-api-access-qkvn8\") pod \"calico-apiserver-76c49d767b-77j92\" (UID: \"ab7f552a-1fe4-440b-ae6d-7b449d960ef9\") " pod="calico-apiserver/calico-apiserver-76c49d767b-77j92" May 14 18:14:18.563048 kubelet[3171]: I0514 18:14:18.562978 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/64dc643b-9fc0-4c8d-af95-cff808284f39-tigera-ca-bundle\") pod \"calico-kube-controllers-6f9774cf66-vzhj6\" (UID: \"64dc643b-9fc0-4c8d-af95-cff808284f39\") " pod="calico-system/calico-kube-controllers-6f9774cf66-vzhj6" May 14 18:14:18.563048 kubelet[3171]: I0514 18:14:18.562992 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cz9rc\" (UniqueName: \"kubernetes.io/projected/7d98339e-8cf1-447f-bf29-3e0a7594a179-kube-api-access-cz9rc\") pod \"coredns-668d6bf9bc-ffz6l\" (UID: \"7d98339e-8cf1-447f-bf29-3e0a7594a179\") " pod="kube-system/coredns-668d6bf9bc-ffz6l" May 14 18:14:18.663911 kubelet[3171]: I0514 18:14:18.663890 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/117ed482-5925-4d0e-a8c6-540f64673e00-config-volume\") pod \"coredns-668d6bf9bc-649rn\" (UID: \"117ed482-5925-4d0e-a8c6-540f64673e00\") " pod="kube-system/coredns-668d6bf9bc-649rn" May 14 18:14:18.663977 kubelet[3171]: I0514 18:14:18.663925 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9xwq\" (UniqueName: \"kubernetes.io/projected/42f0af52-9025-4d2d-8452-e97d6dcfc33e-kube-api-access-j9xwq\") pod \"calico-apiserver-76c49d767b-49pnq\" (UID: \"42f0af52-9025-4d2d-8452-e97d6dcfc33e\") " 
pod="calico-apiserver/calico-apiserver-76c49d767b-49pnq" May 14 18:14:18.663977 kubelet[3171]: I0514 18:14:18.663946 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/42f0af52-9025-4d2d-8452-e97d6dcfc33e-calico-apiserver-certs\") pod \"calico-apiserver-76c49d767b-49pnq\" (UID: \"42f0af52-9025-4d2d-8452-e97d6dcfc33e\") " pod="calico-apiserver/calico-apiserver-76c49d767b-49pnq" May 14 18:14:18.664024 kubelet[3171]: I0514 18:14:18.663983 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdvn9\" (UniqueName: \"kubernetes.io/projected/117ed482-5925-4d0e-a8c6-540f64673e00-kube-api-access-fdvn9\") pod \"coredns-668d6bf9bc-649rn\" (UID: \"117ed482-5925-4d0e-a8c6-540f64673e00\") " pod="kube-system/coredns-668d6bf9bc-649rn" May 14 18:14:18.836979 containerd[1724]: time="2025-05-14T18:14:18.836940715Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6f9774cf66-vzhj6,Uid:64dc643b-9fc0-4c8d-af95-cff808284f39,Namespace:calico-system,Attempt:0,}" May 14 18:14:18.840447 containerd[1724]: time="2025-05-14T18:14:18.840413896Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-ffz6l,Uid:7d98339e-8cf1-447f-bf29-3e0a7594a179,Namespace:kube-system,Attempt:0,}" May 14 18:14:18.844921 containerd[1724]: time="2025-05-14T18:14:18.844888934Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-649rn,Uid:117ed482-5925-4d0e-a8c6-540f64673e00,Namespace:kube-system,Attempt:0,}" May 14 18:14:18.848487 containerd[1724]: time="2025-05-14T18:14:18.848465789Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76c49d767b-49pnq,Uid:42f0af52-9025-4d2d-8452-e97d6dcfc33e,Namespace:calico-apiserver,Attempt:0,}" May 14 18:14:19.043659 containerd[1724]: time="2025-05-14T18:14:19.043623419Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76c49d767b-77j92,Uid:ab7f552a-1fe4-440b-ae6d-7b449d960ef9,Namespace:calico-apiserver,Attempt:0,}" May 14 18:14:20.193089 systemd[1]: Created slice kubepods-besteffort-pod5e55bc54_125f_45b4_a2a9_22003bd5a2c5.slice - libcontainer container kubepods-besteffort-pod5e55bc54_125f_45b4_a2a9_22003bd5a2c5.slice. May 14 18:14:20.195333 containerd[1724]: time="2025-05-14T18:14:20.195289198Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-p469r,Uid:5e55bc54-125f-45b4-a2a9-22003bd5a2c5,Namespace:calico-system,Attempt:0,}" May 14 18:14:26.318146 containerd[1724]: time="2025-05-14T18:14:26.318078010Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\"" May 14 18:14:26.415983 containerd[1724]: time="2025-05-14T18:14:26.415951174Z" level=error msg="Failed to destroy network for sandbox \"69381bdf3a11faa1435aa738ff3fdd3df4da61e9a9bf25ab69baf5006515c1f2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 18:14:26.417569 systemd[1]: run-netns-cni\x2d873b27e4\x2d1854\x2da19b\x2def31\x2d5cdfe488402e.mount: Deactivated successfully. 
May 14 18:14:26.457409 containerd[1724]: time="2025-05-14T18:14:26.457367952Z" level=error msg="Failed to destroy network for sandbox \"3afc2fc87cfcfc2933a5ee2d4d4e0602bd479fa8dd42e7189d54843db8aea7e2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 18:14:26.550799 containerd[1724]: time="2025-05-14T18:14:26.550754953Z" level=error msg="Failed to destroy network for sandbox \"efa69f65232484bbcac8abfa44baae1f7a376ddcb92336d31a185af515643ae4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 18:14:26.617305 containerd[1724]: time="2025-05-14T18:14:26.617217354Z" level=error msg="Failed to destroy network for sandbox \"20fae9f8cfba6a5e3c96e56d461ac79275dbae4815a11c46f3416f1f2ab4266a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 18:14:26.707412 containerd[1724]: time="2025-05-14T18:14:26.707381022Z" level=error msg="Failed to destroy network for sandbox \"2e2453412065f07cfb97ae1625052131eafcfad5a33234ae569b0cac2afe1c64\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 18:14:26.772931 containerd[1724]: time="2025-05-14T18:14:26.772895766Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6f9774cf66-vzhj6,Uid:64dc643b-9fc0-4c8d-af95-cff808284f39,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"69381bdf3a11faa1435aa738ff3fdd3df4da61e9a9bf25ab69baf5006515c1f2\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 18:14:26.773326 kubelet[3171]: E0514 18:14:26.773295 3171 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"69381bdf3a11faa1435aa738ff3fdd3df4da61e9a9bf25ab69baf5006515c1f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 18:14:26.773896 kubelet[3171]: E0514 18:14:26.773638 3171 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"69381bdf3a11faa1435aa738ff3fdd3df4da61e9a9bf25ab69baf5006515c1f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6f9774cf66-vzhj6" May 14 18:14:26.773896 kubelet[3171]: E0514 18:14:26.773695 3171 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"69381bdf3a11faa1435aa738ff3fdd3df4da61e9a9bf25ab69baf5006515c1f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6f9774cf66-vzhj6" May 14 18:14:26.773896 kubelet[3171]: E0514 18:14:26.773762 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6f9774cf66-vzhj6_calico-system(64dc643b-9fc0-4c8d-af95-cff808284f39)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-kube-controllers-6f9774cf66-vzhj6_calico-system(64dc643b-9fc0-4c8d-af95-cff808284f39)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"69381bdf3a11faa1435aa738ff3fdd3df4da61e9a9bf25ab69baf5006515c1f2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6f9774cf66-vzhj6" podUID="64dc643b-9fc0-4c8d-af95-cff808284f39" May 14 18:14:26.800742 containerd[1724]: time="2025-05-14T18:14:26.800704250Z" level=error msg="Failed to destroy network for sandbox \"1514bedb5259b9c3e41e349dc7dea4ed5a80601826942d8d0fda0e17a38e9514\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 18:14:26.818065 containerd[1724]: time="2025-05-14T18:14:26.818027219Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-ffz6l,Uid:7d98339e-8cf1-447f-bf29-3e0a7594a179,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3afc2fc87cfcfc2933a5ee2d4d4e0602bd479fa8dd42e7189d54843db8aea7e2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 18:14:26.818228 kubelet[3171]: E0514 18:14:26.818205 3171 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3afc2fc87cfcfc2933a5ee2d4d4e0602bd479fa8dd42e7189d54843db8aea7e2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 18:14:26.818270 kubelet[3171]: E0514 
18:14:26.818258 3171 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3afc2fc87cfcfc2933a5ee2d4d4e0602bd479fa8dd42e7189d54843db8aea7e2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-ffz6l" May 14 18:14:26.818298 kubelet[3171]: E0514 18:14:26.818278 3171 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3afc2fc87cfcfc2933a5ee2d4d4e0602bd479fa8dd42e7189d54843db8aea7e2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-ffz6l" May 14 18:14:26.818338 kubelet[3171]: E0514 18:14:26.818316 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-ffz6l_kube-system(7d98339e-8cf1-447f-bf29-3e0a7594a179)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-ffz6l_kube-system(7d98339e-8cf1-447f-bf29-3e0a7594a179)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3afc2fc87cfcfc2933a5ee2d4d4e0602bd479fa8dd42e7189d54843db8aea7e2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-ffz6l" podUID="7d98339e-8cf1-447f-bf29-3e0a7594a179" May 14 18:14:26.820294 containerd[1724]: time="2025-05-14T18:14:26.820240258Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-649rn,Uid:117ed482-5925-4d0e-a8c6-540f64673e00,Namespace:kube-system,Attempt:0,} failed, error" 
error="rpc error: code = Unknown desc = failed to setup network for sandbox \"efa69f65232484bbcac8abfa44baae1f7a376ddcb92336d31a185af515643ae4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 18:14:26.820422 kubelet[3171]: E0514 18:14:26.820397 3171 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"efa69f65232484bbcac8abfa44baae1f7a376ddcb92336d31a185af515643ae4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 18:14:26.820478 kubelet[3171]: E0514 18:14:26.820434 3171 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"efa69f65232484bbcac8abfa44baae1f7a376ddcb92336d31a185af515643ae4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-649rn" May 14 18:14:26.820478 kubelet[3171]: E0514 18:14:26.820453 3171 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"efa69f65232484bbcac8abfa44baae1f7a376ddcb92336d31a185af515643ae4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-649rn" May 14 18:14:26.820519 kubelet[3171]: E0514 18:14:26.820487 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-649rn_kube-system(117ed482-5925-4d0e-a8c6-540f64673e00)\" with 
CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-649rn_kube-system(117ed482-5925-4d0e-a8c6-540f64673e00)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"efa69f65232484bbcac8abfa44baae1f7a376ddcb92336d31a185af515643ae4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-649rn" podUID="117ed482-5925-4d0e-a8c6-540f64673e00"
May 14 18:14:26.872959 systemd[1]: run-netns-cni\x2dbf5fe8a6\x2d716c\x2d17d3\x2d1f7a\x2d7314c096f61b.mount: Deactivated successfully.
May 14 18:14:26.873031 systemd[1]: run-netns-cni\x2d4aad6247\x2d9455\x2d997a\x2d9ea4\x2db62c9b8e22ca.mount: Deactivated successfully.
May 14 18:14:26.873086 systemd[1]: run-netns-cni\x2d38aba5fe\x2d1e5a\x2dfa7e\x2d1fdb\x2de3578f4735a2.mount: Deactivated successfully.
May 14 18:14:26.873130 systemd[1]: run-netns-cni\x2d372fbcfd\x2d3161\x2d6575\x2d2368\x2d67b63b20489f.mount: Deactivated successfully.
May 14 18:14:26.873170 systemd[1]: run-netns-cni\x2d8cf91e55\x2dff6b\x2d2113\x2dea1b\x2de6e095986039.mount: Deactivated successfully.
May 14 18:14:26.882137 containerd[1724]: time="2025-05-14T18:14:26.882081854Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76c49d767b-49pnq,Uid:42f0af52-9025-4d2d-8452-e97d6dcfc33e,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"20fae9f8cfba6a5e3c96e56d461ac79275dbae4815a11c46f3416f1f2ab4266a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 14 18:14:26.882256 kubelet[3171]: E0514 18:14:26.882230 3171 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"20fae9f8cfba6a5e3c96e56d461ac79275dbae4815a11c46f3416f1f2ab4266a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 14 18:14:26.882293 kubelet[3171]: E0514 18:14:26.882265 3171 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"20fae9f8cfba6a5e3c96e56d461ac79275dbae4815a11c46f3416f1f2ab4266a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-76c49d767b-49pnq"
May 14 18:14:26.882293 kubelet[3171]: E0514 18:14:26.882282 3171 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"20fae9f8cfba6a5e3c96e56d461ac79275dbae4815a11c46f3416f1f2ab4266a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-76c49d767b-49pnq"
May 14 18:14:26.882337 kubelet[3171]: E0514 18:14:26.882315 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-76c49d767b-49pnq_calico-apiserver(42f0af52-9025-4d2d-8452-e97d6dcfc33e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-76c49d767b-49pnq_calico-apiserver(42f0af52-9025-4d2d-8452-e97d6dcfc33e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"20fae9f8cfba6a5e3c96e56d461ac79275dbae4815a11c46f3416f1f2ab4266a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-76c49d767b-49pnq" podUID="42f0af52-9025-4d2d-8452-e97d6dcfc33e"
May 14 18:14:26.884313 containerd[1724]: time="2025-05-14T18:14:26.884257536Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76c49d767b-77j92,Uid:ab7f552a-1fe4-440b-ae6d-7b449d960ef9,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2e2453412065f07cfb97ae1625052131eafcfad5a33234ae569b0cac2afe1c64\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 14 18:14:26.884487 kubelet[3171]: E0514 18:14:26.884459 3171 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2e2453412065f07cfb97ae1625052131eafcfad5a33234ae569b0cac2afe1c64\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 14 18:14:26.884553 kubelet[3171]: E0514 18:14:26.884500 3171 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2e2453412065f07cfb97ae1625052131eafcfad5a33234ae569b0cac2afe1c64\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-76c49d767b-77j92"
May 14 18:14:26.884553 kubelet[3171]: E0514 18:14:26.884515 3171 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2e2453412065f07cfb97ae1625052131eafcfad5a33234ae569b0cac2afe1c64\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-76c49d767b-77j92"
May 14 18:14:26.884621 kubelet[3171]: E0514 18:14:26.884552 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-76c49d767b-77j92_calico-apiserver(ab7f552a-1fe4-440b-ae6d-7b449d960ef9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-76c49d767b-77j92_calico-apiserver(ab7f552a-1fe4-440b-ae6d-7b449d960ef9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2e2453412065f07cfb97ae1625052131eafcfad5a33234ae569b0cac2afe1c64\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-76c49d767b-77j92" podUID="ab7f552a-1fe4-440b-ae6d-7b449d960ef9"
May 14 18:14:26.930305 containerd[1724]: time="2025-05-14T18:14:26.930278787Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-p469r,Uid:5e55bc54-125f-45b4-a2a9-22003bd5a2c5,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1514bedb5259b9c3e41e349dc7dea4ed5a80601826942d8d0fda0e17a38e9514\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 14 18:14:26.930440 kubelet[3171]: E0514 18:14:26.930407 3171 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1514bedb5259b9c3e41e349dc7dea4ed5a80601826942d8d0fda0e17a38e9514\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 14 18:14:26.930481 kubelet[3171]: E0514 18:14:26.930442 3171 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1514bedb5259b9c3e41e349dc7dea4ed5a80601826942d8d0fda0e17a38e9514\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-p469r"
May 14 18:14:26.930481 kubelet[3171]: E0514 18:14:26.930458 3171 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1514bedb5259b9c3e41e349dc7dea4ed5a80601826942d8d0fda0e17a38e9514\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-p469r"
May 14 18:14:26.930521 kubelet[3171]: E0514 18:14:26.930486 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-p469r_calico-system(5e55bc54-125f-45b4-a2a9-22003bd5a2c5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-p469r_calico-system(5e55bc54-125f-45b4-a2a9-22003bd5a2c5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1514bedb5259b9c3e41e349dc7dea4ed5a80601826942d8d0fda0e17a38e9514\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-p469r" podUID="5e55bc54-125f-45b4-a2a9-22003bd5a2c5"
May 14 18:14:38.188243 containerd[1724]: time="2025-05-14T18:14:38.188201729Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6f9774cf66-vzhj6,Uid:64dc643b-9fc0-4c8d-af95-cff808284f39,Namespace:calico-system,Attempt:0,}"
May 14 18:14:38.292746 containerd[1724]: time="2025-05-14T18:14:38.292710438Z" level=error msg="Failed to destroy network for sandbox \"93e8205ef495216e80978661f38351a82081577d53921b12c0a6a486bae1b5a8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 14 18:14:38.295268 systemd[1]: run-netns-cni\x2dab06ab1f\x2dcdea\x2dda4a\x2dfaef\x2db0abbc7b0689.mount: Deactivated successfully.
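Editor's note: the kubelet pod_workers.go:1301 entries above all share one shape, so the recurring CreatePodSandbox failures can be tallied per pod with a short script. This is a sketch, not part of the log; the regex mirrors the entry format shown above, and `count_failures` is a hypothetical helper name.

```python
import re

# Pattern mirroring the pod_workers.go "Error syncing pod, skipping" entries
# in this log: the err value is followed by pod="..." and podUID="...".
FAIL_RE = re.compile(
    r'"Error syncing pod, skipping" err=.*? pod="(?P<pod>[^"]+)" podUID="(?P<uid>[^"]+)"'
)

def count_failures(log_text: str) -> dict:
    """Count 'Error syncing pod, skipping' entries per pod name."""
    counts: dict = {}
    for m in FAIL_RE.finditer(log_text):
        counts[m.group("pod")] = counts.get(m.group("pod"), 0) + 1
    return counts

sample = ('"Error syncing pod, skipping" err="failed to ..." '
          'pod="kube-system/coredns-668d6bf9bc-649rn" '
          'podUID="117ed482-5925-4d0e-a8c6-540f64673e00"')
print(count_failures(sample))  # {'kube-system/coredns-668d6bf9bc-649rn': 1}
```

Run against the full journal, this would show every affected pod hitting the same `/var/lib/calico/nodename` error, which is consistent with a single node-level cause rather than per-pod problems.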
May 14 18:14:38.297315 containerd[1724]: time="2025-05-14T18:14:38.297135017Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6f9774cf66-vzhj6,Uid:64dc643b-9fc0-4c8d-af95-cff808284f39,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"93e8205ef495216e80978661f38351a82081577d53921b12c0a6a486bae1b5a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 14 18:14:38.297592 kubelet[3171]: E0514 18:14:38.297565 3171 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"93e8205ef495216e80978661f38351a82081577d53921b12c0a6a486bae1b5a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 14 18:14:38.297812 kubelet[3171]: E0514 18:14:38.297613 3171 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"93e8205ef495216e80978661f38351a82081577d53921b12c0a6a486bae1b5a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6f9774cf66-vzhj6"
May 14 18:14:38.297812 kubelet[3171]: E0514 18:14:38.297632 3171 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"93e8205ef495216e80978661f38351a82081577d53921b12c0a6a486bae1b5a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6f9774cf66-vzhj6"
May 14 18:14:38.298066 kubelet[3171]: E0514 18:14:38.297668 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6f9774cf66-vzhj6_calico-system(64dc643b-9fc0-4c8d-af95-cff808284f39)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6f9774cf66-vzhj6_calico-system(64dc643b-9fc0-4c8d-af95-cff808284f39)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"93e8205ef495216e80978661f38351a82081577d53921b12c0a6a486bae1b5a8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6f9774cf66-vzhj6" podUID="64dc643b-9fc0-4c8d-af95-cff808284f39"
May 14 18:14:38.366539 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3320400969.mount: Deactivated successfully.
May 14 18:14:38.585926 containerd[1724]: time="2025-05-14T18:14:38.585887224Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 18:14:38.628357 containerd[1724]: time="2025-05-14T18:14:38.628328118Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.3: active requests=0, bytes read=144068748"
May 14 18:14:38.674879 containerd[1724]: time="2025-05-14T18:14:38.674823189Z" level=info msg="ImageCreate event name:\"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 18:14:38.723300 containerd[1724]: time="2025-05-14T18:14:38.723251043Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 18:14:38.723803 containerd[1724]: time="2025-05-14T18:14:38.723783871Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.3\" with image id \"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\", size \"144068610\" in 12.405404069s"
May 14 18:14:38.723925 containerd[1724]: time="2025-05-14T18:14:38.723859517Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\" returns image reference \"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\""
May 14 18:14:38.734423 containerd[1724]: time="2025-05-14T18:14:38.734401469Z" level=info msg="CreateContainer within sandbox \"c8945f6b802fcce8dd85f2adb749a80618cabf38fe46e7a67245b793da436e8c\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
May 14 18:14:39.072222 containerd[1724]: time="2025-05-14T18:14:39.072196728Z" level=info msg="Container a8909ed6bf53152d451192200541b7ea3789e73d9a169968392744655e33de35: CDI devices from CRI Config.CDIDevices: []"
May 14 18:14:39.187220 containerd[1724]: time="2025-05-14T18:14:39.186309601Z" level=info msg="CreateContainer within sandbox \"c8945f6b802fcce8dd85f2adb749a80618cabf38fe46e7a67245b793da436e8c\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"a8909ed6bf53152d451192200541b7ea3789e73d9a169968392744655e33de35\""
May 14 18:14:39.193873 containerd[1724]: time="2025-05-14T18:14:39.193848479Z" level=info msg="StartContainer for \"a8909ed6bf53152d451192200541b7ea3789e73d9a169968392744655e33de35\""
May 14 18:14:39.196373 containerd[1724]: time="2025-05-14T18:14:39.196343787Z" level=info msg="connecting to shim a8909ed6bf53152d451192200541b7ea3789e73d9a169968392744655e33de35" address="unix:///run/containerd/s/7eb71ceb47e07959424c885d7813668b4a828b621f4f3061a34177a448c3272b" protocol=ttrpc version=3
May 14 18:14:39.198138 containerd[1724]: time="2025-05-14T18:14:39.198119272Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-ffz6l,Uid:7d98339e-8cf1-447f-bf29-3e0a7594a179,Namespace:kube-system,Attempt:0,}"
May 14 18:14:39.202773 containerd[1724]: time="2025-05-14T18:14:39.201831124Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-p469r,Uid:5e55bc54-125f-45b4-a2a9-22003bd5a2c5,Namespace:calico-system,Attempt:0,}"
May 14 18:14:39.223836 systemd[1]: Started cri-containerd-a8909ed6bf53152d451192200541b7ea3789e73d9a169968392744655e33de35.scope - libcontainer container a8909ed6bf53152d451192200541b7ea3789e73d9a169968392744655e33de35.
May 14 18:14:39.286971 containerd[1724]: time="2025-05-14T18:14:39.286930131Z" level=info msg="StartContainer for \"a8909ed6bf53152d451192200541b7ea3789e73d9a169968392744655e33de35\" returns successfully"
May 14 18:14:39.389341 kubelet[3171]: I0514 18:14:39.389188 3171 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-wztbc" podStartSLOduration=2.347668038 podStartE2EDuration="1m1.389172789s" podCreationTimestamp="2025-05-14 18:13:38 +0000 UTC" firstStartedPulling="2025-05-14 18:13:39.682815074 +0000 UTC m=+18.563930495" lastFinishedPulling="2025-05-14 18:14:38.724319811 +0000 UTC m=+77.605435246" observedRunningTime="2025-05-14 18:14:39.388967367 +0000 UTC m=+78.270082799" watchObservedRunningTime="2025-05-14 18:14:39.389172789 +0000 UTC m=+78.270288237"
May 14 18:14:39.432704 containerd[1724]: time="2025-05-14T18:14:39.431324170Z" level=error msg="Failed to destroy network for sandbox \"f9aac00c9bd9001dc4cd7ad99b0e54a618d378538397c0036eab40653567cccc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 14 18:14:39.434946 systemd[1]: run-netns-cni\x2deb9b2fd6\x2ddbba\x2d4470\x2dbe86\x2d7ce4fd565c86.mount: Deactivated successfully.
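Editor's note: the pod_startup_latency_tracker entry above reports podStartE2EDuration="1m1.389172789s". That figure appears to be watchObservedRunningTime minus podCreationTimestamp, both of which are quoted in the entry; a minimal sketch checking the arithmetic (timestamps truncated to microseconds):

```python
from datetime import datetime, timezone

# podCreationTimestamp="2025-05-14 18:13:38 +0000 UTC"
created = datetime(2025, 5, 14, 18, 13, 38, tzinfo=timezone.utc)
# watchObservedRunningTime="2025-05-14 18:14:39.389172789 +0000 UTC"
observed = datetime(2025, 5, 14, 18, 14, 39, 389172, tzinfo=timezone.utc)

print(round((observed - created).total_seconds(), 3))  # 61.389
```

61.389 s matches the reported 1m1.389172789s; the much smaller podStartSLOduration=2.347668038 excludes the ~59 s image-pull window between firstStartedPulling and lastFinishedPulling.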
May 14 18:14:39.437226 containerd[1724]: time="2025-05-14T18:14:39.437126258Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-ffz6l,Uid:7d98339e-8cf1-447f-bf29-3e0a7594a179,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f9aac00c9bd9001dc4cd7ad99b0e54a618d378538397c0036eab40653567cccc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 14 18:14:39.437852 kubelet[3171]: E0514 18:14:39.437733 3171 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f9aac00c9bd9001dc4cd7ad99b0e54a618d378538397c0036eab40653567cccc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 14 18:14:39.438303 kubelet[3171]: E0514 18:14:39.437950 3171 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f9aac00c9bd9001dc4cd7ad99b0e54a618d378538397c0036eab40653567cccc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-ffz6l"
May 14 18:14:39.438303 kubelet[3171]: E0514 18:14:39.438054 3171 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f9aac00c9bd9001dc4cd7ad99b0e54a618d378538397c0036eab40653567cccc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-ffz6l"
May 14 18:14:39.438303 kubelet[3171]: E0514 18:14:39.438144 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-ffz6l_kube-system(7d98339e-8cf1-447f-bf29-3e0a7594a179)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-ffz6l_kube-system(7d98339e-8cf1-447f-bf29-3e0a7594a179)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f9aac00c9bd9001dc4cd7ad99b0e54a618d378538397c0036eab40653567cccc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-ffz6l" podUID="7d98339e-8cf1-447f-bf29-3e0a7594a179"
May 14 18:14:39.465610 containerd[1724]: time="2025-05-14T18:14:39.465526902Z" level=error msg="Failed to destroy network for sandbox \"c96dda80909aaa1f41bb591cc9452339f41a5811b96571cb421046d862c25955\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 14 18:14:39.480175 containerd[1724]: time="2025-05-14T18:14:39.480142112Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-p469r,Uid:5e55bc54-125f-45b4-a2a9-22003bd5a2c5,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c96dda80909aaa1f41bb591cc9452339f41a5811b96571cb421046d862c25955\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 14 18:14:39.480410 kubelet[3171]: E0514 18:14:39.480381 3171 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c96dda80909aaa1f41bb591cc9452339f41a5811b96571cb421046d862c25955\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 14 18:14:39.480516 kubelet[3171]: E0514 18:14:39.480499 3171 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c96dda80909aaa1f41bb591cc9452339f41a5811b96571cb421046d862c25955\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-p469r"
May 14 18:14:39.480576 kubelet[3171]: E0514 18:14:39.480546 3171 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c96dda80909aaa1f41bb591cc9452339f41a5811b96571cb421046d862c25955\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-p469r"
May 14 18:14:39.480598 kubelet[3171]: E0514 18:14:39.480581 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-p469r_calico-system(5e55bc54-125f-45b4-a2a9-22003bd5a2c5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-p469r_calico-system(5e55bc54-125f-45b4-a2a9-22003bd5a2c5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c96dda80909aaa1f41bb591cc9452339f41a5811b96571cb421046d862c25955\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-p469r" podUID="5e55bc54-125f-45b4-a2a9-22003bd5a2c5"
May 14 18:14:40.188518 containerd[1724]: time="2025-05-14T18:14:40.188477041Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76c49d767b-77j92,Uid:ab7f552a-1fe4-440b-ae6d-7b449d960ef9,Namespace:calico-apiserver,Attempt:0,}"
May 14 18:14:40.197300 systemd[1]: run-netns-cni\x2d1ce77306\x2dcee1\x2d64ad\x2d1688\x2d14a74cb7562a.mount: Deactivated successfully.
May 14 18:14:40.228980 containerd[1724]: time="2025-05-14T18:14:40.228945113Z" level=error msg="Failed to destroy network for sandbox \"64bb717b965f66a93f19e6402f9b51415d017753a1bbae2214108c942d559f22\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 14 18:14:40.231349 systemd[1]: run-netns-cni\x2d3b112791\x2d70fa\x2d2734\x2d16c7\x2d0c6ee9d1664a.mount: Deactivated successfully.
May 14 18:14:40.232421 containerd[1724]: time="2025-05-14T18:14:40.232393345Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76c49d767b-77j92,Uid:ab7f552a-1fe4-440b-ae6d-7b449d960ef9,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"64bb717b965f66a93f19e6402f9b51415d017753a1bbae2214108c942d559f22\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 14 18:14:40.233240 kubelet[3171]: E0514 18:14:40.232574 3171 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"64bb717b965f66a93f19e6402f9b51415d017753a1bbae2214108c942d559f22\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 14 18:14:40.233240 kubelet[3171]: E0514 18:14:40.232616 3171 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"64bb717b965f66a93f19e6402f9b51415d017753a1bbae2214108c942d559f22\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-76c49d767b-77j92"
May 14 18:14:40.233240 kubelet[3171]: E0514 18:14:40.232634 3171 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"64bb717b965f66a93f19e6402f9b51415d017753a1bbae2214108c942d559f22\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-76c49d767b-77j92"
May 14 18:14:40.233338 kubelet[3171]: E0514 18:14:40.232671 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-76c49d767b-77j92_calico-apiserver(ab7f552a-1fe4-440b-ae6d-7b449d960ef9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-76c49d767b-77j92_calico-apiserver(ab7f552a-1fe4-440b-ae6d-7b449d960ef9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"64bb717b965f66a93f19e6402f9b51415d017753a1bbae2214108c942d559f22\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-76c49d767b-77j92" podUID="ab7f552a-1fe4-440b-ae6d-7b449d960ef9"
May 14 18:14:40.832566 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information.
May 14 18:14:40.832639 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved.
May 14 18:14:40.835644 containerd[1724]: time="2025-05-14T18:14:40.835547578Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a8909ed6bf53152d451192200541b7ea3789e73d9a169968392744655e33de35\" id:\"607bf6ab0bdd379d0897e0dc899ccb7a508cbf7b279e4029f8e49fdccce4a303\" pid:4259 exit_status:1 exited_at:{seconds:1747246480 nanos:835285764}"
May 14 18:14:40.859626 systemd[1]: cri-containerd-a8909ed6bf53152d451192200541b7ea3789e73d9a169968392744655e33de35.scope: Deactivated successfully.
May 14 18:14:40.860803 containerd[1724]: time="2025-05-14T18:14:40.860006983Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a8909ed6bf53152d451192200541b7ea3789e73d9a169968392744655e33de35\" id:\"eed013bb2ad76d9326ecfab7ae9beffd684317cae07dd0aa89f0c359d140fdbb\" pid:4344 exit_status:137 exited_at:{seconds:1747246480 nanos:859381487}"
May 14 18:14:40.862668 containerd[1724]: time="2025-05-14T18:14:40.862548922Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a8909ed6bf53152d451192200541b7ea3789e73d9a169968392744655e33de35\" id:\"a8909ed6bf53152d451192200541b7ea3789e73d9a169968392744655e33de35\" pid:4188 exit_status:1 exited_at:{seconds:1747246480 nanos:862208168}"
May 14 18:14:40.862978 containerd[1724]: time="2025-05-14T18:14:40.862948040Z" level=info msg="received exit event container_id:\"a8909ed6bf53152d451192200541b7ea3789e73d9a169968392744655e33de35\" id:\"a8909ed6bf53152d451192200541b7ea3789e73d9a169968392744655e33de35\" pid:4188 exit_status:1 exited_at:{seconds:1747246480 nanos:862208168}"
May 14 18:14:40.885602 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a8909ed6bf53152d451192200541b7ea3789e73d9a169968392744655e33de35-rootfs.mount: Deactivated successfully.
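Editor's note: the containerd TaskExit events above encode exited_at as raw {seconds, nanos}. A minimal sketch converting one (the calico-node container's own exit, exit_status:1) shows it lines up with the journald timestamps around it; `exited_at_to_utc` is a hypothetical helper, not a containerd API.

```python
from datetime import datetime, timezone

def exited_at_to_utc(seconds: int, nanos: int) -> datetime:
    """Convert a containerd exited_at {seconds, nanos} pair to UTC."""
    return datetime.fromtimestamp(seconds, tz=timezone.utc).replace(
        microsecond=nanos // 1000
    )

# exited_at:{seconds:1747246480 nanos:862208168} from the exit event above
print(exited_at_to_utc(1747246480, 862208168).isoformat())
# 2025-05-14T18:14:40.862208+00:00
```

The converted time (18:14:40.862208 UTC) matches the 18:14:40.862978 "received exit event" journal line, which is useful when correlating these protobuf-style events with wall-clock log entries.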
May 14 18:14:42.188423 containerd[1724]: time="2025-05-14T18:14:42.188386387Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76c49d767b-49pnq,Uid:42f0af52-9025-4d2d-8452-e97d6dcfc33e,Namespace:calico-apiserver,Attempt:0,}"
May 14 18:14:42.188811 containerd[1724]: time="2025-05-14T18:14:42.188386365Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-649rn,Uid:117ed482-5925-4d0e-a8c6-540f64673e00,Namespace:kube-system,Attempt:0,}"
May 14 18:14:43.357618 containerd[1724]: time="2025-05-14T18:14:43.357563554Z" level=error msg="get state for a8909ed6bf53152d451192200541b7ea3789e73d9a169968392744655e33de35" error="context deadline exceeded"
May 14 18:14:43.357618 containerd[1724]: time="2025-05-14T18:14:43.357610384Z" level=warning msg="unknown status" status=0
May 14 18:14:43.358330 containerd[1724]: time="2025-05-14T18:14:43.357840421Z" level=error msg="get state for a8909ed6bf53152d451192200541b7ea3789e73d9a169968392744655e33de35" error="context deadline exceeded"
May 14 18:14:43.358330 containerd[1724]: time="2025-05-14T18:14:43.357863138Z" level=warning msg="unknown status" status=0
May 14 18:14:46.356395 containerd[1724]: time="2025-05-14T18:14:46.356363025Z" level=info msg="StopContainer for \"a8909ed6bf53152d451192200541b7ea3789e73d9a169968392744655e33de35\" with timeout 2 (s)"
May 14 18:14:48.356878 containerd[1724]: time="2025-05-14T18:14:48.356791149Z" level=error msg="get state for a8909ed6bf53152d451192200541b7ea3789e73d9a169968392744655e33de35" error="context deadline exceeded"
May 14 18:14:48.356878 containerd[1724]: time="2025-05-14T18:14:48.356860680Z" level=warning msg="unknown status" status=0
May 14 18:14:48.357333 containerd[1724]: time="2025-05-14T18:14:48.356898595Z" level=info msg="Stop container \"a8909ed6bf53152d451192200541b7ea3789e73d9a169968392744655e33de35\" with signal terminated"
May 14 18:14:50.757197 containerd[1724]: time="2025-05-14T18:14:50.757154648Z" level=error msg="Failed to destroy network for sandbox \"ddf2e290792e4dfb4d92511ad8bfb0812f3c04397d0d177c882a3100d04bee3d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 14 18:14:50.759084 systemd[1]: run-netns-cni\x2dfd210ccb\x2d8341\x2d2aad\x2d1335\x2d3d6a748df8d9.mount: Deactivated successfully.
May 14 18:14:50.863332 containerd[1724]: time="2025-05-14T18:14:50.863294150Z" level=error msg="failed to handle container TaskExit event container_id:\"a8909ed6bf53152d451192200541b7ea3789e73d9a169968392744655e33de35\" id:\"a8909ed6bf53152d451192200541b7ea3789e73d9a169968392744655e33de35\" pid:4188 exit_status:1 exited_at:{seconds:1747246480 nanos:862208168}" error="failed to stop container: failed to delete task: context deadline exceeded"
May 14 18:14:52.342747 containerd[1724]: time="2025-05-14T18:14:52.342702361Z" level=info msg="TaskExit event container_id:\"a8909ed6bf53152d451192200541b7ea3789e73d9a169968392744655e33de35\" id:\"a8909ed6bf53152d451192200541b7ea3789e73d9a169968392744655e33de35\" pid:4188 exit_status:1 exited_at:{seconds:1747246480 nanos:862208168}"
May 14 18:14:53.189018 containerd[1724]: time="2025-05-14T18:14:53.188966805Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-ffz6l,Uid:7d98339e-8cf1-447f-bf29-3e0a7594a179,Namespace:kube-system,Attempt:0,}"
May 14 18:14:53.189243 containerd[1724]: time="2025-05-14T18:14:53.189145896Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-p469r,Uid:5e55bc54-125f-45b4-a2a9-22003bd5a2c5,Namespace:calico-system,Attempt:0,}"
May 14 18:14:53.189940 containerd[1724]: time="2025-05-14T18:14:53.189913924Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6f9774cf66-vzhj6,Uid:64dc643b-9fc0-4c8d-af95-cff808284f39,Namespace:calico-system,Attempt:0,}"
May 14 18:14:53.958333 containerd[1724]: time="2025-05-14T18:14:53.958302389Z" level=error msg="Failed to destroy network for sandbox \"df8e6344abc106a20dbf4683f6b4a4bda5c91f71a8c3d4fa3560e361dc814aaa\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 14 18:14:53.960313 systemd[1]: run-netns-cni\x2d1494a378\x2dfa1d\x2d26ba\x2df705\x2d9e90743fd343.mount: Deactivated successfully.
May 14 18:14:54.188488 containerd[1724]: time="2025-05-14T18:14:54.188452521Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76c49d767b-77j92,Uid:ab7f552a-1fe4-440b-ae6d-7b449d960ef9,Namespace:calico-apiserver,Attempt:0,}"
May 14 18:14:54.343889 containerd[1724]: time="2025-05-14T18:14:54.343847893Z" level=error msg="get state for a8909ed6bf53152d451192200541b7ea3789e73d9a169968392744655e33de35" error="context deadline exceeded"
May 14 18:14:54.343889 containerd[1724]: time="2025-05-14T18:14:54.343887481Z" level=warning msg="unknown status" status=0
May 14 18:14:55.571538 containerd[1724]: time="2025-05-14T18:14:55.571014784Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-649rn,Uid:117ed482-5925-4d0e-a8c6-540f64673e00,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ddf2e290792e4dfb4d92511ad8bfb0812f3c04397d0d177c882a3100d04bee3d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 14 18:14:55.573483 kubelet[3171]: E0514 18:14:55.571418 3171 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ddf2e290792e4dfb4d92511ad8bfb0812f3c04397d0d177c882a3100d04bee3d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 14 18:14:55.573483 kubelet[3171]: E0514 18:14:55.571499 3171 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ddf2e290792e4dfb4d92511ad8bfb0812f3c04397d0d177c882a3100d04bee3d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-649rn"
May 14 18:14:55.573483 kubelet[3171]: E0514 18:14:55.571541 3171 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ddf2e290792e4dfb4d92511ad8bfb0812f3c04397d0d177c882a3100d04bee3d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-649rn"
May 14 18:14:55.573880 kubelet[3171]: E0514 18:14:55.571588 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-649rn_kube-system(117ed482-5925-4d0e-a8c6-540f64673e00)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-649rn_kube-system(117ed482-5925-4d0e-a8c6-540f64673e00)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ddf2e290792e4dfb4d92511ad8bfb0812f3c04397d0d177c882a3100d04bee3d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-649rn" podUID="117ed482-5925-4d0e-a8c6-540f64673e00"
May 14 18:14:56.345713 containerd[1724]: time="2025-05-14T18:14:56.345645650Z" level=error msg="get state for a8909ed6bf53152d451192200541b7ea3789e73d9a169968392744655e33de35" error="context deadline exceeded"
May 14 18:14:56.345713 containerd[1724]: time="2025-05-14T18:14:56.345707384Z" level=warning msg="unknown status" status=0
May 14 18:14:58.274068 containerd[1724]: time="2025-05-14T18:14:58.273959468Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76c49d767b-49pnq,Uid:42f0af52-9025-4d2d-8452-e97d6dcfc33e,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"df8e6344abc106a20dbf4683f6b4a4bda5c91f71a8c3d4fa3560e361dc814aaa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 14 18:14:58.274406 kubelet[3171]: E0514 18:14:58.274228 3171 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"df8e6344abc106a20dbf4683f6b4a4bda5c91f71a8c3d4fa3560e361dc814aaa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 14 18:14:58.274406 kubelet[3171]: E0514 18:14:58.274293 3171 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"df8e6344abc106a20dbf4683f6b4a4bda5c91f71a8c3d4fa3560e361dc814aaa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-76c49d767b-49pnq"
May 14 18:14:58.274406 kubelet[3171]: E0514 18:14:58.274313 3171 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox
\"df8e6344abc106a20dbf4683f6b4a4bda5c91f71a8c3d4fa3560e361dc814aaa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-76c49d767b-49pnq" May 14 18:14:58.274609 kubelet[3171]: E0514 18:14:58.274374 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-76c49d767b-49pnq_calico-apiserver(42f0af52-9025-4d2d-8452-e97d6dcfc33e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-76c49d767b-49pnq_calico-apiserver(42f0af52-9025-4d2d-8452-e97d6dcfc33e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"df8e6344abc106a20dbf4683f6b4a4bda5c91f71a8c3d4fa3560e361dc814aaa\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-76c49d767b-49pnq" podUID="42f0af52-9025-4d2d-8452-e97d6dcfc33e" May 14 18:14:58.275299 containerd[1724]: time="2025-05-14T18:14:58.274997430Z" level=error msg="ttrpc: received message on inactive stream" stream=91 May 14 18:14:58.275299 containerd[1724]: time="2025-05-14T18:14:58.275187292Z" level=error msg="ttrpc: received message on inactive stream" stream=95 May 14 18:14:58.275299 containerd[1724]: time="2025-05-14T18:14:58.275199341Z" level=error msg="ttrpc: received message on inactive stream" stream=97 May 14 18:14:58.275299 containerd[1724]: time="2025-05-14T18:14:58.275224644Z" level=error msg="ttrpc: received message on inactive stream" stream=103 May 14 18:14:58.275299 containerd[1724]: time="2025-05-14T18:14:58.275233317Z" level=error msg="ttrpc: received message on inactive stream" stream=109 May 14 18:14:58.275299 containerd[1724]: time="2025-05-14T18:14:58.275239339Z" level=error msg="ttrpc: 
received message on inactive stream" stream=111 May 14 18:14:58.276435 containerd[1724]: time="2025-05-14T18:14:58.276036679Z" level=error msg="ExecSync for \"a8909ed6bf53152d451192200541b7ea3789e73d9a169968392744655e33de35\" failed" error="rpc error: code = Unknown desc = failed to exec in container: failed to create exec \"bfbe8b9ad3f638a0a0d5cb9325ef37a1e110ff9291c265c46e9d5332aead0d36\": cannot exec in a deleted state" May 14 18:14:58.276435 containerd[1724]: time="2025-05-14T18:14:58.276130399Z" level=error msg="ExecSync for \"a8909ed6bf53152d451192200541b7ea3789e73d9a169968392744655e33de35\" failed" error="rpc error: code = Unknown desc = failed to exec in container: failed to create exec \"5126ba6266398bedf5ca9cc7bbf8539757939891e72470cceb04eb41cb863bce\": cannot exec in a deleted state" May 14 18:14:58.276522 kubelet[3171]: E0514 18:14:58.276241 3171 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = failed to exec in container: failed to create exec \"5126ba6266398bedf5ca9cc7bbf8539757939891e72470cceb04eb41cb863bce\": cannot exec in a deleted state" containerID="a8909ed6bf53152d451192200541b7ea3789e73d9a169968392744655e33de35" cmd=["/bin/calico-node","-shutdown"] May 14 18:14:58.276522 kubelet[3171]: E0514 18:14:58.276291 3171 kuberuntime_container.go:692] "PreStop hook failed" err="rpc error: code = Unknown desc = failed to exec in container: failed to create exec \"5126ba6266398bedf5ca9cc7bbf8539757939891e72470cceb04eb41cb863bce\": cannot exec in a deleted state" pod="calico-system/calico-node-wztbc" podUID="5a141316-5dcb-47ca-9c32-2e73ecdeb347" containerName="calico-node" containerID="containerd://a8909ed6bf53152d451192200541b7ea3789e73d9a169968392744655e33de35" May 14 18:14:58.276522 kubelet[3171]: E0514 18:14:58.276326 3171 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = failed to exec in container: failed to create exec 
\"bfbe8b9ad3f638a0a0d5cb9325ef37a1e110ff9291c265c46e9d5332aead0d36\": cannot exec in a deleted state" containerID="a8909ed6bf53152d451192200541b7ea3789e73d9a169968392744655e33de35" cmd=["/bin/calico-node","-bird-ready","-felix-ready"] May 14 18:14:58.277303 containerd[1724]: time="2025-05-14T18:14:58.277176231Z" level=info msg="Ensure that container a8909ed6bf53152d451192200541b7ea3789e73d9a169968392744655e33de35 in task-service has been cleanup successfully" May 14 18:14:58.277361 containerd[1724]: time="2025-05-14T18:14:58.277316711Z" level=error msg="ExecSync for \"a8909ed6bf53152d451192200541b7ea3789e73d9a169968392744655e33de35\" failed" error="rpc error: code = NotFound desc = failed to exec in container: failed to load task: no running task found: task a8909ed6bf53152d451192200541b7ea3789e73d9a169968392744655e33de35 not found" May 14 18:14:58.277482 kubelet[3171]: E0514 18:14:58.277457 3171 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = failed to exec in container: failed to load task: no running task found: task a8909ed6bf53152d451192200541b7ea3789e73d9a169968392744655e33de35 not found" containerID="a8909ed6bf53152d451192200541b7ea3789e73d9a169968392744655e33de35" cmd=["/bin/calico-node","-bird-ready","-felix-ready"] May 14 18:14:58.373640 containerd[1724]: time="2025-05-14T18:14:58.372587287Z" level=error msg="ExecSync for \"a8909ed6bf53152d451192200541b7ea3789e73d9a169968392744655e33de35\" failed" error="rpc error: code = Unknown desc = failed to exec in container: container is in CONTAINER_EXITED state" May 14 18:14:58.373869 kubelet[3171]: E0514 18:14:58.373365 3171 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = failed to exec in container: container is in CONTAINER_EXITED state" containerID="a8909ed6bf53152d451192200541b7ea3789e73d9a169968392744655e33de35" cmd=["/bin/calico-node","-bird-ready","-felix-ready"] May 14 18:14:58.374623 containerd[1724]: 
time="2025-05-14T18:14:58.374526858Z" level=info msg="StopContainer for \"a8909ed6bf53152d451192200541b7ea3789e73d9a169968392744655e33de35\" returns successfully" May 14 18:14:58.375520 containerd[1724]: time="2025-05-14T18:14:58.375471655Z" level=info msg="StopPodSandbox for \"c8945f6b802fcce8dd85f2adb749a80618cabf38fe46e7a67245b793da436e8c\"" May 14 18:14:58.375568 containerd[1724]: time="2025-05-14T18:14:58.375524543Z" level=info msg="Container to stop \"a8909ed6bf53152d451192200541b7ea3789e73d9a169968392744655e33de35\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 14 18:14:58.375568 containerd[1724]: time="2025-05-14T18:14:58.375535837Z" level=info msg="Container to stop \"1c8cdb5a9bd956103ceb7bf5974ad3f8d93efa018fd08e94d6845f23bb6531bd\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 14 18:14:58.375568 containerd[1724]: time="2025-05-14T18:14:58.375544117Z" level=info msg="Container to stop \"ad7ac5baccd8fc41efb0719f07028465ce6e77021d566827a041ba0170557328\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 14 18:14:58.380959 systemd[1]: cri-containerd-c8945f6b802fcce8dd85f2adb749a80618cabf38fe46e7a67245b793da436e8c.scope: Deactivated successfully. May 14 18:14:58.382514 containerd[1724]: time="2025-05-14T18:14:58.382494763Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c8945f6b802fcce8dd85f2adb749a80618cabf38fe46e7a67245b793da436e8c\" id:\"c8945f6b802fcce8dd85f2adb749a80618cabf38fe46e7a67245b793da436e8c\" pid:3714 exit_status:137 exited_at:{seconds:1747246498 nanos:381658187}" May 14 18:14:58.402271 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c8945f6b802fcce8dd85f2adb749a80618cabf38fe46e7a67245b793da436e8c-rootfs.mount: Deactivated successfully. 
May 14 18:14:58.576560 containerd[1724]: time="2025-05-14T18:14:58.575982664Z" level=info msg="shim disconnected" id=c8945f6b802fcce8dd85f2adb749a80618cabf38fe46e7a67245b793da436e8c namespace=k8s.io May 14 18:14:58.576560 containerd[1724]: time="2025-05-14T18:14:58.576001013Z" level=warning msg="cleaning up after shim disconnected" id=c8945f6b802fcce8dd85f2adb749a80618cabf38fe46e7a67245b793da436e8c namespace=k8s.io May 14 18:14:58.576560 containerd[1724]: time="2025-05-14T18:14:58.576007953Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 14 18:14:58.586529 containerd[1724]: time="2025-05-14T18:14:58.586495781Z" level=info msg="received exit event sandbox_id:\"c8945f6b802fcce8dd85f2adb749a80618cabf38fe46e7a67245b793da436e8c\" exit_status:137 exited_at:{seconds:1747246498 nanos:381658187}" May 14 18:14:58.588105 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c8945f6b802fcce8dd85f2adb749a80618cabf38fe46e7a67245b793da436e8c-shm.mount: Deactivated successfully. May 14 18:14:58.588729 containerd[1724]: time="2025-05-14T18:14:58.588665934Z" level=info msg="TearDown network for sandbox \"c8945f6b802fcce8dd85f2adb749a80618cabf38fe46e7a67245b793da436e8c\" successfully" May 14 18:14:58.588804 containerd[1724]: time="2025-05-14T18:14:58.588792689Z" level=info msg="StopPodSandbox for \"c8945f6b802fcce8dd85f2adb749a80618cabf38fe46e7a67245b793da436e8c\" returns successfully" May 14 18:14:58.604377 containerd[1724]: time="2025-05-14T18:14:58.604353700Z" level=error msg="Failed to destroy network for sandbox \"0bc6d2566be4f263d3b84db1295c87042c5e19dbde6c1d078e43488cde2dfa86\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 18:14:58.629421 kubelet[3171]: I0514 18:14:58.629384 3171 memory_manager.go:355] "RemoveStaleState removing state" podUID="5a141316-5dcb-47ca-9c32-2e73ecdeb347" containerName="calico-node" 
May 14 18:14:58.636581 systemd[1]: Created slice kubepods-besteffort-pod58e640aa_1a34_4a33_80b7_b1f773b5f576.slice - libcontainer container kubepods-besteffort-pod58e640aa_1a34_4a33_80b7_b1f773b5f576.slice. May 14 18:14:58.649002 kubelet[3171]: I0514 18:14:58.648984 3171 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/5a141316-5dcb-47ca-9c32-2e73ecdeb347-node-certs\") pod \"5a141316-5dcb-47ca-9c32-2e73ecdeb347\" (UID: \"5a141316-5dcb-47ca-9c32-2e73ecdeb347\") " May 14 18:14:58.649059 kubelet[3171]: I0514 18:14:58.649041 3171 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nvvfv\" (UniqueName: \"kubernetes.io/projected/5a141316-5dcb-47ca-9c32-2e73ecdeb347-kube-api-access-nvvfv\") pod \"5a141316-5dcb-47ca-9c32-2e73ecdeb347\" (UID: \"5a141316-5dcb-47ca-9c32-2e73ecdeb347\") " May 14 18:14:58.649084 kubelet[3171]: I0514 18:14:58.649061 3171 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/5a141316-5dcb-47ca-9c32-2e73ecdeb347-var-lib-calico\") pod \"5a141316-5dcb-47ca-9c32-2e73ecdeb347\" (UID: \"5a141316-5dcb-47ca-9c32-2e73ecdeb347\") " May 14 18:14:58.649103 kubelet[3171]: I0514 18:14:58.649089 3171 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5a141316-5dcb-47ca-9c32-2e73ecdeb347-xtables-lock\") pod \"5a141316-5dcb-47ca-9c32-2e73ecdeb347\" (UID: \"5a141316-5dcb-47ca-9c32-2e73ecdeb347\") " May 14 18:14:58.649123 kubelet[3171]: I0514 18:14:58.649103 3171 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/5a141316-5dcb-47ca-9c32-2e73ecdeb347-cni-net-dir\") pod \"5a141316-5dcb-47ca-9c32-2e73ecdeb347\" (UID: \"5a141316-5dcb-47ca-9c32-2e73ecdeb347\") " May 14 18:14:58.649147 kubelet[3171]: I0514 
18:14:58.649124 3171 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/5a141316-5dcb-47ca-9c32-2e73ecdeb347-cni-log-dir\") pod \"5a141316-5dcb-47ca-9c32-2e73ecdeb347\" (UID: \"5a141316-5dcb-47ca-9c32-2e73ecdeb347\") " May 14 18:14:58.649147 kubelet[3171]: I0514 18:14:58.649140 3171 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5a141316-5dcb-47ca-9c32-2e73ecdeb347-lib-modules\") pod \"5a141316-5dcb-47ca-9c32-2e73ecdeb347\" (UID: \"5a141316-5dcb-47ca-9c32-2e73ecdeb347\") " May 14 18:14:58.649285 kubelet[3171]: I0514 18:14:58.649249 3171 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5a141316-5dcb-47ca-9c32-2e73ecdeb347-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "5a141316-5dcb-47ca-9c32-2e73ecdeb347" (UID: "5a141316-5dcb-47ca-9c32-2e73ecdeb347"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 14 18:14:58.649346 kubelet[3171]: I0514 18:14:58.649337 3171 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/5a141316-5dcb-47ca-9c32-2e73ecdeb347-flexvol-driver-host\") pod \"5a141316-5dcb-47ca-9c32-2e73ecdeb347\" (UID: \"5a141316-5dcb-47ca-9c32-2e73ecdeb347\") " May 14 18:14:58.649633 kubelet[3171]: I0514 18:14:58.649386 3171 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/5a141316-5dcb-47ca-9c32-2e73ecdeb347-policysync\") pod \"5a141316-5dcb-47ca-9c32-2e73ecdeb347\" (UID: \"5a141316-5dcb-47ca-9c32-2e73ecdeb347\") " May 14 18:14:58.649633 kubelet[3171]: I0514 18:14:58.649404 3171 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/5a141316-5dcb-47ca-9c32-2e73ecdeb347-cni-bin-dir\") pod \"5a141316-5dcb-47ca-9c32-2e73ecdeb347\" (UID: \"5a141316-5dcb-47ca-9c32-2e73ecdeb347\") " May 14 18:14:58.649633 kubelet[3171]: I0514 18:14:58.649417 3171 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/5a141316-5dcb-47ca-9c32-2e73ecdeb347-var-run-calico\") pod \"5a141316-5dcb-47ca-9c32-2e73ecdeb347\" (UID: \"5a141316-5dcb-47ca-9c32-2e73ecdeb347\") " May 14 18:14:58.649633 kubelet[3171]: I0514 18:14:58.649438 3171 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5a141316-5dcb-47ca-9c32-2e73ecdeb347-tigera-ca-bundle\") pod \"5a141316-5dcb-47ca-9c32-2e73ecdeb347\" (UID: \"5a141316-5dcb-47ca-9c32-2e73ecdeb347\") " May 14 18:14:58.649633 kubelet[3171]: I0514 18:14:58.649481 3171 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/5a141316-5dcb-47ca-9c32-2e73ecdeb347-xtables-lock\") on node \"ci-4334.0.0-a-ef358d086b\" DevicePath \"\"" May 14 18:14:58.650916 kubelet[3171]: I0514 18:14:58.650896 3171 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5a141316-5dcb-47ca-9c32-2e73ecdeb347-cni-net-dir" (OuterVolumeSpecName: "cni-net-dir") pod "5a141316-5dcb-47ca-9c32-2e73ecdeb347" (UID: "5a141316-5dcb-47ca-9c32-2e73ecdeb347"). InnerVolumeSpecName "cni-net-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 14 18:14:58.651003 kubelet[3171]: I0514 18:14:58.650899 3171 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5a141316-5dcb-47ca-9c32-2e73ecdeb347-var-lib-calico" (OuterVolumeSpecName: "var-lib-calico") pod "5a141316-5dcb-47ca-9c32-2e73ecdeb347" (UID: "5a141316-5dcb-47ca-9c32-2e73ecdeb347"). InnerVolumeSpecName "var-lib-calico". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 14 18:14:58.651054 kubelet[3171]: I0514 18:14:58.651045 3171 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5a141316-5dcb-47ca-9c32-2e73ecdeb347-cni-log-dir" (OuterVolumeSpecName: "cni-log-dir") pod "5a141316-5dcb-47ca-9c32-2e73ecdeb347" (UID: "5a141316-5dcb-47ca-9c32-2e73ecdeb347"). InnerVolumeSpecName "cni-log-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 14 18:14:58.651090 kubelet[3171]: I0514 18:14:58.651084 3171 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5a141316-5dcb-47ca-9c32-2e73ecdeb347-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "5a141316-5dcb-47ca-9c32-2e73ecdeb347" (UID: "5a141316-5dcb-47ca-9c32-2e73ecdeb347"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 14 18:14:58.651129 kubelet[3171]: I0514 18:14:58.651122 3171 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5a141316-5dcb-47ca-9c32-2e73ecdeb347-flexvol-driver-host" (OuterVolumeSpecName: "flexvol-driver-host") pod "5a141316-5dcb-47ca-9c32-2e73ecdeb347" (UID: "5a141316-5dcb-47ca-9c32-2e73ecdeb347"). InnerVolumeSpecName "flexvol-driver-host". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 14 18:14:58.651165 kubelet[3171]: I0514 18:14:58.651159 3171 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5a141316-5dcb-47ca-9c32-2e73ecdeb347-policysync" (OuterVolumeSpecName: "policysync") pod "5a141316-5dcb-47ca-9c32-2e73ecdeb347" (UID: "5a141316-5dcb-47ca-9c32-2e73ecdeb347"). InnerVolumeSpecName "policysync". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 14 18:14:58.651206 kubelet[3171]: I0514 18:14:58.651200 3171 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5a141316-5dcb-47ca-9c32-2e73ecdeb347-cni-bin-dir" (OuterVolumeSpecName: "cni-bin-dir") pod "5a141316-5dcb-47ca-9c32-2e73ecdeb347" (UID: "5a141316-5dcb-47ca-9c32-2e73ecdeb347"). InnerVolumeSpecName "cni-bin-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 14 18:14:58.651245 kubelet[3171]: I0514 18:14:58.651239 3171 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5a141316-5dcb-47ca-9c32-2e73ecdeb347-var-run-calico" (OuterVolumeSpecName: "var-run-calico") pod "5a141316-5dcb-47ca-9c32-2e73ecdeb347" (UID: "5a141316-5dcb-47ca-9c32-2e73ecdeb347"). InnerVolumeSpecName "var-run-calico". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 14 18:14:58.671092 kubelet[3171]: I0514 18:14:58.671043 3171 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a141316-5dcb-47ca-9c32-2e73ecdeb347-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "5a141316-5dcb-47ca-9c32-2e73ecdeb347" (UID: "5a141316-5dcb-47ca-9c32-2e73ecdeb347"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 14 18:14:58.672743 kubelet[3171]: I0514 18:14:58.672704 3171 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a141316-5dcb-47ca-9c32-2e73ecdeb347-kube-api-access-nvvfv" (OuterVolumeSpecName: "kube-api-access-nvvfv") pod "5a141316-5dcb-47ca-9c32-2e73ecdeb347" (UID: "5a141316-5dcb-47ca-9c32-2e73ecdeb347"). InnerVolumeSpecName "kube-api-access-nvvfv". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 14 18:14:58.672929 kubelet[3171]: I0514 18:14:58.672913 3171 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a141316-5dcb-47ca-9c32-2e73ecdeb347-node-certs" (OuterVolumeSpecName: "node-certs") pod "5a141316-5dcb-47ca-9c32-2e73ecdeb347" (UID: "5a141316-5dcb-47ca-9c32-2e73ecdeb347"). InnerVolumeSpecName "node-certs". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" May 14 18:14:58.711857 containerd[1724]: time="2025-05-14T18:14:58.711824357Z" level=error msg="Failed to destroy network for sandbox \"e982d8cc518d733454b59421aba625be5f6e01cfca1264e521d32216329707f2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 18:14:58.749951 kubelet[3171]: I0514 18:14:58.749929 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/58e640aa-1a34-4a33-80b7-b1f773b5f576-node-certs\") pod \"calico-node-jswcs\" (UID: \"58e640aa-1a34-4a33-80b7-b1f773b5f576\") " pod="calico-system/calico-node-jswcs" May 14 18:14:58.750009 kubelet[3171]: I0514 18:14:58.749966 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/58e640aa-1a34-4a33-80b7-b1f773b5f576-xtables-lock\") pod \"calico-node-jswcs\" (UID: \"58e640aa-1a34-4a33-80b7-b1f773b5f576\") " pod="calico-system/calico-node-jswcs" May 14 18:14:58.750009 kubelet[3171]: I0514 18:14:58.749981 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/58e640aa-1a34-4a33-80b7-b1f773b5f576-tigera-ca-bundle\") pod \"calico-node-jswcs\" (UID: \"58e640aa-1a34-4a33-80b7-b1f773b5f576\") " pod="calico-system/calico-node-jswcs" May 14 18:14:58.750009 kubelet[3171]: I0514 18:14:58.749997 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/58e640aa-1a34-4a33-80b7-b1f773b5f576-policysync\") pod \"calico-node-jswcs\" (UID: \"58e640aa-1a34-4a33-80b7-b1f773b5f576\") " pod="calico-system/calico-node-jswcs" May 14 18:14:58.750072 
kubelet[3171]: I0514 18:14:58.750015 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/58e640aa-1a34-4a33-80b7-b1f773b5f576-cni-bin-dir\") pod \"calico-node-jswcs\" (UID: \"58e640aa-1a34-4a33-80b7-b1f773b5f576\") " pod="calico-system/calico-node-jswcs" May 14 18:14:58.750072 kubelet[3171]: I0514 18:14:58.750031 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/58e640aa-1a34-4a33-80b7-b1f773b5f576-cni-net-dir\") pod \"calico-node-jswcs\" (UID: \"58e640aa-1a34-4a33-80b7-b1f773b5f576\") " pod="calico-system/calico-node-jswcs" May 14 18:14:58.750072 kubelet[3171]: I0514 18:14:58.750048 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/58e640aa-1a34-4a33-80b7-b1f773b5f576-flexvol-driver-host\") pod \"calico-node-jswcs\" (UID: \"58e640aa-1a34-4a33-80b7-b1f773b5f576\") " pod="calico-system/calico-node-jswcs" May 14 18:14:58.750072 kubelet[3171]: I0514 18:14:58.750069 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/58e640aa-1a34-4a33-80b7-b1f773b5f576-lib-modules\") pod \"calico-node-jswcs\" (UID: \"58e640aa-1a34-4a33-80b7-b1f773b5f576\") " pod="calico-system/calico-node-jswcs" May 14 18:14:58.750147 kubelet[3171]: I0514 18:14:58.750085 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46hpj\" (UniqueName: \"kubernetes.io/projected/58e640aa-1a34-4a33-80b7-b1f773b5f576-kube-api-access-46hpj\") pod \"calico-node-jswcs\" (UID: \"58e640aa-1a34-4a33-80b7-b1f773b5f576\") " pod="calico-system/calico-node-jswcs" May 14 18:14:58.750147 kubelet[3171]: I0514 18:14:58.750102 3171 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/58e640aa-1a34-4a33-80b7-b1f773b5f576-var-run-calico\") pod \"calico-node-jswcs\" (UID: \"58e640aa-1a34-4a33-80b7-b1f773b5f576\") " pod="calico-system/calico-node-jswcs" May 14 18:14:58.750147 kubelet[3171]: I0514 18:14:58.750117 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/58e640aa-1a34-4a33-80b7-b1f773b5f576-var-lib-calico\") pod \"calico-node-jswcs\" (UID: \"58e640aa-1a34-4a33-80b7-b1f773b5f576\") " pod="calico-system/calico-node-jswcs" May 14 18:14:58.750147 kubelet[3171]: I0514 18:14:58.750132 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/58e640aa-1a34-4a33-80b7-b1f773b5f576-cni-log-dir\") pod \"calico-node-jswcs\" (UID: \"58e640aa-1a34-4a33-80b7-b1f773b5f576\") " pod="calico-system/calico-node-jswcs" May 14 18:14:58.750221 kubelet[3171]: I0514 18:14:58.750152 3171 reconciler_common.go:299] "Volume detached for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/5a141316-5dcb-47ca-9c32-2e73ecdeb347-var-lib-calico\") on node \"ci-4334.0.0-a-ef358d086b\" DevicePath \"\"" May 14 18:14:58.750221 kubelet[3171]: I0514 18:14:58.750162 3171 reconciler_common.go:299] "Volume detached for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/5a141316-5dcb-47ca-9c32-2e73ecdeb347-node-certs\") on node \"ci-4334.0.0-a-ef358d086b\" DevicePath \"\"" May 14 18:14:58.750221 kubelet[3171]: I0514 18:14:58.750171 3171 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-nvvfv\" (UniqueName: \"kubernetes.io/projected/5a141316-5dcb-47ca-9c32-2e73ecdeb347-kube-api-access-nvvfv\") on node \"ci-4334.0.0-a-ef358d086b\" DevicePath \"\"" May 14 18:14:58.750221 kubelet[3171]: I0514 
18:14:58.750178 3171 reconciler_common.go:299] "Volume detached for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/5a141316-5dcb-47ca-9c32-2e73ecdeb347-cni-net-dir\") on node \"ci-4334.0.0-a-ef358d086b\" DevicePath \"\"" May 14 18:14:58.750221 kubelet[3171]: I0514 18:14:58.750187 3171 reconciler_common.go:299] "Volume detached for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/5a141316-5dcb-47ca-9c32-2e73ecdeb347-cni-log-dir\") on node \"ci-4334.0.0-a-ef358d086b\" DevicePath \"\"" May 14 18:14:58.750221 kubelet[3171]: I0514 18:14:58.750195 3171 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5a141316-5dcb-47ca-9c32-2e73ecdeb347-lib-modules\") on node \"ci-4334.0.0-a-ef358d086b\" DevicePath \"\"" May 14 18:14:58.750221 kubelet[3171]: I0514 18:14:58.750203 3171 reconciler_common.go:299] "Volume detached for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/5a141316-5dcb-47ca-9c32-2e73ecdeb347-flexvol-driver-host\") on node \"ci-4334.0.0-a-ef358d086b\" DevicePath \"\"" May 14 18:14:58.750221 kubelet[3171]: I0514 18:14:58.750210 3171 reconciler_common.go:299] "Volume detached for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/5a141316-5dcb-47ca-9c32-2e73ecdeb347-policysync\") on node \"ci-4334.0.0-a-ef358d086b\" DevicePath \"\"" May 14 18:14:58.750365 kubelet[3171]: I0514 18:14:58.750218 3171 reconciler_common.go:299] "Volume detached for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/5a141316-5dcb-47ca-9c32-2e73ecdeb347-cni-bin-dir\") on node \"ci-4334.0.0-a-ef358d086b\" DevicePath \"\"" May 14 18:14:58.750365 kubelet[3171]: I0514 18:14:58.750225 3171 reconciler_common.go:299] "Volume detached for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/5a141316-5dcb-47ca-9c32-2e73ecdeb347-var-run-calico\") on node \"ci-4334.0.0-a-ef358d086b\" DevicePath \"\"" May 14 18:14:58.750365 kubelet[3171]: I0514 18:14:58.750233 
3171 reconciler_common.go:299] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5a141316-5dcb-47ca-9c32-2e73ecdeb347-tigera-ca-bundle\") on node \"ci-4334.0.0-a-ef358d086b\" DevicePath \"\"" May 14 18:14:58.752289 containerd[1724]: time="2025-05-14T18:14:58.752258251Z" level=error msg="Failed to destroy network for sandbox \"03f0e02270a4dcfc96e5db48c253bd779d48b2d4c71b09b2fcd8977da6a9b453\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 18:14:58.814060 containerd[1724]: time="2025-05-14T18:14:58.814029483Z" level=error msg="Failed to destroy network for sandbox \"23e3561ffdd64487bc0c12bb8cf3aeb7a67a145b6d26da7ae01be4b9a0aa4c54\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 18:14:58.830233 containerd[1724]: time="2025-05-14T18:14:58.830140403Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-ffz6l,Uid:7d98339e-8cf1-447f-bf29-3e0a7594a179,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0bc6d2566be4f263d3b84db1295c87042c5e19dbde6c1d078e43488cde2dfa86\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 18:14:58.830453 kubelet[3171]: E0514 18:14:58.830426 3171 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0bc6d2566be4f263d3b84db1295c87042c5e19dbde6c1d078e43488cde2dfa86\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/" May 14 18:14:58.830663 kubelet[3171]: E0514 18:14:58.830647 3171 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0bc6d2566be4f263d3b84db1295c87042c5e19dbde6c1d078e43488cde2dfa86\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-ffz6l" May 14 18:14:58.830901 kubelet[3171]: E0514 18:14:58.830710 3171 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0bc6d2566be4f263d3b84db1295c87042c5e19dbde6c1d078e43488cde2dfa86\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-ffz6l" May 14 18:14:58.830901 kubelet[3171]: E0514 18:14:58.830749 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-ffz6l_kube-system(7d98339e-8cf1-447f-bf29-3e0a7594a179)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-ffz6l_kube-system(7d98339e-8cf1-447f-bf29-3e0a7594a179)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0bc6d2566be4f263d3b84db1295c87042c5e19dbde6c1d078e43488cde2dfa86\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-ffz6l" podUID="7d98339e-8cf1-447f-bf29-3e0a7594a179" May 14 18:14:58.843695 containerd[1724]: time="2025-05-14T18:14:58.843634324Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-p469r,Uid:5e55bc54-125f-45b4-a2a9-22003bd5a2c5,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e982d8cc518d733454b59421aba625be5f6e01cfca1264e521d32216329707f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 18:14:58.843808 kubelet[3171]: E0514 18:14:58.843786 3171 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e982d8cc518d733454b59421aba625be5f6e01cfca1264e521d32216329707f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 18:14:58.843842 kubelet[3171]: E0514 18:14:58.843823 3171 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e982d8cc518d733454b59421aba625be5f6e01cfca1264e521d32216329707f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-p469r" May 14 18:14:58.843875 kubelet[3171]: E0514 18:14:58.843841 3171 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e982d8cc518d733454b59421aba625be5f6e01cfca1264e521d32216329707f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-p469r" May 14 18:14:58.843902 kubelet[3171]: E0514 18:14:58.843880 3171 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"CreatePodSandbox\" for \"csi-node-driver-p469r_calico-system(5e55bc54-125f-45b4-a2a9-22003bd5a2c5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-p469r_calico-system(5e55bc54-125f-45b4-a2a9-22003bd5a2c5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e982d8cc518d733454b59421aba625be5f6e01cfca1264e521d32216329707f2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-p469r" podUID="5e55bc54-125f-45b4-a2a9-22003bd5a2c5" May 14 18:14:58.877641 containerd[1724]: time="2025-05-14T18:14:58.877584201Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6f9774cf66-vzhj6,Uid:64dc643b-9fc0-4c8d-af95-cff808284f39,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"03f0e02270a4dcfc96e5db48c253bd779d48b2d4c71b09b2fcd8977da6a9b453\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 18:14:58.877759 kubelet[3171]: E0514 18:14:58.877736 3171 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"03f0e02270a4dcfc96e5db48c253bd779d48b2d4c71b09b2fcd8977da6a9b453\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 18:14:58.877801 kubelet[3171]: E0514 18:14:58.877783 3171 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"03f0e02270a4dcfc96e5db48c253bd779d48b2d4c71b09b2fcd8977da6a9b453\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6f9774cf66-vzhj6" May 14 18:14:58.877834 kubelet[3171]: E0514 18:14:58.877801 3171 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"03f0e02270a4dcfc96e5db48c253bd779d48b2d4c71b09b2fcd8977da6a9b453\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6f9774cf66-vzhj6" May 14 18:14:58.877901 kubelet[3171]: E0514 18:14:58.877847 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6f9774cf66-vzhj6_calico-system(64dc643b-9fc0-4c8d-af95-cff808284f39)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6f9774cf66-vzhj6_calico-system(64dc643b-9fc0-4c8d-af95-cff808284f39)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"03f0e02270a4dcfc96e5db48c253bd779d48b2d4c71b09b2fcd8977da6a9b453\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6f9774cf66-vzhj6" podUID="64dc643b-9fc0-4c8d-af95-cff808284f39" May 14 18:14:58.924868 containerd[1724]: time="2025-05-14T18:14:58.924840681Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76c49d767b-77j92,Uid:ab7f552a-1fe4-440b-ae6d-7b449d960ef9,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"23e3561ffdd64487bc0c12bb8cf3aeb7a67a145b6d26da7ae01be4b9a0aa4c54\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 18:14:58.925042 kubelet[3171]: E0514 18:14:58.925004 3171 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"23e3561ffdd64487bc0c12bb8cf3aeb7a67a145b6d26da7ae01be4b9a0aa4c54\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 18:14:58.925083 kubelet[3171]: E0514 18:14:58.925057 3171 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"23e3561ffdd64487bc0c12bb8cf3aeb7a67a145b6d26da7ae01be4b9a0aa4c54\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-76c49d767b-77j92" May 14 18:14:58.925083 kubelet[3171]: E0514 18:14:58.925076 3171 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"23e3561ffdd64487bc0c12bb8cf3aeb7a67a145b6d26da7ae01be4b9a0aa4c54\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-76c49d767b-77j92" May 14 18:14:58.925134 kubelet[3171]: E0514 18:14:58.925106 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-76c49d767b-77j92_calico-apiserver(ab7f552a-1fe4-440b-ae6d-7b449d960ef9)\" with CreatePodSandboxError: \"Failed to create 
sandbox for pod \\\"calico-apiserver-76c49d767b-77j92_calico-apiserver(ab7f552a-1fe4-440b-ae6d-7b449d960ef9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"23e3561ffdd64487bc0c12bb8cf3aeb7a67a145b6d26da7ae01be4b9a0aa4c54\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-76c49d767b-77j92" podUID="ab7f552a-1fe4-440b-ae6d-7b449d960ef9" May 14 18:14:58.971643 containerd[1724]: time="2025-05-14T18:14:58.971618442Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-jswcs,Uid:58e640aa-1a34-4a33-80b7-b1f773b5f576,Namespace:calico-system,Attempt:0,}" May 14 18:14:58.979445 systemd[1]: run-netns-cni\x2db568057c\x2d2dbc\x2d9d21\x2da55e\x2d1bdd551ab142.mount: Deactivated successfully. May 14 18:14:58.979600 systemd[1]: run-netns-cni\x2df6bc6d17\x2d5877\x2da7e7\x2d28ad\x2dd727c9c88f1e.mount: Deactivated successfully. May 14 18:14:58.979694 systemd[1]: run-netns-cni\x2d05d59351\x2d9811\x2d28ff\x2dce3e\x2da60dddc1e69e.mount: Deactivated successfully. May 14 18:14:58.979772 systemd[1]: run-netns-cni\x2d52be1dc2\x2d7411\x2dbbff\x2d4ccd\x2de65e41239455.mount: Deactivated successfully. May 14 18:14:58.979843 systemd[1]: var-lib-kubelet-pods-5a141316\x2d5dcb\x2d47ca\x2d9c32\x2d2e73ecdeb347-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dnode-1.mount: Deactivated successfully. May 14 18:14:58.979932 systemd[1]: var-lib-kubelet-pods-5a141316\x2d5dcb\x2d47ca\x2d9c32\x2d2e73ecdeb347-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dnvvfv.mount: Deactivated successfully. May 14 18:14:58.980014 systemd[1]: var-lib-kubelet-pods-5a141316\x2d5dcb\x2d47ca\x2d9c32\x2d2e73ecdeb347-volumes-kubernetes.io\x7esecret-node\x2dcerts.mount: Deactivated successfully. 
May 14 18:14:59.192481 systemd[1]: Removed slice kubepods-besteffort-pod5a141316_5dcb_47ca_9c32_2e73ecdeb347.slice - libcontainer container kubepods-besteffort-pod5a141316_5dcb_47ca_9c32_2e73ecdeb347.slice. May 14 18:14:59.193080 systemd[1]: kubepods-besteffort-pod5a141316_5dcb_47ca_9c32_2e73ecdeb347.slice: Consumed 473ms CPU time, 205.3M memory peak, 160.4M written to disk. May 14 18:14:59.279341 containerd[1724]: time="2025-05-14T18:14:59.279299346Z" level=info msg="connecting to shim 0f345813e7bb09ef4e13393cbed7972e4caadf5212d1967d8a97b41609841cea" address="unix:///run/containerd/s/801c3bbdd5ec4fe9b9483e63e466c1ecbe7969fbc9aa4f060b3aaa1caee1bdd3" namespace=k8s.io protocol=ttrpc version=3 May 14 18:14:59.296816 systemd[1]: Started cri-containerd-0f345813e7bb09ef4e13393cbed7972e4caadf5212d1967d8a97b41609841cea.scope - libcontainer container 0f345813e7bb09ef4e13393cbed7972e4caadf5212d1967d8a97b41609841cea. May 14 18:14:59.317708 containerd[1724]: time="2025-05-14T18:14:59.317650351Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-jswcs,Uid:58e640aa-1a34-4a33-80b7-b1f773b5f576,Namespace:calico-system,Attempt:0,} returns sandbox id \"0f345813e7bb09ef4e13393cbed7972e4caadf5212d1967d8a97b41609841cea\"" May 14 18:14:59.323942 containerd[1724]: time="2025-05-14T18:14:59.323601361Z" level=info msg="CreateContainer within sandbox \"0f345813e7bb09ef4e13393cbed7972e4caadf5212d1967d8a97b41609841cea\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" May 14 18:14:59.392332 kubelet[3171]: I0514 18:14:59.392317 3171 scope.go:117] "RemoveContainer" containerID="a8909ed6bf53152d451192200541b7ea3789e73d9a169968392744655e33de35" May 14 18:14:59.396643 containerd[1724]: time="2025-05-14T18:14:59.396623187Z" level=info msg="RemoveContainer for \"a8909ed6bf53152d451192200541b7ea3789e73d9a169968392744655e33de35\"" May 14 18:14:59.530691 containerd[1724]: time="2025-05-14T18:14:59.530558396Z" level=info msg="Container 
821708be306e9248876fdeab068339925babee7641ef8af9be9f87167b5d0531: CDI devices from CRI Config.CDIDevices: []" May 14 18:14:59.574901 containerd[1724]: time="2025-05-14T18:14:59.574872350Z" level=info msg="RemoveContainer for \"a8909ed6bf53152d451192200541b7ea3789e73d9a169968392744655e33de35\" returns successfully" May 14 18:14:59.575134 kubelet[3171]: I0514 18:14:59.575106 3171 scope.go:117] "RemoveContainer" containerID="ad7ac5baccd8fc41efb0719f07028465ce6e77021d566827a041ba0170557328" May 14 18:14:59.577035 containerd[1724]: time="2025-05-14T18:14:59.577015746Z" level=info msg="RemoveContainer for \"ad7ac5baccd8fc41efb0719f07028465ce6e77021d566827a041ba0170557328\"" May 14 18:14:59.779666 containerd[1724]: time="2025-05-14T18:14:59.779625130Z" level=info msg="RemoveContainer for \"ad7ac5baccd8fc41efb0719f07028465ce6e77021d566827a041ba0170557328\" returns successfully" May 14 18:14:59.779987 kubelet[3171]: I0514 18:14:59.779812 3171 scope.go:117] "RemoveContainer" containerID="1c8cdb5a9bd956103ceb7bf5974ad3f8d93efa018fd08e94d6845f23bb6531bd" May 14 18:14:59.780043 containerd[1724]: time="2025-05-14T18:14:59.779926915Z" level=info msg="CreateContainer within sandbox \"0f345813e7bb09ef4e13393cbed7972e4caadf5212d1967d8a97b41609841cea\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"821708be306e9248876fdeab068339925babee7641ef8af9be9f87167b5d0531\"" May 14 18:14:59.781720 containerd[1724]: time="2025-05-14T18:14:59.781127725Z" level=info msg="StartContainer for \"821708be306e9248876fdeab068339925babee7641ef8af9be9f87167b5d0531\"" May 14 18:14:59.782375 containerd[1724]: time="2025-05-14T18:14:59.782357034Z" level=info msg="RemoveContainer for \"1c8cdb5a9bd956103ceb7bf5974ad3f8d93efa018fd08e94d6845f23bb6531bd\"" May 14 18:14:59.783175 containerd[1724]: time="2025-05-14T18:14:59.783150726Z" level=info msg="connecting to shim 821708be306e9248876fdeab068339925babee7641ef8af9be9f87167b5d0531" 
address="unix:///run/containerd/s/801c3bbdd5ec4fe9b9483e63e466c1ecbe7969fbc9aa4f060b3aaa1caee1bdd3" protocol=ttrpc version=3 May 14 18:14:59.798807 systemd[1]: Started cri-containerd-821708be306e9248876fdeab068339925babee7641ef8af9be9f87167b5d0531.scope - libcontainer container 821708be306e9248876fdeab068339925babee7641ef8af9be9f87167b5d0531. May 14 18:14:59.830315 containerd[1724]: time="2025-05-14T18:14:59.830291834Z" level=info msg="StartContainer for \"821708be306e9248876fdeab068339925babee7641ef8af9be9f87167b5d0531\" returns successfully" May 14 18:14:59.833988 systemd[1]: cri-containerd-821708be306e9248876fdeab068339925babee7641ef8af9be9f87167b5d0531.scope: Deactivated successfully. May 14 18:14:59.834334 systemd[1]: cri-containerd-821708be306e9248876fdeab068339925babee7641ef8af9be9f87167b5d0531.scope: Consumed 22ms CPU time, 8.1M memory peak, 6.3M written to disk. May 14 18:14:59.835487 containerd[1724]: time="2025-05-14T18:14:59.835462950Z" level=info msg="received exit event container_id:\"821708be306e9248876fdeab068339925babee7641ef8af9be9f87167b5d0531\" id:\"821708be306e9248876fdeab068339925babee7641ef8af9be9f87167b5d0531\" pid:4651 exited_at:{seconds:1747246499 nanos:835302642}" May 14 18:14:59.835615 containerd[1724]: time="2025-05-14T18:14:59.835582993Z" level=info msg="TaskExit event in podsandbox handler container_id:\"821708be306e9248876fdeab068339925babee7641ef8af9be9f87167b5d0531\" id:\"821708be306e9248876fdeab068339925babee7641ef8af9be9f87167b5d0531\" pid:4651 exited_at:{seconds:1747246499 nanos:835302642}" May 14 18:15:00.474695 containerd[1724]: time="2025-05-14T18:15:00.474640778Z" level=info msg="RemoveContainer for \"1c8cdb5a9bd956103ceb7bf5974ad3f8d93efa018fd08e94d6845f23bb6531bd\" returns successfully" May 14 18:15:01.189299 kubelet[3171]: I0514 18:15:01.189266 3171 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a141316-5dcb-47ca-9c32-2e73ecdeb347" 
path="/var/lib/kubelet/pods/5a141316-5dcb-47ca-9c32-2e73ecdeb347/volumes" May 14 18:15:01.405926 containerd[1724]: time="2025-05-14T18:15:01.405888452Z" level=info msg="CreateContainer within sandbox \"0f345813e7bb09ef4e13393cbed7972e4caadf5212d1967d8a97b41609841cea\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 14 18:15:01.521834 containerd[1724]: time="2025-05-14T18:15:01.521806673Z" level=info msg="Container 993380d326a2675575234596bfb872b273c82b8cc4f27029019cfa6244a4ae21: CDI devices from CRI Config.CDIDevices: []" May 14 18:15:01.635580 containerd[1724]: time="2025-05-14T18:15:01.635556963Z" level=info msg="CreateContainer within sandbox \"0f345813e7bb09ef4e13393cbed7972e4caadf5212d1967d8a97b41609841cea\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"993380d326a2675575234596bfb872b273c82b8cc4f27029019cfa6244a4ae21\"" May 14 18:15:01.636376 containerd[1724]: time="2025-05-14T18:15:01.636357328Z" level=info msg="StartContainer for \"993380d326a2675575234596bfb872b273c82b8cc4f27029019cfa6244a4ae21\"" May 14 18:15:01.637577 containerd[1724]: time="2025-05-14T18:15:01.637552627Z" level=info msg="connecting to shim 993380d326a2675575234596bfb872b273c82b8cc4f27029019cfa6244a4ae21" address="unix:///run/containerd/s/801c3bbdd5ec4fe9b9483e63e466c1ecbe7969fbc9aa4f060b3aaa1caee1bdd3" protocol=ttrpc version=3 May 14 18:15:01.656847 systemd[1]: Started cri-containerd-993380d326a2675575234596bfb872b273c82b8cc4f27029019cfa6244a4ae21.scope - libcontainer container 993380d326a2675575234596bfb872b273c82b8cc4f27029019cfa6244a4ae21. May 14 18:15:01.692256 containerd[1724]: time="2025-05-14T18:15:01.692203398Z" level=info msg="StartContainer for \"993380d326a2675575234596bfb872b273c82b8cc4f27029019cfa6244a4ae21\" returns successfully" May 14 18:15:01.947612 systemd[1]: cri-containerd-993380d326a2675575234596bfb872b273c82b8cc4f27029019cfa6244a4ae21.scope: Deactivated successfully. 
May 14 18:15:01.948864 containerd[1724]: time="2025-05-14T18:15:01.948624608Z" level=info msg="received exit event container_id:\"993380d326a2675575234596bfb872b273c82b8cc4f27029019cfa6244a4ae21\" id:\"993380d326a2675575234596bfb872b273c82b8cc4f27029019cfa6244a4ae21\" pid:4701 exited_at:{seconds:1747246501 nanos:948374937}" May 14 18:15:01.949074 containerd[1724]: time="2025-05-14T18:15:01.948624501Z" level=info msg="TaskExit event in podsandbox handler container_id:\"993380d326a2675575234596bfb872b273c82b8cc4f27029019cfa6244a4ae21\" id:\"993380d326a2675575234596bfb872b273c82b8cc4f27029019cfa6244a4ae21\" pid:4701 exited_at:{seconds:1747246501 nanos:948374937}" May 14 18:15:01.962691 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-993380d326a2675575234596bfb872b273c82b8cc4f27029019cfa6244a4ae21-rootfs.mount: Deactivated successfully. May 14 18:15:03.424702 containerd[1724]: time="2025-05-14T18:15:03.424639796Z" level=info msg="CreateContainer within sandbox \"0f345813e7bb09ef4e13393cbed7972e4caadf5212d1967d8a97b41609841cea\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" May 14 18:15:04.278106 containerd[1724]: time="2025-05-14T18:15:04.277146763Z" level=info msg="Container a2406af6d9d87ea7927353462bc3dbc21b3d7dd811a1f8d83a21470bc33741b6: CDI devices from CRI Config.CDIDevices: []" May 14 18:15:04.436927 containerd[1724]: time="2025-05-14T18:15:04.436899783Z" level=info msg="CreateContainer within sandbox \"0f345813e7bb09ef4e13393cbed7972e4caadf5212d1967d8a97b41609841cea\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"a2406af6d9d87ea7927353462bc3dbc21b3d7dd811a1f8d83a21470bc33741b6\"" May 14 18:15:04.437698 containerd[1724]: time="2025-05-14T18:15:04.437387985Z" level=info msg="StartContainer for \"a2406af6d9d87ea7927353462bc3dbc21b3d7dd811a1f8d83a21470bc33741b6\"" May 14 18:15:04.438779 containerd[1724]: time="2025-05-14T18:15:04.438717311Z" level=info msg="connecting to shim 
a2406af6d9d87ea7927353462bc3dbc21b3d7dd811a1f8d83a21470bc33741b6" address="unix:///run/containerd/s/801c3bbdd5ec4fe9b9483e63e466c1ecbe7969fbc9aa4f060b3aaa1caee1bdd3" protocol=ttrpc version=3 May 14 18:15:04.456823 systemd[1]: Started cri-containerd-a2406af6d9d87ea7927353462bc3dbc21b3d7dd811a1f8d83a21470bc33741b6.scope - libcontainer container a2406af6d9d87ea7927353462bc3dbc21b3d7dd811a1f8d83a21470bc33741b6. May 14 18:15:04.485500 containerd[1724]: time="2025-05-14T18:15:04.485481392Z" level=info msg="StartContainer for \"a2406af6d9d87ea7927353462bc3dbc21b3d7dd811a1f8d83a21470bc33741b6\" returns successfully" May 14 18:15:05.465300 containerd[1724]: time="2025-05-14T18:15:05.465265709Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a2406af6d9d87ea7927353462bc3dbc21b3d7dd811a1f8d83a21470bc33741b6\" id:\"c50cc7a5f49fd40caa55b004fb5c368c4bc97c10762fc16984bb7a2123a3ba7b\" pid:4797 exit_status:1 exited_at:{seconds:1747246505 nanos:465097914}" May 14 18:15:06.115380 systemd-networkd[1285]: vxlan.calico: Link UP May 14 18:15:06.115388 systemd-networkd[1285]: vxlan.calico: Gained carrier May 14 18:15:06.467093 containerd[1724]: time="2025-05-14T18:15:06.466889488Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a2406af6d9d87ea7927353462bc3dbc21b3d7dd811a1f8d83a21470bc33741b6\" id:\"14adff60c4fdb6bda90a86f8f58e5a5486aa70dec59a3669e63bdf9ed491721b\" pid:5003 exit_status:1 exited_at:{seconds:1747246506 nanos:466666084}" May 14 18:15:07.468473 containerd[1724]: time="2025-05-14T18:15:07.468428003Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a2406af6d9d87ea7927353462bc3dbc21b3d7dd811a1f8d83a21470bc33741b6\" id:\"0ee1b888cb548d06bdb92df8460f748c9af1dfcb6ef0a7cff99b467eefcd021e\" pid:5033 exit_status:1 exited_at:{seconds:1747246507 nanos:468145418}" May 14 18:15:07.579810 systemd-networkd[1285]: vxlan.calico: Gained IPv6LL May 14 18:15:10.188438 containerd[1724]: time="2025-05-14T18:15:10.188397665Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-649rn,Uid:117ed482-5925-4d0e-a8c6-540f64673e00,Namespace:kube-system,Attempt:0,}" May 14 18:15:10.303266 systemd-networkd[1285]: califd771d95545: Link UP May 14 18:15:10.303874 systemd-networkd[1285]: califd771d95545: Gained carrier May 14 18:15:10.316060 containerd[1724]: 2025-05-14 18:15:10.252 [INFO][5048] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4334.0.0--a--ef358d086b-k8s-coredns--668d6bf9bc--649rn-eth0 coredns-668d6bf9bc- kube-system 117ed482-5925-4d0e-a8c6-540f64673e00 755 0 2025-05-14 18:13:24 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4334.0.0-a-ef358d086b coredns-668d6bf9bc-649rn eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] califd771d95545 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="ae483548a319ac1313b314f8719290a902cd886313bff441f3a78be984c69e7c" Namespace="kube-system" Pod="coredns-668d6bf9bc-649rn" WorkloadEndpoint="ci--4334.0.0--a--ef358d086b-k8s-coredns--668d6bf9bc--649rn-" May 14 18:15:10.316060 containerd[1724]: 2025-05-14 18:15:10.252 [INFO][5048] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="ae483548a319ac1313b314f8719290a902cd886313bff441f3a78be984c69e7c" Namespace="kube-system" Pod="coredns-668d6bf9bc-649rn" WorkloadEndpoint="ci--4334.0.0--a--ef358d086b-k8s-coredns--668d6bf9bc--649rn-eth0" May 14 18:15:10.316060 containerd[1724]: 2025-05-14 18:15:10.272 [INFO][5061] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ae483548a319ac1313b314f8719290a902cd886313bff441f3a78be984c69e7c" HandleID="k8s-pod-network.ae483548a319ac1313b314f8719290a902cd886313bff441f3a78be984c69e7c" Workload="ci--4334.0.0--a--ef358d086b-k8s-coredns--668d6bf9bc--649rn-eth0" May 14 
18:15:10.316227 containerd[1724]: 2025-05-14 18:15:10.278 [INFO][5061] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ae483548a319ac1313b314f8719290a902cd886313bff441f3a78be984c69e7c" HandleID="k8s-pod-network.ae483548a319ac1313b314f8719290a902cd886313bff441f3a78be984c69e7c" Workload="ci--4334.0.0--a--ef358d086b-k8s-coredns--668d6bf9bc--649rn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00031b750), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4334.0.0-a-ef358d086b", "pod":"coredns-668d6bf9bc-649rn", "timestamp":"2025-05-14 18:15:10.27222185 +0000 UTC"}, Hostname:"ci-4334.0.0-a-ef358d086b", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 14 18:15:10.316227 containerd[1724]: 2025-05-14 18:15:10.278 [INFO][5061] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 14 18:15:10.316227 containerd[1724]: 2025-05-14 18:15:10.278 [INFO][5061] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 14 18:15:10.316227 containerd[1724]: 2025-05-14 18:15:10.278 [INFO][5061] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4334.0.0-a-ef358d086b' May 14 18:15:10.316227 containerd[1724]: 2025-05-14 18:15:10.280 [INFO][5061] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.ae483548a319ac1313b314f8719290a902cd886313bff441f3a78be984c69e7c" host="ci-4334.0.0-a-ef358d086b" May 14 18:15:10.316227 containerd[1724]: 2025-05-14 18:15:10.283 [INFO][5061] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4334.0.0-a-ef358d086b" May 14 18:15:10.316227 containerd[1724]: 2025-05-14 18:15:10.286 [INFO][5061] ipam/ipam.go 489: Trying affinity for 192.168.51.192/26 host="ci-4334.0.0-a-ef358d086b" May 14 18:15:10.316227 containerd[1724]: 2025-05-14 18:15:10.287 [INFO][5061] ipam/ipam.go 155: Attempting to load block cidr=192.168.51.192/26 host="ci-4334.0.0-a-ef358d086b" May 14 18:15:10.316227 containerd[1724]: 2025-05-14 18:15:10.289 [INFO][5061] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.51.192/26 host="ci-4334.0.0-a-ef358d086b" May 14 18:15:10.316426 containerd[1724]: 2025-05-14 18:15:10.289 [INFO][5061] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.51.192/26 handle="k8s-pod-network.ae483548a319ac1313b314f8719290a902cd886313bff441f3a78be984c69e7c" host="ci-4334.0.0-a-ef358d086b" May 14 18:15:10.316426 containerd[1724]: 2025-05-14 18:15:10.290 [INFO][5061] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.ae483548a319ac1313b314f8719290a902cd886313bff441f3a78be984c69e7c May 14 18:15:10.316426 containerd[1724]: 2025-05-14 18:15:10.293 [INFO][5061] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.51.192/26 handle="k8s-pod-network.ae483548a319ac1313b314f8719290a902cd886313bff441f3a78be984c69e7c" host="ci-4334.0.0-a-ef358d086b" May 14 18:15:10.316426 containerd[1724]: 2025-05-14 18:15:10.299 [INFO][5061] ipam/ipam.go 1216: 
Successfully claimed IPs: [192.168.51.193/26] block=192.168.51.192/26 handle="k8s-pod-network.ae483548a319ac1313b314f8719290a902cd886313bff441f3a78be984c69e7c" host="ci-4334.0.0-a-ef358d086b" May 14 18:15:10.316426 containerd[1724]: 2025-05-14 18:15:10.299 [INFO][5061] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.51.193/26] handle="k8s-pod-network.ae483548a319ac1313b314f8719290a902cd886313bff441f3a78be984c69e7c" host="ci-4334.0.0-a-ef358d086b" May 14 18:15:10.316426 containerd[1724]: 2025-05-14 18:15:10.299 [INFO][5061] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 14 18:15:10.316426 containerd[1724]: 2025-05-14 18:15:10.299 [INFO][5061] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.51.193/26] IPv6=[] ContainerID="ae483548a319ac1313b314f8719290a902cd886313bff441f3a78be984c69e7c" HandleID="k8s-pod-network.ae483548a319ac1313b314f8719290a902cd886313bff441f3a78be984c69e7c" Workload="ci--4334.0.0--a--ef358d086b-k8s-coredns--668d6bf9bc--649rn-eth0" May 14 18:15:10.316559 containerd[1724]: 2025-05-14 18:15:10.300 [INFO][5048] cni-plugin/k8s.go 386: Populated endpoint ContainerID="ae483548a319ac1313b314f8719290a902cd886313bff441f3a78be984c69e7c" Namespace="kube-system" Pod="coredns-668d6bf9bc-649rn" WorkloadEndpoint="ci--4334.0.0--a--ef358d086b-k8s-coredns--668d6bf9bc--649rn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4334.0.0--a--ef358d086b-k8s-coredns--668d6bf9bc--649rn-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"117ed482-5925-4d0e-a8c6-540f64673e00", ResourceVersion:"755", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 18, 13, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4334.0.0-a-ef358d086b", ContainerID:"", Pod:"coredns-668d6bf9bc-649rn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.51.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califd771d95545", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 18:15:10.316559 containerd[1724]: 2025-05-14 18:15:10.300 [INFO][5048] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.51.193/32] ContainerID="ae483548a319ac1313b314f8719290a902cd886313bff441f3a78be984c69e7c" Namespace="kube-system" Pod="coredns-668d6bf9bc-649rn" WorkloadEndpoint="ci--4334.0.0--a--ef358d086b-k8s-coredns--668d6bf9bc--649rn-eth0" May 14 18:15:10.316559 containerd[1724]: 2025-05-14 18:15:10.300 [INFO][5048] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califd771d95545 ContainerID="ae483548a319ac1313b314f8719290a902cd886313bff441f3a78be984c69e7c" Namespace="kube-system" Pod="coredns-668d6bf9bc-649rn" WorkloadEndpoint="ci--4334.0.0--a--ef358d086b-k8s-coredns--668d6bf9bc--649rn-eth0" May 14 18:15:10.316559 containerd[1724]: 2025-05-14 18:15:10.304 [INFO][5048] cni-plugin/dataplane_linux.go 508: Disabling IPv4 
forwarding ContainerID="ae483548a319ac1313b314f8719290a902cd886313bff441f3a78be984c69e7c" Namespace="kube-system" Pod="coredns-668d6bf9bc-649rn" WorkloadEndpoint="ci--4334.0.0--a--ef358d086b-k8s-coredns--668d6bf9bc--649rn-eth0" May 14 18:15:10.316559 containerd[1724]: 2025-05-14 18:15:10.304 [INFO][5048] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="ae483548a319ac1313b314f8719290a902cd886313bff441f3a78be984c69e7c" Namespace="kube-system" Pod="coredns-668d6bf9bc-649rn" WorkloadEndpoint="ci--4334.0.0--a--ef358d086b-k8s-coredns--668d6bf9bc--649rn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4334.0.0--a--ef358d086b-k8s-coredns--668d6bf9bc--649rn-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"117ed482-5925-4d0e-a8c6-540f64673e00", ResourceVersion:"755", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 18, 13, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4334.0.0-a-ef358d086b", ContainerID:"ae483548a319ac1313b314f8719290a902cd886313bff441f3a78be984c69e7c", Pod:"coredns-668d6bf9bc-649rn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.51.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califd771d95545", MAC:"ee:38:f2:68:68:eb", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 18:15:10.316559 containerd[1724]: 2025-05-14 18:15:10.314 [INFO][5048] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="ae483548a319ac1313b314f8719290a902cd886313bff441f3a78be984c69e7c" Namespace="kube-system" Pod="coredns-668d6bf9bc-649rn" WorkloadEndpoint="ci--4334.0.0--a--ef358d086b-k8s-coredns--668d6bf9bc--649rn-eth0" May 14 18:15:10.317591 kubelet[3171]: I0514 18:15:10.317287 3171 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-jswcs" podStartSLOduration=12.317181614999999 podStartE2EDuration="12.317181615s" podCreationTimestamp="2025-05-14 18:14:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 18:15:05.437751584 +0000 UTC m=+104.318867036" watchObservedRunningTime="2025-05-14 18:15:10.317181615 +0000 UTC m=+109.198297047" May 14 18:15:10.592614 containerd[1724]: time="2025-05-14T18:15:10.592528395Z" level=info msg="connecting to shim ae483548a319ac1313b314f8719290a902cd886313bff441f3a78be984c69e7c" address="unix:///run/containerd/s/08f1fba7d213321cb84b672ad02592fff155bc4a650dd8084bf52826204e46d4" namespace=k8s.io protocol=ttrpc version=3 May 14 18:15:10.618953 systemd[1]: Started cri-containerd-ae483548a319ac1313b314f8719290a902cd886313bff441f3a78be984c69e7c.scope - libcontainer container ae483548a319ac1313b314f8719290a902cd886313bff441f3a78be984c69e7c. 
May 14 18:15:10.668352 containerd[1724]: time="2025-05-14T18:15:10.668331377Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-649rn,Uid:117ed482-5925-4d0e-a8c6-540f64673e00,Namespace:kube-system,Attempt:0,} returns sandbox id \"ae483548a319ac1313b314f8719290a902cd886313bff441f3a78be984c69e7c\"" May 14 18:15:10.670387 containerd[1724]: time="2025-05-14T18:15:10.670341517Z" level=info msg="CreateContainer within sandbox \"ae483548a319ac1313b314f8719290a902cd886313bff441f3a78be984c69e7c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 14 18:15:10.835873 containerd[1724]: time="2025-05-14T18:15:10.835852002Z" level=info msg="Container 92e6deb18cd226d0c85e438e3c6a37e4b11f45ee10a2fa50fb82d0bf2bd0abde: CDI devices from CRI Config.CDIDevices: []" May 14 18:15:10.974465 containerd[1724]: time="2025-05-14T18:15:10.974419910Z" level=info msg="CreateContainer within sandbox \"ae483548a319ac1313b314f8719290a902cd886313bff441f3a78be984c69e7c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"92e6deb18cd226d0c85e438e3c6a37e4b11f45ee10a2fa50fb82d0bf2bd0abde\"" May 14 18:15:10.975248 containerd[1724]: time="2025-05-14T18:15:10.974921044Z" level=info msg="StartContainer for \"92e6deb18cd226d0c85e438e3c6a37e4b11f45ee10a2fa50fb82d0bf2bd0abde\"" May 14 18:15:10.975858 containerd[1724]: time="2025-05-14T18:15:10.975835388Z" level=info msg="connecting to shim 92e6deb18cd226d0c85e438e3c6a37e4b11f45ee10a2fa50fb82d0bf2bd0abde" address="unix:///run/containerd/s/08f1fba7d213321cb84b672ad02592fff155bc4a650dd8084bf52826204e46d4" protocol=ttrpc version=3 May 14 18:15:10.993796 systemd[1]: Started cri-containerd-92e6deb18cd226d0c85e438e3c6a37e4b11f45ee10a2fa50fb82d0bf2bd0abde.scope - libcontainer container 92e6deb18cd226d0c85e438e3c6a37e4b11f45ee10a2fa50fb82d0bf2bd0abde. 
May 14 18:15:11.015435 containerd[1724]: time="2025-05-14T18:15:11.015392666Z" level=info msg="StartContainer for \"92e6deb18cd226d0c85e438e3c6a37e4b11f45ee10a2fa50fb82d0bf2bd0abde\" returns successfully" May 14 18:15:11.188797 containerd[1724]: time="2025-05-14T18:15:11.188772873Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6f9774cf66-vzhj6,Uid:64dc643b-9fc0-4c8d-af95-cff808284f39,Namespace:calico-system,Attempt:0,}" May 14 18:15:11.301406 systemd-networkd[1285]: cali6771ad4cbf6: Link UP May 14 18:15:11.304239 systemd-networkd[1285]: cali6771ad4cbf6: Gained carrier May 14 18:15:11.320182 systemd[1]: Started sshd@7-10.200.8.47:22-10.200.16.10:51634.service - OpenSSH per-connection server daemon (10.200.16.10:51634). May 14 18:15:11.327494 containerd[1724]: 2025-05-14 18:15:11.253 [INFO][5165] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4334.0.0--a--ef358d086b-k8s-calico--kube--controllers--6f9774cf66--vzhj6-eth0 calico-kube-controllers-6f9774cf66- calico-system 64dc643b-9fc0-4c8d-af95-cff808284f39 752 0 2025-05-14 18:13:39 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6f9774cf66 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4334.0.0-a-ef358d086b calico-kube-controllers-6f9774cf66-vzhj6 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali6771ad4cbf6 [] []}} ContainerID="b9641b8d7473b6b98338413d419101235510c8963bf5d516d2a6b528350d30e3" Namespace="calico-system" Pod="calico-kube-controllers-6f9774cf66-vzhj6" WorkloadEndpoint="ci--4334.0.0--a--ef358d086b-k8s-calico--kube--controllers--6f9774cf66--vzhj6-" May 14 18:15:11.327494 containerd[1724]: 2025-05-14 18:15:11.253 [INFO][5165] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s 
ContainerID="b9641b8d7473b6b98338413d419101235510c8963bf5d516d2a6b528350d30e3" Namespace="calico-system" Pod="calico-kube-controllers-6f9774cf66-vzhj6" WorkloadEndpoint="ci--4334.0.0--a--ef358d086b-k8s-calico--kube--controllers--6f9774cf66--vzhj6-eth0" May 14 18:15:11.327494 containerd[1724]: 2025-05-14 18:15:11.272 [INFO][5176] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b9641b8d7473b6b98338413d419101235510c8963bf5d516d2a6b528350d30e3" HandleID="k8s-pod-network.b9641b8d7473b6b98338413d419101235510c8963bf5d516d2a6b528350d30e3" Workload="ci--4334.0.0--a--ef358d086b-k8s-calico--kube--controllers--6f9774cf66--vzhj6-eth0" May 14 18:15:11.327494 containerd[1724]: 2025-05-14 18:15:11.279 [INFO][5176] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b9641b8d7473b6b98338413d419101235510c8963bf5d516d2a6b528350d30e3" HandleID="k8s-pod-network.b9641b8d7473b6b98338413d419101235510c8963bf5d516d2a6b528350d30e3" Workload="ci--4334.0.0--a--ef358d086b-k8s-calico--kube--controllers--6f9774cf66--vzhj6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00031b390), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4334.0.0-a-ef358d086b", "pod":"calico-kube-controllers-6f9774cf66-vzhj6", "timestamp":"2025-05-14 18:15:11.272675056 +0000 UTC"}, Hostname:"ci-4334.0.0-a-ef358d086b", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 14 18:15:11.327494 containerd[1724]: 2025-05-14 18:15:11.279 [INFO][5176] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 14 18:15:11.327494 containerd[1724]: 2025-05-14 18:15:11.279 [INFO][5176] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 14 18:15:11.327494 containerd[1724]: 2025-05-14 18:15:11.279 [INFO][5176] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4334.0.0-a-ef358d086b' May 14 18:15:11.327494 containerd[1724]: 2025-05-14 18:15:11.280 [INFO][5176] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.b9641b8d7473b6b98338413d419101235510c8963bf5d516d2a6b528350d30e3" host="ci-4334.0.0-a-ef358d086b" May 14 18:15:11.327494 containerd[1724]: 2025-05-14 18:15:11.283 [INFO][5176] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4334.0.0-a-ef358d086b" May 14 18:15:11.327494 containerd[1724]: 2025-05-14 18:15:11.285 [INFO][5176] ipam/ipam.go 489: Trying affinity for 192.168.51.192/26 host="ci-4334.0.0-a-ef358d086b" May 14 18:15:11.327494 containerd[1724]: 2025-05-14 18:15:11.286 [INFO][5176] ipam/ipam.go 155: Attempting to load block cidr=192.168.51.192/26 host="ci-4334.0.0-a-ef358d086b" May 14 18:15:11.327494 containerd[1724]: 2025-05-14 18:15:11.288 [INFO][5176] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.51.192/26 host="ci-4334.0.0-a-ef358d086b" May 14 18:15:11.327494 containerd[1724]: 2025-05-14 18:15:11.288 [INFO][5176] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.51.192/26 handle="k8s-pod-network.b9641b8d7473b6b98338413d419101235510c8963bf5d516d2a6b528350d30e3" host="ci-4334.0.0-a-ef358d086b" May 14 18:15:11.327494 containerd[1724]: 2025-05-14 18:15:11.289 [INFO][5176] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.b9641b8d7473b6b98338413d419101235510c8963bf5d516d2a6b528350d30e3 May 14 18:15:11.327494 containerd[1724]: 2025-05-14 18:15:11.294 [INFO][5176] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.51.192/26 handle="k8s-pod-network.b9641b8d7473b6b98338413d419101235510c8963bf5d516d2a6b528350d30e3" host="ci-4334.0.0-a-ef358d086b" May 14 18:15:11.327494 containerd[1724]: 2025-05-14 18:15:11.298 [INFO][5176] ipam/ipam.go 1216: 
Successfully claimed IPs: [192.168.51.194/26] block=192.168.51.192/26 handle="k8s-pod-network.b9641b8d7473b6b98338413d419101235510c8963bf5d516d2a6b528350d30e3" host="ci-4334.0.0-a-ef358d086b" May 14 18:15:11.327494 containerd[1724]: 2025-05-14 18:15:11.298 [INFO][5176] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.51.194/26] handle="k8s-pod-network.b9641b8d7473b6b98338413d419101235510c8963bf5d516d2a6b528350d30e3" host="ci-4334.0.0-a-ef358d086b" May 14 18:15:11.327494 containerd[1724]: 2025-05-14 18:15:11.298 [INFO][5176] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 14 18:15:11.327494 containerd[1724]: 2025-05-14 18:15:11.298 [INFO][5176] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.51.194/26] IPv6=[] ContainerID="b9641b8d7473b6b98338413d419101235510c8963bf5d516d2a6b528350d30e3" HandleID="k8s-pod-network.b9641b8d7473b6b98338413d419101235510c8963bf5d516d2a6b528350d30e3" Workload="ci--4334.0.0--a--ef358d086b-k8s-calico--kube--controllers--6f9774cf66--vzhj6-eth0" May 14 18:15:11.328138 containerd[1724]: 2025-05-14 18:15:11.300 [INFO][5165] cni-plugin/k8s.go 386: Populated endpoint ContainerID="b9641b8d7473b6b98338413d419101235510c8963bf5d516d2a6b528350d30e3" Namespace="calico-system" Pod="calico-kube-controllers-6f9774cf66-vzhj6" WorkloadEndpoint="ci--4334.0.0--a--ef358d086b-k8s-calico--kube--controllers--6f9774cf66--vzhj6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4334.0.0--a--ef358d086b-k8s-calico--kube--controllers--6f9774cf66--vzhj6-eth0", GenerateName:"calico-kube-controllers-6f9774cf66-", Namespace:"calico-system", SelfLink:"", UID:"64dc643b-9fc0-4c8d-af95-cff808284f39", ResourceVersion:"752", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 18, 13, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6f9774cf66", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4334.0.0-a-ef358d086b", ContainerID:"", Pod:"calico-kube-controllers-6f9774cf66-vzhj6", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.51.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali6771ad4cbf6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 18:15:11.328138 containerd[1724]: 2025-05-14 18:15:11.300 [INFO][5165] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.51.194/32] ContainerID="b9641b8d7473b6b98338413d419101235510c8963bf5d516d2a6b528350d30e3" Namespace="calico-system" Pod="calico-kube-controllers-6f9774cf66-vzhj6" WorkloadEndpoint="ci--4334.0.0--a--ef358d086b-k8s-calico--kube--controllers--6f9774cf66--vzhj6-eth0" May 14 18:15:11.328138 containerd[1724]: 2025-05-14 18:15:11.300 [INFO][5165] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6771ad4cbf6 ContainerID="b9641b8d7473b6b98338413d419101235510c8963bf5d516d2a6b528350d30e3" Namespace="calico-system" Pod="calico-kube-controllers-6f9774cf66-vzhj6" WorkloadEndpoint="ci--4334.0.0--a--ef358d086b-k8s-calico--kube--controllers--6f9774cf66--vzhj6-eth0" May 14 18:15:11.328138 containerd[1724]: 2025-05-14 18:15:11.301 [INFO][5165] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="b9641b8d7473b6b98338413d419101235510c8963bf5d516d2a6b528350d30e3" Namespace="calico-system" Pod="calico-kube-controllers-6f9774cf66-vzhj6" WorkloadEndpoint="ci--4334.0.0--a--ef358d086b-k8s-calico--kube--controllers--6f9774cf66--vzhj6-eth0" May 14 18:15:11.328138 containerd[1724]: 2025-05-14 18:15:11.301 [INFO][5165] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="b9641b8d7473b6b98338413d419101235510c8963bf5d516d2a6b528350d30e3" Namespace="calico-system" Pod="calico-kube-controllers-6f9774cf66-vzhj6" WorkloadEndpoint="ci--4334.0.0--a--ef358d086b-k8s-calico--kube--controllers--6f9774cf66--vzhj6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4334.0.0--a--ef358d086b-k8s-calico--kube--controllers--6f9774cf66--vzhj6-eth0", GenerateName:"calico-kube-controllers-6f9774cf66-", Namespace:"calico-system", SelfLink:"", UID:"64dc643b-9fc0-4c8d-af95-cff808284f39", ResourceVersion:"752", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 18, 13, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6f9774cf66", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4334.0.0-a-ef358d086b", ContainerID:"b9641b8d7473b6b98338413d419101235510c8963bf5d516d2a6b528350d30e3", Pod:"calico-kube-controllers-6f9774cf66-vzhj6", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.51.194/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali6771ad4cbf6", MAC:"3a:cd:7f:b9:e8:79", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 18:15:11.328138 containerd[1724]: 2025-05-14 18:15:11.323 [INFO][5165] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="b9641b8d7473b6b98338413d419101235510c8963bf5d516d2a6b528350d30e3" Namespace="calico-system" Pod="calico-kube-controllers-6f9774cf66-vzhj6" WorkloadEndpoint="ci--4334.0.0--a--ef358d086b-k8s-calico--kube--controllers--6f9774cf66--vzhj6-eth0" May 14 18:15:11.443251 kubelet[3171]: I0514 18:15:11.443188 3171 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-649rn" podStartSLOduration=107.443160938 podStartE2EDuration="1m47.443160938s" podCreationTimestamp="2025-05-14 18:13:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 18:15:11.442394416 +0000 UTC m=+110.323509849" watchObservedRunningTime="2025-05-14 18:15:11.443160938 +0000 UTC m=+110.324276379" May 14 18:15:11.634734 containerd[1724]: time="2025-05-14T18:15:11.634632173Z" level=info msg="connecting to shim b9641b8d7473b6b98338413d419101235510c8963bf5d516d2a6b528350d30e3" address="unix:///run/containerd/s/418fc20d237943f6fc404bb1425cd687395daabd7519b8905f78fa30ca919caa" namespace=k8s.io protocol=ttrpc version=3 May 14 18:15:11.659820 systemd[1]: Started cri-containerd-b9641b8d7473b6b98338413d419101235510c8963bf5d516d2a6b528350d30e3.scope - libcontainer container b9641b8d7473b6b98338413d419101235510c8963bf5d516d2a6b528350d30e3. 
May 14 18:15:11.695447 containerd[1724]: time="2025-05-14T18:15:11.695391120Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6f9774cf66-vzhj6,Uid:64dc643b-9fc0-4c8d-af95-cff808284f39,Namespace:calico-system,Attempt:0,} returns sandbox id \"b9641b8d7473b6b98338413d419101235510c8963bf5d516d2a6b528350d30e3\"" May 14 18:15:11.696362 containerd[1724]: time="2025-05-14T18:15:11.696339721Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\"" May 14 18:15:11.867830 systemd-networkd[1285]: califd771d95545: Gained IPv6LL May 14 18:15:11.964550 sshd[5185]: Accepted publickey for core from 10.200.16.10 port 51634 ssh2: RSA SHA256:Hd5PQzbog4WemQvfyzJrpsOqYyUj2ZNV4jY0cJYqkm4 May 14 18:15:11.965560 sshd-session[5185]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:15:11.969125 systemd-logind[1686]: New session 10 of user core. May 14 18:15:11.973797 systemd[1]: Started session-10.scope - Session 10 of User core. 
May 14 18:15:12.189398 containerd[1724]: time="2025-05-14T18:15:12.189373567Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76c49d767b-77j92,Uid:ab7f552a-1fe4-440b-ae6d-7b449d960ef9,Namespace:calico-apiserver,Attempt:0,}" May 14 18:15:12.189884 containerd[1724]: time="2025-05-14T18:15:12.189442378Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-ffz6l,Uid:7d98339e-8cf1-447f-bf29-3e0a7594a179,Namespace:kube-system,Attempt:0,}" May 14 18:15:12.358119 systemd-networkd[1285]: calie060faca28d: Link UP May 14 18:15:12.359902 systemd-networkd[1285]: calie060faca28d: Gained carrier May 14 18:15:12.374866 containerd[1724]: 2025-05-14 18:15:12.295 [INFO][5258] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4334.0.0--a--ef358d086b-k8s-calico--apiserver--76c49d767b--77j92-eth0 calico-apiserver-76c49d767b- calico-apiserver ab7f552a-1fe4-440b-ae6d-7b449d960ef9 749 0 2025-05-14 18:13:37 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:76c49d767b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4334.0.0-a-ef358d086b calico-apiserver-76c49d767b-77j92 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calie060faca28d [] []}} ContainerID="6d8442cf9b36da936933d32b9895d78b2912d42427e0b48fb86eab0958f94b71" Namespace="calico-apiserver" Pod="calico-apiserver-76c49d767b-77j92" WorkloadEndpoint="ci--4334.0.0--a--ef358d086b-k8s-calico--apiserver--76c49d767b--77j92-" May 14 18:15:12.374866 containerd[1724]: 2025-05-14 18:15:12.295 [INFO][5258] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="6d8442cf9b36da936933d32b9895d78b2912d42427e0b48fb86eab0958f94b71" Namespace="calico-apiserver" Pod="calico-apiserver-76c49d767b-77j92" 
WorkloadEndpoint="ci--4334.0.0--a--ef358d086b-k8s-calico--apiserver--76c49d767b--77j92-eth0" May 14 18:15:12.374866 containerd[1724]: 2025-05-14 18:15:12.320 [INFO][5271] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6d8442cf9b36da936933d32b9895d78b2912d42427e0b48fb86eab0958f94b71" HandleID="k8s-pod-network.6d8442cf9b36da936933d32b9895d78b2912d42427e0b48fb86eab0958f94b71" Workload="ci--4334.0.0--a--ef358d086b-k8s-calico--apiserver--76c49d767b--77j92-eth0" May 14 18:15:12.374866 containerd[1724]: 2025-05-14 18:15:12.329 [INFO][5271] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6d8442cf9b36da936933d32b9895d78b2912d42427e0b48fb86eab0958f94b71" HandleID="k8s-pod-network.6d8442cf9b36da936933d32b9895d78b2912d42427e0b48fb86eab0958f94b71" Workload="ci--4334.0.0--a--ef358d086b-k8s-calico--apiserver--76c49d767b--77j92-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00011a150), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4334.0.0-a-ef358d086b", "pod":"calico-apiserver-76c49d767b-77j92", "timestamp":"2025-05-14 18:15:12.320554053 +0000 UTC"}, Hostname:"ci-4334.0.0-a-ef358d086b", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 14 18:15:12.374866 containerd[1724]: 2025-05-14 18:15:12.329 [INFO][5271] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 14 18:15:12.374866 containerd[1724]: 2025-05-14 18:15:12.329 [INFO][5271] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 14 18:15:12.374866 containerd[1724]: 2025-05-14 18:15:12.329 [INFO][5271] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4334.0.0-a-ef358d086b' May 14 18:15:12.374866 containerd[1724]: 2025-05-14 18:15:12.331 [INFO][5271] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.6d8442cf9b36da936933d32b9895d78b2912d42427e0b48fb86eab0958f94b71" host="ci-4334.0.0-a-ef358d086b" May 14 18:15:12.374866 containerd[1724]: 2025-05-14 18:15:12.334 [INFO][5271] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4334.0.0-a-ef358d086b" May 14 18:15:12.374866 containerd[1724]: 2025-05-14 18:15:12.339 [INFO][5271] ipam/ipam.go 489: Trying affinity for 192.168.51.192/26 host="ci-4334.0.0-a-ef358d086b" May 14 18:15:12.374866 containerd[1724]: 2025-05-14 18:15:12.340 [INFO][5271] ipam/ipam.go 155: Attempting to load block cidr=192.168.51.192/26 host="ci-4334.0.0-a-ef358d086b" May 14 18:15:12.374866 containerd[1724]: 2025-05-14 18:15:12.342 [INFO][5271] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.51.192/26 host="ci-4334.0.0-a-ef358d086b" May 14 18:15:12.374866 containerd[1724]: 2025-05-14 18:15:12.342 [INFO][5271] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.51.192/26 handle="k8s-pod-network.6d8442cf9b36da936933d32b9895d78b2912d42427e0b48fb86eab0958f94b71" host="ci-4334.0.0-a-ef358d086b" May 14 18:15:12.374866 containerd[1724]: 2025-05-14 18:15:12.343 [INFO][5271] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.6d8442cf9b36da936933d32b9895d78b2912d42427e0b48fb86eab0958f94b71 May 14 18:15:12.374866 containerd[1724]: 2025-05-14 18:15:12.348 [INFO][5271] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.51.192/26 handle="k8s-pod-network.6d8442cf9b36da936933d32b9895d78b2912d42427e0b48fb86eab0958f94b71" host="ci-4334.0.0-a-ef358d086b" May 14 18:15:12.374866 containerd[1724]: 2025-05-14 18:15:12.354 [INFO][5271] ipam/ipam.go 1216: 
Successfully claimed IPs: [192.168.51.195/26] block=192.168.51.192/26 handle="k8s-pod-network.6d8442cf9b36da936933d32b9895d78b2912d42427e0b48fb86eab0958f94b71" host="ci-4334.0.0-a-ef358d086b" May 14 18:15:12.374866 containerd[1724]: 2025-05-14 18:15:12.354 [INFO][5271] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.51.195/26] handle="k8s-pod-network.6d8442cf9b36da936933d32b9895d78b2912d42427e0b48fb86eab0958f94b71" host="ci-4334.0.0-a-ef358d086b" May 14 18:15:12.374866 containerd[1724]: 2025-05-14 18:15:12.354 [INFO][5271] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 14 18:15:12.374866 containerd[1724]: 2025-05-14 18:15:12.354 [INFO][5271] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.51.195/26] IPv6=[] ContainerID="6d8442cf9b36da936933d32b9895d78b2912d42427e0b48fb86eab0958f94b71" HandleID="k8s-pod-network.6d8442cf9b36da936933d32b9895d78b2912d42427e0b48fb86eab0958f94b71" Workload="ci--4334.0.0--a--ef358d086b-k8s-calico--apiserver--76c49d767b--77j92-eth0" May 14 18:15:12.375672 containerd[1724]: 2025-05-14 18:15:12.356 [INFO][5258] cni-plugin/k8s.go 386: Populated endpoint ContainerID="6d8442cf9b36da936933d32b9895d78b2912d42427e0b48fb86eab0958f94b71" Namespace="calico-apiserver" Pod="calico-apiserver-76c49d767b-77j92" WorkloadEndpoint="ci--4334.0.0--a--ef358d086b-k8s-calico--apiserver--76c49d767b--77j92-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4334.0.0--a--ef358d086b-k8s-calico--apiserver--76c49d767b--77j92-eth0", GenerateName:"calico-apiserver-76c49d767b-", Namespace:"calico-apiserver", SelfLink:"", UID:"ab7f552a-1fe4-440b-ae6d-7b449d960ef9", ResourceVersion:"749", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 18, 13, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"76c49d767b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4334.0.0-a-ef358d086b", ContainerID:"", Pod:"calico-apiserver-76c49d767b-77j92", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.51.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie060faca28d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 18:15:12.375672 containerd[1724]: 2025-05-14 18:15:12.356 [INFO][5258] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.51.195/32] ContainerID="6d8442cf9b36da936933d32b9895d78b2912d42427e0b48fb86eab0958f94b71" Namespace="calico-apiserver" Pod="calico-apiserver-76c49d767b-77j92" WorkloadEndpoint="ci--4334.0.0--a--ef358d086b-k8s-calico--apiserver--76c49d767b--77j92-eth0" May 14 18:15:12.375672 containerd[1724]: 2025-05-14 18:15:12.356 [INFO][5258] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie060faca28d ContainerID="6d8442cf9b36da936933d32b9895d78b2912d42427e0b48fb86eab0958f94b71" Namespace="calico-apiserver" Pod="calico-apiserver-76c49d767b-77j92" WorkloadEndpoint="ci--4334.0.0--a--ef358d086b-k8s-calico--apiserver--76c49d767b--77j92-eth0" May 14 18:15:12.375672 containerd[1724]: 2025-05-14 18:15:12.358 [INFO][5258] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6d8442cf9b36da936933d32b9895d78b2912d42427e0b48fb86eab0958f94b71" Namespace="calico-apiserver" Pod="calico-apiserver-76c49d767b-77j92" 
WorkloadEndpoint="ci--4334.0.0--a--ef358d086b-k8s-calico--apiserver--76c49d767b--77j92-eth0" May 14 18:15:12.375672 containerd[1724]: 2025-05-14 18:15:12.359 [INFO][5258] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="6d8442cf9b36da936933d32b9895d78b2912d42427e0b48fb86eab0958f94b71" Namespace="calico-apiserver" Pod="calico-apiserver-76c49d767b-77j92" WorkloadEndpoint="ci--4334.0.0--a--ef358d086b-k8s-calico--apiserver--76c49d767b--77j92-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4334.0.0--a--ef358d086b-k8s-calico--apiserver--76c49d767b--77j92-eth0", GenerateName:"calico-apiserver-76c49d767b-", Namespace:"calico-apiserver", SelfLink:"", UID:"ab7f552a-1fe4-440b-ae6d-7b449d960ef9", ResourceVersion:"749", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 18, 13, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"76c49d767b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4334.0.0-a-ef358d086b", ContainerID:"6d8442cf9b36da936933d32b9895d78b2912d42427e0b48fb86eab0958f94b71", Pod:"calico-apiserver-76c49d767b-77j92", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.51.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie060faca28d", MAC:"b6:e1:5c:30:43:c9", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 18:15:12.375672 containerd[1724]: 2025-05-14 18:15:12.373 [INFO][5258] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="6d8442cf9b36da936933d32b9895d78b2912d42427e0b48fb86eab0958f94b71" Namespace="calico-apiserver" Pod="calico-apiserver-76c49d767b-77j92" WorkloadEndpoint="ci--4334.0.0--a--ef358d086b-k8s-calico--apiserver--76c49d767b--77j92-eth0" May 14 18:15:12.457330 systemd-networkd[1285]: cali467c8ce0282: Link UP May 14 18:15:12.457435 systemd-networkd[1285]: cali467c8ce0282: Gained carrier May 14 18:15:12.470197 containerd[1724]: 2025-05-14 18:15:12.412 [INFO][5297] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4334.0.0--a--ef358d086b-k8s-coredns--668d6bf9bc--ffz6l-eth0 coredns-668d6bf9bc- kube-system 7d98339e-8cf1-447f-bf29-3e0a7594a179 754 0 2025-05-14 18:13:24 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4334.0.0-a-ef358d086b coredns-668d6bf9bc-ffz6l eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali467c8ce0282 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="ccbc24ae0eae918a5f8976198924a3048f0d1727de6060278d409463630d6abf" Namespace="kube-system" Pod="coredns-668d6bf9bc-ffz6l" WorkloadEndpoint="ci--4334.0.0--a--ef358d086b-k8s-coredns--668d6bf9bc--ffz6l-" May 14 18:15:12.470197 containerd[1724]: 2025-05-14 18:15:12.412 [INFO][5297] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="ccbc24ae0eae918a5f8976198924a3048f0d1727de6060278d409463630d6abf" Namespace="kube-system" Pod="coredns-668d6bf9bc-ffz6l" WorkloadEndpoint="ci--4334.0.0--a--ef358d086b-k8s-coredns--668d6bf9bc--ffz6l-eth0" May 14 18:15:12.470197 containerd[1724]: 2025-05-14 18:15:12.428 [INFO][5311] 
ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ccbc24ae0eae918a5f8976198924a3048f0d1727de6060278d409463630d6abf" HandleID="k8s-pod-network.ccbc24ae0eae918a5f8976198924a3048f0d1727de6060278d409463630d6abf" Workload="ci--4334.0.0--a--ef358d086b-k8s-coredns--668d6bf9bc--ffz6l-eth0" May 14 18:15:12.470197 containerd[1724]: 2025-05-14 18:15:12.435 [INFO][5311] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ccbc24ae0eae918a5f8976198924a3048f0d1727de6060278d409463630d6abf" HandleID="k8s-pod-network.ccbc24ae0eae918a5f8976198924a3048f0d1727de6060278d409463630d6abf" Workload="ci--4334.0.0--a--ef358d086b-k8s-coredns--668d6bf9bc--ffz6l-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000290b20), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4334.0.0-a-ef358d086b", "pod":"coredns-668d6bf9bc-ffz6l", "timestamp":"2025-05-14 18:15:12.428533639 +0000 UTC"}, Hostname:"ci-4334.0.0-a-ef358d086b", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 14 18:15:12.470197 containerd[1724]: 2025-05-14 18:15:12.435 [INFO][5311] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 14 18:15:12.470197 containerd[1724]: 2025-05-14 18:15:12.435 [INFO][5311] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 14 18:15:12.470197 containerd[1724]: 2025-05-14 18:15:12.435 [INFO][5311] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4334.0.0-a-ef358d086b' May 14 18:15:12.470197 containerd[1724]: 2025-05-14 18:15:12.436 [INFO][5311] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.ccbc24ae0eae918a5f8976198924a3048f0d1727de6060278d409463630d6abf" host="ci-4334.0.0-a-ef358d086b" May 14 18:15:12.470197 containerd[1724]: 2025-05-14 18:15:12.439 [INFO][5311] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4334.0.0-a-ef358d086b" May 14 18:15:12.470197 containerd[1724]: 2025-05-14 18:15:12.441 [INFO][5311] ipam/ipam.go 489: Trying affinity for 192.168.51.192/26 host="ci-4334.0.0-a-ef358d086b" May 14 18:15:12.470197 containerd[1724]: 2025-05-14 18:15:12.443 [INFO][5311] ipam/ipam.go 155: Attempting to load block cidr=192.168.51.192/26 host="ci-4334.0.0-a-ef358d086b" May 14 18:15:12.470197 containerd[1724]: 2025-05-14 18:15:12.444 [INFO][5311] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.51.192/26 host="ci-4334.0.0-a-ef358d086b" May 14 18:15:12.470197 containerd[1724]: 2025-05-14 18:15:12.444 [INFO][5311] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.51.192/26 handle="k8s-pod-network.ccbc24ae0eae918a5f8976198924a3048f0d1727de6060278d409463630d6abf" host="ci-4334.0.0-a-ef358d086b" May 14 18:15:12.470197 containerd[1724]: 2025-05-14 18:15:12.445 [INFO][5311] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.ccbc24ae0eae918a5f8976198924a3048f0d1727de6060278d409463630d6abf May 14 18:15:12.470197 containerd[1724]: 2025-05-14 18:15:12.448 [INFO][5311] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.51.192/26 handle="k8s-pod-network.ccbc24ae0eae918a5f8976198924a3048f0d1727de6060278d409463630d6abf" host="ci-4334.0.0-a-ef358d086b" May 14 18:15:12.470197 containerd[1724]: 2025-05-14 18:15:12.454 [INFO][5311] ipam/ipam.go 1216: 
Successfully claimed IPs: [192.168.51.196/26] block=192.168.51.192/26 handle="k8s-pod-network.ccbc24ae0eae918a5f8976198924a3048f0d1727de6060278d409463630d6abf" host="ci-4334.0.0-a-ef358d086b" May 14 18:15:12.470197 containerd[1724]: 2025-05-14 18:15:12.454 [INFO][5311] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.51.196/26] handle="k8s-pod-network.ccbc24ae0eae918a5f8976198924a3048f0d1727de6060278d409463630d6abf" host="ci-4334.0.0-a-ef358d086b" May 14 18:15:12.470197 containerd[1724]: 2025-05-14 18:15:12.454 [INFO][5311] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 14 18:15:12.470197 containerd[1724]: 2025-05-14 18:15:12.454 [INFO][5311] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.51.196/26] IPv6=[] ContainerID="ccbc24ae0eae918a5f8976198924a3048f0d1727de6060278d409463630d6abf" HandleID="k8s-pod-network.ccbc24ae0eae918a5f8976198924a3048f0d1727de6060278d409463630d6abf" Workload="ci--4334.0.0--a--ef358d086b-k8s-coredns--668d6bf9bc--ffz6l-eth0" May 14 18:15:12.470644 containerd[1724]: 2025-05-14 18:15:12.455 [INFO][5297] cni-plugin/k8s.go 386: Populated endpoint ContainerID="ccbc24ae0eae918a5f8976198924a3048f0d1727de6060278d409463630d6abf" Namespace="kube-system" Pod="coredns-668d6bf9bc-ffz6l" WorkloadEndpoint="ci--4334.0.0--a--ef358d086b-k8s-coredns--668d6bf9bc--ffz6l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4334.0.0--a--ef358d086b-k8s-coredns--668d6bf9bc--ffz6l-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"7d98339e-8cf1-447f-bf29-3e0a7594a179", ResourceVersion:"754", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 18, 13, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4334.0.0-a-ef358d086b", ContainerID:"", Pod:"coredns-668d6bf9bc-ffz6l", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.51.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali467c8ce0282", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 18:15:12.470644 containerd[1724]: 2025-05-14 18:15:12.455 [INFO][5297] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.51.196/32] ContainerID="ccbc24ae0eae918a5f8976198924a3048f0d1727de6060278d409463630d6abf" Namespace="kube-system" Pod="coredns-668d6bf9bc-ffz6l" WorkloadEndpoint="ci--4334.0.0--a--ef358d086b-k8s-coredns--668d6bf9bc--ffz6l-eth0" May 14 18:15:12.470644 containerd[1724]: 2025-05-14 18:15:12.455 [INFO][5297] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali467c8ce0282 ContainerID="ccbc24ae0eae918a5f8976198924a3048f0d1727de6060278d409463630d6abf" Namespace="kube-system" Pod="coredns-668d6bf9bc-ffz6l" WorkloadEndpoint="ci--4334.0.0--a--ef358d086b-k8s-coredns--668d6bf9bc--ffz6l-eth0" May 14 18:15:12.470644 containerd[1724]: 2025-05-14 18:15:12.456 [INFO][5297] cni-plugin/dataplane_linux.go 508: Disabling IPv4 
forwarding ContainerID="ccbc24ae0eae918a5f8976198924a3048f0d1727de6060278d409463630d6abf" Namespace="kube-system" Pod="coredns-668d6bf9bc-ffz6l" WorkloadEndpoint="ci--4334.0.0--a--ef358d086b-k8s-coredns--668d6bf9bc--ffz6l-eth0" May 14 18:15:12.470644 containerd[1724]: 2025-05-14 18:15:12.457 [INFO][5297] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="ccbc24ae0eae918a5f8976198924a3048f0d1727de6060278d409463630d6abf" Namespace="kube-system" Pod="coredns-668d6bf9bc-ffz6l" WorkloadEndpoint="ci--4334.0.0--a--ef358d086b-k8s-coredns--668d6bf9bc--ffz6l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4334.0.0--a--ef358d086b-k8s-coredns--668d6bf9bc--ffz6l-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"7d98339e-8cf1-447f-bf29-3e0a7594a179", ResourceVersion:"754", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 18, 13, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4334.0.0-a-ef358d086b", ContainerID:"ccbc24ae0eae918a5f8976198924a3048f0d1727de6060278d409463630d6abf", Pod:"coredns-668d6bf9bc-ffz6l", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.51.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali467c8ce0282", MAC:"f6:b5:27:54:d2:1f", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 18:15:12.470644 containerd[1724]: 2025-05-14 18:15:12.469 [INFO][5297] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="ccbc24ae0eae918a5f8976198924a3048f0d1727de6060278d409463630d6abf" Namespace="kube-system" Pod="coredns-668d6bf9bc-ffz6l" WorkloadEndpoint="ci--4334.0.0--a--ef358d086b-k8s-coredns--668d6bf9bc--ffz6l-eth0" May 14 18:15:12.505656 sshd[5256]: Connection closed by 10.200.16.10 port 51634 May 14 18:15:12.505995 sshd-session[5185]: pam_unix(sshd:session): session closed for user core May 14 18:15:12.508051 systemd[1]: sshd@7-10.200.8.47:22-10.200.16.10:51634.service: Deactivated successfully. May 14 18:15:12.509535 systemd[1]: session-10.scope: Deactivated successfully. May 14 18:15:12.510770 systemd-logind[1686]: Session 10 logged out. Waiting for processes to exit. May 14 18:15:12.511622 systemd-logind[1686]: Removed session 10. 
May 14 18:15:13.019862 systemd-networkd[1285]: cali6771ad4cbf6: Gained IPv6LL May 14 18:15:13.188591 containerd[1724]: time="2025-05-14T18:15:13.188310886Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76c49d767b-49pnq,Uid:42f0af52-9025-4d2d-8452-e97d6dcfc33e,Namespace:calico-apiserver,Attempt:0,}" May 14 18:15:13.188591 containerd[1724]: time="2025-05-14T18:15:13.188466035Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-p469r,Uid:5e55bc54-125f-45b4-a2a9-22003bd5a2c5,Namespace:calico-system,Attempt:0,}" May 14 18:15:13.326966 containerd[1724]: time="2025-05-14T18:15:13.326876860Z" level=info msg="connecting to shim 6d8442cf9b36da936933d32b9895d78b2912d42427e0b48fb86eab0958f94b71" address="unix:///run/containerd/s/64c5ea0f5e94d7d063ea65d0bb43b4b754a38a2c5ef06bd43f6c7f22bdc447a2" namespace=k8s.io protocol=ttrpc version=3 May 14 18:15:13.348824 systemd[1]: Started cri-containerd-6d8442cf9b36da936933d32b9895d78b2912d42427e0b48fb86eab0958f94b71.scope - libcontainer container 6d8442cf9b36da936933d32b9895d78b2912d42427e0b48fb86eab0958f94b71. May 14 18:15:13.437035 containerd[1724]: time="2025-05-14T18:15:13.437005359Z" level=info msg="connecting to shim ccbc24ae0eae918a5f8976198924a3048f0d1727de6060278d409463630d6abf" address="unix:///run/containerd/s/fd797a4c1b2dc18b77d5284b673a3e9e78762408c60d5a2feea63f06dc4d0dae" namespace=k8s.io protocol=ttrpc version=3 May 14 18:15:13.456805 systemd[1]: Started cri-containerd-ccbc24ae0eae918a5f8976198924a3048f0d1727de6060278d409463630d6abf.scope - libcontainer container ccbc24ae0eae918a5f8976198924a3048f0d1727de6060278d409463630d6abf. 
May 14 18:15:13.479461 containerd[1724]: time="2025-05-14T18:15:13.479435710Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76c49d767b-77j92,Uid:ab7f552a-1fe4-440b-ae6d-7b449d960ef9,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"6d8442cf9b36da936933d32b9895d78b2912d42427e0b48fb86eab0958f94b71\"" May 14 18:15:13.606059 systemd-networkd[1285]: cali4797986e090: Link UP May 14 18:15:13.606167 systemd-networkd[1285]: cali4797986e090: Gained carrier May 14 18:15:13.622597 containerd[1724]: 2025-05-14 18:15:13.545 [INFO][5429] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4334.0.0--a--ef358d086b-k8s-calico--apiserver--76c49d767b--49pnq-eth0 calico-apiserver-76c49d767b- calico-apiserver 42f0af52-9025-4d2d-8452-e97d6dcfc33e 756 0 2025-05-14 18:13:37 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:76c49d767b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4334.0.0-a-ef358d086b calico-apiserver-76c49d767b-49pnq eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali4797986e090 [] []}} ContainerID="12c8aa3897207f21d06630955620539dd56fbb5b433aaaae812156c8e103d5e1" Namespace="calico-apiserver" Pod="calico-apiserver-76c49d767b-49pnq" WorkloadEndpoint="ci--4334.0.0--a--ef358d086b-k8s-calico--apiserver--76c49d767b--49pnq-" May 14 18:15:13.622597 containerd[1724]: 2025-05-14 18:15:13.545 [INFO][5429] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="12c8aa3897207f21d06630955620539dd56fbb5b433aaaae812156c8e103d5e1" Namespace="calico-apiserver" Pod="calico-apiserver-76c49d767b-49pnq" WorkloadEndpoint="ci--4334.0.0--a--ef358d086b-k8s-calico--apiserver--76c49d767b--49pnq-eth0" May 14 18:15:13.622597 containerd[1724]: 2025-05-14 18:15:13.564 [INFO][5442] 
ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="12c8aa3897207f21d06630955620539dd56fbb5b433aaaae812156c8e103d5e1" HandleID="k8s-pod-network.12c8aa3897207f21d06630955620539dd56fbb5b433aaaae812156c8e103d5e1" Workload="ci--4334.0.0--a--ef358d086b-k8s-calico--apiserver--76c49d767b--49pnq-eth0" May 14 18:15:13.622597 containerd[1724]: 2025-05-14 18:15:13.570 [INFO][5442] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="12c8aa3897207f21d06630955620539dd56fbb5b433aaaae812156c8e103d5e1" HandleID="k8s-pod-network.12c8aa3897207f21d06630955620539dd56fbb5b433aaaae812156c8e103d5e1" Workload="ci--4334.0.0--a--ef358d086b-k8s-calico--apiserver--76c49d767b--49pnq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000290f30), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4334.0.0-a-ef358d086b", "pod":"calico-apiserver-76c49d767b-49pnq", "timestamp":"2025-05-14 18:15:13.564037939 +0000 UTC"}, Hostname:"ci-4334.0.0-a-ef358d086b", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 14 18:15:13.622597 containerd[1724]: 2025-05-14 18:15:13.570 [INFO][5442] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 14 18:15:13.622597 containerd[1724]: 2025-05-14 18:15:13.570 [INFO][5442] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 14 18:15:13.622597 containerd[1724]: 2025-05-14 18:15:13.570 [INFO][5442] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4334.0.0-a-ef358d086b' May 14 18:15:13.622597 containerd[1724]: 2025-05-14 18:15:13.572 [INFO][5442] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.12c8aa3897207f21d06630955620539dd56fbb5b433aaaae812156c8e103d5e1" host="ci-4334.0.0-a-ef358d086b" May 14 18:15:13.622597 containerd[1724]: 2025-05-14 18:15:13.576 [INFO][5442] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4334.0.0-a-ef358d086b" May 14 18:15:13.622597 containerd[1724]: 2025-05-14 18:15:13.582 [INFO][5442] ipam/ipam.go 489: Trying affinity for 192.168.51.192/26 host="ci-4334.0.0-a-ef358d086b" May 14 18:15:13.622597 containerd[1724]: 2025-05-14 18:15:13.583 [INFO][5442] ipam/ipam.go 155: Attempting to load block cidr=192.168.51.192/26 host="ci-4334.0.0-a-ef358d086b" May 14 18:15:13.622597 containerd[1724]: 2025-05-14 18:15:13.585 [INFO][5442] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.51.192/26 host="ci-4334.0.0-a-ef358d086b" May 14 18:15:13.622597 containerd[1724]: 2025-05-14 18:15:13.585 [INFO][5442] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.51.192/26 handle="k8s-pod-network.12c8aa3897207f21d06630955620539dd56fbb5b433aaaae812156c8e103d5e1" host="ci-4334.0.0-a-ef358d086b" May 14 18:15:13.622597 containerd[1724]: 2025-05-14 18:15:13.587 [INFO][5442] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.12c8aa3897207f21d06630955620539dd56fbb5b433aaaae812156c8e103d5e1 May 14 18:15:13.622597 containerd[1724]: 2025-05-14 18:15:13.591 [INFO][5442] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.51.192/26 handle="k8s-pod-network.12c8aa3897207f21d06630955620539dd56fbb5b433aaaae812156c8e103d5e1" host="ci-4334.0.0-a-ef358d086b" May 14 18:15:13.622597 containerd[1724]: 2025-05-14 18:15:13.598 [INFO][5442] ipam/ipam.go 1216: 
Successfully claimed IPs: [192.168.51.197/26] block=192.168.51.192/26 handle="k8s-pod-network.12c8aa3897207f21d06630955620539dd56fbb5b433aaaae812156c8e103d5e1" host="ci-4334.0.0-a-ef358d086b" May 14 18:15:13.622597 containerd[1724]: 2025-05-14 18:15:13.599 [INFO][5442] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.51.197/26] handle="k8s-pod-network.12c8aa3897207f21d06630955620539dd56fbb5b433aaaae812156c8e103d5e1" host="ci-4334.0.0-a-ef358d086b" May 14 18:15:13.622597 containerd[1724]: 2025-05-14 18:15:13.599 [INFO][5442] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 14 18:15:13.622597 containerd[1724]: 2025-05-14 18:15:13.599 [INFO][5442] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.51.197/26] IPv6=[] ContainerID="12c8aa3897207f21d06630955620539dd56fbb5b433aaaae812156c8e103d5e1" HandleID="k8s-pod-network.12c8aa3897207f21d06630955620539dd56fbb5b433aaaae812156c8e103d5e1" Workload="ci--4334.0.0--a--ef358d086b-k8s-calico--apiserver--76c49d767b--49pnq-eth0" May 14 18:15:13.623067 containerd[1724]: 2025-05-14 18:15:13.600 [INFO][5429] cni-plugin/k8s.go 386: Populated endpoint ContainerID="12c8aa3897207f21d06630955620539dd56fbb5b433aaaae812156c8e103d5e1" Namespace="calico-apiserver" Pod="calico-apiserver-76c49d767b-49pnq" WorkloadEndpoint="ci--4334.0.0--a--ef358d086b-k8s-calico--apiserver--76c49d767b--49pnq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4334.0.0--a--ef358d086b-k8s-calico--apiserver--76c49d767b--49pnq-eth0", GenerateName:"calico-apiserver-76c49d767b-", Namespace:"calico-apiserver", SelfLink:"", UID:"42f0af52-9025-4d2d-8452-e97d6dcfc33e", ResourceVersion:"756", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 18, 13, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"76c49d767b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4334.0.0-a-ef358d086b", ContainerID:"", Pod:"calico-apiserver-76c49d767b-49pnq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.51.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4797986e090", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 18:15:13.623067 containerd[1724]: 2025-05-14 18:15:13.601 [INFO][5429] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.51.197/32] ContainerID="12c8aa3897207f21d06630955620539dd56fbb5b433aaaae812156c8e103d5e1" Namespace="calico-apiserver" Pod="calico-apiserver-76c49d767b-49pnq" WorkloadEndpoint="ci--4334.0.0--a--ef358d086b-k8s-calico--apiserver--76c49d767b--49pnq-eth0" May 14 18:15:13.623067 containerd[1724]: 2025-05-14 18:15:13.601 [INFO][5429] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4797986e090 ContainerID="12c8aa3897207f21d06630955620539dd56fbb5b433aaaae812156c8e103d5e1" Namespace="calico-apiserver" Pod="calico-apiserver-76c49d767b-49pnq" WorkloadEndpoint="ci--4334.0.0--a--ef358d086b-k8s-calico--apiserver--76c49d767b--49pnq-eth0" May 14 18:15:13.623067 containerd[1724]: 2025-05-14 18:15:13.603 [INFO][5429] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="12c8aa3897207f21d06630955620539dd56fbb5b433aaaae812156c8e103d5e1" Namespace="calico-apiserver" Pod="calico-apiserver-76c49d767b-49pnq" 
WorkloadEndpoint="ci--4334.0.0--a--ef358d086b-k8s-calico--apiserver--76c49d767b--49pnq-eth0" May 14 18:15:13.623067 containerd[1724]: 2025-05-14 18:15:13.604 [INFO][5429] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="12c8aa3897207f21d06630955620539dd56fbb5b433aaaae812156c8e103d5e1" Namespace="calico-apiserver" Pod="calico-apiserver-76c49d767b-49pnq" WorkloadEndpoint="ci--4334.0.0--a--ef358d086b-k8s-calico--apiserver--76c49d767b--49pnq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4334.0.0--a--ef358d086b-k8s-calico--apiserver--76c49d767b--49pnq-eth0", GenerateName:"calico-apiserver-76c49d767b-", Namespace:"calico-apiserver", SelfLink:"", UID:"42f0af52-9025-4d2d-8452-e97d6dcfc33e", ResourceVersion:"756", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 18, 13, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"76c49d767b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4334.0.0-a-ef358d086b", ContainerID:"12c8aa3897207f21d06630955620539dd56fbb5b433aaaae812156c8e103d5e1", Pod:"calico-apiserver-76c49d767b-49pnq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.51.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4797986e090", MAC:"ea:a3:63:d4:60:b3", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 18:15:13.623067 containerd[1724]: 2025-05-14 18:15:13.618 [INFO][5429] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="12c8aa3897207f21d06630955620539dd56fbb5b433aaaae812156c8e103d5e1" Namespace="calico-apiserver" Pod="calico-apiserver-76c49d767b-49pnq" WorkloadEndpoint="ci--4334.0.0--a--ef358d086b-k8s-calico--apiserver--76c49d767b--49pnq-eth0" May 14 18:15:13.682707 containerd[1724]: time="2025-05-14T18:15:13.682623499Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-ffz6l,Uid:7d98339e-8cf1-447f-bf29-3e0a7594a179,Namespace:kube-system,Attempt:0,} returns sandbox id \"ccbc24ae0eae918a5f8976198924a3048f0d1727de6060278d409463630d6abf\"" May 14 18:15:13.686905 containerd[1724]: time="2025-05-14T18:15:13.686876264Z" level=info msg="CreateContainer within sandbox \"ccbc24ae0eae918a5f8976198924a3048f0d1727de6060278d409463630d6abf\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 14 18:15:13.703781 systemd-networkd[1285]: cali2f7dc49389d: Link UP May 14 18:15:13.704752 systemd-networkd[1285]: cali2f7dc49389d: Gained carrier May 14 18:15:13.717319 containerd[1724]: 2025-05-14 18:15:13.601 [INFO][5449] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4334.0.0--a--ef358d086b-k8s-csi--node--driver--p469r-eth0 csi-node-driver- calico-system 5e55bc54-125f-45b4-a2a9-22003bd5a2c5 597 0 2025-05-14 18:13:39 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:5b5cc68cd5 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4334.0.0-a-ef358d086b csi-node-driver-p469r eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali2f7dc49389d [] []}} 
ContainerID="5ccf6343d1e3e690f4d8ada9299e8c334e52550ec3a5496fed98f7c675554542" Namespace="calico-system" Pod="csi-node-driver-p469r" WorkloadEndpoint="ci--4334.0.0--a--ef358d086b-k8s-csi--node--driver--p469r-" May 14 18:15:13.717319 containerd[1724]: 2025-05-14 18:15:13.601 [INFO][5449] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="5ccf6343d1e3e690f4d8ada9299e8c334e52550ec3a5496fed98f7c675554542" Namespace="calico-system" Pod="csi-node-driver-p469r" WorkloadEndpoint="ci--4334.0.0--a--ef358d086b-k8s-csi--node--driver--p469r-eth0" May 14 18:15:13.717319 containerd[1724]: 2025-05-14 18:15:13.644 [INFO][5465] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5ccf6343d1e3e690f4d8ada9299e8c334e52550ec3a5496fed98f7c675554542" HandleID="k8s-pod-network.5ccf6343d1e3e690f4d8ada9299e8c334e52550ec3a5496fed98f7c675554542" Workload="ci--4334.0.0--a--ef358d086b-k8s-csi--node--driver--p469r-eth0" May 14 18:15:13.717319 containerd[1724]: 2025-05-14 18:15:13.670 [INFO][5465] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5ccf6343d1e3e690f4d8ada9299e8c334e52550ec3a5496fed98f7c675554542" HandleID="k8s-pod-network.5ccf6343d1e3e690f4d8ada9299e8c334e52550ec3a5496fed98f7c675554542" Workload="ci--4334.0.0--a--ef358d086b-k8s-csi--node--driver--p469r-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00029f7b0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4334.0.0-a-ef358d086b", "pod":"csi-node-driver-p469r", "timestamp":"2025-05-14 18:15:13.644205031 +0000 UTC"}, Hostname:"ci-4334.0.0-a-ef358d086b", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 14 18:15:13.717319 containerd[1724]: 2025-05-14 18:15:13.670 [INFO][5465] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
May 14 18:15:13.717319 containerd[1724]: 2025-05-14 18:15:13.670 [INFO][5465] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 14 18:15:13.717319 containerd[1724]: 2025-05-14 18:15:13.670 [INFO][5465] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4334.0.0-a-ef358d086b' May 14 18:15:13.717319 containerd[1724]: 2025-05-14 18:15:13.671 [INFO][5465] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.5ccf6343d1e3e690f4d8ada9299e8c334e52550ec3a5496fed98f7c675554542" host="ci-4334.0.0-a-ef358d086b" May 14 18:15:13.717319 containerd[1724]: 2025-05-14 18:15:13.674 [INFO][5465] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4334.0.0-a-ef358d086b" May 14 18:15:13.717319 containerd[1724]: 2025-05-14 18:15:13.682 [INFO][5465] ipam/ipam.go 489: Trying affinity for 192.168.51.192/26 host="ci-4334.0.0-a-ef358d086b" May 14 18:15:13.717319 containerd[1724]: 2025-05-14 18:15:13.684 [INFO][5465] ipam/ipam.go 155: Attempting to load block cidr=192.168.51.192/26 host="ci-4334.0.0-a-ef358d086b" May 14 18:15:13.717319 containerd[1724]: 2025-05-14 18:15:13.688 [INFO][5465] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.51.192/26 host="ci-4334.0.0-a-ef358d086b" May 14 18:15:13.717319 containerd[1724]: 2025-05-14 18:15:13.688 [INFO][5465] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.51.192/26 handle="k8s-pod-network.5ccf6343d1e3e690f4d8ada9299e8c334e52550ec3a5496fed98f7c675554542" host="ci-4334.0.0-a-ef358d086b" May 14 18:15:13.717319 containerd[1724]: 2025-05-14 18:15:13.689 [INFO][5465] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.5ccf6343d1e3e690f4d8ada9299e8c334e52550ec3a5496fed98f7c675554542 May 14 18:15:13.717319 containerd[1724]: 2025-05-14 18:15:13.692 [INFO][5465] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.51.192/26 handle="k8s-pod-network.5ccf6343d1e3e690f4d8ada9299e8c334e52550ec3a5496fed98f7c675554542" 
host="ci-4334.0.0-a-ef358d086b" May 14 18:15:13.717319 containerd[1724]: 2025-05-14 18:15:13.698 [INFO][5465] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.51.198/26] block=192.168.51.192/26 handle="k8s-pod-network.5ccf6343d1e3e690f4d8ada9299e8c334e52550ec3a5496fed98f7c675554542" host="ci-4334.0.0-a-ef358d086b" May 14 18:15:13.717319 containerd[1724]: 2025-05-14 18:15:13.698 [INFO][5465] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.51.198/26] handle="k8s-pod-network.5ccf6343d1e3e690f4d8ada9299e8c334e52550ec3a5496fed98f7c675554542" host="ci-4334.0.0-a-ef358d086b" May 14 18:15:13.717319 containerd[1724]: 2025-05-14 18:15:13.698 [INFO][5465] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 14 18:15:13.717319 containerd[1724]: 2025-05-14 18:15:13.698 [INFO][5465] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.51.198/26] IPv6=[] ContainerID="5ccf6343d1e3e690f4d8ada9299e8c334e52550ec3a5496fed98f7c675554542" HandleID="k8s-pod-network.5ccf6343d1e3e690f4d8ada9299e8c334e52550ec3a5496fed98f7c675554542" Workload="ci--4334.0.0--a--ef358d086b-k8s-csi--node--driver--p469r-eth0" May 14 18:15:13.717907 containerd[1724]: 2025-05-14 18:15:13.700 [INFO][5449] cni-plugin/k8s.go 386: Populated endpoint ContainerID="5ccf6343d1e3e690f4d8ada9299e8c334e52550ec3a5496fed98f7c675554542" Namespace="calico-system" Pod="csi-node-driver-p469r" WorkloadEndpoint="ci--4334.0.0--a--ef358d086b-k8s-csi--node--driver--p469r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4334.0.0--a--ef358d086b-k8s-csi--node--driver--p469r-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"5e55bc54-125f-45b4-a2a9-22003bd5a2c5", ResourceVersion:"597", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 18, 13, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5b5cc68cd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4334.0.0-a-ef358d086b", ContainerID:"", Pod:"csi-node-driver-p469r", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.51.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali2f7dc49389d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 18:15:13.717907 containerd[1724]: 2025-05-14 18:15:13.700 [INFO][5449] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.51.198/32] ContainerID="5ccf6343d1e3e690f4d8ada9299e8c334e52550ec3a5496fed98f7c675554542" Namespace="calico-system" Pod="csi-node-driver-p469r" WorkloadEndpoint="ci--4334.0.0--a--ef358d086b-k8s-csi--node--driver--p469r-eth0" May 14 18:15:13.717907 containerd[1724]: 2025-05-14 18:15:13.700 [INFO][5449] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2f7dc49389d ContainerID="5ccf6343d1e3e690f4d8ada9299e8c334e52550ec3a5496fed98f7c675554542" Namespace="calico-system" Pod="csi-node-driver-p469r" WorkloadEndpoint="ci--4334.0.0--a--ef358d086b-k8s-csi--node--driver--p469r-eth0" May 14 18:15:13.717907 containerd[1724]: 2025-05-14 18:15:13.704 [INFO][5449] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5ccf6343d1e3e690f4d8ada9299e8c334e52550ec3a5496fed98f7c675554542" Namespace="calico-system" Pod="csi-node-driver-p469r" 
WorkloadEndpoint="ci--4334.0.0--a--ef358d086b-k8s-csi--node--driver--p469r-eth0" May 14 18:15:13.717907 containerd[1724]: 2025-05-14 18:15:13.704 [INFO][5449] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="5ccf6343d1e3e690f4d8ada9299e8c334e52550ec3a5496fed98f7c675554542" Namespace="calico-system" Pod="csi-node-driver-p469r" WorkloadEndpoint="ci--4334.0.0--a--ef358d086b-k8s-csi--node--driver--p469r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4334.0.0--a--ef358d086b-k8s-csi--node--driver--p469r-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"5e55bc54-125f-45b4-a2a9-22003bd5a2c5", ResourceVersion:"597", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 18, 13, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5b5cc68cd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4334.0.0-a-ef358d086b", ContainerID:"5ccf6343d1e3e690f4d8ada9299e8c334e52550ec3a5496fed98f7c675554542", Pod:"csi-node-driver-p469r", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.51.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali2f7dc49389d", MAC:"52:7b:c6:70:a1:64", Ports:[]v3.WorkloadEndpointPort(nil), 
AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 18:15:13.717907 containerd[1724]: 2025-05-14 18:15:13.715 [INFO][5449] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="5ccf6343d1e3e690f4d8ada9299e8c334e52550ec3a5496fed98f7c675554542" Namespace="calico-system" Pod="csi-node-driver-p469r" WorkloadEndpoint="ci--4334.0.0--a--ef358d086b-k8s-csi--node--driver--p469r-eth0" May 14 18:15:13.787819 systemd-networkd[1285]: calie060faca28d: Gained IPv6LL May 14 18:15:14.107765 systemd-networkd[1285]: cali467c8ce0282: Gained IPv6LL May 14 18:15:14.383991 containerd[1724]: time="2025-05-14T18:15:14.383946524Z" level=info msg="Container 23d7f4e89fce71a1dc583abbfe545fb4667b5bc807b244de8cf495a557c444d7: CDI devices from CRI Config.CDIDevices: []" May 14 18:15:14.934029 containerd[1724]: time="2025-05-14T18:15:14.934000931Z" level=info msg="CreateContainer within sandbox \"ccbc24ae0eae918a5f8976198924a3048f0d1727de6060278d409463630d6abf\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"23d7f4e89fce71a1dc583abbfe545fb4667b5bc807b244de8cf495a557c444d7\"" May 14 18:15:14.935752 containerd[1724]: time="2025-05-14T18:15:14.935727234Z" level=info msg="StartContainer for \"23d7f4e89fce71a1dc583abbfe545fb4667b5bc807b244de8cf495a557c444d7\"" May 14 18:15:14.938402 containerd[1724]: time="2025-05-14T18:15:14.937523614Z" level=info msg="connecting to shim 23d7f4e89fce71a1dc583abbfe545fb4667b5bc807b244de8cf495a557c444d7" address="unix:///run/containerd/s/fd797a4c1b2dc18b77d5284b673a3e9e78762408c60d5a2feea63f06dc4d0dae" protocol=ttrpc version=3 May 14 18:15:14.943943 containerd[1724]: time="2025-05-14T18:15:14.943791635Z" level=info msg="connecting to shim 5ccf6343d1e3e690f4d8ada9299e8c334e52550ec3a5496fed98f7c675554542" address="unix:///run/containerd/s/7a6cca2a5ea7ec15e9c4b52d502c99bc8df1262610eea1ee04b49e2e889d89df" namespace=k8s.io protocol=ttrpc version=3 May 14 18:15:14.960817 systemd[1]: Started 
cri-containerd-23d7f4e89fce71a1dc583abbfe545fb4667b5bc807b244de8cf495a557c444d7.scope - libcontainer container 23d7f4e89fce71a1dc583abbfe545fb4667b5bc807b244de8cf495a557c444d7. May 14 18:15:14.963586 systemd[1]: Started cri-containerd-5ccf6343d1e3e690f4d8ada9299e8c334e52550ec3a5496fed98f7c675554542.scope - libcontainer container 5ccf6343d1e3e690f4d8ada9299e8c334e52550ec3a5496fed98f7c675554542. May 14 18:15:14.997482 containerd[1724]: time="2025-05-14T18:15:14.997420925Z" level=info msg="connecting to shim 12c8aa3897207f21d06630955620539dd56fbb5b433aaaae812156c8e103d5e1" address="unix:///run/containerd/s/c5c24674d7a119e885a76fbfc4a2a1ea9fd0586a5cbd38e3383fdcd58561b6fe" namespace=k8s.io protocol=ttrpc version=3 May 14 18:15:15.015850 systemd[1]: Started cri-containerd-12c8aa3897207f21d06630955620539dd56fbb5b433aaaae812156c8e103d5e1.scope - libcontainer container 12c8aa3897207f21d06630955620539dd56fbb5b433aaaae812156c8e103d5e1. May 14 18:15:15.072790 containerd[1724]: time="2025-05-14T18:15:15.072764491Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-p469r,Uid:5e55bc54-125f-45b4-a2a9-22003bd5a2c5,Namespace:calico-system,Attempt:0,} returns sandbox id \"5ccf6343d1e3e690f4d8ada9299e8c334e52550ec3a5496fed98f7c675554542\"" May 14 18:15:15.073620 containerd[1724]: time="2025-05-14T18:15:15.073498193Z" level=info msg="StartContainer for \"23d7f4e89fce71a1dc583abbfe545fb4667b5bc807b244de8cf495a557c444d7\" returns successfully" May 14 18:15:15.168021 containerd[1724]: time="2025-05-14T18:15:15.167993857Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76c49d767b-49pnq,Uid:42f0af52-9025-4d2d-8452-e97d6dcfc33e,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"12c8aa3897207f21d06630955620539dd56fbb5b433aaaae812156c8e103d5e1\"" May 14 18:15:15.195833 systemd-networkd[1285]: cali2f7dc49389d: Gained IPv6LL May 14 18:15:15.453159 kubelet[3171]: I0514 18:15:15.453006 3171 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="kube-system/coredns-668d6bf9bc-ffz6l" podStartSLOduration=111.452989595 podStartE2EDuration="1m51.452989595s" podCreationTimestamp="2025-05-14 18:13:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 18:15:15.45163576 +0000 UTC m=+114.332751191" watchObservedRunningTime="2025-05-14 18:15:15.452989595 +0000 UTC m=+114.334105027" May 14 18:15:15.515829 systemd-networkd[1285]: cali4797986e090: Gained IPv6LL May 14 18:15:17.617533 systemd[1]: Started sshd@8-10.200.8.47:22-10.200.16.10:51636.service - OpenSSH per-connection server daemon (10.200.16.10:51636). May 14 18:15:18.178501 containerd[1724]: time="2025-05-14T18:15:18.178466919Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:15:18.180540 containerd[1724]: time="2025-05-14T18:15:18.180506942Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.3: active requests=0, bytes read=34789138" May 14 18:15:18.226431 containerd[1724]: time="2025-05-14T18:15:18.226377484Z" level=info msg="ImageCreate event name:\"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:15:18.265014 sshd[5629]: Accepted publickey for core from 10.200.16.10 port 51636 ssh2: RSA SHA256:Hd5PQzbog4WemQvfyzJrpsOqYyUj2ZNV4jY0cJYqkm4 May 14 18:15:18.265967 sshd-session[5629]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:15:18.270042 systemd-logind[1686]: New session 11 of user core. May 14 18:15:18.272815 systemd[1]: Started session-11.scope - Session 11 of User core. 
May 14 18:15:18.273696 containerd[1724]: time="2025-05-14T18:15:18.273647813Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:15:18.274619 containerd[1724]: time="2025-05-14T18:15:18.274504287Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" with image id \"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\", size \"36281728\" in 6.57813255s" May 14 18:15:18.274619 containerd[1724]: time="2025-05-14T18:15:18.274534987Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" returns image reference \"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\"" May 14 18:15:18.276071 containerd[1724]: time="2025-05-14T18:15:18.276029627Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" May 14 18:15:18.296847 containerd[1724]: time="2025-05-14T18:15:18.296543530Z" level=info msg="CreateContainer within sandbox \"b9641b8d7473b6b98338413d419101235510c8963bf5d516d2a6b528350d30e3\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" May 14 18:15:18.485835 containerd[1724]: time="2025-05-14T18:15:18.485780948Z" level=info msg="Container 5836b1cb0dc0b33126e1607ff5cf8bddbee4bb0d4e96c9d906c9ea119bf2cc99: CDI devices from CRI Config.CDIDevices: []" May 14 18:15:18.628823 containerd[1724]: time="2025-05-14T18:15:18.628798413Z" level=info msg="CreateContainer within sandbox \"b9641b8d7473b6b98338413d419101235510c8963bf5d516d2a6b528350d30e3\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id 
\"5836b1cb0dc0b33126e1607ff5cf8bddbee4bb0d4e96c9d906c9ea119bf2cc99\"" May 14 18:15:18.629307 containerd[1724]: time="2025-05-14T18:15:18.629182103Z" level=info msg="StartContainer for \"5836b1cb0dc0b33126e1607ff5cf8bddbee4bb0d4e96c9d906c9ea119bf2cc99\"" May 14 18:15:18.631593 containerd[1724]: time="2025-05-14T18:15:18.631559325Z" level=info msg="connecting to shim 5836b1cb0dc0b33126e1607ff5cf8bddbee4bb0d4e96c9d906c9ea119bf2cc99" address="unix:///run/containerd/s/418fc20d237943f6fc404bb1425cd687395daabd7519b8905f78fa30ca919caa" protocol=ttrpc version=3 May 14 18:15:18.658226 systemd[1]: Started cri-containerd-5836b1cb0dc0b33126e1607ff5cf8bddbee4bb0d4e96c9d906c9ea119bf2cc99.scope - libcontainer container 5836b1cb0dc0b33126e1607ff5cf8bddbee4bb0d4e96c9d906c9ea119bf2cc99. May 14 18:15:18.776614 sshd[5635]: Connection closed by 10.200.16.10 port 51636 May 14 18:15:18.777893 sshd-session[5629]: pam_unix(sshd:session): session closed for user core May 14 18:15:18.781164 containerd[1724]: time="2025-05-14T18:15:18.781146538Z" level=info msg="StartContainer for \"5836b1cb0dc0b33126e1607ff5cf8bddbee4bb0d4e96c9d906c9ea119bf2cc99\" returns successfully" May 14 18:15:18.781741 systemd[1]: sshd@8-10.200.8.47:22-10.200.16.10:51636.service: Deactivated successfully. May 14 18:15:18.783895 systemd[1]: session-11.scope: Deactivated successfully. May 14 18:15:18.785181 systemd-logind[1686]: Session 11 logged out. Waiting for processes to exit. May 14 18:15:18.786269 systemd-logind[1686]: Removed session 11. 
May 14 18:15:19.486504 containerd[1724]: time="2025-05-14T18:15:19.486476267Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5836b1cb0dc0b33126e1607ff5cf8bddbee4bb0d4e96c9d906c9ea119bf2cc99\" id:\"d1e77e3479890ad9d2aa5a207af929355994b5e2587bc45fd76632573a2a558f\" pid:5693 exited_at:{seconds:1747246519 nanos:486224193}" May 14 18:15:19.498670 kubelet[3171]: I0514 18:15:19.498345 3171 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6f9774cf66-vzhj6" podStartSLOduration=93.918582516 podStartE2EDuration="1m40.498327304s" podCreationTimestamp="2025-05-14 18:13:39 +0000 UTC" firstStartedPulling="2025-05-14 18:15:11.696048581 +0000 UTC m=+110.577164001" lastFinishedPulling="2025-05-14 18:15:18.275793281 +0000 UTC m=+117.156908789" observedRunningTime="2025-05-14 18:15:19.468405041 +0000 UTC m=+118.349520501" watchObservedRunningTime="2025-05-14 18:15:19.498327304 +0000 UTC m=+118.379442739" May 14 18:15:21.209943 containerd[1724]: time="2025-05-14T18:15:21.209905144Z" level=info msg="StopPodSandbox for \"c8945f6b802fcce8dd85f2adb749a80618cabf38fe46e7a67245b793da436e8c\"" May 14 18:15:21.210343 containerd[1724]: time="2025-05-14T18:15:21.210043659Z" level=info msg="TearDown network for sandbox \"c8945f6b802fcce8dd85f2adb749a80618cabf38fe46e7a67245b793da436e8c\" successfully" May 14 18:15:21.210343 containerd[1724]: time="2025-05-14T18:15:21.210055873Z" level=info msg="StopPodSandbox for \"c8945f6b802fcce8dd85f2adb749a80618cabf38fe46e7a67245b793da436e8c\" returns successfully" May 14 18:15:21.210391 containerd[1724]: time="2025-05-14T18:15:21.210366161Z" level=info msg="RemovePodSandbox for \"c8945f6b802fcce8dd85f2adb749a80618cabf38fe46e7a67245b793da436e8c\"" May 14 18:15:21.210410 containerd[1724]: time="2025-05-14T18:15:21.210387626Z" level=info msg="Forcibly stopping sandbox \"c8945f6b802fcce8dd85f2adb749a80618cabf38fe46e7a67245b793da436e8c\"" May 14 18:15:21.210502 containerd[1724]: 
time="2025-05-14T18:15:21.210478474Z" level=info msg="TearDown network for sandbox \"c8945f6b802fcce8dd85f2adb749a80618cabf38fe46e7a67245b793da436e8c\" successfully" May 14 18:15:21.211557 containerd[1724]: time="2025-05-14T18:15:21.211537897Z" level=info msg="Ensure that sandbox c8945f6b802fcce8dd85f2adb749a80618cabf38fe46e7a67245b793da436e8c in task-service has been cleanup successfully" May 14 18:15:23.077903 containerd[1724]: time="2025-05-14T18:15:23.077860998Z" level=info msg="RemovePodSandbox \"c8945f6b802fcce8dd85f2adb749a80618cabf38fe46e7a67245b793da436e8c\" returns successfully" May 14 18:15:23.888273 systemd[1]: Started sshd@9-10.200.8.47:22-10.200.16.10:57046.service - OpenSSH per-connection server daemon (10.200.16.10:57046). May 14 18:15:24.028443 containerd[1724]: time="2025-05-14T18:15:24.028409587Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:15:24.071488 containerd[1724]: time="2025-05-14T18:15:24.071459174Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=43021437" May 14 18:15:24.074286 containerd[1724]: time="2025-05-14T18:15:24.074243955Z" level=info msg="ImageCreate event name:\"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:15:24.122113 containerd[1724]: time="2025-05-14T18:15:24.122052860Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:15:24.122696 containerd[1724]: time="2025-05-14T18:15:24.122600356Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\", repo tag 
\"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"44514075\" in 5.846544596s" May 14 18:15:24.122696 containerd[1724]: time="2025-05-14T18:15:24.122627670Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\"" May 14 18:15:24.123701 containerd[1724]: time="2025-05-14T18:15:24.123583340Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\"" May 14 18:15:24.124705 containerd[1724]: time="2025-05-14T18:15:24.124382316Z" level=info msg="CreateContainer within sandbox \"6d8442cf9b36da936933d32b9895d78b2912d42427e0b48fb86eab0958f94b71\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 14 18:15:24.331698 containerd[1724]: time="2025-05-14T18:15:24.330128230Z" level=info msg="Container 44c8c8b807a7cca59dd8cbf78b1ac0cb66c3df8c836a2c19dd6cc4fd8281c1ab: CDI devices from CRI Config.CDIDevices: []" May 14 18:15:24.426606 containerd[1724]: time="2025-05-14T18:15:24.426584080Z" level=info msg="CreateContainer within sandbox \"6d8442cf9b36da936933d32b9895d78b2912d42427e0b48fb86eab0958f94b71\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"44c8c8b807a7cca59dd8cbf78b1ac0cb66c3df8c836a2c19dd6cc4fd8281c1ab\"" May 14 18:15:24.427016 containerd[1724]: time="2025-05-14T18:15:24.426979640Z" level=info msg="StartContainer for \"44c8c8b807a7cca59dd8cbf78b1ac0cb66c3df8c836a2c19dd6cc4fd8281c1ab\"" May 14 18:15:24.428178 containerd[1724]: time="2025-05-14T18:15:24.428066847Z" level=info msg="connecting to shim 44c8c8b807a7cca59dd8cbf78b1ac0cb66c3df8c836a2c19dd6cc4fd8281c1ab" address="unix:///run/containerd/s/64c5ea0f5e94d7d063ea65d0bb43b4b754a38a2c5ef06bd43f6c7f22bdc447a2" protocol=ttrpc version=3 May 14 18:15:24.446819 systemd[1]: Started 
cri-containerd-44c8c8b807a7cca59dd8cbf78b1ac0cb66c3df8c836a2c19dd6cc4fd8281c1ab.scope - libcontainer container 44c8c8b807a7cca59dd8cbf78b1ac0cb66c3df8c836a2c19dd6cc4fd8281c1ab. May 14 18:15:24.522137 sshd[5715]: Accepted publickey for core from 10.200.16.10 port 57046 ssh2: RSA SHA256:Hd5PQzbog4WemQvfyzJrpsOqYyUj2ZNV4jY0cJYqkm4 May 14 18:15:24.523534 sshd-session[5715]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:15:24.526929 systemd-logind[1686]: New session 12 of user core. May 14 18:15:24.531934 systemd[1]: Started session-12.scope - Session 12 of User core. May 14 18:15:24.537148 containerd[1724]: time="2025-05-14T18:15:24.536032214Z" level=info msg="StartContainer for \"44c8c8b807a7cca59dd8cbf78b1ac0cb66c3df8c836a2c19dd6cc4fd8281c1ab\" returns successfully" May 14 18:15:25.042700 sshd[5750]: Connection closed by 10.200.16.10 port 57046 May 14 18:15:25.044823 sshd-session[5715]: pam_unix(sshd:session): session closed for user core May 14 18:15:25.047108 systemd[1]: sshd@9-10.200.8.47:22-10.200.16.10:57046.service: Deactivated successfully. May 14 18:15:25.051158 systemd[1]: session-12.scope: Deactivated successfully. May 14 18:15:25.052394 systemd-logind[1686]: Session 12 logged out. Waiting for processes to exit. May 14 18:15:25.054575 systemd-logind[1686]: Removed session 12. 
May 14 18:15:25.490639 kubelet[3171]: I0514 18:15:25.490176 3171 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-76c49d767b-77j92" podStartSLOduration=97.847287194 podStartE2EDuration="1m48.490160777s" podCreationTimestamp="2025-05-14 18:13:37 +0000 UTC" firstStartedPulling="2025-05-14 18:15:13.480489554 +0000 UTC m=+112.361604989" lastFinishedPulling="2025-05-14 18:15:24.123363143 +0000 UTC m=+123.004478572" observedRunningTime="2025-05-14 18:15:25.478459814 +0000 UTC m=+124.359575250" watchObservedRunningTime="2025-05-14 18:15:25.490160777 +0000 UTC m=+124.371276209" May 14 18:15:26.970454 containerd[1724]: time="2025-05-14T18:15:26.970393014Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:15:27.032875 containerd[1724]: time="2025-05-14T18:15:27.032844517Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.3: active requests=0, bytes read=7912898" May 14 18:15:27.035613 containerd[1724]: time="2025-05-14T18:15:27.035564046Z" level=info msg="ImageCreate event name:\"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:15:27.080697 containerd[1724]: time="2025-05-14T18:15:27.080639491Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:15:27.081247 containerd[1724]: time="2025-05-14T18:15:27.081173454Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.3\" with image id \"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\", size \"9405520\" in 
2.95746334s" May 14 18:15:27.081247 containerd[1724]: time="2025-05-14T18:15:27.081197677Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\" returns image reference \"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\"" May 14 18:15:27.082150 containerd[1724]: time="2025-05-14T18:15:27.082130379Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" May 14 18:15:27.083412 containerd[1724]: time="2025-05-14T18:15:27.083256505Z" level=info msg="CreateContainer within sandbox \"5ccf6343d1e3e690f4d8ada9299e8c334e52550ec3a5496fed98f7c675554542\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" May 14 18:15:27.274359 containerd[1724]: time="2025-05-14T18:15:27.274333537Z" level=info msg="Container 243e3d33ab23f4a580c9518f8b366f8b0b046cba7245c2c2128bfe58b620753d: CDI devices from CRI Config.CDIDevices: []" May 14 18:15:27.476287 containerd[1724]: time="2025-05-14T18:15:27.476264985Z" level=info msg="CreateContainer within sandbox \"5ccf6343d1e3e690f4d8ada9299e8c334e52550ec3a5496fed98f7c675554542\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"243e3d33ab23f4a580c9518f8b366f8b0b046cba7245c2c2128bfe58b620753d\"" May 14 18:15:27.476748 containerd[1724]: time="2025-05-14T18:15:27.476668930Z" level=info msg="StartContainer for \"243e3d33ab23f4a580c9518f8b366f8b0b046cba7245c2c2128bfe58b620753d\"" May 14 18:15:27.478846 containerd[1724]: time="2025-05-14T18:15:27.478806201Z" level=info msg="connecting to shim 243e3d33ab23f4a580c9518f8b366f8b0b046cba7245c2c2128bfe58b620753d" address="unix:///run/containerd/s/7a6cca2a5ea7ec15e9c4b52d502c99bc8df1262610eea1ee04b49e2e889d89df" protocol=ttrpc version=3 May 14 18:15:27.495855 systemd[1]: Started cri-containerd-243e3d33ab23f4a580c9518f8b366f8b0b046cba7245c2c2128bfe58b620753d.scope - libcontainer container 243e3d33ab23f4a580c9518f8b366f8b0b046cba7245c2c2128bfe58b620753d. 
May 14 18:15:27.573117 containerd[1724]: time="2025-05-14T18:15:27.573067458Z" level=info msg="StartContainer for \"243e3d33ab23f4a580c9518f8b366f8b0b046cba7245c2c2128bfe58b620753d\" returns successfully" May 14 18:15:28.981990 containerd[1724]: time="2025-05-14T18:15:28.981949912Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:15:28.984468 containerd[1724]: time="2025-05-14T18:15:28.984434495Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=77" May 14 18:15:28.985875 containerd[1724]: time="2025-05-14T18:15:28.985835067Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"44514075\" in 1.903450764s" May 14 18:15:28.985939 containerd[1724]: time="2025-05-14T18:15:28.985879417Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\"" May 14 18:15:28.987435 containerd[1724]: time="2025-05-14T18:15:28.987397652Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\"" May 14 18:15:28.988331 containerd[1724]: time="2025-05-14T18:15:28.988302661Z" level=info msg="CreateContainer within sandbox \"12c8aa3897207f21d06630955620539dd56fbb5b433aaaae812156c8e103d5e1\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 14 18:15:29.218609 containerd[1724]: time="2025-05-14T18:15:29.218578936Z" level=info msg="Container f0ee0e699aeb84e6b3ce66ed7a277a919b93ba3ac78e0864eb6259882d49d319: CDI devices from CRI Config.CDIDevices: []" May 14 18:15:29.376860 
containerd[1724]: time="2025-05-14T18:15:29.376839000Z" level=info msg="CreateContainer within sandbox \"12c8aa3897207f21d06630955620539dd56fbb5b433aaaae812156c8e103d5e1\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"f0ee0e699aeb84e6b3ce66ed7a277a919b93ba3ac78e0864eb6259882d49d319\"" May 14 18:15:29.378206 containerd[1724]: time="2025-05-14T18:15:29.377179925Z" level=info msg="StartContainer for \"f0ee0e699aeb84e6b3ce66ed7a277a919b93ba3ac78e0864eb6259882d49d319\"" May 14 18:15:29.378305 containerd[1724]: time="2025-05-14T18:15:29.378281303Z" level=info msg="connecting to shim f0ee0e699aeb84e6b3ce66ed7a277a919b93ba3ac78e0864eb6259882d49d319" address="unix:///run/containerd/s/c5c24674d7a119e885a76fbfc4a2a1ea9fd0586a5cbd38e3383fdcd58561b6fe" protocol=ttrpc version=3 May 14 18:15:29.394847 systemd[1]: Started cri-containerd-f0ee0e699aeb84e6b3ce66ed7a277a919b93ba3ac78e0864eb6259882d49d319.scope - libcontainer container f0ee0e699aeb84e6b3ce66ed7a277a919b93ba3ac78e0864eb6259882d49d319. 
May 14 18:15:29.472317 containerd[1724]: time="2025-05-14T18:15:29.472298039Z" level=info msg="StartContainer for \"f0ee0e699aeb84e6b3ce66ed7a277a919b93ba3ac78e0864eb6259882d49d319\" returns successfully" May 14 18:15:29.489909 kubelet[3171]: I0514 18:15:29.489339 3171 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-76c49d767b-49pnq" podStartSLOduration=98.671667992 podStartE2EDuration="1m52.489323852s" podCreationTimestamp="2025-05-14 18:13:37 +0000 UTC" firstStartedPulling="2025-05-14 18:15:15.168823419 +0000 UTC m=+114.049938843" lastFinishedPulling="2025-05-14 18:15:28.986479278 +0000 UTC m=+127.867594703" observedRunningTime="2025-05-14 18:15:29.488955565 +0000 UTC m=+128.370070992" watchObservedRunningTime="2025-05-14 18:15:29.489323852 +0000 UTC m=+128.370439290" May 14 18:15:30.156714 systemd[1]: Started sshd@10-10.200.8.47:22-10.200.16.10:39730.service - OpenSSH per-connection server daemon (10.200.16.10:39730). May 14 18:15:30.794327 sshd[5843]: Accepted publickey for core from 10.200.16.10 port 39730 ssh2: RSA SHA256:Hd5PQzbog4WemQvfyzJrpsOqYyUj2ZNV4jY0cJYqkm4 May 14 18:15:30.795310 sshd-session[5843]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:15:30.798741 systemd-logind[1686]: New session 13 of user core. May 14 18:15:30.803837 systemd[1]: Started session-13.scope - Session 13 of User core. May 14 18:15:31.287623 sshd[5847]: Connection closed by 10.200.16.10 port 39730 May 14 18:15:31.288045 sshd-session[5843]: pam_unix(sshd:session): session closed for user core May 14 18:15:31.290368 systemd[1]: sshd@10-10.200.8.47:22-10.200.16.10:39730.service: Deactivated successfully. May 14 18:15:31.291871 systemd[1]: session-13.scope: Deactivated successfully. May 14 18:15:31.293311 systemd-logind[1686]: Session 13 logged out. Waiting for processes to exit. May 14 18:15:31.294193 systemd-logind[1686]: Removed session 13. 
May 14 18:15:32.317303 containerd[1724]: time="2025-05-14T18:15:32.317259340Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:15:32.319316 containerd[1724]: time="2025-05-14T18:15:32.319283395Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3: active requests=0, bytes read=13991773" May 14 18:15:32.382303 containerd[1724]: time="2025-05-14T18:15:32.382252081Z" level=info msg="ImageCreate event name:\"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:15:32.433020 containerd[1724]: time="2025-05-14T18:15:32.432976218Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:15:32.433629 containerd[1724]: time="2025-05-14T18:15:32.433527510Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" with image id \"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\", size \"15484347\" in 3.446089404s" May 14 18:15:32.433629 containerd[1724]: time="2025-05-14T18:15:32.433553757Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" returns image reference \"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\"" May 14 18:15:32.436155 containerd[1724]: time="2025-05-14T18:15:32.436038387Z" level=info msg="CreateContainer within sandbox \"5ccf6343d1e3e690f4d8ada9299e8c334e52550ec3a5496fed98f7c675554542\" for container 
&ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" May 14 18:15:32.480255 update_engine[1688]: I20250514 18:15:32.480214 1688 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs May 14 18:15:32.480255 update_engine[1688]: I20250514 18:15:32.480252 1688 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs May 14 18:15:32.480981 update_engine[1688]: I20250514 18:15:32.480374 1688 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs May 14 18:15:32.480981 update_engine[1688]: I20250514 18:15:32.480659 1688 omaha_request_params.cc:62] Current group set to developer May 14 18:15:32.480981 update_engine[1688]: I20250514 18:15:32.480800 1688 update_attempter.cc:499] Already updated boot flags. Skipping. May 14 18:15:32.480981 update_engine[1688]: I20250514 18:15:32.480810 1688 update_attempter.cc:643] Scheduling an action processor start. May 14 18:15:32.480981 update_engine[1688]: I20250514 18:15:32.480829 1688 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction May 14 18:15:32.480981 update_engine[1688]: I20250514 18:15:32.480861 1688 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs May 14 18:15:32.480981 update_engine[1688]: I20250514 18:15:32.480914 1688 omaha_request_action.cc:271] Posting an Omaha request to disabled May 14 18:15:32.480981 update_engine[1688]: I20250514 18:15:32.480918 1688 omaha_request_action.cc:272] Request: May 14 18:15:32.480981 update_engine[1688]: May 14 18:15:32.480981 update_engine[1688]: May 14 18:15:32.480981 update_engine[1688]: May 14 18:15:32.480981 update_engine[1688]: May 14 18:15:32.480981 update_engine[1688]: May 14 18:15:32.480981 update_engine[1688]: May 14 18:15:32.480981 update_engine[1688]: May 14 18:15:32.480981 update_engine[1688]: May 14 18:15:32.480981 update_engine[1688]: I20250514 18:15:32.480926 1688 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 14 
18:15:32.482114 update_engine[1688]: I20250514 18:15:32.482043 1688 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 14 18:15:32.482757 update_engine[1688]: I20250514 18:15:32.482590 1688 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. May 14 18:15:32.482988 locksmithd[1763]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 May 14 18:15:32.518831 update_engine[1688]: E20250514 18:15:32.518801 1688 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 14 18:15:32.518890 update_engine[1688]: I20250514 18:15:32.518872 1688 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 May 14 18:15:32.578568 containerd[1724]: time="2025-05-14T18:15:32.578510030Z" level=info msg="Container 7baa0699fd02cdef1e653354464796c893bd4670a99ea9fbfa264d58dccfd393: CDI devices from CRI Config.CDIDevices: []" May 14 18:15:32.735117 containerd[1724]: time="2025-05-14T18:15:32.735081502Z" level=info msg="CreateContainer within sandbox \"5ccf6343d1e3e690f4d8ada9299e8c334e52550ec3a5496fed98f7c675554542\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"7baa0699fd02cdef1e653354464796c893bd4670a99ea9fbfa264d58dccfd393\"" May 14 18:15:32.735476 containerd[1724]: time="2025-05-14T18:15:32.735427897Z" level=info msg="StartContainer for \"7baa0699fd02cdef1e653354464796c893bd4670a99ea9fbfa264d58dccfd393\"" May 14 18:15:32.737022 containerd[1724]: time="2025-05-14T18:15:32.736998052Z" level=info msg="connecting to shim 7baa0699fd02cdef1e653354464796c893bd4670a99ea9fbfa264d58dccfd393" address="unix:///run/containerd/s/7a6cca2a5ea7ec15e9c4b52d502c99bc8df1262610eea1ee04b49e2e889d89df" protocol=ttrpc version=3 May 14 18:15:32.755818 systemd[1]: Started cri-containerd-7baa0699fd02cdef1e653354464796c893bd4670a99ea9fbfa264d58dccfd393.scope - libcontainer container 7baa0699fd02cdef1e653354464796c893bd4670a99ea9fbfa264d58dccfd393. 
May 14 18:15:32.788415 containerd[1724]: time="2025-05-14T18:15:32.788395400Z" level=info msg="StartContainer for \"7baa0699fd02cdef1e653354464796c893bd4670a99ea9fbfa264d58dccfd393\" returns successfully"
May 14 18:15:33.308674 kubelet[3171]: I0514 18:15:33.308654 3171 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
May 14 18:15:33.308972 kubelet[3171]: I0514 18:15:33.308752 3171 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
May 14 18:15:33.497609 kubelet[3171]: I0514 18:15:33.497458 3171 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-p469r" podStartSLOduration=97.137333386 podStartE2EDuration="1m54.497439159s" podCreationTimestamp="2025-05-14 18:13:39 +0000 UTC" firstStartedPulling="2025-05-14 18:15:15.074046718 +0000 UTC m=+113.955162141" lastFinishedPulling="2025-05-14 18:15:32.434152491 +0000 UTC m=+131.315267914" observedRunningTime="2025-05-14 18:15:33.496318438 +0000 UTC m=+132.377433872" watchObservedRunningTime="2025-05-14 18:15:33.497439159 +0000 UTC m=+132.378554633"
May 14 18:15:36.399612 systemd[1]: Started sshd@11-10.200.8.47:22-10.200.16.10:39746.service - OpenSSH per-connection server daemon (10.200.16.10:39746).
May 14 18:15:37.031397 sshd[5898]: Accepted publickey for core from 10.200.16.10 port 39746 ssh2: RSA SHA256:Hd5PQzbog4WemQvfyzJrpsOqYyUj2ZNV4jY0cJYqkm4
May 14 18:15:37.032551 sshd-session[5898]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 18:15:37.036469 systemd-logind[1686]: New session 14 of user core.
May 14 18:15:37.046816 systemd[1]: Started session-14.scope - Session 14 of User core.
May 14 18:15:37.467859 containerd[1724]: time="2025-05-14T18:15:37.467617442Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a2406af6d9d87ea7927353462bc3dbc21b3d7dd811a1f8d83a21470bc33741b6\" id:\"1489515dcd5fa84009c1cd133f814bb3f457c9679447a9595314e4fbcf46ce4c\" pid:5921 exited_at:{seconds:1747246537 nanos:467368189}"
May 14 18:15:37.532737 sshd[5900]: Connection closed by 10.200.16.10 port 39746
May 14 18:15:37.533119 sshd-session[5898]: pam_unix(sshd:session): session closed for user core
May 14 18:15:37.535559 systemd[1]: sshd@11-10.200.8.47:22-10.200.16.10:39746.service: Deactivated successfully.
May 14 18:15:37.537034 systemd[1]: session-14.scope: Deactivated successfully.
May 14 18:15:37.538536 systemd-logind[1686]: Session 14 logged out. Waiting for processes to exit.
May 14 18:15:37.539377 systemd-logind[1686]: Removed session 14.
May 14 18:15:42.484256 update_engine[1688]: I20250514 18:15:42.484200 1688 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
May 14 18:15:42.484590 update_engine[1688]: I20250514 18:15:42.484427 1688 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
May 14 18:15:42.484818 update_engine[1688]: I20250514 18:15:42.484797 1688 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
May 14 18:15:42.515361 update_engine[1688]: E20250514 18:15:42.515322 1688 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
May 14 18:15:42.515451 update_engine[1688]: I20250514 18:15:42.515381 1688 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
May 14 18:15:42.650491 systemd[1]: Started sshd@12-10.200.8.47:22-10.200.16.10:51964.service - OpenSSH per-connection server daemon (10.200.16.10:51964).
May 14 18:15:43.302803 sshd[5938]: Accepted publickey for core from 10.200.16.10 port 51964 ssh2: RSA SHA256:Hd5PQzbog4WemQvfyzJrpsOqYyUj2ZNV4jY0cJYqkm4
May 14 18:15:43.303811 sshd-session[5938]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 18:15:43.307593 systemd-logind[1686]: New session 15 of user core.
May 14 18:15:43.312861 systemd[1]: Started session-15.scope - Session 15 of User core.
May 14 18:15:43.799369 sshd[5940]: Connection closed by 10.200.16.10 port 51964
May 14 18:15:43.799796 sshd-session[5938]: pam_unix(sshd:session): session closed for user core
May 14 18:15:43.801839 systemd[1]: sshd@12-10.200.8.47:22-10.200.16.10:51964.service: Deactivated successfully.
May 14 18:15:43.803465 systemd[1]: session-15.scope: Deactivated successfully.
May 14 18:15:43.804633 systemd-logind[1686]: Session 15 logged out. Waiting for processes to exit.
May 14 18:15:43.806141 systemd-logind[1686]: Removed session 15.
May 14 18:15:48.917573 systemd[1]: Started sshd@13-10.200.8.47:22-10.200.16.10:52146.service - OpenSSH per-connection server daemon (10.200.16.10:52146).
May 14 18:15:49.482499 containerd[1724]: time="2025-05-14T18:15:49.482454217Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5836b1cb0dc0b33126e1607ff5cf8bddbee4bb0d4e96c9d906c9ea119bf2cc99\" id:\"e62e6a30525063ba7b188f461a63a6948f0809743d87718dbd89742dd01bfb83\" pid:5976 exited_at:{seconds:1747246549 nanos:482169379}"
May 14 18:15:49.550036 sshd[5960]: Accepted publickey for core from 10.200.16.10 port 52146 ssh2: RSA SHA256:Hd5PQzbog4WemQvfyzJrpsOqYyUj2ZNV4jY0cJYqkm4
May 14 18:15:49.551010 sshd-session[5960]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 18:15:49.555339 systemd-logind[1686]: New session 16 of user core.
May 14 18:15:49.560810 systemd[1]: Started session-16.scope - Session 16 of User core.
May 14 18:15:50.037724 sshd[5987]: Connection closed by 10.200.16.10 port 52146
May 14 18:15:50.038097 sshd-session[5960]: pam_unix(sshd:session): session closed for user core
May 14 18:15:50.040197 systemd[1]: sshd@13-10.200.8.47:22-10.200.16.10:52146.service: Deactivated successfully.
May 14 18:15:50.041771 systemd[1]: session-16.scope: Deactivated successfully.
May 14 18:15:50.042865 systemd-logind[1686]: Session 16 logged out. Waiting for processes to exit.
May 14 18:15:50.043771 systemd-logind[1686]: Removed session 16.
May 14 18:15:52.482135 update_engine[1688]: I20250514 18:15:52.482077 1688 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
May 14 18:15:52.482452 update_engine[1688]: I20250514 18:15:52.482306 1688 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
May 14 18:15:52.482654 update_engine[1688]: I20250514 18:15:52.482633 1688 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
May 14 18:15:52.505092 update_engine[1688]: E20250514 18:15:52.505051 1688 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
May 14 18:15:52.505192 update_engine[1688]: I20250514 18:15:52.505112 1688 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
May 14 18:15:55.151468 systemd[1]: Started sshd@14-10.200.8.47:22-10.200.16.10:52154.service - OpenSSH per-connection server daemon (10.200.16.10:52154).
May 14 18:15:55.784895 sshd[6004]: Accepted publickey for core from 10.200.16.10 port 52154 ssh2: RSA SHA256:Hd5PQzbog4WemQvfyzJrpsOqYyUj2ZNV4jY0cJYqkm4
May 14 18:15:55.785934 sshd-session[6004]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 18:15:55.789748 systemd-logind[1686]: New session 17 of user core.
May 14 18:15:55.794838 systemd[1]: Started session-17.scope - Session 17 of User core.
May 14 18:15:56.274497 sshd[6008]: Connection closed by 10.200.16.10 port 52154
May 14 18:15:56.274897 sshd-session[6004]: pam_unix(sshd:session): session closed for user core
May 14 18:15:56.277654 systemd[1]: sshd@14-10.200.8.47:22-10.200.16.10:52154.service: Deactivated successfully.
May 14 18:15:56.279486 systemd[1]: session-17.scope: Deactivated successfully.
May 14 18:15:56.280186 systemd-logind[1686]: Session 17 logged out. Waiting for processes to exit.
May 14 18:15:56.281235 systemd-logind[1686]: Removed session 17.
May 14 18:16:01.385486 systemd[1]: Started sshd@15-10.200.8.47:22-10.200.16.10:40454.service - OpenSSH per-connection server daemon (10.200.16.10:40454).
May 14 18:16:01.659091 containerd[1724]: time="2025-05-14T18:16:01.659014031Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5836b1cb0dc0b33126e1607ff5cf8bddbee4bb0d4e96c9d906c9ea119bf2cc99\" id:\"ea87bfb8f238f1fb71c929158657fba4b4b071790f0d9b6038f0494705fb4833\" pid:6037 exited_at:{seconds:1747246561 nanos:658792318}"
May 14 18:16:02.015462 sshd[6021]: Accepted publickey for core from 10.200.16.10 port 40454 ssh2: RSA SHA256:Hd5PQzbog4WemQvfyzJrpsOqYyUj2ZNV4jY0cJYqkm4
May 14 18:16:02.017043 sshd-session[6021]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 18:16:02.021050 systemd-logind[1686]: New session 18 of user core.
May 14 18:16:02.026841 systemd[1]: Started session-18.scope - Session 18 of User core.
May 14 18:16:02.480196 update_engine[1688]: I20250514 18:16:02.480154 1688 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
May 14 18:16:02.480441 update_engine[1688]: I20250514 18:16:02.480350 1688 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
May 14 18:16:02.480642 update_engine[1688]: I20250514 18:16:02.480608 1688 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
May 14 18:16:02.517474 sshd[6046]: Connection closed by 10.200.16.10 port 40454
May 14 18:16:02.517844 sshd-session[6021]: pam_unix(sshd:session): session closed for user core
May 14 18:16:02.520474 systemd[1]: sshd@15-10.200.8.47:22-10.200.16.10:40454.service: Deactivated successfully.
May 14 18:16:02.521632 update_engine[1688]: E20250514 18:16:02.520792 1688 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
May 14 18:16:02.521632 update_engine[1688]: I20250514 18:16:02.520843 1688 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
May 14 18:16:02.521632 update_engine[1688]: I20250514 18:16:02.520848 1688 omaha_request_action.cc:617] Omaha request response:
May 14 18:16:02.521632 update_engine[1688]: E20250514 18:16:02.520915 1688 omaha_request_action.cc:636] Omaha request network transfer failed.
May 14 18:16:02.521632 update_engine[1688]: I20250514 18:16:02.520931 1688 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
May 14 18:16:02.521632 update_engine[1688]: I20250514 18:16:02.520934 1688 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
May 14 18:16:02.521632 update_engine[1688]: I20250514 18:16:02.520939 1688 update_attempter.cc:306] Processing Done.
May 14 18:16:02.521632 update_engine[1688]: E20250514 18:16:02.520953 1688 update_attempter.cc:619] Update failed.
May 14 18:16:02.521632 update_engine[1688]: I20250514 18:16:02.520958 1688 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
May 14 18:16:02.521632 update_engine[1688]: I20250514 18:16:02.520961 1688 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
May 14 18:16:02.521632 update_engine[1688]: I20250514 18:16:02.520967 1688 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
May 14 18:16:02.521632 update_engine[1688]: I20250514 18:16:02.521020 1688 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
May 14 18:16:02.521632 update_engine[1688]: I20250514 18:16:02.521041 1688 omaha_request_action.cc:271] Posting an Omaha request to disabled
May 14 18:16:02.521632 update_engine[1688]: I20250514 18:16:02.521044 1688 omaha_request_action.cc:272] Request:
May 14 18:16:02.521632 update_engine[1688]:
May 14 18:16:02.521632 update_engine[1688]:
May 14 18:16:02.521965 update_engine[1688]:
May 14 18:16:02.521965 update_engine[1688]:
May 14 18:16:02.521965 update_engine[1688]:
May 14 18:16:02.521965 update_engine[1688]:
May 14 18:16:02.521965 update_engine[1688]: I20250514 18:16:02.521049 1688 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
May 14 18:16:02.521965 update_engine[1688]: I20250514 18:16:02.521160 1688 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
May 14 18:16:02.521965 update_engine[1688]: I20250514 18:16:02.521441 1688 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
May 14 18:16:02.522227 locksmithd[1763]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
May 14 18:16:02.523024 systemd[1]: session-18.scope: Deactivated successfully.
May 14 18:16:02.524101 systemd-logind[1686]: Session 18 logged out. Waiting for processes to exit.
May 14 18:16:02.524993 systemd-logind[1686]: Removed session 18.
May 14 18:16:02.556146 update_engine[1688]: E20250514 18:16:02.556116 1688 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
May 14 18:16:02.556225 update_engine[1688]: I20250514 18:16:02.556167 1688 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
May 14 18:16:02.556225 update_engine[1688]: I20250514 18:16:02.556175 1688 omaha_request_action.cc:617] Omaha request response:
May 14 18:16:02.556225 update_engine[1688]: I20250514 18:16:02.556181 1688 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
May 14 18:16:02.556225 update_engine[1688]: I20250514 18:16:02.556185 1688 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
May 14 18:16:02.556225 update_engine[1688]: I20250514 18:16:02.556190 1688 update_attempter.cc:306] Processing Done.
May 14 18:16:02.556225 update_engine[1688]: I20250514 18:16:02.556197 1688 update_attempter.cc:310] Error event sent.
May 14 18:16:02.556225 update_engine[1688]: I20250514 18:16:02.556205 1688 update_check_scheduler.cc:74] Next update check in 47m30s
May 14 18:16:02.556501 locksmithd[1763]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
May 14 18:16:02.641811 systemd[1]: Started sshd@16-10.200.8.47:22-10.200.16.10:40464.service - OpenSSH per-connection server daemon (10.200.16.10:40464).
May 14 18:16:03.271232 sshd[6059]: Accepted publickey for core from 10.200.16.10 port 40464 ssh2: RSA SHA256:Hd5PQzbog4WemQvfyzJrpsOqYyUj2ZNV4jY0cJYqkm4
May 14 18:16:03.272355 sshd-session[6059]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 18:16:03.275833 systemd-logind[1686]: New session 19 of user core.
May 14 18:16:03.281930 systemd[1]: Started session-19.scope - Session 19 of User core.
May 14 18:16:03.800134 sshd[6061]: Connection closed by 10.200.16.10 port 40464
May 14 18:16:03.800503 sshd-session[6059]: pam_unix(sshd:session): session closed for user core
May 14 18:16:03.803230 systemd[1]: sshd@16-10.200.8.47:22-10.200.16.10:40464.service: Deactivated successfully.
May 14 18:16:03.804699 systemd[1]: session-19.scope: Deactivated successfully.
May 14 18:16:03.805392 systemd-logind[1686]: Session 19 logged out. Waiting for processes to exit.
May 14 18:16:03.806563 systemd-logind[1686]: Removed session 19.
May 14 18:16:03.911160 systemd[1]: Started sshd@17-10.200.8.47:22-10.200.16.10:40478.service - OpenSSH per-connection server daemon (10.200.16.10:40478).
May 14 18:16:04.546142 sshd[6071]: Accepted publickey for core from 10.200.16.10 port 40478 ssh2: RSA SHA256:Hd5PQzbog4WemQvfyzJrpsOqYyUj2ZNV4jY0cJYqkm4
May 14 18:16:04.547100 sshd-session[6071]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 18:16:04.550865 systemd-logind[1686]: New session 20 of user core.
May 14 18:16:04.556803 systemd[1]: Started session-20.scope - Session 20 of User core.
May 14 18:16:05.038950 sshd[6073]: Connection closed by 10.200.16.10 port 40478
May 14 18:16:05.039340 sshd-session[6071]: pam_unix(sshd:session): session closed for user core
May 14 18:16:05.041888 systemd[1]: sshd@17-10.200.8.47:22-10.200.16.10:40478.service: Deactivated successfully.
May 14 18:16:05.043479 systemd[1]: session-20.scope: Deactivated successfully.
May 14 18:16:05.044274 systemd-logind[1686]: Session 20 logged out. Waiting for processes to exit.
May 14 18:16:05.045273 systemd-logind[1686]: Removed session 20.
May 14 18:16:07.469135 containerd[1724]: time="2025-05-14T18:16:07.469094735Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a2406af6d9d87ea7927353462bc3dbc21b3d7dd811a1f8d83a21470bc33741b6\" id:\"6c1d5c56363e9c3cfd64d0b76226aa919c9cfdb0e7290a31286a20cb0e797104\" pid:6096 exited_at:{seconds:1747246567 nanos:468866538}"
May 14 18:16:10.162657 systemd[1]: Started sshd@18-10.200.8.47:22-10.200.16.10:52858.service - OpenSSH per-connection server daemon (10.200.16.10:52858).
May 14 18:16:10.858809 sshd[6110]: Accepted publickey for core from 10.200.16.10 port 52858 ssh2: RSA SHA256:Hd5PQzbog4WemQvfyzJrpsOqYyUj2ZNV4jY0cJYqkm4
May 14 18:16:10.859912 sshd-session[6110]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 18:16:10.863763 systemd-logind[1686]: New session 21 of user core.
May 14 18:16:10.869826 systemd[1]: Started session-21.scope - Session 21 of User core.
May 14 18:16:11.420471 sshd[6112]: Connection closed by 10.200.16.10 port 52858
May 14 18:16:11.420881 sshd-session[6110]: pam_unix(sshd:session): session closed for user core
May 14 18:16:11.423031 systemd[1]: sshd@18-10.200.8.47:22-10.200.16.10:52858.service: Deactivated successfully.
May 14 18:16:11.424717 systemd[1]: session-21.scope: Deactivated successfully.
May 14 18:16:11.426253 systemd-logind[1686]: Session 21 logged out. Waiting for processes to exit.
May 14 18:16:11.426982 systemd-logind[1686]: Removed session 21.
May 14 18:16:16.537859 systemd[1]: Started sshd@19-10.200.8.47:22-10.200.16.10:52860.service - OpenSSH per-connection server daemon (10.200.16.10:52860).
May 14 18:16:17.175025 sshd[6124]: Accepted publickey for core from 10.200.16.10 port 52860 ssh2: RSA SHA256:Hd5PQzbog4WemQvfyzJrpsOqYyUj2ZNV4jY0cJYqkm4
May 14 18:16:17.176254 sshd-session[6124]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 18:16:17.180382 systemd-logind[1686]: New session 22 of user core.
May 14 18:16:17.183845 systemd[1]: Started session-22.scope - Session 22 of User core.
May 14 18:16:17.683018 sshd[6126]: Connection closed by 10.200.16.10 port 52860
May 14 18:16:17.683440 sshd-session[6124]: pam_unix(sshd:session): session closed for user core
May 14 18:16:17.685897 systemd[1]: sshd@19-10.200.8.47:22-10.200.16.10:52860.service: Deactivated successfully.
May 14 18:16:17.687694 systemd[1]: session-22.scope: Deactivated successfully.
May 14 18:16:17.689329 systemd-logind[1686]: Session 22 logged out. Waiting for processes to exit.
May 14 18:16:17.690255 systemd-logind[1686]: Removed session 22.
May 14 18:16:19.484139 containerd[1724]: time="2025-05-14T18:16:19.484098767Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5836b1cb0dc0b33126e1607ff5cf8bddbee4bb0d4e96c9d906c9ea119bf2cc99\" id:\"6ba7be4f64aafc1bd82b7efacb376b4b4b4bfe488e878580c18dc45ce82dfd5e\" pid:6150 exited_at:{seconds:1747246579 nanos:483881090}"
May 14 18:16:22.800552 systemd[1]: Started sshd@20-10.200.8.47:22-10.200.16.10:60766.service - OpenSSH per-connection server daemon (10.200.16.10:60766).
May 14 18:16:23.443022 sshd[6162]: Accepted publickey for core from 10.200.16.10 port 60766 ssh2: RSA SHA256:Hd5PQzbog4WemQvfyzJrpsOqYyUj2ZNV4jY0cJYqkm4
May 14 18:16:23.444121 sshd-session[6162]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 18:16:23.447797 systemd-logind[1686]: New session 23 of user core.
May 14 18:16:23.450866 systemd[1]: Started session-23.scope - Session 23 of User core.
May 14 18:16:23.930615 sshd[6164]: Connection closed by 10.200.16.10 port 60766
May 14 18:16:23.931031 sshd-session[6162]: pam_unix(sshd:session): session closed for user core
May 14 18:16:23.933841 systemd[1]: sshd@20-10.200.8.47:22-10.200.16.10:60766.service: Deactivated successfully.
May 14 18:16:23.935612 systemd[1]: session-23.scope: Deactivated successfully.
May 14 18:16:23.936314 systemd-logind[1686]: Session 23 logged out. Waiting for processes to exit.
May 14 18:16:23.937374 systemd-logind[1686]: Removed session 23.
May 14 18:16:29.043441 systemd[1]: Started sshd@21-10.200.8.47:22-10.200.16.10:40796.service - OpenSSH per-connection server daemon (10.200.16.10:40796).
May 14 18:16:29.675780 sshd[6187]: Accepted publickey for core from 10.200.16.10 port 40796 ssh2: RSA SHA256:Hd5PQzbog4WemQvfyzJrpsOqYyUj2ZNV4jY0cJYqkm4
May 14 18:16:29.676845 sshd-session[6187]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 18:16:29.680267 systemd-logind[1686]: New session 24 of user core.
May 14 18:16:29.684831 systemd[1]: Started session-24.scope - Session 24 of User core.
May 14 18:16:30.167223 sshd[6189]: Connection closed by 10.200.16.10 port 40796
May 14 18:16:30.167591 sshd-session[6187]: pam_unix(sshd:session): session closed for user core
May 14 18:16:30.170212 systemd[1]: sshd@21-10.200.8.47:22-10.200.16.10:40796.service: Deactivated successfully.
May 14 18:16:30.171867 systemd[1]: session-24.scope: Deactivated successfully.
May 14 18:16:30.172488 systemd-logind[1686]: Session 24 logged out. Waiting for processes to exit.
May 14 18:16:30.173455 systemd-logind[1686]: Removed session 24.
May 14 18:16:35.284719 systemd[1]: Started sshd@22-10.200.8.47:22-10.200.16.10:40808.service - OpenSSH per-connection server daemon (10.200.16.10:40808).
May 14 18:16:35.916270 sshd[6207]: Accepted publickey for core from 10.200.16.10 port 40808 ssh2: RSA SHA256:Hd5PQzbog4WemQvfyzJrpsOqYyUj2ZNV4jY0cJYqkm4
May 14 18:16:35.917306 sshd-session[6207]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 18:16:35.921502 systemd-logind[1686]: New session 25 of user core.
May 14 18:16:35.925840 systemd[1]: Started session-25.scope - Session 25 of User core.
May 14 18:16:36.420783 sshd[6209]: Connection closed by 10.200.16.10 port 40808
May 14 18:16:36.421168 sshd-session[6207]: pam_unix(sshd:session): session closed for user core
May 14 18:16:36.423989 systemd[1]: sshd@22-10.200.8.47:22-10.200.16.10:40808.service: Deactivated successfully.
May 14 18:16:36.425816 systemd[1]: session-25.scope: Deactivated successfully.
May 14 18:16:36.426439 systemd-logind[1686]: Session 25 logged out. Waiting for processes to exit.
May 14 18:16:36.427506 systemd-logind[1686]: Removed session 25.
May 14 18:16:37.467150 containerd[1724]: time="2025-05-14T18:16:37.467086692Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a2406af6d9d87ea7927353462bc3dbc21b3d7dd811a1f8d83a21470bc33741b6\" id:\"37b91b18898ce6fa552795679ade5377254cdfa66025a2ecca3f98390c27a4cd\" pid:6238 exited_at:{seconds:1747246597 nanos:466803937}"
May 14 18:16:41.540476 systemd[1]: Started sshd@23-10.200.8.47:22-10.200.16.10:58244.service - OpenSSH per-connection server daemon (10.200.16.10:58244).
May 14 18:16:42.171405 sshd[6252]: Accepted publickey for core from 10.200.16.10 port 58244 ssh2: RSA SHA256:Hd5PQzbog4WemQvfyzJrpsOqYyUj2ZNV4jY0cJYqkm4
May 14 18:16:42.172580 sshd-session[6252]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 18:16:42.176572 systemd-logind[1686]: New session 26 of user core.
May 14 18:16:42.181816 systemd[1]: Started session-26.scope - Session 26 of User core.
May 14 18:16:42.669018 sshd[6265]: Connection closed by 10.200.16.10 port 58244
May 14 18:16:42.669429 sshd-session[6252]: pam_unix(sshd:session): session closed for user core
May 14 18:16:42.671552 systemd[1]: sshd@23-10.200.8.47:22-10.200.16.10:58244.service: Deactivated successfully.
May 14 18:16:42.673113 systemd[1]: session-26.scope: Deactivated successfully.
May 14 18:16:42.674664 systemd-logind[1686]: Session 26 logged out. Waiting for processes to exit.
May 14 18:16:42.675409 systemd-logind[1686]: Removed session 26.
May 14 18:16:47.780403 systemd[1]: Started sshd@24-10.200.8.47:22-10.200.16.10:58260.service - OpenSSH per-connection server daemon (10.200.16.10:58260).
May 14 18:16:48.414430 sshd[6278]: Accepted publickey for core from 10.200.16.10 port 58260 ssh2: RSA SHA256:Hd5PQzbog4WemQvfyzJrpsOqYyUj2ZNV4jY0cJYqkm4
May 14 18:16:48.415463 sshd-session[6278]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 18:16:48.419384 systemd-logind[1686]: New session 27 of user core.
May 14 18:16:48.423875 systemd[1]: Started session-27.scope - Session 27 of User core.
May 14 18:16:48.911290 sshd[6280]: Connection closed by 10.200.16.10 port 58260
May 14 18:16:48.911734 sshd-session[6278]: pam_unix(sshd:session): session closed for user core
May 14 18:16:48.914437 systemd[1]: sshd@24-10.200.8.47:22-10.200.16.10:58260.service: Deactivated successfully.
May 14 18:16:48.915926 systemd[1]: session-27.scope: Deactivated successfully.
May 14 18:16:48.916564 systemd-logind[1686]: Session 27 logged out. Waiting for processes to exit.
May 14 18:16:48.917622 systemd-logind[1686]: Removed session 27.
May 14 18:16:49.024100 systemd[1]: Started sshd@25-10.200.8.47:22-10.200.16.10:47134.service - OpenSSH per-connection server daemon (10.200.16.10:47134).
May 14 18:16:49.483169 containerd[1724]: time="2025-05-14T18:16:49.483120975Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5836b1cb0dc0b33126e1607ff5cf8bddbee4bb0d4e96c9d906c9ea119bf2cc99\" id:\"da740112938c24c10d09f4739b3f2022f91d1dffb408d2254f685505b5f0ea71\" pid:6307 exited_at:{seconds:1747246609 nanos:482925659}"
May 14 18:16:49.653041 sshd[6292]: Accepted publickey for core from 10.200.16.10 port 47134 ssh2: RSA SHA256:Hd5PQzbog4WemQvfyzJrpsOqYyUj2ZNV4jY0cJYqkm4
May 14 18:16:49.654011 sshd-session[6292]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 18:16:49.657765 systemd-logind[1686]: New session 28 of user core.
May 14 18:16:49.662131 systemd[1]: Started session-28.scope - Session 28 of User core.
May 14 18:16:50.188186 sshd[6316]: Connection closed by 10.200.16.10 port 47134
May 14 18:16:50.188609 sshd-session[6292]: pam_unix(sshd:session): session closed for user core
May 14 18:16:50.191887 systemd[1]: sshd@25-10.200.8.47:22-10.200.16.10:47134.service: Deactivated successfully.
May 14 18:16:50.193392 systemd[1]: session-28.scope: Deactivated successfully.
May 14 18:16:50.194174 systemd-logind[1686]: Session 28 logged out. Waiting for processes to exit.
May 14 18:16:50.195180 systemd-logind[1686]: Removed session 28.
May 14 18:16:50.303159 systemd[1]: Started sshd@26-10.200.8.47:22-10.200.16.10:47136.service - OpenSSH per-connection server daemon (10.200.16.10:47136).
May 14 18:16:50.933197 sshd[6326]: Accepted publickey for core from 10.200.16.10 port 47136 ssh2: RSA SHA256:Hd5PQzbog4WemQvfyzJrpsOqYyUj2ZNV4jY0cJYqkm4
May 14 18:16:50.934228 sshd-session[6326]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 18:16:50.938213 systemd-logind[1686]: New session 29 of user core.
May 14 18:16:50.940849 systemd[1]: Started session-29.scope - Session 29 of User core.
May 14 18:16:52.187321 sshd[6328]: Connection closed by 10.200.16.10 port 47136
May 14 18:16:52.187784 sshd-session[6326]: pam_unix(sshd:session): session closed for user core
May 14 18:16:52.191094 systemd[1]: sshd@26-10.200.8.47:22-10.200.16.10:47136.service: Deactivated successfully.
May 14 18:16:52.192836 systemd[1]: session-29.scope: Deactivated successfully.
May 14 18:16:52.193706 systemd-logind[1686]: Session 29 logged out. Waiting for processes to exit.
May 14 18:16:52.194768 systemd-logind[1686]: Removed session 29.
May 14 18:16:52.299235 systemd[1]: Started sshd@27-10.200.8.47:22-10.200.16.10:47152.service - OpenSSH per-connection server daemon (10.200.16.10:47152).
May 14 18:16:52.931305 sshd[6345]: Accepted publickey for core from 10.200.16.10 port 47152 ssh2: RSA SHA256:Hd5PQzbog4WemQvfyzJrpsOqYyUj2ZNV4jY0cJYqkm4
May 14 18:16:52.932320 sshd-session[6345]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 18:16:52.936195 systemd-logind[1686]: New session 30 of user core.
May 14 18:16:52.940826 systemd[1]: Started session-30.scope - Session 30 of User core.
May 14 18:16:53.495508 sshd[6347]: Connection closed by 10.200.16.10 port 47152
May 14 18:16:53.495920 sshd-session[6345]: pam_unix(sshd:session): session closed for user core
May 14 18:16:53.498719 systemd[1]: sshd@27-10.200.8.47:22-10.200.16.10:47152.service: Deactivated successfully.
May 14 18:16:53.500416 systemd[1]: session-30.scope: Deactivated successfully.
May 14 18:16:53.501069 systemd-logind[1686]: Session 30 logged out. Waiting for processes to exit.
May 14 18:16:53.502573 systemd-logind[1686]: Removed session 30.
May 14 18:16:53.607067 systemd[1]: Started sshd@28-10.200.8.47:22-10.200.16.10:47168.service - OpenSSH per-connection server daemon (10.200.16.10:47168).
May 14 18:16:54.237087 sshd[6357]: Accepted publickey for core from 10.200.16.10 port 47168 ssh2: RSA SHA256:Hd5PQzbog4WemQvfyzJrpsOqYyUj2ZNV4jY0cJYqkm4
May 14 18:16:54.238149 sshd-session[6357]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 18:16:54.242408 systemd-logind[1686]: New session 31 of user core.
May 14 18:16:54.248822 systemd[1]: Started session-31.scope - Session 31 of User core.
May 14 18:16:54.730347 sshd[6359]: Connection closed by 10.200.16.10 port 47168
May 14 18:16:54.730739 sshd-session[6357]: pam_unix(sshd:session): session closed for user core
May 14 18:16:54.733292 systemd[1]: sshd@28-10.200.8.47:22-10.200.16.10:47168.service: Deactivated successfully.
May 14 18:16:54.734953 systemd[1]: session-31.scope: Deactivated successfully.
May 14 18:16:54.735643 systemd-logind[1686]: Session 31 logged out. Waiting for processes to exit.
May 14 18:16:54.736775 systemd-logind[1686]: Removed session 31.
May 14 18:16:59.845822 systemd[1]: Started sshd@29-10.200.8.47:22-10.200.16.10:48116.service - OpenSSH per-connection server daemon (10.200.16.10:48116).
May 14 18:17:00.483515 sshd[6373]: Accepted publickey for core from 10.200.16.10 port 48116 ssh2: RSA SHA256:Hd5PQzbog4WemQvfyzJrpsOqYyUj2ZNV4jY0cJYqkm4
May 14 18:17:00.484726 sshd-session[6373]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 18:17:00.488118 systemd-logind[1686]: New session 32 of user core.
May 14 18:17:00.494896 systemd[1]: Started session-32.scope - Session 32 of User core.
May 14 18:17:00.971639 sshd[6375]: Connection closed by 10.200.16.10 port 48116
May 14 18:17:00.972063 sshd-session[6373]: pam_unix(sshd:session): session closed for user core
May 14 18:17:00.974344 systemd[1]: sshd@29-10.200.8.47:22-10.200.16.10:48116.service: Deactivated successfully.
May 14 18:17:00.975955 systemd[1]: session-32.scope: Deactivated successfully.
May 14 18:17:00.977070 systemd-logind[1686]: Session 32 logged out. Waiting for processes to exit.
May 14 18:17:00.978230 systemd-logind[1686]: Removed session 32.
May 14 18:17:01.672718 containerd[1724]: time="2025-05-14T18:17:01.672436323Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5836b1cb0dc0b33126e1607ff5cf8bddbee4bb0d4e96c9d906c9ea119bf2cc99\" id:\"0a6ca38c915db98ece2dd95543b6cf1fce80d5166e57c56bdbab82d3168bfa5a\" pid:6397 exited_at:{seconds:1747246621 nanos:672253355}"
May 14 18:17:06.095065 systemd[1]: Started sshd@30-10.200.8.47:22-10.200.16.10:48124.service - OpenSSH per-connection server daemon (10.200.16.10:48124).
May 14 18:17:06.727902 sshd[6407]: Accepted publickey for core from 10.200.16.10 port 48124 ssh2: RSA SHA256:Hd5PQzbog4WemQvfyzJrpsOqYyUj2ZNV4jY0cJYqkm4
May 14 18:17:06.728941 sshd-session[6407]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 18:17:06.732830 systemd-logind[1686]: New session 33 of user core.
May 14 18:17:06.736890 systemd[1]: Started session-33.scope - Session 33 of User core.
May 14 18:17:07.217823 sshd[6409]: Connection closed by 10.200.16.10 port 48124
May 14 18:17:07.218729 sshd-session[6407]: pam_unix(sshd:session): session closed for user core
May 14 18:17:07.222299 systemd[1]: sshd@30-10.200.8.47:22-10.200.16.10:48124.service: Deactivated successfully.
May 14 18:17:07.223937 systemd[1]: session-33.scope: Deactivated successfully.
May 14 18:17:07.224573 systemd-logind[1686]: Session 33 logged out. Waiting for processes to exit.
May 14 18:17:07.226010 systemd-logind[1686]: Removed session 33.
May 14 18:17:07.474316 containerd[1724]: time="2025-05-14T18:17:07.474226301Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a2406af6d9d87ea7927353462bc3dbc21b3d7dd811a1f8d83a21470bc33741b6\" id:\"8a7dba0a24dc7b77ce7e3ebb6dcdcb23b1c0be26fd373e3ad7c4f1dc52312213\" pid:6432 exited_at:{seconds:1747246627 nanos:473837753}"
May 14 18:17:12.330455 systemd[1]: Started sshd@31-10.200.8.47:22-10.200.16.10:51848.service - OpenSSH per-connection server daemon (10.200.16.10:51848).
May 14 18:17:12.960226 sshd[6445]: Accepted publickey for core from 10.200.16.10 port 51848 ssh2: RSA SHA256:Hd5PQzbog4WemQvfyzJrpsOqYyUj2ZNV4jY0cJYqkm4
May 14 18:17:12.961314 sshd-session[6445]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 18:17:12.965068 systemd-logind[1686]: New session 34 of user core.
May 14 18:17:12.971821 systemd[1]: Started session-34.scope - Session 34 of User core.
May 14 18:17:13.452862 sshd[6447]: Connection closed by 10.200.16.10 port 51848
May 14 18:17:13.453291 sshd-session[6445]: pam_unix(sshd:session): session closed for user core
May 14 18:17:13.455746 systemd[1]: sshd@31-10.200.8.47:22-10.200.16.10:51848.service: Deactivated successfully.
May 14 18:17:13.457350 systemd[1]: session-34.scope: Deactivated successfully.
May 14 18:17:13.458598 systemd-logind[1686]: Session 34 logged out. Waiting for processes to exit.
May 14 18:17:13.459978 systemd-logind[1686]: Removed session 34.
May 14 18:17:18.572547 systemd[1]: Started sshd@32-10.200.8.47:22-10.200.16.10:41610.service - OpenSSH per-connection server daemon (10.200.16.10:41610).
May 14 18:17:19.208137 sshd[6460]: Accepted publickey for core from 10.200.16.10 port 41610 ssh2: RSA SHA256:Hd5PQzbog4WemQvfyzJrpsOqYyUj2ZNV4jY0cJYqkm4
May 14 18:17:19.209191 sshd-session[6460]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 18:17:19.213173 systemd-logind[1686]: New session 35 of user core.
May 14 18:17:19.221842 systemd[1]: Started session-35.scope - Session 35 of User core.
May 14 18:17:19.484889 containerd[1724]: time="2025-05-14T18:17:19.484823701Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5836b1cb0dc0b33126e1607ff5cf8bddbee4bb0d4e96c9d906c9ea119bf2cc99\" id:\"16abc218c25a70d9c51a73e7a8791ac6c7c62e237633d1ea161e9e48fc61575b\" pid:6475 exited_at:{seconds:1747246639 nanos:484612770}"
May 14 18:17:19.699968 sshd[6462]: Connection closed by 10.200.16.10 port 41610
May 14 18:17:19.700377 sshd-session[6460]: pam_unix(sshd:session): session closed for user core
May 14 18:17:19.702924 systemd[1]: sshd@32-10.200.8.47:22-10.200.16.10:41610.service: Deactivated successfully.
May 14 18:17:19.704526 systemd[1]: session-35.scope: Deactivated successfully.
May 14 18:17:19.705156 systemd-logind[1686]: Session 35 logged out. Waiting for processes to exit.
May 14 18:17:19.706298 systemd-logind[1686]: Removed session 35.
May 14 18:17:24.825774 systemd[1]: Started sshd@33-10.200.8.47:22-10.200.16.10:41614.service - OpenSSH per-connection server daemon (10.200.16.10:41614).
May 14 18:17:25.454159 sshd[6497]: Accepted publickey for core from 10.200.16.10 port 41614 ssh2: RSA SHA256:Hd5PQzbog4WemQvfyzJrpsOqYyUj2ZNV4jY0cJYqkm4
May 14 18:17:25.455191 sshd-session[6497]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 18:17:25.458714 systemd-logind[1686]: New session 36 of user core.
May 14 18:17:25.465849 systemd[1]: Started session-36.scope - Session 36 of User core.
May 14 18:17:25.955931 sshd[6501]: Connection closed by 10.200.16.10 port 41614
May 14 18:17:25.956297 sshd-session[6497]: pam_unix(sshd:session): session closed for user core
May 14 18:17:25.959076 systemd[1]: sshd@33-10.200.8.47:22-10.200.16.10:41614.service: Deactivated successfully.
May 14 18:17:25.960747 systemd[1]: session-36.scope: Deactivated successfully.
May 14 18:17:25.961402 systemd-logind[1686]: Session 36 logged out. Waiting for processes to exit.
May 14 18:17:25.962441 systemd-logind[1686]: Removed session 36.
May 14 18:17:31.071569 systemd[1]: Started sshd@34-10.200.8.47:22-10.200.16.10:43432.service - OpenSSH per-connection server daemon (10.200.16.10:43432).
May 14 18:17:31.703838 sshd[6513]: Accepted publickey for core from 10.200.16.10 port 43432 ssh2: RSA SHA256:Hd5PQzbog4WemQvfyzJrpsOqYyUj2ZNV4jY0cJYqkm4
May 14 18:17:31.705755 sshd-session[6513]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 18:17:31.710512 systemd-logind[1686]: New session 37 of user core.
May 14 18:17:31.718811 systemd[1]: Started session-37.scope - Session 37 of User core.
May 14 18:17:32.197758 sshd[6515]: Connection closed by 10.200.16.10 port 43432
May 14 18:17:32.198136 sshd-session[6513]: pam_unix(sshd:session): session closed for user core
May 14 18:17:32.200780 systemd[1]: sshd@34-10.200.8.47:22-10.200.16.10:43432.service: Deactivated successfully.
May 14 18:17:32.202393 systemd[1]: session-37.scope: Deactivated successfully.
May 14 18:17:32.203096 systemd-logind[1686]: Session 37 logged out. Waiting for processes to exit.
May 14 18:17:32.204205 systemd-logind[1686]: Removed session 37.
May 14 18:17:37.313620 systemd[1]: Started sshd@35-10.200.8.47:22-10.200.16.10:43434.service - OpenSSH per-connection server daemon (10.200.16.10:43434).
May 14 18:17:37.467608 containerd[1724]: time="2025-05-14T18:17:37.467568080Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a2406af6d9d87ea7927353462bc3dbc21b3d7dd811a1f8d83a21470bc33741b6\" id:\"290614c6db6b8364adc4006d15e4e61c37640729444b76831aada15a593c684a\" pid:6544 exited_at:{seconds:1747246657 nanos:467272792}"
May 14 18:17:37.943429 sshd[6529]: Accepted publickey for core from 10.200.16.10 port 43434 ssh2: RSA SHA256:Hd5PQzbog4WemQvfyzJrpsOqYyUj2ZNV4jY0cJYqkm4
May 14 18:17:37.944633 sshd-session[6529]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 18:17:37.948620 systemd-logind[1686]: New session 38 of user core.
May 14 18:17:37.951835 systemd[1]: Started session-38.scope - Session 38 of User core.
May 14 18:17:38.446422 sshd[6556]: Connection closed by 10.200.16.10 port 43434
May 14 18:17:38.446840 sshd-session[6529]: pam_unix(sshd:session): session closed for user core
May 14 18:17:38.449801 systemd[1]: sshd@35-10.200.8.47:22-10.200.16.10:43434.service: Deactivated successfully.
May 14 18:17:38.451486 systemd[1]: session-38.scope: Deactivated successfully.
May 14 18:17:38.452258 systemd-logind[1686]: Session 38 logged out. Waiting for processes to exit.
May 14 18:17:38.453220 systemd-logind[1686]: Removed session 38.
May 14 18:17:43.564345 systemd[1]: Started sshd@36-10.200.8.47:22-10.200.16.10:47986.service - OpenSSH per-connection server daemon (10.200.16.10:47986).
May 14 18:17:44.197915 sshd[6569]: Accepted publickey for core from 10.200.16.10 port 47986 ssh2: RSA SHA256:Hd5PQzbog4WemQvfyzJrpsOqYyUj2ZNV4jY0cJYqkm4
May 14 18:17:44.199013 sshd-session[6569]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 18:17:44.202998 systemd-logind[1686]: New session 39 of user core.
May 14 18:17:44.212828 systemd[1]: Started session-39.scope - Session 39 of User core.
May 14 18:17:44.685618 sshd[6571]: Connection closed by 10.200.16.10 port 47986
May 14 18:17:44.686027 sshd-session[6569]: pam_unix(sshd:session): session closed for user core
May 14 18:17:44.688054 systemd[1]: sshd@36-10.200.8.47:22-10.200.16.10:47986.service: Deactivated successfully.
May 14 18:17:44.689890 systemd[1]: session-39.scope: Deactivated successfully.
May 14 18:17:44.691596 systemd-logind[1686]: Session 39 logged out. Waiting for processes to exit.
May 14 18:17:44.692417 systemd-logind[1686]: Removed session 39.
May 14 18:17:49.484644 containerd[1724]: time="2025-05-14T18:17:49.484583697Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5836b1cb0dc0b33126e1607ff5cf8bddbee4bb0d4e96c9d906c9ea119bf2cc99\" id:\"a0ed63b852afeaf34862b946ea983b2fe9a9e343b5b39c7e2774dd1807c26bae\" pid:6601 exited_at:{seconds:1747246669 nanos:484326207}"
May 14 18:17:49.801720 systemd[1]: Started sshd@37-10.200.8.47:22-10.200.16.10:34944.service - OpenSSH per-connection server daemon (10.200.16.10:34944).
May 14 18:17:50.433599 sshd[6611]: Accepted publickey for core from 10.200.16.10 port 34944 ssh2: RSA SHA256:Hd5PQzbog4WemQvfyzJrpsOqYyUj2ZNV4jY0cJYqkm4
May 14 18:17:50.434895 sshd-session[6611]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 18:17:50.438879 systemd-logind[1686]: New session 40 of user core.
May 14 18:17:50.442854 systemd[1]: Started session-40.scope - Session 40 of User core.
May 14 18:17:50.921516 sshd[6613]: Connection closed by 10.200.16.10 port 34944
May 14 18:17:50.921940 sshd-session[6611]: pam_unix(sshd:session): session closed for user core
May 14 18:17:50.924151 systemd[1]: sshd@37-10.200.8.47:22-10.200.16.10:34944.service: Deactivated successfully.
May 14 18:17:50.925854 systemd[1]: session-40.scope: Deactivated successfully.
May 14 18:17:50.926955 systemd-logind[1686]: Session 40 logged out. Waiting for processes to exit.
May 14 18:17:50.927882 systemd-logind[1686]: Removed session 40.
May 14 18:17:56.034853 systemd[1]: Started sshd@38-10.200.8.47:22-10.200.16.10:34958.service - OpenSSH per-connection server daemon (10.200.16.10:34958).
May 14 18:17:56.666133 sshd[6628]: Accepted publickey for core from 10.200.16.10 port 34958 ssh2: RSA SHA256:Hd5PQzbog4WemQvfyzJrpsOqYyUj2ZNV4jY0cJYqkm4
May 14 18:17:56.667193 sshd-session[6628]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 18:17:56.671259 systemd-logind[1686]: New session 41 of user core.
May 14 18:17:56.676826 systemd[1]: Started session-41.scope - Session 41 of User core.
May 14 18:17:57.166838 sshd[6630]: Connection closed by 10.200.16.10 port 34958
May 14 18:17:57.167260 sshd-session[6628]: pam_unix(sshd:session): session closed for user core
May 14 18:17:57.170088 systemd[1]: sshd@38-10.200.8.47:22-10.200.16.10:34958.service: Deactivated successfully.
May 14 18:17:57.171588 systemd[1]: session-41.scope: Deactivated successfully.
May 14 18:17:57.172238 systemd-logind[1686]: Session 41 logged out. Waiting for processes to exit.
May 14 18:17:57.173380 systemd-logind[1686]: Removed session 41.
May 14 18:18:01.666317 containerd[1724]: time="2025-05-14T18:18:01.666276161Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5836b1cb0dc0b33126e1607ff5cf8bddbee4bb0d4e96c9d906c9ea119bf2cc99\" id:\"874f4ed79838138059ec96a8e520c1ef19966ca559abbef5c17da8665ae7fad6\" pid:6654 exited_at:{seconds:1747246681 nanos:666057236}"
May 14 18:18:02.282517 systemd[1]: Started sshd@39-10.200.8.47:22-10.200.16.10:45570.service - OpenSSH per-connection server daemon (10.200.16.10:45570).
May 14 18:18:02.920938 sshd[6663]: Accepted publickey for core from 10.200.16.10 port 45570 ssh2: RSA SHA256:Hd5PQzbog4WemQvfyzJrpsOqYyUj2ZNV4jY0cJYqkm4
May 14 18:18:02.921905 sshd-session[6663]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 18:18:02.925771 systemd-logind[1686]: New session 42 of user core.
May 14 18:18:02.931816 systemd[1]: Started session-42.scope - Session 42 of User core.
May 14 18:18:03.413045 sshd[6665]: Connection closed by 10.200.16.10 port 45570
May 14 18:18:03.413465 sshd-session[6663]: pam_unix(sshd:session): session closed for user core
May 14 18:18:03.416279 systemd[1]: sshd@39-10.200.8.47:22-10.200.16.10:45570.service: Deactivated successfully.
May 14 18:18:03.417814 systemd[1]: session-42.scope: Deactivated successfully.
May 14 18:18:03.418444 systemd-logind[1686]: Session 42 logged out. Waiting for processes to exit.
May 14 18:18:03.419548 systemd-logind[1686]: Removed session 42.
May 14 18:18:07.467387 containerd[1724]: time="2025-05-14T18:18:07.467338418Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a2406af6d9d87ea7927353462bc3dbc21b3d7dd811a1f8d83a21470bc33741b6\" id:\"1c2efc452b6e38fc0ba739f84bb18d53fe23826756ad8ff87e0ee41688a437c6\" pid:6689 exited_at:{seconds:1747246687 nanos:467075292}"
May 14 18:18:08.529459 systemd[1]: Started sshd@40-10.200.8.47:22-10.200.16.10:42840.service - OpenSSH per-connection server daemon (10.200.16.10:42840).
May 14 18:18:09.168699 sshd[6703]: Accepted publickey for core from 10.200.16.10 port 42840 ssh2: RSA SHA256:Hd5PQzbog4WemQvfyzJrpsOqYyUj2ZNV4jY0cJYqkm4
May 14 18:18:09.169886 sshd-session[6703]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 18:18:09.173754 systemd-logind[1686]: New session 43 of user core.
May 14 18:18:09.177834 systemd[1]: Started session-43.scope - Session 43 of User core.
May 14 18:18:09.672236 sshd[6705]: Connection closed by 10.200.16.10 port 42840
May 14 18:18:09.672661 sshd-session[6703]: pam_unix(sshd:session): session closed for user core
May 14 18:18:09.674879 systemd[1]: sshd@40-10.200.8.47:22-10.200.16.10:42840.service: Deactivated successfully.
May 14 18:18:09.676602 systemd[1]: session-43.scope: Deactivated successfully.
May 14 18:18:09.678270 systemd-logind[1686]: Session 43 logged out. Waiting for processes to exit.
May 14 18:18:09.679075 systemd-logind[1686]: Removed session 43.
May 14 18:18:11.828784 containerd[1724]: time="2025-05-14T18:18:11.828559470Z" level=warning msg="container event discarded" container=94dfd200458e1ca703a10b7561d15d84474e286e3b2fc6b3156be887df8f6ab4 type=CONTAINER_CREATED_EVENT
May 14 18:18:11.839807 containerd[1724]: time="2025-05-14T18:18:11.839773983Z" level=warning msg="container event discarded" container=94dfd200458e1ca703a10b7561d15d84474e286e3b2fc6b3156be887df8f6ab4 type=CONTAINER_STARTED_EVENT
May 14 18:18:11.993123 containerd[1724]: time="2025-05-14T18:18:11.993078437Z" level=warning msg="container event discarded" container=384c6081e5d58d4d9155e455b7dd424cb900d3287a1d0a80e3fde098a92d7c24 type=CONTAINER_CREATED_EVENT
May 14 18:18:11.993123 containerd[1724]: time="2025-05-14T18:18:11.993119107Z" level=warning msg="container event discarded" container=384c6081e5d58d4d9155e455b7dd424cb900d3287a1d0a80e3fde098a92d7c24 type=CONTAINER_STARTED_EVENT
May 14 18:18:12.245420 containerd[1724]: time="2025-05-14T18:18:12.245296368Z" level=warning msg="container event discarded" container=01e181feca02571df8984539a6d61b64c4c90d6d99d32a9f8940b67b0d79acb0 type=CONTAINER_CREATED_EVENT
May 14 18:18:12.245420 containerd[1724]: time="2025-05-14T18:18:12.245349797Z" level=warning msg="container event discarded" container=01e181feca02571df8984539a6d61b64c4c90d6d99d32a9f8940b67b0d79acb0 type=CONTAINER_STARTED_EVENT
May 14 18:18:12.782146 containerd[1724]: time="2025-05-14T18:18:12.782070355Z" level=warning msg="container event discarded" container=ff99e5cc01a517e3df8a53d91eb8fa6e4ea740aaaedb642704c49f55f3bc6055 type=CONTAINER_CREATED_EVENT
May 14 18:18:12.850152 containerd[1724]: time="2025-05-14T18:18:12.850069473Z" level=warning msg="container event discarded" container=5aff4a2f79283263230b14076692f2feb0a57b77d962e43cdd7975c01ea1747d type=CONTAINER_CREATED_EVENT
May 14 18:18:12.988448 containerd[1724]: time="2025-05-14T18:18:12.988398834Z" level=warning msg="container event discarded" container=ff99e5cc01a517e3df8a53d91eb8fa6e4ea740aaaedb642704c49f55f3bc6055 type=CONTAINER_STARTED_EVENT
May 14 18:18:12.988448 containerd[1724]: time="2025-05-14T18:18:12.988443025Z" level=warning msg="container event discarded" container=5aff4a2f79283263230b14076692f2feb0a57b77d962e43cdd7975c01ea1747d type=CONTAINER_STARTED_EVENT
May 14 18:18:13.038692 containerd[1724]: time="2025-05-14T18:18:13.038606821Z" level=warning msg="container event discarded" container=9ae98f1ba950c7eb270ecbad1aa93fd09e29af9601b920ebd25a285e9fb1670b type=CONTAINER_CREATED_EVENT
May 14 18:18:13.176867 containerd[1724]: time="2025-05-14T18:18:13.176820957Z" level=warning msg="container event discarded" container=9ae98f1ba950c7eb270ecbad1aa93fd09e29af9601b920ebd25a285e9fb1670b type=CONTAINER_STARTED_EVENT
May 14 18:18:14.787708 systemd[1]: Started sshd@41-10.200.8.47:22-10.200.16.10:42852.service - OpenSSH per-connection server daemon (10.200.16.10:42852).
May 14 18:18:15.419231 sshd[6733]: Accepted publickey for core from 10.200.16.10 port 42852 ssh2: RSA SHA256:Hd5PQzbog4WemQvfyzJrpsOqYyUj2ZNV4jY0cJYqkm4
May 14 18:18:15.420358 sshd-session[6733]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 18:18:15.424345 systemd-logind[1686]: New session 44 of user core.
May 14 18:18:15.425842 systemd[1]: Started session-44.scope - Session 44 of User core.
May 14 18:18:15.908398 sshd[6735]: Connection closed by 10.200.16.10 port 42852
May 14 18:18:15.908879 sshd-session[6733]: pam_unix(sshd:session): session closed for user core
May 14 18:18:15.911314 systemd[1]: sshd@41-10.200.8.47:22-10.200.16.10:42852.service: Deactivated successfully.
May 14 18:18:15.912878 systemd[1]: session-44.scope: Deactivated successfully.
May 14 18:18:15.914162 systemd-logind[1686]: Session 44 logged out. Waiting for processes to exit.
May 14 18:18:15.915471 systemd-logind[1686]: Removed session 44.
May 14 18:18:19.488439 containerd[1724]: time="2025-05-14T18:18:19.488391066Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5836b1cb0dc0b33126e1607ff5cf8bddbee4bb0d4e96c9d906c9ea119bf2cc99\" id:\"3b57b1ce14fe2055ffd3c3af26ab3af91d9b65f8630c97af8911ab63d2914751\" pid:6758 exited_at:{seconds:1747246699 nanos:488197721}"
May 14 18:18:21.021875 systemd[1]: Started sshd@42-10.200.8.47:22-10.200.16.10:37792.service - OpenSSH per-connection server daemon (10.200.16.10:37792).
May 14 18:18:21.656067 sshd[6769]: Accepted publickey for core from 10.200.16.10 port 37792 ssh2: RSA SHA256:Hd5PQzbog4WemQvfyzJrpsOqYyUj2ZNV4jY0cJYqkm4
May 14 18:18:21.657098 sshd-session[6769]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 18:18:21.660932 systemd-logind[1686]: New session 45 of user core.
May 14 18:18:21.664826 systemd[1]: Started session-45.scope - Session 45 of User core.
May 14 18:18:22.144884 sshd[6773]: Connection closed by 10.200.16.10 port 37792
May 14 18:18:22.145307 sshd-session[6769]: pam_unix(sshd:session): session closed for user core
May 14 18:18:22.148253 systemd[1]: sshd@42-10.200.8.47:22-10.200.16.10:37792.service: Deactivated successfully.
May 14 18:18:22.150061 systemd[1]: session-45.scope: Deactivated successfully.
May 14 18:18:22.150861 systemd-logind[1686]: Session 45 logged out. Waiting for processes to exit.
May 14 18:18:22.151891 systemd-logind[1686]: Removed session 45.
May 14 18:18:24.814132 containerd[1724]: time="2025-05-14T18:18:24.814068476Z" level=warning msg="container event discarded" container=4824c80ec767c45161bb54d0c897fa6639bde09b461a36da96b1b58a87447034 type=CONTAINER_CREATED_EVENT
May 14 18:18:24.814132 containerd[1724]: time="2025-05-14T18:18:24.814120989Z" level=warning msg="container event discarded" container=4824c80ec767c45161bb54d0c897fa6639bde09b461a36da96b1b58a87447034 type=CONTAINER_STARTED_EVENT
May 14 18:18:25.294215 containerd[1724]: time="2025-05-14T18:18:25.294136637Z" level=warning msg="container event discarded" container=a97c9ed9fa254d9feaf0009a179256e87ff50e3e7a3b67ca4e3172667e10b89e type=CONTAINER_CREATED_EVENT
May 14 18:18:25.595813 containerd[1724]: time="2025-05-14T18:18:25.595707359Z" level=warning msg="container event discarded" container=a97c9ed9fa254d9feaf0009a179256e87ff50e3e7a3b67ca4e3172667e10b89e type=CONTAINER_STARTED_EVENT
May 14 18:18:26.303078 containerd[1724]: time="2025-05-14T18:18:26.303036838Z" level=warning msg="container event discarded" container=940ece7ba8f24d26882df59a666462c63a4291ce0ff8d1c3bac7acd7f85700c5 type=CONTAINER_CREATED_EVENT
May 14 18:18:26.303078 containerd[1724]: time="2025-05-14T18:18:26.303067060Z" level=warning msg="container event discarded" container=940ece7ba8f24d26882df59a666462c63a4291ce0ff8d1c3bac7acd7f85700c5 type=CONTAINER_STARTED_EVENT