Oct 13 05:34:48.842583 kernel: Linux version 6.12.51-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Mon Oct 13 03:31:29 -00 2025 Oct 13 05:34:48.842612 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=4919840803704517a91afcb9d57d99e9935244ff049349c54216d9a31bc1da5d Oct 13 05:34:48.842623 kernel: BIOS-provided physical RAM map: Oct 13 05:34:48.842631 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Oct 13 05:34:48.842638 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved Oct 13 05:34:48.842645 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000044fdfff] usable Oct 13 05:34:48.842656 kernel: BIOS-e820: [mem 0x00000000044fe000-0x00000000048fdfff] reserved Oct 13 05:34:48.842664 kernel: BIOS-e820: [mem 0x00000000048fe000-0x000000003ff1efff] usable Oct 13 05:34:48.842671 kernel: BIOS-e820: [mem 0x000000003ff1f000-0x000000003ffc8fff] reserved Oct 13 05:34:48.842679 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data Oct 13 05:34:48.842687 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS Oct 13 05:34:48.842694 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable Oct 13 05:34:48.842702 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable Oct 13 05:34:48.842709 kernel: printk: legacy bootconsole [earlyser0] enabled Oct 13 05:34:48.842720 kernel: NX (Execute Disable) protection: active Oct 13 05:34:48.842728 kernel: APIC: Static calls initialized Oct 13 05:34:48.842735 kernel: efi: EFI v2.7 by Microsoft Oct 13 05:34:48.842743 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff88000 SMBIOS 3.0=0x3ff86000 MEMATTR=0x3f437518 RNG=0x3ffd2018 Oct 13 05:34:48.842751 kernel: random: crng init done Oct 13 05:34:48.842759 kernel: secureboot: Secure boot disabled Oct 13 05:34:48.842769 kernel: SMBIOS 3.1.0 present. 
Oct 13 05:34:48.842777 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 01/28/2025 Oct 13 05:34:48.842785 kernel: DMI: Memory slots populated: 2/2 Oct 13 05:34:48.842793 kernel: Hypervisor detected: Microsoft Hyper-V Oct 13 05:34:48.842801 kernel: Hyper-V: privilege flags low 0xae7f, high 0x3b8030, hints 0x9e4e24, misc 0xe0bed7b2 Oct 13 05:34:48.842809 kernel: Hyper-V: Nested features: 0x3e0101 Oct 13 05:34:48.842816 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40 Oct 13 05:34:48.842824 kernel: Hyper-V: Using hypercall for remote TLB flush Oct 13 05:34:48.842831 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Oct 13 05:34:48.842839 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Oct 13 05:34:48.842846 kernel: tsc: Detected 2299.998 MHz processor Oct 13 05:34:48.842856 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Oct 13 05:34:48.842865 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Oct 13 05:34:48.842873 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x10000000000 Oct 13 05:34:48.842882 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Oct 13 05:34:48.842891 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Oct 13 05:34:48.842900 kernel: e820: update [mem 0x48000000-0xffffffff] usable ==> reserved Oct 13 05:34:48.842908 kernel: last_pfn = 0x40000 max_arch_pfn = 0x10000000000 Oct 13 05:34:48.842918 kernel: Using GB pages for direct mapping Oct 13 05:34:48.842926 kernel: ACPI: Early table checksum verification disabled Oct 13 05:34:48.842938 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL) Oct 13 05:34:48.842946 kernel: ACPI: XSDT 0x000000003FFF90E8 00005C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Oct 13 05:34:48.842955 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Oct 13 05:34:48.842965 kernel: ACPI: DSDT 0x000000003FFD6000 01E27A (v02 MSFTVM DSDT01 00000001 INTL 20230628) Oct 13 05:34:48.842974 kernel: ACPI: FACS 0x000000003FFFE000 000040 Oct 13 05:34:48.842983 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Oct 13 05:34:48.842992 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Oct 13 05:34:48.843001 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Oct 13 05:34:48.843010 kernel: ACPI: APIC 0x000000003FFD5000 000052 (v05 HVLITE HVLITETB 00000000 MSHV 00000000) Oct 13 05:34:48.843021 kernel: ACPI: SRAT 0x000000003FFD4000 0000A0 (v03 HVLITE HVLITETB 00000000 MSHV 00000000) Oct 13 05:34:48.843029 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Oct 13 05:34:48.843037 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113] Oct 13 05:34:48.843046 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4279] Oct 13 05:34:48.843055 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f] Oct 13 05:34:48.843063 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063] Oct 13 05:34:48.843071 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f] Oct 13 05:34:48.843082 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027] Oct 13 05:34:48.843090 kernel: ACPI: Reserving APIC table memory at [mem 
0x3ffd5000-0x3ffd5051] Oct 13 05:34:48.843099 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd409f] Oct 13 05:34:48.843108 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037] Oct 13 05:34:48.843116 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] Oct 13 05:34:48.843125 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] Oct 13 05:34:48.843134 kernel: NUMA: Node 0 [mem 0x00001000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00001000-0x2bfffffff] Oct 13 05:34:48.843144 kernel: NODE_DATA(0) allocated [mem 0x2bfff8dc0-0x2bfffffff] Oct 13 05:34:48.843153 kernel: Zone ranges: Oct 13 05:34:48.843161 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Oct 13 05:34:48.843169 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Oct 13 05:34:48.843178 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff] Oct 13 05:34:48.843187 kernel: Device empty Oct 13 05:34:48.843195 kernel: Movable zone start for each node Oct 13 05:34:48.843224 kernel: Early memory node ranges Oct 13 05:34:48.843233 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Oct 13 05:34:48.843242 kernel: node 0: [mem 0x0000000000100000-0x00000000044fdfff] Oct 13 05:34:48.843250 kernel: node 0: [mem 0x00000000048fe000-0x000000003ff1efff] Oct 13 05:34:48.843258 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff] Oct 13 05:34:48.843267 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff] Oct 13 05:34:48.843276 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff] Oct 13 05:34:48.843284 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Oct 13 05:34:48.843295 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Oct 13 05:34:48.843304 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges Oct 13 05:34:48.843313 kernel: On node 0, zone DMA32: 224 pages in unavailable ranges Oct 13 05:34:48.843321 kernel: ACPI: PM-Timer IO Port: 0x408 Oct 13 05:34:48.843329 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Oct 13 05:34:48.843338 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Oct 13 05:34:48.843346 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Oct 13 05:34:48.843357 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200 Oct 13 05:34:48.843366 kernel: TSC deadline timer available Oct 13 05:34:48.843374 kernel: CPU topo: Max. logical packages: 1 Oct 13 05:34:48.843383 kernel: CPU topo: Max. logical dies: 1 Oct 13 05:34:48.843398 kernel: CPU topo: Max. dies per package: 1 Oct 13 05:34:48.843407 kernel: CPU topo: Max. threads per core: 2 Oct 13 05:34:48.843415 kernel: CPU topo: Num. cores per package: 1 Oct 13 05:34:48.843425 kernel: CPU topo: Num. 
threads per package: 2 Oct 13 05:34:48.843434 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs Oct 13 05:34:48.843442 kernel: [mem 0x40000000-0xffffffff] available for PCI devices Oct 13 05:34:48.843451 kernel: Booting paravirtualized kernel on Hyper-V Oct 13 05:34:48.843459 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Oct 13 05:34:48.843469 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Oct 13 05:34:48.843478 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576 Oct 13 05:34:48.843488 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152 Oct 13 05:34:48.843497 kernel: pcpu-alloc: [0] 0 1 Oct 13 05:34:48.843505 kernel: Hyper-V: PV spinlocks enabled Oct 13 05:34:48.843514 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Oct 13 05:34:48.843524 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=4919840803704517a91afcb9d57d99e9935244ff049349c54216d9a31bc1da5d Oct 13 05:34:48.843533 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Oct 13 05:34:48.843543 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Oct 13 05:34:48.843552 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Oct 13 05:34:48.843560 kernel: Fallback order for Node 0: 0 Oct 13 05:34:48.843569 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2095807 Oct 13 05:34:48.843577 kernel: Policy zone: Normal Oct 13 05:34:48.843586 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Oct 13 05:34:48.843595 kernel: software IO TLB: area num 2. Oct 13 05:34:48.843603 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Oct 13 05:34:48.843614 kernel: ftrace: allocating 40210 entries in 158 pages Oct 13 05:34:48.843623 kernel: ftrace: allocated 158 pages with 5 groups Oct 13 05:34:48.843631 kernel: Dynamic Preempt: voluntary Oct 13 05:34:48.843640 kernel: rcu: Preemptible hierarchical RCU implementation. Oct 13 05:34:48.843650 kernel: rcu: RCU event tracing is enabled. Oct 13 05:34:48.843667 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Oct 13 05:34:48.843676 kernel: Trampoline variant of Tasks RCU enabled. Oct 13 05:34:48.843686 kernel: Rude variant of Tasks RCU enabled. Oct 13 05:34:48.843695 kernel: Tracing variant of Tasks RCU enabled. Oct 13 05:34:48.843707 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Oct 13 05:34:48.843716 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Oct 13 05:34:48.843726 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Oct 13 05:34:48.843735 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Oct 13 05:34:48.843744 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. 
Oct 13 05:34:48.843753 kernel: Using NULL legacy PIC Oct 13 05:34:48.843764 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0 Oct 13 05:34:48.843773 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Oct 13 05:34:48.843783 kernel: Console: colour dummy device 80x25 Oct 13 05:34:48.843792 kernel: printk: legacy console [tty1] enabled Oct 13 05:34:48.843801 kernel: printk: legacy console [ttyS0] enabled Oct 13 05:34:48.843812 kernel: printk: legacy bootconsole [earlyser0] disabled Oct 13 05:34:48.843821 kernel: ACPI: Core revision 20240827 Oct 13 05:34:48.843830 kernel: Failed to register legacy timer interrupt Oct 13 05:34:48.843840 kernel: APIC: Switch to symmetric I/O mode setup Oct 13 05:34:48.843849 kernel: x2apic enabled Oct 13 05:34:48.843858 kernel: APIC: Switched APIC routing to: physical x2apic Oct 13 05:34:48.843867 kernel: Hyper-V: Host Build 10.0.26100.1381-1-0 Oct 13 05:34:48.843878 kernel: Hyper-V: enabling crash_kexec_post_notifiers Oct 13 05:34:48.843888 kernel: Hyper-V: Disabling IBT because of Hyper-V bug Oct 13 05:34:48.843897 kernel: Hyper-V: Using IPI hypercalls Oct 13 05:34:48.843906 kernel: APIC: send_IPI() replaced with hv_send_ipi() Oct 13 05:34:48.843916 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask() Oct 13 05:34:48.843925 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself() Oct 13 05:34:48.843934 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself() Oct 13 05:34:48.843945 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all() Oct 13 05:34:48.843954 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self() Oct 13 05:34:48.843963 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Oct 13 05:34:48.843973 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 4599.99 BogoMIPS (lpj=2299998) Oct 13 05:34:48.843982 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Oct 13 05:34:48.843992 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Oct 13 05:34:48.844001 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Oct 13 05:34:48.844010 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Oct 13 05:34:48.844021 kernel: Spectre V2 : Mitigation: Retpolines Oct 13 05:34:48.844029 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Oct 13 05:34:48.844038 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! 
Oct 13 05:34:48.844047 kernel: RETBleed: Vulnerable Oct 13 05:34:48.844056 kernel: Speculative Store Bypass: Vulnerable Oct 13 05:34:48.844065 kernel: active return thunk: its_return_thunk Oct 13 05:34:48.844074 kernel: ITS: Mitigation: Aligned branch/return thunks Oct 13 05:34:48.844082 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Oct 13 05:34:48.844091 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Oct 13 05:34:48.844100 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Oct 13 05:34:48.844112 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Oct 13 05:34:48.844120 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Oct 13 05:34:48.844129 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Oct 13 05:34:48.844137 kernel: x86/fpu: Supporting XSAVE feature 0x800: 'Control-flow User registers' Oct 13 05:34:48.844146 kernel: x86/fpu: Supporting XSAVE feature 0x20000: 'AMX Tile config' Oct 13 05:34:48.844155 kernel: x86/fpu: Supporting XSAVE feature 0x40000: 'AMX Tile data' Oct 13 05:34:48.844164 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Oct 13 05:34:48.844173 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64 Oct 13 05:34:48.844182 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512 Oct 13 05:34:48.844191 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024 Oct 13 05:34:48.844211 kernel: x86/fpu: xstate_offset[11]: 2432, xstate_sizes[11]: 16 Oct 13 05:34:48.844220 kernel: x86/fpu: xstate_offset[17]: 2496, xstate_sizes[17]: 64 Oct 13 05:34:48.844229 kernel: x86/fpu: xstate_offset[18]: 2560, xstate_sizes[18]: 8192 Oct 13 05:34:48.844248 kernel: x86/fpu: Enabled xstate features 0x608e7, context size is 10752 bytes, using 'compacted' format. Oct 13 05:34:48.844257 kernel: Freeing SMP alternatives memory: 32K Oct 13 05:34:48.844267 kernel: pid_max: default: 32768 minimum: 301 Oct 13 05:34:48.844276 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Oct 13 05:34:48.844285 kernel: landlock: Up and running. Oct 13 05:34:48.844294 kernel: SELinux: Initializing. Oct 13 05:34:48.844303 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Oct 13 05:34:48.844312 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Oct 13 05:34:48.844325 kernel: smpboot: CPU0: Intel INTEL(R) XEON(R) PLATINUM 8573C (family: 0x6, model: 0xcf, stepping: 0x2) Oct 13 05:34:48.844336 kernel: Performance Events: unsupported p6 CPU model 207 no PMU driver, software events only. Oct 13 05:34:48.844347 kernel: signal: max sigframe size: 11952 Oct 13 05:34:48.844357 kernel: rcu: Hierarchical SRCU implementation. Oct 13 05:34:48.844366 kernel: rcu: Max phase no-delay instances is 400. Oct 13 05:34:48.844376 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Oct 13 05:34:48.844386 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Oct 13 05:34:48.844400 kernel: smp: Bringing up secondary CPUs ... Oct 13 05:34:48.844411 kernel: smpboot: x86: Booting SMP configuration: Oct 13 05:34:48.844422 kernel: .... 
node #0, CPUs: #1 Oct 13 05:34:48.844432 kernel: smp: Brought up 1 node, 2 CPUs Oct 13 05:34:48.844443 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS) Oct 13 05:34:48.844454 kernel: Memory: 8108748K/8383228K available (14336K kernel code, 2450K rwdata, 10012K rodata, 24532K init, 1684K bss, 269268K reserved, 0K cma-reserved) Oct 13 05:34:48.844465 kernel: devtmpfs: initialized Oct 13 05:34:48.844474 kernel: x86/mm: Memory block size: 128MB Oct 13 05:34:48.844487 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes) Oct 13 05:34:48.844498 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Oct 13 05:34:48.844507 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Oct 13 05:34:48.844517 kernel: pinctrl core: initialized pinctrl subsystem Oct 13 05:34:48.844527 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Oct 13 05:34:48.844537 kernel: audit: initializing netlink subsys (disabled) Oct 13 05:34:48.844547 kernel: audit: type=2000 audit(1760333682.032:1): state=initialized audit_enabled=0 res=1 Oct 13 05:34:48.844562 kernel: thermal_sys: Registered thermal governor 'step_wise' Oct 13 05:34:48.844572 kernel: thermal_sys: Registered thermal governor 'user_space' Oct 13 05:34:48.844582 kernel: cpuidle: using governor menu Oct 13 05:34:48.844593 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Oct 13 05:34:48.844602 kernel: dca service started, version 1.12.1 Oct 13 05:34:48.844612 kernel: e820: reserve RAM buffer [mem 0x044fe000-0x07ffffff] Oct 13 05:34:48.844621 kernel: e820: reserve RAM buffer [mem 0x3ff1f000-0x3fffffff] Oct 13 05:34:48.844635 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Oct 13 05:34:48.844645 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Oct 13 05:34:48.844655 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Oct 13 05:34:48.844664 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Oct 13 05:34:48.844674 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Oct 13 05:34:48.844684 kernel: ACPI: Added _OSI(Module Device) Oct 13 05:34:48.844695 kernel: ACPI: Added _OSI(Processor Device) Oct 13 05:34:48.844709 kernel: ACPI: Added _OSI(Processor Aggregator Device) Oct 13 05:34:48.844721 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Oct 13 05:34:48.844731 kernel: ACPI: Interpreter enabled Oct 13 05:34:48.844740 kernel: ACPI: PM: (supports S0 S5) Oct 13 05:34:48.844750 kernel: ACPI: Using IOAPIC for interrupt routing Oct 13 05:34:48.844760 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Oct 13 05:34:48.844770 kernel: PCI: Ignoring E820 reservations for host bridge windows Oct 13 05:34:48.844784 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F Oct 13 05:34:48.844796 kernel: iommu: Default domain type: Translated Oct 13 05:34:48.844805 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Oct 13 05:34:48.844815 kernel: efivars: Registered efivars operations Oct 13 05:34:48.844825 kernel: PCI: Using ACPI for IRQ routing Oct 13 05:34:48.844834 kernel: PCI: System does not support PCI Oct 13 05:34:48.844844 kernel: vgaarb: loaded Oct 13 05:34:48.844855 kernel: clocksource: Switched to clocksource tsc-early Oct 13 05:34:48.844864 kernel: VFS: Disk quotas dquot_6.6.0 Oct 13 05:34:48.844872 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Oct 13 05:34:48.844881 kernel: pnp: PnP ACPI init Oct 13 05:34:48.844890 kernel: pnp: PnP ACPI: found 3 devices Oct 13 05:34:48.844899 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Oct 13 05:34:48.844907 kernel: NET: Registered PF_INET protocol family Oct 13 05:34:48.844918 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Oct 13 05:34:48.844932 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Oct 13 05:34:48.844942 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Oct 13 05:34:48.844951 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Oct 13 05:34:48.844961 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Oct 13 05:34:48.844971 kernel: TCP: Hash tables configured (established 65536 bind 65536) Oct 13 05:34:48.844981 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Oct 13 05:34:48.844993 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Oct 13 05:34:48.845003 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Oct 13 05:34:48.845013 kernel: NET: Registered PF_XDP protocol family Oct 13 05:34:48.845023 kernel: PCI: CLS 0 bytes, default 64 Oct 13 05:34:48.845033 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Oct 13 05:34:48.845044 kernel: software IO TLB: mapped [mem 0x000000003a9c7000-0x000000003e9c7000] (64MB) Oct 13 05:34:48.845054 kernel: RAPL PMU: API unit is 2^-32 Joules, 1 fixed counters, 10737418240 ms ovfl timer Oct 13 05:34:48.845066 kernel: RAPL PMU: hw unit of domain psys 2^-0 Joules Oct 13 05:34:48.845076 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, 
max_idle_ns: 440795236380 ns Oct 13 05:34:48.845086 kernel: clocksource: Switched to clocksource tsc Oct 13 05:34:48.845096 kernel: Initialise system trusted keyrings Oct 13 05:34:48.845106 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Oct 13 05:34:48.845116 kernel: Key type asymmetric registered Oct 13 05:34:48.845126 kernel: Asymmetric key parser 'x509' registered Oct 13 05:34:48.845138 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Oct 13 05:34:48.845148 kernel: io scheduler mq-deadline registered Oct 13 05:34:48.845158 kernel: io scheduler kyber registered Oct 13 05:34:48.845169 kernel: io scheduler bfq registered Oct 13 05:34:48.845179 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Oct 13 05:34:48.845189 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Oct 13 05:34:48.845220 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Oct 13 05:34:48.845231 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Oct 13 05:34:48.845243 kernel: serial8250: ttyS2 at I/O 0x3e8 (irq = 4, base_baud = 115200) is a 16550A Oct 13 05:34:48.845254 kernel: i8042: PNP: No PS/2 controller found. Oct 13 05:34:48.845437 kernel: rtc_cmos 00:02: registered as rtc0 Oct 13 05:34:48.845552 kernel: rtc_cmos 00:02: setting system clock to 2025-10-13T05:34:44 UTC (1760333684) Oct 13 05:34:48.845653 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram Oct 13 05:34:48.845666 kernel: intel_pstate: Intel P-state driver initializing Oct 13 05:34:48.845676 kernel: efifb: probing for efifb Oct 13 05:34:48.845686 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Oct 13 05:34:48.845695 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Oct 13 05:34:48.845704 kernel: efifb: scrolling: redraw Oct 13 05:34:48.845714 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Oct 13 05:34:48.845723 kernel: Console: switching to colour frame buffer device 128x48 Oct 13 05:34:48.845733 kernel: fb0: EFI VGA frame buffer device Oct 13 05:34:48.845744 kernel: pstore: Using crash dump compression: deflate Oct 13 05:34:48.845754 kernel: pstore: Registered efi_pstore as persistent store backend Oct 13 05:34:48.845763 kernel: NET: Registered PF_INET6 protocol family Oct 13 05:34:48.845773 kernel: Segment Routing with IPv6 Oct 13 05:34:48.845782 kernel: In-situ OAM (IOAM) with IPv6 Oct 13 05:34:48.845791 kernel: NET: Registered PF_PACKET protocol family Oct 13 05:34:48.845801 kernel: Key type dns_resolver registered Oct 13 05:34:48.845812 kernel: IPI shorthand broadcast: enabled Oct 13 05:34:48.845821 kernel: sched_clock: Marking stable (1794005606, 129229655)->(2275230799, -351995538) Oct 13 05:34:48.845831 kernel: registered taskstats version 1 Oct 13 05:34:48.845840 kernel: Loading compiled-in X.509 certificates Oct 13 05:34:48.845850 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.51-flatcar: 9f1258ccc510afd4f2a37f4774c4b2e958d823b7' Oct 13 05:34:48.845859 kernel: Demotion targets for Node 0: null Oct 13 05:34:48.845869 kernel: Key type .fscrypt registered Oct 13 05:34:48.845881 kernel: Key type fscrypt-provisioning registered Oct 13 05:34:48.845890 kernel: ima: No TPM chip found, activating TPM-bypass! 
Oct 13 05:34:48.845901 kernel: ima: Allocated hash algorithm: sha1 Oct 13 05:34:48.845911 kernel: ima: No architecture policies found Oct 13 05:34:48.845919 kernel: clk: Disabling unused clocks Oct 13 05:34:48.845929 kernel: Freeing unused kernel image (initmem) memory: 24532K Oct 13 05:34:48.845938 kernel: Write protecting the kernel read-only data: 24576k Oct 13 05:34:48.845949 kernel: Freeing unused kernel image (rodata/data gap) memory: 228K Oct 13 05:34:48.845957 kernel: Run /init as init process Oct 13 05:34:48.845998 kernel: with arguments: Oct 13 05:34:48.846007 kernel: /init Oct 13 05:34:48.846016 kernel: with environment: Oct 13 05:34:48.846026 kernel: HOME=/ Oct 13 05:34:48.846035 kernel: TERM=linux Oct 13 05:34:48.846046 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Oct 13 05:34:48.846056 kernel: hv_vmbus: Vmbus version:5.3 Oct 13 05:34:48.846066 kernel: Invalid ELF header magic: != \u007fELF Oct 13 05:34:48.846075 kernel: pps_core: LinuxPPS API ver. 1 registered Oct 13 05:34:48.846084 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Oct 13 05:34:48.846094 kernel: Invalid ELF header magic: != \u007fELF Oct 13 05:34:48.846103 kernel: PTP clock support registered Oct 13 05:34:48.846113 kernel: hv_utils: Registering HyperV Utility Driver Oct 13 05:34:48.846124 kernel: hv_vmbus: registering driver hv_utils Oct 13 05:34:48.846135 kernel: hv_utils: Shutdown IC version 3.2 Oct 13 05:34:48.846144 kernel: hv_utils: Heartbeat IC version 3.0 Oct 13 05:34:48.846154 kernel: hv_utils: TimeSync IC version 4.0 Oct 13 05:34:48.846164 kernel: hv_vmbus: registering driver hv_pci Oct 13 05:34:48.846570 kernel: hv_pci 7ad35d50-c05b-47ab-b3a0-56a9a845852b: PCI VMBus probing: Using version 0x10004 Oct 13 05:34:48.846590 kernel: SCSI subsystem initialized Oct 13 05:34:48.846705 kernel: hv_pci 7ad35d50-c05b-47ab-b3a0-56a9a845852b: PCI host bridge to bus c05b:00 Oct 13 05:34:48.846717 kernel: Invalid ELF header magic: != \u007fELF Oct 13 05:34:48.846844 kernel: pci_bus c05b:00: root bus resource [mem 0xfc0000000-0xfc007ffff window] Oct 13 05:34:48.846959 kernel: pci_bus c05b:00: No busn resource found for root bus, will use [bus 00-ff] Oct 13 05:34:48.847100 kernel: pci c05b:00:00.0: [1414:00a9] type 00 class 0x010802 PCIe Endpoint Oct 13 05:34:48.847253 kernel: pci c05b:00:00.0: BAR 0 [mem 0xfc0000000-0xfc007ffff 64bit] Oct 13 05:34:48.847372 kernel: pci_bus c05b:00: busn_res: [bus 00-ff] end is updated to 00 Oct 13 05:34:48.847496 kernel: pci c05b:00:00.0: BAR 0 [mem 0xfc0000000-0xfc007ffff 64bit]: assigned Oct 13 05:34:48.847508 kernel: hv_vmbus: registering driver hv_storvsc Oct 13 05:34:48.847634 kernel: scsi host0: storvsc_host_t Oct 13 05:34:48.847770 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5 Oct 13 05:34:48.847782 kernel: hid: raw HID events driver (C) Jiri Kosina Oct 13 05:34:48.847792 kernel: hv_vmbus: registering driver hid_hyperv Oct 13 05:34:48.847802 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0 Oct 13 05:34:48.847921 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Oct 13 05:34:48.847933 kernel: hv_vmbus: registering driver hyperv_keyboard Oct 13 05:34:48.847945 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1 Oct 13 05:34:48.848054 kernel: nvme nvme0: pci function c05b:00:00.0 Oct 13 05:34:48.848180 kernel: nvme 
c05b:00:00.0: enabling device (0000 -> 0002) Oct 13 05:34:48.848288 kernel: nvme nvme0: 2/0/0 default/read/poll queues Oct 13 05:34:48.848302 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Oct 13 05:34:48.848425 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Oct 13 05:34:48.848439 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Oct 13 05:34:48.848561 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Oct 13 05:34:48.848585 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Oct 13 05:34:48.848595 kernel: device-mapper: uevent: version 1.0.3 Oct 13 05:34:48.848605 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Oct 13 05:34:48.848615 kernel: device-mapper: verity: sha256 using shash "sha256-generic" Oct 13 05:34:48.848627 kernel: Invalid ELF header magic: != \u007fELF Oct 13 05:34:48.848636 kernel: Invalid ELF header magic: != \u007fELF Oct 13 05:34:48.848646 kernel: raid6: avx512x4 gen() 40691 MB/s Oct 13 05:34:48.848656 kernel: raid6: avx512x2 gen() 40596 MB/s Oct 13 05:34:48.848665 kernel: raid6: avx512x1 gen() 25083 MB/s Oct 13 05:34:48.848674 kernel: raid6: avx2x4 gen() 34804 MB/s Oct 13 05:34:48.848684 kernel: raid6: avx2x2 gen() 36146 MB/s Oct 13 05:34:48.848693 kernel: raid6: avx2x1 gen() 30461 MB/s Oct 13 05:34:48.848705 kernel: raid6: using algorithm avx512x4 gen() 40691 MB/s Oct 13 05:34:48.848715 kernel: raid6: .... xor() 7439 MB/s, rmw enabled Oct 13 05:34:48.848724 kernel: raid6: using avx512x2 recovery algorithm Oct 13 05:34:48.848735 kernel: Invalid ELF header magic: != \u007fELF Oct 13 05:34:48.848744 kernel: Invalid ELF header magic: != \u007fELF Oct 13 05:34:48.848753 kernel: Invalid ELF header magic: != \u007fELF Oct 13 05:34:48.848762 kernel: xor: automatically using best checksumming function avx Oct 13 05:34:48.848773 kernel: Invalid ELF header magic: != \u007fELF Oct 13 05:34:48.848783 kernel: Btrfs loaded, zoned=no, fsverity=no Oct 13 05:34:48.848793 kernel: BTRFS: device fsid e87b15e9-127c-40e2-bae7-d0ea05b4f2e3 devid 1 transid 36 /dev/mapper/usr (254:0) scanned by mount (1008) Oct 13 05:34:48.848803 kernel: BTRFS info (device dm-0): first mount of filesystem e87b15e9-127c-40e2-bae7-d0ea05b4f2e3 Oct 13 05:34:48.848813 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Oct 13 05:34:48.848823 kernel: BTRFS info (device dm-0): enabling ssd optimizations Oct 13 05:34:48.848832 kernel: BTRFS info (device dm-0): disabling log replay at mount time Oct 13 05:34:48.848842 kernel: BTRFS info (device dm-0): enabling free space tree Oct 13 05:34:48.848853 kernel: Invalid ELF header magic: != \u007fELF Oct 13 05:34:48.848863 kernel: loop: module loaded Oct 13 05:34:48.848873 kernel: loop0: detected capacity change from 0 to 100048 Oct 13 05:34:48.848882 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Oct 13 05:34:48.848894 systemd[1]: Successfully made /usr/ read-only. Oct 13 05:34:48.848908 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Oct 13 05:34:48.848920 systemd[1]: Detected virtualization microsoft. Oct 13 05:34:48.848930 systemd[1]: Detected architecture x86-64. 
Oct 13 05:34:48.848940 systemd[1]: Running in initrd. Oct 13 05:34:48.848950 systemd[1]: No hostname configured, using default hostname. Oct 13 05:34:48.848961 systemd[1]: Hostname set to . Oct 13 05:34:48.848971 systemd[1]: Initializing machine ID from random generator. Oct 13 05:34:48.848983 systemd[1]: Queued start job for default target initrd.target. Oct 13 05:34:48.848993 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Oct 13 05:34:48.849003 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 13 05:34:48.849013 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 13 05:34:48.849024 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Oct 13 05:34:48.849035 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Oct 13 05:34:48.849049 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Oct 13 05:34:48.849060 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Oct 13 05:34:48.849071 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 13 05:34:48.849081 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Oct 13 05:34:48.849093 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Oct 13 05:34:48.849103 systemd[1]: Reached target paths.target - Path Units. Oct 13 05:34:48.849114 systemd[1]: Reached target slices.target - Slice Units. Oct 13 05:34:48.849124 systemd[1]: Reached target swap.target - Swaps. Oct 13 05:34:48.849135 systemd[1]: Reached target timers.target - Timer Units. Oct 13 05:34:48.849145 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Oct 13 05:34:48.849155 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Oct 13 05:34:48.849167 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Oct 13 05:34:48.849177 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Oct 13 05:34:48.849187 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Oct 13 05:34:48.849198 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Oct 13 05:34:48.849220 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Oct 13 05:34:48.849231 systemd[1]: Reached target sockets.target - Socket Units. Oct 13 05:34:48.849241 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Oct 13 05:34:48.849253 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Oct 13 05:34:48.849263 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Oct 13 05:34:48.849273 systemd[1]: Finished network-cleanup.service - Network Cleanup. Oct 13 05:34:48.849284 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Oct 13 05:34:48.849294 systemd[1]: Starting systemd-fsck-usr.service... Oct 13 05:34:48.849304 systemd[1]: Starting systemd-journald.service - Journal Service... Oct 13 05:34:48.849316 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... 
Oct 13 05:34:48.849326 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 13 05:34:48.849337 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Oct 13 05:34:48.849365 systemd-journald[1141]: Collecting audit messages is disabled. Oct 13 05:34:48.849393 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Oct 13 05:34:48.849403 systemd-journald[1141]: Journal started Oct 13 05:34:48.849428 systemd-journald[1141]: Runtime Journal (/run/log/journal/865e450c14ab4f7db4893512e9c1e111) is 8M, max 158.9M, 150.9M free. Oct 13 05:34:48.852553 systemd[1]: Started systemd-journald.service - Journal Service. Oct 13 05:34:48.851866 systemd[1]: Finished systemd-fsck-usr.service. Oct 13 05:34:48.855326 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Oct 13 05:34:48.862450 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Oct 13 05:34:49.137371 systemd-tmpfiles[1155]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Oct 13 05:34:49.141750 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Oct 13 05:34:49.150337 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Oct 13 05:34:49.161314 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Oct 13 05:34:49.181474 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 13 05:34:49.184752 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 13 05:34:49.202520 systemd-modules-load[1146]: Inserted module 'br_netfilter' Oct 13 05:34:49.209006 kernel: Bridge firewalling registered Oct 13 05:34:49.203238 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Oct 13 05:34:49.204541 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Oct 13 05:34:49.224541 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 13 05:34:49.228518 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Oct 13 05:34:49.232299 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Oct 13 05:34:49.238238 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Oct 13 05:34:49.257887 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 13 05:34:49.267610 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Oct 13 05:34:49.287013 systemd-resolved[1174]: Positive Trust Anchors: Oct 13 05:34:49.287664 systemd-resolved[1174]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 13 05:34:49.287668 systemd-resolved[1174]: . 
IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Oct 13 05:34:49.287710 systemd-resolved[1174]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Oct 13 05:34:49.328222 dracut-cmdline[1186]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=4919840803704517a91afcb9d57d99e9935244ff049349c54216d9a31bc1da5d Oct 13 05:34:49.338739 systemd-resolved[1174]: Defaulting to hostname 'linux'. Oct 13 05:34:49.339598 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Oct 13 05:34:49.346424 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Oct 13 05:34:49.465244 kernel: Loading iSCSI transport class v2.0-870. Oct 13 05:34:49.548224 kernel: iscsi: registered transport (tcp) Oct 13 05:34:49.600562 kernel: iscsi: registered transport (qla4xxx) Oct 13 05:34:49.600610 kernel: QLogic iSCSI HBA Driver Oct 13 05:34:49.662095 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Oct 13 05:34:49.679724 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Oct 13 05:34:49.685038 systemd[1]: Reached target network-pre.target - Preparation for Network. Oct 13 05:34:49.717680 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Oct 13 05:34:49.721438 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Oct 13 05:34:49.735311 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Oct 13 05:34:49.755916 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Oct 13 05:34:49.761840 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 13 05:34:49.797419 systemd-udevd[1409]: Using default interface naming scheme 'v257'. Oct 13 05:34:49.810293 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 13 05:34:49.821064 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Oct 13 05:34:49.834451 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Oct 13 05:34:49.841378 systemd[1]: Starting systemd-networkd.service - Network Configuration... Oct 13 05:34:49.851622 dracut-pre-trigger[1509]: rd.md=0: removing MD RAID activation Oct 13 05:34:49.878404 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Oct 13 05:34:49.891575 systemd-networkd[1528]: lo: Link UP Oct 13 05:34:49.891583 systemd-networkd[1528]: lo: Gained carrier Oct 13 05:34:49.892434 systemd[1]: Started systemd-networkd.service - Network Configuration. 
Oct 13 05:34:49.896384 systemd[1]: Reached target network.target - Network. Oct 13 05:34:49.900054 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Oct 13 05:34:49.944741 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Oct 13 05:34:49.950560 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Oct 13 05:34:50.019112 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 13 05:34:50.020561 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 13 05:34:50.028275 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Oct 13 05:34:50.034595 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 13 05:34:50.044223 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#96 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Oct 13 05:34:50.077280 kernel: hv_vmbus: registering driver hv_netvsc Oct 13 05:34:50.080394 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 13 05:34:50.092360 kernel: hv_netvsc f8615163-0000-1000-2000-7c1e52360e62 (unnamed net_device) (uninitialized): VF slot 1 added Oct 13 05:34:50.102634 systemd-networkd[1528]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Oct 13 05:34:50.102645 systemd-networkd[1528]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 13 05:34:50.105924 systemd-networkd[1528]: eth0: Link UP Oct 13 05:34:50.106124 systemd-networkd[1528]: eth0: Gained carrier Oct 13 05:34:50.106135 systemd-networkd[1528]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Oct 13 05:34:50.120234 systemd-networkd[1528]: eth0: DHCPv4 address 10.200.8.43/24, gateway 10.200.8.1 acquired from 168.63.129.16 Oct 13 05:34:50.161396 kernel: cryptd: max_cpu_qlen set to 1000 Oct 13 05:34:50.414140 kernel: AES CTR mode by8 optimization enabled Oct 13 05:34:50.489336 kernel: nvme nvme0: using unchecked data buffer Oct 13 05:34:50.684244 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - MSFT NVMe Accelerator v1.0 USR-A. Oct 13 05:34:50.696081 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Oct 13 05:34:50.768974 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - MSFT NVMe Accelerator v1.0 EFI-SYSTEM. Oct 13 05:34:50.822227 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - MSFT NVMe Accelerator v1.0 ROOT. Oct 13 05:34:50.860888 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - MSFT NVMe Accelerator v1.0 OEM. Oct 13 05:34:50.938989 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Oct 13 05:34:50.941339 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Oct 13 05:34:50.948268 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 13 05:34:50.948748 systemd[1]: Reached target remote-fs.target - Remote File Systems. Oct 13 05:34:50.994731 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Oct 13 05:34:51.020400 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. 
Oct 13 05:34:51.108076 kernel: hv_pci 00000001-7870-47b5-b203-907d12ca697e: PCI VMBus probing: Using version 0x10004 Oct 13 05:34:51.108312 kernel: hv_pci 00000001-7870-47b5-b203-907d12ca697e: PCI host bridge to bus 7870:00 Oct 13 05:34:51.111286 kernel: pci_bus 7870:00: root bus resource [mem 0xfc2000000-0xfc4007fff window] Oct 13 05:34:51.113006 kernel: pci_bus 7870:00: No busn resource found for root bus, will use [bus 00-ff] Oct 13 05:34:51.118408 kernel: pci 7870:00:00.0: [1414:00ba] type 00 class 0x020000 PCIe Endpoint Oct 13 05:34:51.122381 kernel: pci 7870:00:00.0: BAR 0 [mem 0xfc2000000-0xfc3ffffff 64bit pref] Oct 13 05:34:51.127388 kernel: pci 7870:00:00.0: BAR 4 [mem 0xfc4000000-0xfc4007fff 64bit pref] Oct 13 05:34:51.129284 kernel: pci 7870:00:00.0: enabling Extended Tags Oct 13 05:34:51.144798 kernel: pci_bus 7870:00: busn_res: [bus 00-ff] end is updated to 00 Oct 13 05:34:51.144998 kernel: pci 7870:00:00.0: BAR 0 [mem 0xfc2000000-0xfc3ffffff 64bit pref]: assigned Oct 13 05:34:51.149262 kernel: pci 7870:00:00.0: BAR 4 [mem 0xfc4000000-0xfc4007fff 64bit pref]: assigned Oct 13 05:34:51.167545 kernel: mana 7870:00:00.0: enabling device (0000 -> 0002) Oct 13 05:34:51.176229 kernel: mana 7870:00:00.0: Microsoft Azure Network Adapter protocol version: 0.1.1 Oct 13 05:34:51.180232 kernel: hv_netvsc f8615163-0000-1000-2000-7c1e52360e62 eth0: VF registering: eth1 Oct 13 05:34:51.180423 kernel: mana 7870:00:00.0 eth1: joined to eth0 Oct 13 05:34:51.184340 kernel: mana 7870:00:00.0 enP30832s1: renamed from eth1 Oct 13 05:34:51.184605 systemd-networkd[1528]: eth1: Interface name change detected, renamed to enP30832s1. Oct 13 05:34:51.288218 kernel: mana 7870:00:00.0 enP30832s1: Configured vPort 0 PD 18 DB 16 Oct 13 05:34:51.293620 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64 Oct 13 05:34:51.293885 kernel: hv_netvsc f8615163-0000-1000-2000-7c1e52360e62 eth0: Data path switched to VF: enP30832s1 Oct 13 05:34:51.294265 systemd-networkd[1528]: enP30832s1: Link UP Oct 13 05:34:51.295476 systemd-networkd[1528]: enP30832s1: Gained carrier Oct 13 05:34:51.645448 systemd-networkd[1528]: eth0: Gained IPv6LL Oct 13 05:34:52.007450 disk-uuid[1707]: Warning: The kernel is still using the old partition table. Oct 13 05:34:52.007450 disk-uuid[1707]: The new table will be used at the next reboot or after you Oct 13 05:34:52.007450 disk-uuid[1707]: run partprobe(8) or kpartx(8) Oct 13 05:34:52.007450 disk-uuid[1707]: The operation has completed successfully. Oct 13 05:34:52.019055 systemd[1]: disk-uuid.service: Deactivated successfully. Oct 13 05:34:52.019165 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Oct 13 05:34:52.020840 systemd[1]: Starting ignition-setup.service - Ignition (setup)... 
Oct 13 05:34:52.067233 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:5) scanned by mount (1753) Oct 13 05:34:52.070826 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 56bbaf92-79f4-4948-a1fd-5992c383eba8 Oct 13 05:34:52.070861 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Oct 13 05:34:52.138658 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Oct 13 05:34:52.138701 kernel: BTRFS info (device nvme0n1p6): turning on async discard Oct 13 05:34:52.138713 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Oct 13 05:34:52.145256 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 56bbaf92-79f4-4948-a1fd-5992c383eba8 Oct 13 05:34:52.146108 systemd[1]: Finished ignition-setup.service - Ignition (setup). Oct 13 05:34:52.149765 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Oct 13 05:34:53.665579 ignition[1772]: Ignition 2.22.0 Oct 13 05:34:53.665592 ignition[1772]: Stage: fetch-offline Oct 13 05:34:53.667390 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Oct 13 05:34:53.665705 ignition[1772]: no configs at "/usr/lib/ignition/base.d" Oct 13 05:34:53.672656 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Oct 13 05:34:53.665713 ignition[1772]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Oct 13 05:34:53.665798 ignition[1772]: parsed url from cmdline: "" Oct 13 05:34:53.665801 ignition[1772]: no config URL provided Oct 13 05:34:53.665805 ignition[1772]: reading system config file "/usr/lib/ignition/user.ign" Oct 13 05:34:53.665811 ignition[1772]: no config at "/usr/lib/ignition/user.ign" Oct 13 05:34:53.665816 ignition[1772]: failed to fetch config: resource requires networking Oct 13 05:34:53.666006 ignition[1772]: Ignition finished successfully Oct 13 05:34:53.706975 ignition[1780]: Ignition 2.22.0 Oct 13 05:34:53.706987 ignition[1780]: Stage: fetch Oct 13 05:34:53.707216 ignition[1780]: no configs at "/usr/lib/ignition/base.d" Oct 13 05:34:53.707224 ignition[1780]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Oct 13 05:34:53.707297 ignition[1780]: parsed url from cmdline: "" Oct 13 05:34:53.707300 ignition[1780]: no config URL provided Oct 13 05:34:53.707305 ignition[1780]: reading system config file "/usr/lib/ignition/user.ign" Oct 13 05:34:53.707310 ignition[1780]: no config at "/usr/lib/ignition/user.ign" Oct 13 05:34:53.707330 ignition[1780]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Oct 13 05:34:53.775670 ignition[1780]: GET result: OK Oct 13 05:34:53.775739 ignition[1780]: config has been read from IMDS userdata Oct 13 05:34:53.775767 ignition[1780]: parsing config with SHA512: 63db20bec699036f04f547db3f8b3badf061877ee3145d6699f35f5c27010565819ce7560942fd0e9085a8f5d412ab0a32d4f234baa98b49eee57965eb671bfe Oct 13 05:34:53.782384 unknown[1780]: fetched base config from "system" Oct 13 05:34:53.782669 ignition[1780]: fetch: fetch complete Oct 13 05:34:53.782391 unknown[1780]: fetched base config from "system" Oct 13 05:34:53.782673 ignition[1780]: fetch: fetch passed Oct 13 05:34:53.782396 unknown[1780]: fetched user config from "azure" Oct 13 05:34:53.782697 ignition[1780]: Ignition finished successfully Oct 13 05:34:53.785272 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Oct 13 05:34:53.788618 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Oct 13 05:34:53.814982 ignition[1786]: Ignition 2.22.0 Oct 13 05:34:53.814991 ignition[1786]: Stage: kargs Oct 13 05:34:53.815193 ignition[1786]: no configs at "/usr/lib/ignition/base.d" Oct 13 05:34:53.817841 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Oct 13 05:34:53.815217 ignition[1786]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Oct 13 05:34:53.820033 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Oct 13 05:34:53.816145 ignition[1786]: kargs: kargs passed Oct 13 05:34:53.816186 ignition[1786]: Ignition finished successfully Oct 13 05:34:53.852909 ignition[1793]: Ignition 2.22.0 Oct 13 05:34:53.852921 ignition[1793]: Stage: disks Oct 13 05:34:53.855281 systemd[1]: Finished ignition-disks.service - Ignition (disks). Oct 13 05:34:53.853128 ignition[1793]: no configs at "/usr/lib/ignition/base.d" Oct 13 05:34:53.857779 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Oct 13 05:34:53.853135 ignition[1793]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Oct 13 05:34:53.861854 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Oct 13 05:34:53.854516 ignition[1793]: disks: disks passed Oct 13 05:34:53.867263 systemd[1]: Reached target local-fs.target - Local File Systems. Oct 13 05:34:53.854543 ignition[1793]: Ignition finished successfully Oct 13 05:34:53.870105 systemd[1]: Reached target sysinit.target - System Initialization. Oct 13 05:34:53.875801 systemd[1]: Reached target basic.target - Basic System. Oct 13 05:34:53.880940 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Oct 13 05:34:54.070575 systemd-fsck[1801]: ROOT: clean, 15/7340400 files, 470001/7359488 blocks Oct 13 05:34:54.075219 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Oct 13 05:34:54.081782 systemd[1]: Mounting sysroot.mount - /sysroot... Oct 13 05:34:57.109218 kernel: EXT4-fs (nvme0n1p9): mounted filesystem c7d6ef00-6dd1-40b4-91f2-c4c5965e3cac r/w with ordered data mode. Quota mode: none. Oct 13 05:34:57.109457 systemd[1]: Mounted sysroot.mount - /sysroot. Oct 13 05:34:57.112759 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Oct 13 05:34:57.156922 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Oct 13 05:34:57.174620 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Oct 13 05:34:57.180326 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Oct 13 05:34:57.184677 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Oct 13 05:34:57.185072 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Oct 13 05:34:57.195218 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Oct 13 05:34:57.200807 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:5) scanned by mount (1811) Oct 13 05:34:57.204283 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Oct 13 05:34:57.237285 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 56bbaf92-79f4-4948-a1fd-5992c383eba8 Oct 13 05:34:57.237461 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Oct 13 05:34:57.246661 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Oct 13 05:34:57.246696 kernel: BTRFS info (device nvme0n1p6): turning on async discard Oct 13 05:34:57.246708 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Oct 13 05:34:57.249112 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Oct 13 05:34:57.918672 coreos-metadata[1813]: Oct 13 05:34:57.918 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Oct 13 05:34:57.927475 coreos-metadata[1813]: Oct 13 05:34:57.921 INFO Fetch successful Oct 13 05:34:57.927475 coreos-metadata[1813]: Oct 13 05:34:57.921 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Oct 13 05:34:57.935257 coreos-metadata[1813]: Oct 13 05:34:57.929 INFO Fetch successful Oct 13 05:34:57.948581 coreos-metadata[1813]: Oct 13 05:34:57.948 INFO wrote hostname ci-4487.0.0-a-8f52350bac to /sysroot/etc/hostname Oct 13 05:34:57.950500 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Oct 13 05:34:58.302761 initrd-setup-root[1841]: cut: /sysroot/etc/passwd: No such file or directory Oct 13 05:34:58.370726 initrd-setup-root[1848]: cut: /sysroot/etc/group: No such file or directory Oct 13 05:34:58.375738 initrd-setup-root[1855]: cut: /sysroot/etc/shadow: No such file or directory Oct 13 05:34:58.403616 initrd-setup-root[1862]: cut: /sysroot/etc/gshadow: No such file or directory Oct 13 05:35:00.551682 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Oct 13 05:35:00.556561 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Oct 13 05:35:00.563846 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Oct 13 05:35:00.591238 systemd[1]: sysroot-oem.mount: Deactivated successfully. Oct 13 05:35:00.596227 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 56bbaf92-79f4-4948-a1fd-5992c383eba8 Oct 13 05:35:00.618311 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Oct 13 05:35:00.628224 ignition[1930]: INFO : Ignition 2.22.0 Oct 13 05:35:00.628224 ignition[1930]: INFO : Stage: mount Oct 13 05:35:00.631349 ignition[1930]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 13 05:35:00.631349 ignition[1930]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Oct 13 05:35:00.631349 ignition[1930]: INFO : mount: mount passed Oct 13 05:35:00.631349 ignition[1930]: INFO : Ignition finished successfully Oct 13 05:35:00.630549 systemd[1]: Finished ignition-mount.service - Ignition (mount). Oct 13 05:35:00.641919 systemd[1]: Starting ignition-files.service - Ignition (files)... Oct 13 05:35:00.660623 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Oct 13 05:35:00.868226 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:5) scanned by mount (1944) Oct 13 05:35:00.916904 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 56bbaf92-79f4-4948-a1fd-5992c383eba8 Oct 13 05:35:00.916949 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Oct 13 05:35:00.928181 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Oct 13 05:35:00.928228 kernel: BTRFS info (device nvme0n1p6): turning on async discard Oct 13 05:35:00.929617 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Oct 13 05:35:00.931696 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Oct 13 05:35:00.956069 ignition[1960]: INFO : Ignition 2.22.0 Oct 13 05:35:00.956069 ignition[1960]: INFO : Stage: files Oct 13 05:35:00.958135 ignition[1960]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 13 05:35:00.958135 ignition[1960]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Oct 13 05:35:00.962947 ignition[1960]: DEBUG : files: compiled without relabeling support, skipping Oct 13 05:35:01.036489 ignition[1960]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Oct 13 05:35:01.036489 ignition[1960]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Oct 13 05:35:01.136197 ignition[1960]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Oct 13 05:35:01.141305 ignition[1960]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Oct 13 05:35:01.141305 ignition[1960]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Oct 13 05:35:01.138846 unknown[1960]: wrote ssh authorized keys file for user: core Oct 13 05:35:01.281077 ignition[1960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Oct 13 05:35:01.283858 ignition[1960]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Oct 13 05:35:01.331477 ignition[1960]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Oct 13 05:35:01.373364 ignition[1960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Oct 13 05:35:01.379295 ignition[1960]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Oct 13 05:35:01.379295 ignition[1960]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Oct 13 05:35:01.379295 ignition[1960]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Oct 13 05:35:01.379295 ignition[1960]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Oct 13 05:35:01.379295 ignition[1960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Oct 13 05:35:01.379295 ignition[1960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Oct 13 05:35:01.379295 ignition[1960]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Oct 13 05:35:01.379295 ignition[1960]: INFO : files: createFilesystemsFiles: createFiles: 
op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Oct 13 05:35:01.405235 ignition[1960]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Oct 13 05:35:01.405235 ignition[1960]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Oct 13 05:35:01.405235 ignition[1960]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Oct 13 05:35:01.405235 ignition[1960]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Oct 13 05:35:01.405235 ignition[1960]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Oct 13 05:35:01.405235 ignition[1960]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Oct 13 05:35:01.661795 ignition[1960]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Oct 13 05:35:02.293085 ignition[1960]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Oct 13 05:35:02.293085 ignition[1960]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Oct 13 05:35:02.389970 ignition[1960]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Oct 13 05:35:02.396008 ignition[1960]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Oct 13 05:35:02.396008 ignition[1960]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Oct 13 05:35:02.396008 ignition[1960]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Oct 13 05:35:02.406726 ignition[1960]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Oct 13 05:35:02.406726 ignition[1960]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Oct 13 05:35:02.406726 ignition[1960]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Oct 13 05:35:02.406726 ignition[1960]: INFO : files: files passed Oct 13 05:35:02.406726 ignition[1960]: INFO : Ignition finished successfully Oct 13 05:35:02.400631 systemd[1]: Finished ignition-files.service - Ignition (files). Oct 13 05:35:02.409968 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Oct 13 05:35:02.426789 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Oct 13 05:35:02.428626 systemd[1]: ignition-quench.service: Deactivated successfully. Oct 13 05:35:02.431539 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
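The files-stage records above amount to three kinds of effects on the target root: payloads fetched over HTTPS and written under /sysroot (the helm tarball, the kubernetes sysext image), a symlink placed at /etc/extensions/kubernetes.raw pointing at the stored image, and a systemd preset enabling prepare-helm.service. A rough Python sketch of those effects is below, written against the paths visible in the log; the preset file name 20-ignition.preset is an assumption, and the sketch illustrates the outcome rather than Ignition's own implementation.

```python
import os
import urllib.request

SYSROOT = "/sysroot"  # target root mounted by the initramfs, as in the log

def fetch_to_sysroot(url, rel_path, mode=0o644):
    # e.g. the kubernetes sysext image stored under /opt/extensions.
    dest = os.path.join(SYSROOT, rel_path.lstrip("/"))
    os.makedirs(os.path.dirname(dest), exist_ok=True)
    with urllib.request.urlopen(url) as resp, open(dest, "wb") as out:
        out.write(resp.read())
    os.chmod(dest, mode)

def link_sysext():
    # Mirrors the symlink recorded in the log.
    link = os.path.join(SYSROOT, "etc/extensions/kubernetes.raw")
    os.makedirs(os.path.dirname(link), exist_ok=True)
    if not os.path.lexists(link):
        os.symlink("/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw", link)

def enable_unit(unit="prepare-helm.service"):
    # "setting preset to enabled" boils down to a preset file containing an
    # "enable <unit>" line; the file name used here is hypothetical.
    preset = os.path.join(SYSROOT, "etc/systemd/system-preset/20-ignition.preset")
    os.makedirs(os.path.dirname(preset), exist_ok=True)
    with open(preset, "a") as f:
        f.write(f"enable {unit}\n")
```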
Oct 13 05:35:02.453866 initrd-setup-root-after-ignition[1991]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 13 05:35:02.453866 initrd-setup-root-after-ignition[1991]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Oct 13 05:35:02.461288 initrd-setup-root-after-ignition[1995]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 13 05:35:02.461053 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Oct 13 05:35:02.465237 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Oct 13 05:35:02.470280 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Oct 13 05:35:02.506548 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Oct 13 05:35:02.506641 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Oct 13 05:35:02.512588 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Oct 13 05:35:02.515916 systemd[1]: Reached target initrd.target - Initrd Default Target. Oct 13 05:35:02.522282 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Oct 13 05:35:02.523309 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Oct 13 05:35:02.548093 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Oct 13 05:35:02.552671 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Oct 13 05:35:02.567941 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Oct 13 05:35:02.568081 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Oct 13 05:35:02.568683 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 13 05:35:02.569090 systemd[1]: Stopped target timers.target - Timer Units. Oct 13 05:35:02.569488 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Oct 13 05:35:02.569596 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Oct 13 05:35:02.570308 systemd[1]: Stopped target initrd.target - Initrd Default Target. Oct 13 05:35:02.570649 systemd[1]: Stopped target basic.target - Basic System. Oct 13 05:35:02.570983 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Oct 13 05:35:02.571336 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Oct 13 05:35:02.596348 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Oct 13 05:35:02.596615 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Oct 13 05:35:02.596886 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Oct 13 05:35:02.597410 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Oct 13 05:35:02.597736 systemd[1]: Stopped target sysinit.target - System Initialization. Oct 13 05:35:02.607600 systemd[1]: Stopped target local-fs.target - Local File Systems. Oct 13 05:35:02.611338 systemd[1]: Stopped target swap.target - Swaps. Oct 13 05:35:02.616517 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Oct 13 05:35:02.616638 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Oct 13 05:35:02.622502 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Oct 13 05:35:02.627010 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
Oct 13 05:35:02.630212 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Oct 13 05:35:02.632900 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 13 05:35:02.637413 systemd[1]: dracut-initqueue.service: Deactivated successfully. Oct 13 05:35:02.637523 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Oct 13 05:35:02.645621 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Oct 13 05:35:02.647146 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Oct 13 05:35:02.650636 systemd[1]: ignition-files.service: Deactivated successfully. Oct 13 05:35:02.651884 systemd[1]: Stopped ignition-files.service - Ignition (files). Oct 13 05:35:02.654959 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Oct 13 05:35:02.655074 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Oct 13 05:35:02.657352 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Oct 13 05:35:02.670427 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Oct 13 05:35:02.679486 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Oct 13 05:35:02.679642 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 13 05:35:02.686120 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Oct 13 05:35:02.693171 ignition[2015]: INFO : Ignition 2.22.0 Oct 13 05:35:02.693171 ignition[2015]: INFO : Stage: umount Oct 13 05:35:02.693171 ignition[2015]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 13 05:35:02.693171 ignition[2015]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Oct 13 05:35:02.693171 ignition[2015]: INFO : umount: umount passed Oct 13 05:35:02.693171 ignition[2015]: INFO : Ignition finished successfully Oct 13 05:35:02.686675 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Oct 13 05:35:02.693340 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Oct 13 05:35:02.694164 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Oct 13 05:35:02.713685 systemd[1]: ignition-mount.service: Deactivated successfully. Oct 13 05:35:02.713781 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Oct 13 05:35:02.721683 systemd[1]: initrd-cleanup.service: Deactivated successfully. Oct 13 05:35:02.723011 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Oct 13 05:35:02.728353 systemd[1]: ignition-disks.service: Deactivated successfully. Oct 13 05:35:02.728429 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Oct 13 05:35:02.732296 systemd[1]: ignition-kargs.service: Deactivated successfully. Oct 13 05:35:02.732339 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Oct 13 05:35:02.737147 systemd[1]: ignition-fetch.service: Deactivated successfully. Oct 13 05:35:02.737194 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Oct 13 05:35:02.740473 systemd[1]: Stopped target network.target - Network. Oct 13 05:35:02.746076 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Oct 13 05:35:02.747453 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Oct 13 05:35:02.749810 systemd[1]: Stopped target paths.target - Path Units. Oct 13 05:35:02.756830 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. 
Oct 13 05:35:02.758337 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 13 05:35:02.763368 systemd[1]: Stopped target slices.target - Slice Units. Oct 13 05:35:02.768249 systemd[1]: Stopped target sockets.target - Socket Units. Oct 13 05:35:02.769710 systemd[1]: iscsid.socket: Deactivated successfully. Oct 13 05:35:02.769749 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Oct 13 05:35:02.774300 systemd[1]: iscsiuio.socket: Deactivated successfully. Oct 13 05:35:02.774332 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Oct 13 05:35:02.778271 systemd[1]: ignition-setup.service: Deactivated successfully. Oct 13 05:35:02.778320 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Oct 13 05:35:02.780863 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Oct 13 05:35:02.780904 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Oct 13 05:35:02.785342 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Oct 13 05:35:02.787858 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Oct 13 05:35:02.790384 systemd[1]: sysroot-boot.mount: Deactivated successfully. Oct 13 05:35:02.795432 systemd[1]: systemd-networkd.service: Deactivated successfully. Oct 13 05:35:02.795524 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Oct 13 05:35:02.802029 systemd[1]: systemd-resolved.service: Deactivated successfully. Oct 13 05:35:02.802164 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Oct 13 05:35:02.808121 systemd[1]: Stopped target network-pre.target - Preparation for Network. Oct 13 05:35:02.812437 systemd[1]: systemd-networkd.socket: Deactivated successfully. Oct 13 05:35:02.812523 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Oct 13 05:35:02.819509 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Oct 13 05:35:02.825248 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Oct 13 05:35:02.825310 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Oct 13 05:35:02.828309 systemd[1]: systemd-sysctl.service: Deactivated successfully. Oct 13 05:35:02.828355 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Oct 13 05:35:02.833287 systemd[1]: systemd-modules-load.service: Deactivated successfully. Oct 13 05:35:02.833331 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Oct 13 05:35:02.833853 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 13 05:35:02.849602 systemd[1]: systemd-udevd.service: Deactivated successfully. Oct 13 05:35:02.849728 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 13 05:35:02.853865 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Oct 13 05:35:02.853913 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Oct 13 05:35:02.855110 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Oct 13 05:35:02.855134 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Oct 13 05:35:02.859276 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Oct 13 05:35:02.859326 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Oct 13 05:35:02.863518 systemd[1]: dracut-cmdline.service: Deactivated successfully. 
Oct 13 05:35:02.863558 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Oct 13 05:35:02.866184 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Oct 13 05:35:02.866244 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 13 05:35:02.873150 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Oct 13 05:35:02.873257 systemd[1]: systemd-network-generator.service: Deactivated successfully. Oct 13 05:35:02.873298 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Oct 13 05:35:02.873350 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Oct 13 05:35:02.873382 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 13 05:35:02.873595 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Oct 13 05:35:02.873623 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Oct 13 05:35:02.873656 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Oct 13 05:35:02.873684 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Oct 13 05:35:02.873923 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 13 05:35:02.873949 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 13 05:35:02.906829 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Oct 13 05:35:02.906917 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Oct 13 05:35:02.990219 kernel: hv_netvsc f8615163-0000-1000-2000-7c1e52360e62 eth0: Data path switched from VF: enP30832s1 Oct 13 05:35:02.990467 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64 Oct 13 05:35:02.993683 systemd[1]: network-cleanup.service: Deactivated successfully. Oct 13 05:35:02.993769 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Oct 13 05:35:03.222250 systemd[1]: sysroot-boot.service: Deactivated successfully. Oct 13 05:35:03.222358 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Oct 13 05:35:03.226552 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Oct 13 05:35:03.230266 systemd[1]: initrd-setup-root.service: Deactivated successfully. Oct 13 05:35:03.230319 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Oct 13 05:35:03.234306 systemd[1]: Starting initrd-switch-root.service - Switch Root... Oct 13 05:35:03.269478 systemd[1]: Switching root. Oct 13 05:35:03.346395 systemd-journald[1141]: Journal stopped Oct 13 05:35:12.538278 systemd-journald[1141]: Received SIGTERM from PID 1 (systemd). 
Oct 13 05:35:12.538309 kernel: SELinux: policy capability network_peer_controls=1 Oct 13 05:35:12.538323 kernel: SELinux: policy capability open_perms=1 Oct 13 05:35:12.538333 kernel: SELinux: policy capability extended_socket_class=1 Oct 13 05:35:12.538342 kernel: SELinux: policy capability always_check_network=0 Oct 13 05:35:12.538351 kernel: SELinux: policy capability cgroup_seclabel=1 Oct 13 05:35:12.538364 kernel: SELinux: policy capability nnp_nosuid_transition=1 Oct 13 05:35:12.538374 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Oct 13 05:35:12.538383 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Oct 13 05:35:12.538393 kernel: SELinux: policy capability userspace_initial_context=0 Oct 13 05:35:12.538402 kernel: audit: type=1403 audit(1760333705.130:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Oct 13 05:35:12.538413 systemd[1]: Successfully loaded SELinux policy in 279.815ms. Oct 13 05:35:12.538426 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 4.425ms. Oct 13 05:35:12.538439 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Oct 13 05:35:12.538451 systemd[1]: Detected virtualization microsoft. Oct 13 05:35:12.538463 systemd[1]: Detected architecture x86-64. Oct 13 05:35:12.538474 systemd[1]: Detected first boot. Oct 13 05:35:12.538485 systemd[1]: Hostname set to . Oct 13 05:35:12.538495 systemd[1]: Initializing machine ID from random generator. Oct 13 05:35:12.538506 zram_generator::config[2059]: No configuration found. Oct 13 05:35:12.538517 kernel: Guest personality initialized and is inactive Oct 13 05:35:12.538529 kernel: VMCI host device registered (name=vmci, major=10, minor=124) Oct 13 05:35:12.538539 kernel: Initialized host personality Oct 13 05:35:12.538549 kernel: NET: Registered PF_VSOCK protocol family Oct 13 05:35:12.538560 systemd[1]: Populated /etc with preset unit settings. Oct 13 05:35:12.538571 systemd[1]: initrd-switch-root.service: Deactivated successfully. Oct 13 05:35:12.538581 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Oct 13 05:35:12.538593 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Oct 13 05:35:12.538605 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Oct 13 05:35:12.538616 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Oct 13 05:35:12.538627 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Oct 13 05:35:12.538638 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Oct 13 05:35:12.538649 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Oct 13 05:35:12.538662 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Oct 13 05:35:12.538672 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Oct 13 05:35:12.538682 systemd[1]: Created slice user.slice - User and Session Slice. Oct 13 05:35:12.538693 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 13 05:35:12.538704 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Oct 13 05:35:12.538715 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Oct 13 05:35:12.538728 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Oct 13 05:35:12.538740 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Oct 13 05:35:12.538753 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Oct 13 05:35:12.538764 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Oct 13 05:35:12.538775 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 13 05:35:12.538787 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Oct 13 05:35:12.538801 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Oct 13 05:35:12.538812 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Oct 13 05:35:12.538823 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Oct 13 05:35:12.538834 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Oct 13 05:35:12.538845 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 13 05:35:12.538856 systemd[1]: Reached target remote-fs.target - Remote File Systems. Oct 13 05:35:12.538867 systemd[1]: Reached target slices.target - Slice Units. Oct 13 05:35:12.538880 systemd[1]: Reached target swap.target - Swaps. Oct 13 05:35:12.538891 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Oct 13 05:35:12.538902 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Oct 13 05:35:12.538913 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Oct 13 05:35:12.538924 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Oct 13 05:35:12.538937 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Oct 13 05:35:12.538948 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Oct 13 05:35:12.538959 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Oct 13 05:35:12.538970 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Oct 13 05:35:12.538981 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Oct 13 05:35:12.538993 systemd[1]: Mounting media.mount - External Media Directory... Oct 13 05:35:12.539005 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 13 05:35:12.539016 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Oct 13 05:35:12.539026 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Oct 13 05:35:12.539038 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Oct 13 05:35:12.539050 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Oct 13 05:35:12.539061 systemd[1]: Reached target machines.target - Containers. Oct 13 05:35:12.539074 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Oct 13 05:35:12.539085 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Oct 13 05:35:12.539096 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Oct 13 05:35:12.539107 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Oct 13 05:35:12.539118 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 13 05:35:12.539129 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Oct 13 05:35:12.539140 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 13 05:35:12.539154 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Oct 13 05:35:12.539164 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 13 05:35:12.539176 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Oct 13 05:35:12.539187 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Oct 13 05:35:12.539198 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Oct 13 05:35:12.539246 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Oct 13 05:35:12.539257 systemd[1]: Stopped systemd-fsck-usr.service. Oct 13 05:35:12.539270 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 13 05:35:12.539281 systemd[1]: Starting systemd-journald.service - Journal Service... Oct 13 05:35:12.539292 kernel: fuse: init (API version 7.41) Oct 13 05:35:12.539303 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Oct 13 05:35:12.539314 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Oct 13 05:35:12.539326 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Oct 13 05:35:12.539339 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Oct 13 05:35:12.539349 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Oct 13 05:35:12.539360 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 13 05:35:12.539371 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Oct 13 05:35:12.539383 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Oct 13 05:35:12.539394 systemd[1]: Mounted media.mount - External Media Directory. Oct 13 05:35:12.539406 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Oct 13 05:35:12.539419 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Oct 13 05:35:12.539430 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Oct 13 05:35:12.539441 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Oct 13 05:35:12.539452 systemd[1]: modprobe@configfs.service: Deactivated successfully. Oct 13 05:35:12.539463 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Oct 13 05:35:12.539475 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 13 05:35:12.539486 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 13 05:35:12.539498 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Oct 13 05:35:12.539509 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 13 05:35:12.539520 systemd[1]: modprobe@fuse.service: Deactivated successfully. Oct 13 05:35:12.539532 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Oct 13 05:35:12.539543 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 13 05:35:12.539554 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 13 05:35:12.539565 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Oct 13 05:35:12.539592 systemd-journald[2142]: Collecting audit messages is disabled. Oct 13 05:35:12.539616 systemd-journald[2142]: Journal started Oct 13 05:35:12.539641 systemd-journald[2142]: Runtime Journal (/run/log/journal/716eede3d9f540af84550bb6b2b9e058) is 8M, max 158.9M, 150.9M free. Oct 13 05:35:11.918997 systemd[1]: Queued start job for default target multi-user.target. Oct 13 05:35:11.929752 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Oct 13 05:35:11.930105 systemd[1]: systemd-journald.service: Deactivated successfully. Oct 13 05:35:12.545326 systemd[1]: Started systemd-journald.service - Journal Service. Oct 13 05:35:12.547944 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Oct 13 05:35:12.554644 systemd[1]: Reached target network-pre.target - Preparation for Network. Oct 13 05:35:12.579843 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Oct 13 05:35:12.585128 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Oct 13 05:35:12.591638 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Oct 13 05:35:12.594548 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Oct 13 05:35:12.594586 systemd[1]: Reached target local-fs.target - Local File Systems. Oct 13 05:35:12.599411 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Oct 13 05:35:12.602395 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 13 05:35:12.618363 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Oct 13 05:35:12.624326 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Oct 13 05:35:12.626816 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 13 05:35:12.630048 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Oct 13 05:35:12.632505 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Oct 13 05:35:12.640294 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Oct 13 05:35:12.650810 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Oct 13 05:35:12.654945 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Oct 13 05:35:12.661368 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Oct 13 05:35:12.664928 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. 
Oct 13 05:35:12.669585 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Oct 13 05:35:12.680454 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Oct 13 05:35:12.683720 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Oct 13 05:35:12.686813 kernel: ACPI: bus type drm_connector registered Oct 13 05:35:12.688821 systemd-journald[2142]: Time spent on flushing to /var/log/journal/716eede3d9f540af84550bb6b2b9e058 is 13.049ms for 980 entries. Oct 13 05:35:12.688821 systemd-journald[2142]: System Journal (/var/log/journal/716eede3d9f540af84550bb6b2b9e058) is 8M, max 2.6G, 2.6G free. Oct 13 05:35:12.773835 systemd-journald[2142]: Received client request to flush runtime journal. Oct 13 05:35:12.773880 kernel: loop1: detected capacity change from 0 to 27752 Oct 13 05:35:12.690686 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 13 05:35:12.691282 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Oct 13 05:35:12.695262 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Oct 13 05:35:12.697401 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Oct 13 05:35:12.701584 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Oct 13 05:35:12.711264 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Oct 13 05:35:12.774676 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Oct 13 05:35:12.867902 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Oct 13 05:35:12.937370 systemd-tmpfiles[2189]: ACLs are not supported, ignoring. Oct 13 05:35:12.937388 systemd-tmpfiles[2189]: ACLs are not supported, ignoring. Oct 13 05:35:12.939937 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Oct 13 05:35:12.943226 systemd[1]: Starting systemd-sysusers.service - Create System Users... Oct 13 05:35:13.757079 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Oct 13 05:35:13.758736 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Oct 13 05:35:13.845247 kernel: loop2: detected capacity change from 0 to 128048 Oct 13 05:35:14.291503 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Oct 13 05:35:14.428804 systemd[1]: Finished systemd-sysusers.service - Create System Users. Oct 13 05:35:14.432651 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Oct 13 05:35:14.436675 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Oct 13 05:35:14.457120 systemd-tmpfiles[2221]: ACLs are not supported, ignoring. Oct 13 05:35:14.457140 systemd-tmpfiles[2221]: ACLs are not supported, ignoring. Oct 13 05:35:14.459618 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 13 05:35:14.464826 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 13 05:35:14.494657 systemd-udevd[2224]: Using default interface naming scheme 'v257'. Oct 13 05:35:14.537065 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Oct 13 05:35:14.582181 systemd[1]: Started systemd-userdbd.service - User Database Manager. 
Oct 13 05:35:14.662244 kernel: loop3: detected capacity change from 0 to 229808 Oct 13 05:35:14.727149 systemd-resolved[2220]: Positive Trust Anchors: Oct 13 05:35:14.727166 systemd-resolved[2220]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 13 05:35:14.727170 systemd-resolved[2220]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Oct 13 05:35:14.727218 systemd-resolved[2220]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Oct 13 05:35:14.735215 kernel: loop4: detected capacity change from 0 to 110984 Oct 13 05:35:15.003953 systemd-resolved[2220]: Using system hostname 'ci-4487.0.0-a-8f52350bac'. Oct 13 05:35:15.005062 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Oct 13 05:35:15.008339 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Oct 13 05:35:15.418231 kernel: loop5: detected capacity change from 0 to 27752 Oct 13 05:35:15.520225 kernel: loop6: detected capacity change from 0 to 128048 Oct 13 05:35:15.616224 kernel: loop7: detected capacity change from 0 to 229808 Oct 13 05:35:15.631219 kernel: loop1: detected capacity change from 0 to 110984 Oct 13 05:35:15.823467 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 13 05:35:15.831331 systemd[1]: Starting systemd-networkd.service - Network Configuration... Oct 13 05:35:15.864012 (sd-merge)[2235]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw', 'oem-azure.raw'. Oct 13 05:35:15.869695 (sd-merge)[2235]: Merged extensions into '/usr'. Oct 13 05:35:15.879537 systemd[1]: Reload requested from client PID 2188 ('systemd-sysext') (unit systemd-sysext.service)... Oct 13 05:35:15.879555 systemd[1]: Reloading... Oct 13 05:35:15.944385 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#141 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Oct 13 05:35:15.982230 zram_generator::config[2300]: No configuration found. 
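The (sd-merge) lines above show systemd-sysext overlaying the containerd, docker, kubernetes and oem-azure extension images into /usr, after which systemd is asked to reload. From a booted system the same state can be inspected with the systemd-sysext CLI; the short Python wrapper below only shells out to it, and any further parsing of the human-readable output is deliberately left out.

```python
import subprocess

def sysext_list():
    # "systemd-sysext list" prints the extension images that are available;
    # "systemd-sysext status" shows what is currently merged into /usr and /opt.
    out = subprocess.run(["systemd-sysext", "list"],
                         capture_output=True, text=True, check=True)
    return out.stdout.splitlines()

def sysext_refresh():
    # Re-merges after an image such as /etc/extensions/kubernetes.raw changes.
    subprocess.run(["systemd-sysext", "refresh"], check=True)
```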
Oct 13 05:35:16.002244 kernel: mousedev: PS/2 mouse device common for all mice Oct 13 05:35:16.022051 kernel: hv_vmbus: registering driver hyperv_fb Oct 13 05:35:16.027228 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Oct 13 05:35:16.027290 kernel: hv_vmbus: registering driver hv_balloon Oct 13 05:35:16.027310 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Oct 13 05:35:16.030625 kernel: Console: switching to colour dummy device 80x25 Oct 13 05:35:16.033239 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Oct 13 05:35:16.039834 kernel: Console: switching to colour frame buffer device 128x48 Oct 13 05:35:16.255256 systemd-networkd[2245]: lo: Link UP Oct 13 05:35:16.255268 systemd-networkd[2245]: lo: Gained carrier Oct 13 05:35:16.256709 systemd-networkd[2245]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Oct 13 05:35:16.256720 systemd-networkd[2245]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 13 05:35:16.259224 kernel: mana 7870:00:00.0 enP30832s1: Configured vPort 0 PD 18 DB 16 Oct 13 05:35:16.265542 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64 Oct 13 05:35:16.265811 kernel: hv_netvsc f8615163-0000-1000-2000-7c1e52360e62 eth0: Data path switched to VF: enP30832s1 Oct 13 05:35:16.266514 systemd-networkd[2245]: enP30832s1: Link UP Oct 13 05:35:16.266604 systemd-networkd[2245]: eth0: Link UP Oct 13 05:35:16.266607 systemd-networkd[2245]: eth0: Gained carrier Oct 13 05:35:16.266621 systemd-networkd[2245]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Oct 13 05:35:16.272826 systemd-networkd[2245]: enP30832s1: Gained carrier Oct 13 05:35:16.280263 systemd-networkd[2245]: eth0: DHCPv4 address 10.200.8.43/24, gateway 10.200.8.1 acquired from 168.63.129.16 Oct 13 05:35:16.315356 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Oct 13 05:35:16.315487 systemd[1]: Reloading finished in 435 ms. Oct 13 05:35:16.338261 systemd[1]: Started systemd-networkd.service - Network Configuration. Oct 13 05:35:16.347236 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Oct 13 05:35:16.371661 systemd[1]: Reached target network.target - Network. Oct 13 05:35:16.381617 systemd[1]: Starting ensure-sysext.service... Oct 13 05:35:16.384377 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Oct 13 05:35:16.388083 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Oct 13 05:35:16.398417 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Oct 13 05:35:16.412240 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 13 05:35:16.430496 systemd[1]: Reload requested from client PID 2383 ('systemctl') (unit ensure-sysext.service)... Oct 13 05:35:16.430576 systemd[1]: Reloading... Oct 13 05:35:16.490233 zram_generator::config[2418]: No configuration found. Oct 13 05:35:16.627809 systemd-tmpfiles[2386]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Oct 13 05:35:16.627838 systemd-tmpfiles[2386]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. 
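The systemd-networkd lines above show eth0 matched by the stock zz-default.network policy, the MANA VF taking over the data path, and a DHCPv4 lease of 10.200.8.43/24 with gateway 10.200.8.1 handed out by 168.63.129.16. A minimal Python sketch for checking the resulting state on a running node follows; it relies on iproute2's JSON output (ip -j) and sysfs rather than any networkd API, and error handling is omitted.

```python
import json
import subprocess

def ipv4_addresses(ifname="eth0"):
    # iproute2 can emit JSON with -j; the log shows DHCP configuring
    # 10.200.8.43/24 on this interface from the Azure DHCP server.
    out = subprocess.run(["ip", "-j", "addr", "show", "dev", ifname],
                         capture_output=True, text=True, check=True)
    return [f"{a['local']}/{a['prefixlen']}"
            for iface in json.loads(out.stdout)
            for a in iface.get("addr_info", [])
            if a.get("family") == "inet"]

def operstate(ifname="eth0"):
    # Reads "up" once the link has gained carrier, as eth0 does above.
    with open(f"/sys/class/net/{ifname}/operstate") as f:
        return f.read().strip()
```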
Oct 13 05:35:16.628094 systemd-tmpfiles[2386]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Oct 13 05:35:16.628333 systemd-tmpfiles[2386]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Oct 13 05:35:16.630730 systemd-tmpfiles[2386]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Oct 13 05:35:16.633328 systemd-tmpfiles[2386]: ACLs are not supported, ignoring. Oct 13 05:35:16.633434 systemd-tmpfiles[2386]: ACLs are not supported, ignoring. Oct 13 05:35:16.674457 kernel: kvm_intel: Using Hyper-V Enlightened VMCS Oct 13 05:35:16.695726 systemd[1]: Reloading finished in 264 ms. Oct 13 05:35:16.739648 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 13 05:35:16.739867 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 13 05:35:16.740827 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 13 05:35:16.746424 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 13 05:35:16.751934 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 13 05:35:16.752559 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 13 05:35:16.753344 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 13 05:35:16.753660 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 13 05:35:16.755676 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Oct 13 05:35:16.759943 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 13 05:35:16.771470 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 13 05:35:16.775736 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 13 05:35:16.775881 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 13 05:35:16.780572 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 13 05:35:16.780721 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 13 05:35:16.786131 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 13 05:35:16.786770 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 13 05:35:16.788396 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 13 05:35:16.793421 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 13 05:35:16.799493 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 13 05:35:16.803367 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Oct 13 05:35:16.803513 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 13 05:35:16.803633 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 13 05:35:16.805066 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 13 05:35:16.805360 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 13 05:35:16.808402 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 13 05:35:16.808551 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 13 05:35:16.811539 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 13 05:35:16.812402 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 13 05:35:16.815008 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 13 05:35:16.815234 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 13 05:35:16.831809 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 13 05:35:16.832551 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 13 05:35:16.835165 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 13 05:35:16.840420 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Oct 13 05:35:16.844456 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 13 05:35:16.850018 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 13 05:35:16.851981 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 13 05:35:16.852103 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 13 05:35:16.853348 systemd[1]: Reached target time-set.target - System Time Set. Oct 13 05:35:16.856468 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 13 05:35:16.858700 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 13 05:35:16.860550 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 13 05:35:16.861280 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 13 05:35:16.868600 systemd[1]: Finished ensure-sysext.service. Oct 13 05:35:16.880158 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 13 05:35:16.884373 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 13 05:35:16.886903 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 13 05:35:16.887331 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Oct 13 05:35:16.889613 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 13 05:35:16.889774 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
Oct 13 05:35:16.890933 systemd-tmpfiles[2386]: Detected autofs mount point /boot during canonicalization of boot. Oct 13 05:35:16.890944 systemd-tmpfiles[2386]: Skipping /boot Oct 13 05:35:16.910442 systemd-tmpfiles[2386]: Detected autofs mount point /boot during canonicalization of boot. Oct 13 05:35:16.910452 systemd-tmpfiles[2386]: Skipping /boot Oct 13 05:35:16.917082 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - MSFT NVMe Accelerator v1.0 OEM. Oct 13 05:35:16.918480 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Oct 13 05:35:16.920008 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 13 05:35:16.920061 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Oct 13 05:35:16.930422 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 13 05:35:16.934362 systemd[1]: Starting audit-rules.service - Load Audit Rules... Oct 13 05:35:16.941335 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Oct 13 05:35:16.945445 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Oct 13 05:35:16.946796 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Oct 13 05:35:16.951245 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Oct 13 05:35:17.001395 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Oct 13 05:35:17.054163 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Oct 13 05:35:17.512484 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Oct 13 05:35:17.882549 augenrules[2548]: No rules Oct 13 05:35:17.883526 systemd[1]: audit-rules.service: Deactivated successfully. Oct 13 05:35:17.883723 systemd[1]: Finished audit-rules.service - Load Audit Rules. Oct 13 05:35:17.885483 systemd-networkd[2245]: eth0: Gained IPv6LL Oct 13 05:35:17.887413 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Oct 13 05:35:17.889507 systemd[1]: Reached target network-online.target - Network is Online. Oct 13 05:35:19.031680 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 13 05:35:20.577990 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Oct 13 05:35:20.582290 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Oct 13 05:35:28.570805 ldconfig[2522]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Oct 13 05:35:28.582877 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Oct 13 05:35:28.586647 systemd[1]: Starting systemd-update-done.service - Update is Completed... Oct 13 05:35:28.623432 systemd[1]: Finished systemd-update-done.service - Update is Completed. Oct 13 05:35:28.625272 systemd[1]: Reached target sysinit.target - System Initialization. Oct 13 05:35:28.628374 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. 
Oct 13 05:35:28.630083 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Oct 13 05:35:28.633246 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Oct 13 05:35:28.636361 systemd[1]: Started logrotate.timer - Daily rotation of log files. Oct 13 05:35:28.639331 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Oct 13 05:35:28.642259 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Oct 13 05:35:28.643864 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Oct 13 05:35:28.643902 systemd[1]: Reached target paths.target - Path Units. Oct 13 05:35:28.645172 systemd[1]: Reached target timers.target - Timer Units. Oct 13 05:35:28.691393 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Oct 13 05:35:28.696214 systemd[1]: Starting docker.socket - Docker Socket for the API... Oct 13 05:35:28.699469 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Oct 13 05:35:28.701446 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Oct 13 05:35:28.704258 systemd[1]: Reached target ssh-access.target - SSH Access Available. Oct 13 05:35:28.711641 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Oct 13 05:35:28.714549 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Oct 13 05:35:28.717811 systemd[1]: Listening on docker.socket - Docker Socket for the API. Oct 13 05:35:28.720946 systemd[1]: Reached target sockets.target - Socket Units. Oct 13 05:35:28.722559 systemd[1]: Reached target basic.target - Basic System. Oct 13 05:35:28.724102 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Oct 13 05:35:28.724129 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Oct 13 05:35:28.740551 systemd[1]: Starting chronyd.service - NTP client/server... Oct 13 05:35:28.745073 systemd[1]: Starting containerd.service - containerd container runtime... Oct 13 05:35:28.750387 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Oct 13 05:35:28.755335 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Oct 13 05:35:28.758922 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Oct 13 05:35:28.766280 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Oct 13 05:35:28.772348 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Oct 13 05:35:28.774877 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Oct 13 05:35:28.775779 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Oct 13 05:35:28.779146 systemd[1]: hv_fcopy_uio_daemon.service - Hyper-V FCOPY UIO daemon was skipped because of an unmet condition check (ConditionPathExists=/sys/bus/vmbus/devices/eb765408-105f-49b6-b4aa-c123b64d17d4/uio). Oct 13 05:35:28.786382 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. 
Oct 13 05:35:28.790409 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Oct 13 05:35:28.792351 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 13 05:35:28.798379 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Oct 13 05:35:28.803538 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Oct 13 05:35:28.808361 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Oct 13 05:35:28.809903 jq[2569]: false Oct 13 05:35:28.816486 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Oct 13 05:35:28.821234 chronyd[2564]: chronyd version 4.7 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +NTS +SECHASH +IPV6 -DEBUG) Oct 13 05:35:28.823974 KVP[2575]: KVP starting; pid is:2575 Oct 13 05:35:28.824171 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Oct 13 05:35:28.831312 google_oslogin_nss_cache[2574]: oslogin_cache_refresh[2574]: Refreshing passwd entry cache Oct 13 05:35:28.828886 oslogin_cache_refresh[2574]: Refreshing passwd entry cache Oct 13 05:35:28.832974 systemd[1]: Starting systemd-logind.service - User Login Management... Oct 13 05:35:28.834891 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Oct 13 05:35:28.836721 KVP[2575]: KVP LIC Version: 3.1 Oct 13 05:35:28.837707 kernel: hv_utils: KVP IC version 4.0 Oct 13 05:35:28.837973 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Oct 13 05:35:28.838925 systemd[1]: Starting update-engine.service - Update Engine... Oct 13 05:35:28.843508 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Oct 13 05:35:28.849251 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Oct 13 05:35:28.850675 google_oslogin_nss_cache[2574]: oslogin_cache_refresh[2574]: Failure getting users, quitting Oct 13 05:35:28.850675 google_oslogin_nss_cache[2574]: oslogin_cache_refresh[2574]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Oct 13 05:35:28.850675 google_oslogin_nss_cache[2574]: oslogin_cache_refresh[2574]: Refreshing group entry cache Oct 13 05:35:28.850003 oslogin_cache_refresh[2574]: Failure getting users, quitting Oct 13 05:35:28.850019 oslogin_cache_refresh[2574]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Oct 13 05:35:28.850060 oslogin_cache_refresh[2574]: Refreshing group entry cache Oct 13 05:35:28.854149 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Oct 13 05:35:28.855397 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Oct 13 05:35:28.874680 oslogin_cache_refresh[2574]: Failure getting groups, quitting Oct 13 05:35:28.872518 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Oct 13 05:35:28.875898 google_oslogin_nss_cache[2574]: oslogin_cache_refresh[2574]: Failure getting groups, quitting Oct 13 05:35:28.875898 google_oslogin_nss_cache[2574]: oslogin_cache_refresh[2574]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Oct 13 05:35:28.874697 oslogin_cache_refresh[2574]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. 
Oct 13 05:35:28.873310 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Oct 13 05:35:28.880519 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Oct 13 05:35:28.880717 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Oct 13 05:35:28.889532 extend-filesystems[2573]: Found /dev/nvme0n1p6 Oct 13 05:35:28.892298 jq[2588]: true Oct 13 05:35:28.896001 chronyd[2564]: Timezone right/UTC failed leap second check, ignoring Oct 13 05:35:28.896371 chronyd[2564]: Loaded seccomp filter (level 2) Oct 13 05:35:28.896473 systemd[1]: Started chronyd.service - NTP client/server. Oct 13 05:35:28.907193 (ntainerd)[2611]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Oct 13 05:35:28.912772 jq[2616]: true Oct 13 05:35:28.916267 extend-filesystems[2573]: Found /dev/nvme0n1p9 Oct 13 05:35:28.917476 systemd[1]: motdgen.service: Deactivated successfully. Oct 13 05:35:28.917689 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Oct 13 05:35:28.921057 extend-filesystems[2573]: Checking size of /dev/nvme0n1p9 Oct 13 05:35:28.930967 update_engine[2586]: I20251013 05:35:28.930896 2586 main.cc:92] Flatcar Update Engine starting Oct 13 05:35:28.936300 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Oct 13 05:35:28.953988 tar[2592]: linux-amd64/LICENSE Oct 13 05:35:28.954183 tar[2592]: linux-amd64/helm Oct 13 05:35:29.012886 extend-filesystems[2573]: Resized partition /dev/nvme0n1p9 Oct 13 05:35:29.025611 systemd-logind[2582]: New seat seat0. Oct 13 05:35:29.033771 systemd-logind[2582]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard) Oct 13 05:35:29.033953 systemd[1]: Started systemd-logind.service - User Login Management. Oct 13 05:35:29.048245 extend-filesystems[2654]: resize2fs 1.47.3 (8-Jul-2025) Oct 13 05:35:29.114677 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 7359488 to 7376891 blocks Oct 13 05:35:29.132637 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 7376891 Oct 13 05:35:29.133845 bash[2645]: Updated "/home/core/.ssh/authorized_keys" Oct 13 05:35:29.133496 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Oct 13 05:35:29.139272 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Oct 13 05:35:29.183639 extend-filesystems[2654]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Oct 13 05:35:29.183639 extend-filesystems[2654]: old_desc_blocks = 4, new_desc_blocks = 4 Oct 13 05:35:29.183639 extend-filesystems[2654]: The filesystem on /dev/nvme0n1p9 is now 7376891 (4k) blocks long. Oct 13 05:35:29.181967 systemd[1]: extend-filesystems.service: Deactivated successfully. Oct 13 05:35:29.194996 extend-filesystems[2573]: Resized filesystem in /dev/nvme0n1p9 Oct 13 05:35:29.182175 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Oct 13 05:35:29.387959 dbus-daemon[2567]: [system] SELinux support is enabled Oct 13 05:35:29.388345 systemd[1]: Started dbus.service - D-Bus System Message Bus. Oct 13 05:35:29.395035 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Oct 13 05:35:29.395065 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
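The extend-filesystems sequence above is an online grow: the ROOT partition (/dev/nvme0n1p9) turned out slightly larger than the ext4 filesystem on it, so resize2fs expands the filesystem in place while it stays mounted on /. A minimal sketch of that step, assuming the partition itself has already been enlarged; the device path is taken from the log, everything else is illustrative:

    # Minimal sketch of an online ext4 grow; ext4 can be resized upward
    # while mounted, and resize2fs with no size argument fills the device.
    import subprocess

    DEVICE = "/dev/nvme0n1p9"  # from the log above; differs on other nodes

    def grow_to_fill(device: str) -> None:
        subprocess.run(["resize2fs", device], check=True)

    if __name__ == "__main__":
        grow_to_fill(DEVICE)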
Oct 13 05:35:29.399852 dbus-daemon[2567]: [system] Successfully activated service 'org.freedesktop.systemd1' Oct 13 05:35:29.422911 update_engine[2586]: I20251013 05:35:29.404518 2586 update_check_scheduler.cc:74] Next update check in 5m50s Oct 13 05:35:29.400330 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Oct 13 05:35:29.400354 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Oct 13 05:35:29.402451 systemd[1]: Started update-engine.service - Update Engine. Oct 13 05:35:29.407689 systemd[1]: Started locksmithd.service - Cluster reboot manager. Oct 13 05:35:29.490281 tar[2592]: linux-amd64/README.md Oct 13 05:35:29.504798 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Oct 13 05:35:29.583337 coreos-metadata[2566]: Oct 13 05:35:29.583 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Oct 13 05:35:29.586518 coreos-metadata[2566]: Oct 13 05:35:29.586 INFO Fetch successful Oct 13 05:35:29.590478 coreos-metadata[2566]: Oct 13 05:35:29.590 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Oct 13 05:35:29.594228 coreos-metadata[2566]: Oct 13 05:35:29.594 INFO Fetch successful Oct 13 05:35:29.594633 coreos-metadata[2566]: Oct 13 05:35:29.594 INFO Fetching http://168.63.129.16/machine/8d90de43-2785-4937-a54d-fae5628602f3/452cf10c%2Dcdb5%2D4617%2D8db7%2D7d7a70788b71.%5Fci%2D4487.0.0%2Da%2D8f52350bac?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Oct 13 05:35:29.596014 coreos-metadata[2566]: Oct 13 05:35:29.595 INFO Fetch successful Oct 13 05:35:29.596255 coreos-metadata[2566]: Oct 13 05:35:29.596 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Oct 13 05:35:29.603803 coreos-metadata[2566]: Oct 13 05:35:29.603 INFO Fetch successful Oct 13 05:35:29.657337 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Oct 13 05:35:29.660483 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Oct 13 05:35:29.807157 locksmithd[2674]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Oct 13 05:35:30.027684 sshd_keygen[2623]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Oct 13 05:35:30.051810 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Oct 13 05:35:30.056186 systemd[1]: Starting issuegen.service - Generate /run/issue... Oct 13 05:35:30.061314 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Oct 13 05:35:30.081247 systemd[1]: issuegen.service: Deactivated successfully. Oct 13 05:35:30.081461 systemd[1]: Finished issuegen.service - Generate /run/issue. Oct 13 05:35:30.086429 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Oct 13 05:35:30.104279 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Oct 13 05:35:30.127381 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Oct 13 05:35:30.134762 systemd[1]: Started getty@tty1.service - Getty on tty1. Oct 13 05:35:30.138579 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Oct 13 05:35:30.140813 systemd[1]: Reached target getty.target - Login Prompts. 
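The metadata agent above talks to two Azure endpoints: the WireServer at 168.63.129.16 (versions, goal state, shared config) and the instance metadata service at 169.254.169.254 (vmSize). A sketch of the last fetch it logs; the Metadata: true header and the proxy bypass are standard IMDS requirements rather than something visible in the log itself:

    # Sketch of the IMDS vmSize query from the log above. IMDS must be
    # reached directly (no proxy) and with a "Metadata: true" header.
    import urllib.request

    URL = ("http://169.254.169.254/metadata/instance/compute/vmSize"
           "?api-version=2017-08-01&format=text")

    request = urllib.request.Request(URL, headers={"Metadata": "true"})
    opener = urllib.request.build_opener(urllib.request.ProxyHandler({}))  # bypass proxies
    with opener.open(request, timeout=5) as response:
        print(response.read().decode())  # e.g. "Standard_D4s_v3" (illustrative value)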
Oct 13 05:35:30.188016 containerd[2611]: time="2025-10-13T05:35:30Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Oct 13 05:35:30.188794 containerd[2611]: time="2025-10-13T05:35:30.188762674Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Oct 13 05:35:30.200362 containerd[2611]: time="2025-10-13T05:35:30.200331473Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="10.152µs" Oct 13 05:35:30.201369 containerd[2611]: time="2025-10-13T05:35:30.200431139Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Oct 13 05:35:30.201369 containerd[2611]: time="2025-10-13T05:35:30.200453216Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Oct 13 05:35:30.201369 containerd[2611]: time="2025-10-13T05:35:30.200588803Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Oct 13 05:35:30.201369 containerd[2611]: time="2025-10-13T05:35:30.200602135Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Oct 13 05:35:30.201369 containerd[2611]: time="2025-10-13T05:35:30.200623289Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Oct 13 05:35:30.201369 containerd[2611]: time="2025-10-13T05:35:30.200669099Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Oct 13 05:35:30.201369 containerd[2611]: time="2025-10-13T05:35:30.200680027Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Oct 13 05:35:30.201369 containerd[2611]: time="2025-10-13T05:35:30.200863859Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Oct 13 05:35:30.201369 containerd[2611]: time="2025-10-13T05:35:30.200876185Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Oct 13 05:35:30.201369 containerd[2611]: time="2025-10-13T05:35:30.200885654Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Oct 13 05:35:30.201369 containerd[2611]: time="2025-10-13T05:35:30.200894101Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Oct 13 05:35:30.201369 containerd[2611]: time="2025-10-13T05:35:30.200944881Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Oct 13 05:35:30.201595 containerd[2611]: time="2025-10-13T05:35:30.201096799Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Oct 13 05:35:30.201595 containerd[2611]: time="2025-10-13T05:35:30.201116971Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" 
id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Oct 13 05:35:30.201595 containerd[2611]: time="2025-10-13T05:35:30.201127362Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Oct 13 05:35:30.201595 containerd[2611]: time="2025-10-13T05:35:30.201150024Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Oct 13 05:35:30.203256 containerd[2611]: time="2025-10-13T05:35:30.203064012Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Oct 13 05:35:30.203256 containerd[2611]: time="2025-10-13T05:35:30.203138944Z" level=info msg="metadata content store policy set" policy=shared Oct 13 05:35:30.272514 containerd[2611]: time="2025-10-13T05:35:30.272479317Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Oct 13 05:35:30.272578 containerd[2611]: time="2025-10-13T05:35:30.272539777Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Oct 13 05:35:30.272578 containerd[2611]: time="2025-10-13T05:35:30.272555624Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Oct 13 05:35:30.272578 containerd[2611]: time="2025-10-13T05:35:30.272575150Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Oct 13 05:35:30.272649 containerd[2611]: time="2025-10-13T05:35:30.272587982Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Oct 13 05:35:30.272649 containerd[2611]: time="2025-10-13T05:35:30.272598332Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Oct 13 05:35:30.272649 containerd[2611]: time="2025-10-13T05:35:30.272633522Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Oct 13 05:35:30.272649 containerd[2611]: time="2025-10-13T05:35:30.272645575Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Oct 13 05:35:30.272729 containerd[2611]: time="2025-10-13T05:35:30.272657147Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Oct 13 05:35:30.272729 containerd[2611]: time="2025-10-13T05:35:30.272673289Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Oct 13 05:35:30.272729 containerd[2611]: time="2025-10-13T05:35:30.272682666Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Oct 13 05:35:30.272729 containerd[2611]: time="2025-10-13T05:35:30.272695660Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Oct 13 05:35:30.272857 containerd[2611]: time="2025-10-13T05:35:30.272806914Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Oct 13 05:35:30.272857 containerd[2611]: time="2025-10-13T05:35:30.272825607Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Oct 13 05:35:30.272857 containerd[2611]: time="2025-10-13T05:35:30.272838537Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Oct 13 05:35:30.272919 containerd[2611]: time="2025-10-13T05:35:30.272878664Z" level=info msg="loading 
plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Oct 13 05:35:30.272919 containerd[2611]: time="2025-10-13T05:35:30.272890904Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Oct 13 05:35:30.272919 containerd[2611]: time="2025-10-13T05:35:30.272901008Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Oct 13 05:35:30.272919 containerd[2611]: time="2025-10-13T05:35:30.272916810Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Oct 13 05:35:30.273004 containerd[2611]: time="2025-10-13T05:35:30.272973948Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Oct 13 05:35:30.273004 containerd[2611]: time="2025-10-13T05:35:30.272986125Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Oct 13 05:35:30.273004 containerd[2611]: time="2025-10-13T05:35:30.272996550Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Oct 13 05:35:30.273057 containerd[2611]: time="2025-10-13T05:35:30.273007475Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Oct 13 05:35:30.273106 containerd[2611]: time="2025-10-13T05:35:30.273082789Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Oct 13 05:35:30.273131 containerd[2611]: time="2025-10-13T05:35:30.273104268Z" level=info msg="Start snapshots syncer" Oct 13 05:35:30.273150 containerd[2611]: time="2025-10-13T05:35:30.273137518Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Oct 13 05:35:30.273871 containerd[2611]: time="2025-10-13T05:35:30.273409145Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Oct 13 05:35:30.273871 containerd[2611]: time="2025-10-13T05:35:30.273469843Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Oct 13 05:35:30.274047 containerd[2611]: time="2025-10-13T05:35:30.273539196Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Oct 13 05:35:30.274047 containerd[2611]: time="2025-10-13T05:35:30.273670709Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Oct 13 05:35:30.274047 containerd[2611]: time="2025-10-13T05:35:30.273691513Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Oct 13 05:35:30.274047 containerd[2611]: time="2025-10-13T05:35:30.273702277Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Oct 13 05:35:30.274047 containerd[2611]: time="2025-10-13T05:35:30.273712515Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Oct 13 05:35:30.274047 containerd[2611]: time="2025-10-13T05:35:30.273725208Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Oct 13 05:35:30.274047 containerd[2611]: time="2025-10-13T05:35:30.273735753Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Oct 13 05:35:30.274047 containerd[2611]: time="2025-10-13T05:35:30.273747075Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Oct 13 05:35:30.274047 containerd[2611]: time="2025-10-13T05:35:30.273770986Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Oct 13 05:35:30.274047 containerd[2611]: 
time="2025-10-13T05:35:30.273784268Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Oct 13 05:35:30.274047 containerd[2611]: time="2025-10-13T05:35:30.273794443Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Oct 13 05:35:30.274047 containerd[2611]: time="2025-10-13T05:35:30.273824547Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Oct 13 05:35:30.274047 containerd[2611]: time="2025-10-13T05:35:30.273837461Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Oct 13 05:35:30.274047 containerd[2611]: time="2025-10-13T05:35:30.273848435Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Oct 13 05:35:30.274340 containerd[2611]: time="2025-10-13T05:35:30.273857161Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Oct 13 05:35:30.274340 containerd[2611]: time="2025-10-13T05:35:30.273864593Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Oct 13 05:35:30.274340 containerd[2611]: time="2025-10-13T05:35:30.273874637Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Oct 13 05:35:30.274340 containerd[2611]: time="2025-10-13T05:35:30.273887838Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Oct 13 05:35:30.274340 containerd[2611]: time="2025-10-13T05:35:30.273902934Z" level=info msg="runtime interface created" Oct 13 05:35:30.274340 containerd[2611]: time="2025-10-13T05:35:30.273907927Z" level=info msg="created NRI interface" Oct 13 05:35:30.274340 containerd[2611]: time="2025-10-13T05:35:30.273915631Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Oct 13 05:35:30.274340 containerd[2611]: time="2025-10-13T05:35:30.273926778Z" level=info msg="Connect containerd service" Oct 13 05:35:30.274340 containerd[2611]: time="2025-10-13T05:35:30.273953144Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Oct 13 05:35:30.275513 containerd[2611]: time="2025-10-13T05:35:30.275486052Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 13 05:35:30.836334 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
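The cni load error that closes the containerd startup above is expected at this point: the CRI plugin looks for a network configuration under /etc/cni/net.d (with binaries under /opt/cni/bin, per the config dump), and nothing has installed one yet; a pod network add-on normally drops it in after the node joins a cluster. For illustration only, a hypothetical minimal bridge conflist of the shape containerd accepts, where the name, bridge device, and subnet are invented rather than anything this node will end up with:

    # Hypothetical minimal CNI conflist written where containerd's CRI
    # plugin looks for network config. All values below are invented.
    import json
    import pathlib

    conflist = {
        "cniVersion": "1.0.0",
        "name": "example-net",
        "plugins": [
            {
                "type": "bridge",
                "bridge": "cni0",
                "isGateway": True,
                "ipMasq": True,
                "ipam": {
                    "type": "host-local",
                    "ranges": [[{"subnet": "10.88.0.0/16"}]],
                    "routes": [{"dst": "0.0.0.0/0"}],
                },
            },
            {"type": "portmap", "capabilities": {"portMappings": True}},
        ],
    }

    target = pathlib.Path("/etc/cni/net.d/10-example.conflist")
    target.write_text(json.dumps(conflist, indent=2) + "\n")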
Oct 13 05:35:30.844532 (kubelet)[2730]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 13 05:35:31.302725 containerd[2611]: time="2025-10-13T05:35:31.302618286Z" level=info msg="Start subscribing containerd event" Oct 13 05:35:31.302725 containerd[2611]: time="2025-10-13T05:35:31.302671676Z" level=info msg="Start recovering state" Oct 13 05:35:31.303080 containerd[2611]: time="2025-10-13T05:35:31.302772740Z" level=info msg="Start event monitor" Oct 13 05:35:31.303080 containerd[2611]: time="2025-10-13T05:35:31.302783955Z" level=info msg="Start cni network conf syncer for default" Oct 13 05:35:31.303080 containerd[2611]: time="2025-10-13T05:35:31.302790505Z" level=info msg="Start streaming server" Oct 13 05:35:31.303080 containerd[2611]: time="2025-10-13T05:35:31.302802579Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Oct 13 05:35:31.303080 containerd[2611]: time="2025-10-13T05:35:31.302810217Z" level=info msg="runtime interface starting up..." Oct 13 05:35:31.303080 containerd[2611]: time="2025-10-13T05:35:31.302816160Z" level=info msg="starting plugins..." Oct 13 05:35:31.303080 containerd[2611]: time="2025-10-13T05:35:31.302827839Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Oct 13 05:35:31.303443 containerd[2611]: time="2025-10-13T05:35:31.303355532Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Oct 13 05:35:31.303443 containerd[2611]: time="2025-10-13T05:35:31.303408335Z" level=info msg=serving... address=/run/containerd/containerd.sock Oct 13 05:35:31.303577 systemd[1]: Started containerd.service - containerd container runtime. Oct 13 05:35:31.306447 systemd[1]: Reached target multi-user.target - Multi-User System. Oct 13 05:35:31.306914 containerd[2611]: time="2025-10-13T05:35:31.306747844Z" level=info msg="containerd successfully booted in 1.119109s" Oct 13 05:35:31.309586 systemd[1]: Startup finished in 5.025s (kernel) + 17.056s (initrd) + 26.457s (userspace) = 48.539s. Oct 13 05:35:32.372066 kubelet[2730]: E1013 05:35:32.372018 2730 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 13 05:35:32.374003 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 13 05:35:32.374113 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 13 05:35:32.374734 systemd[1]: kubelet.service: Consumed 974ms CPU time, 267.4M memory peak. Oct 13 05:35:32.779707 login[2714]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying Oct 13 05:35:32.781100 login[2715]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Oct 13 05:35:32.786549 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Oct 13 05:35:32.787754 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Oct 13 05:35:32.795514 systemd-logind[2582]: New session 1 of user core. Oct 13 05:35:32.820910 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Oct 13 05:35:32.823820 systemd[1]: Starting user@500.service - User Manager for UID 500... 
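The kubelet exit above, and the identical restarts later in the log, hit the same missing-file condition each time: /var/lib/kubelet/config.yaml does not exist until the node is joined to a cluster, at which point kubeadm normally writes it, so the failures are benign during first boot. For reference, the loader only needs a KubeletConfiguration document; the values in this sketch are illustrative assumptions, not what the node will eventually run with:

    # Illustrative minimal KubeletConfiguration at the path kubelet expects;
    # the cgroupDriver/failSwapOn values are assumptions for the sketch.
    import pathlib

    lines = [
        "apiVersion: kubelet.config.k8s.io/v1beta1",
        "kind: KubeletConfiguration",
        "cgroupDriver: systemd",
        "failSwapOn: false",
    ]

    kubelet_dir = pathlib.Path("/var/lib/kubelet")
    kubelet_dir.mkdir(parents=True, exist_ok=True)
    (kubelet_dir / "config.yaml").write_text("\n".join(lines) + "\n")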
Oct 13 05:35:32.883941 (systemd)[2747]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Oct 13 05:35:32.885836 systemd-logind[2582]: New session c1 of user core. Oct 13 05:35:33.524829 systemd[2747]: Queued start job for default target default.target. Oct 13 05:35:33.532039 systemd[2747]: Created slice app.slice - User Application Slice. Oct 13 05:35:33.532067 systemd[2747]: Reached target paths.target - Paths. Oct 13 05:35:33.532230 systemd[2747]: Reached target timers.target - Timers. Oct 13 05:35:33.533143 systemd[2747]: Starting dbus.socket - D-Bus User Message Bus Socket... Oct 13 05:35:33.541898 systemd[2747]: Listening on dbus.socket - D-Bus User Message Bus Socket. Oct 13 05:35:33.541951 systemd[2747]: Reached target sockets.target - Sockets. Oct 13 05:35:33.541990 systemd[2747]: Reached target basic.target - Basic System. Oct 13 05:35:33.542020 systemd[2747]: Reached target default.target - Main User Target. Oct 13 05:35:33.542043 systemd[2747]: Startup finished in 649ms. Oct 13 05:35:33.542236 systemd[1]: Started user@500.service - User Manager for UID 500. Oct 13 05:35:33.543826 systemd[1]: Started session-1.scope - Session 1 of User core. Oct 13 05:35:33.581418 waagent[2712]: 2025-10-13T05:35:33.581349Z INFO Daemon Daemon Azure Linux Agent Version: 2.12.0.4 Oct 13 05:35:33.583060 waagent[2712]: 2025-10-13T05:35:33.582053Z INFO Daemon Daemon OS: flatcar 4487.0.0 Oct 13 05:35:33.583957 waagent[2712]: 2025-10-13T05:35:33.583924Z INFO Daemon Daemon Python: 3.11.13 Oct 13 05:35:33.585239 waagent[2712]: 2025-10-13T05:35:33.585188Z INFO Daemon Daemon Run daemon Oct 13 05:35:33.590521 waagent[2712]: 2025-10-13T05:35:33.585909Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4487.0.0' Oct 13 05:35:33.590521 waagent[2712]: 2025-10-13T05:35:33.586195Z INFO Daemon Daemon Using waagent for provisioning Oct 13 05:35:33.590521 waagent[2712]: 2025-10-13T05:35:33.586877Z INFO Daemon Daemon Activate resource disk Oct 13 05:35:33.590521 waagent[2712]: 2025-10-13T05:35:33.587112Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Oct 13 05:35:33.590521 waagent[2712]: 2025-10-13T05:35:33.588882Z INFO Daemon Daemon Found device: None Oct 13 05:35:33.590521 waagent[2712]: 2025-10-13T05:35:33.589309Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Oct 13 05:35:33.590521 waagent[2712]: 2025-10-13T05:35:33.589376Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Oct 13 05:35:33.590521 waagent[2712]: 2025-10-13T05:35:33.589955Z INFO Daemon Daemon Clean protocol and wireserver endpoint Oct 13 05:35:33.590521 waagent[2712]: 2025-10-13T05:35:33.590059Z INFO Daemon Daemon Running default provisioning handler Oct 13 05:35:33.597218 waagent[2712]: 2025-10-13T05:35:33.597132Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. 
Oct 13 05:35:33.598351 waagent[2712]: 2025-10-13T05:35:33.597825Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Oct 13 05:35:33.598351 waagent[2712]: 2025-10-13T05:35:33.598007Z INFO Daemon Daemon cloud-init is enabled: False Oct 13 05:35:33.598351 waagent[2712]: 2025-10-13T05:35:33.598060Z INFO Daemon Daemon Copying ovf-env.xml Oct 13 05:35:33.779948 login[2714]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Oct 13 05:35:33.783995 systemd-logind[2582]: New session 2 of user core. Oct 13 05:35:33.791321 systemd[1]: Started session-2.scope - Session 2 of User core. Oct 13 05:35:33.843434 waagent[2712]: 2025-10-13T05:35:33.843381Z INFO Daemon Daemon Successfully mounted dvd Oct 13 05:35:33.899873 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Oct 13 05:35:33.901808 waagent[2712]: 2025-10-13T05:35:33.901757Z INFO Daemon Daemon Detect protocol endpoint Oct 13 05:35:33.903217 waagent[2712]: 2025-10-13T05:35:33.903166Z INFO Daemon Daemon Clean protocol and wireserver endpoint Oct 13 05:35:33.904656 waagent[2712]: 2025-10-13T05:35:33.904620Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler Oct 13 05:35:33.904869 waagent[2712]: 2025-10-13T05:35:33.904844Z INFO Daemon Daemon Test for route to 168.63.129.16 Oct 13 05:35:33.908402 waagent[2712]: 2025-10-13T05:35:33.908345Z INFO Daemon Daemon Route to 168.63.129.16 exists Oct 13 05:35:33.909687 waagent[2712]: 2025-10-13T05:35:33.909655Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Oct 13 05:35:33.943626 waagent[2712]: 2025-10-13T05:35:33.943582Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Oct 13 05:35:33.945191 waagent[2712]: 2025-10-13T05:35:33.945166Z INFO Daemon Daemon Wire protocol version:2012-11-30 Oct 13 05:35:33.946114 waagent[2712]: 2025-10-13T05:35:33.945833Z INFO Daemon Daemon Server preferred version:2015-04-05 Oct 13 05:35:34.058580 waagent[2712]: 2025-10-13T05:35:34.058452Z INFO Daemon Daemon Initializing goal state during protocol detection Oct 13 05:35:34.060305 waagent[2712]: 2025-10-13T05:35:34.060258Z INFO Daemon Daemon Forcing an update of the goal state. Oct 13 05:35:34.075378 waagent[2712]: 2025-10-13T05:35:34.075340Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Oct 13 05:35:34.098038 waagent[2712]: 2025-10-13T05:35:34.098002Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.177 Oct 13 05:35:34.099119 waagent[2712]: 2025-10-13T05:35:34.098926Z INFO Daemon Oct 13 05:35:34.099119 waagent[2712]: 2025-10-13T05:35:34.099121Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: fb6ccb2b-0196-4aec-8012-9fc0a64ab23e eTag: 12389815929412194318 source: Fabric] Oct 13 05:35:34.099119 waagent[2712]: 2025-10-13T05:35:34.099408Z INFO Daemon The vmSettings originated via Fabric; will ignore them. 
Oct 13 05:35:34.099119 waagent[2712]: 2025-10-13T05:35:34.099735Z INFO Daemon Oct 13 05:35:34.099119 waagent[2712]: 2025-10-13T05:35:34.099822Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Oct 13 05:35:34.104904 waagent[2712]: 2025-10-13T05:35:34.104504Z INFO Daemon Daemon Downloading artifacts profile blob Oct 13 05:35:34.171885 waagent[2712]: 2025-10-13T05:35:34.171834Z INFO Daemon Downloaded certificate {'thumbprint': '0F3596C231C10AC51F4ED6907FC0966516A441CE', 'hasPrivateKey': True} Oct 13 05:35:34.174488 waagent[2712]: 2025-10-13T05:35:34.174450Z INFO Daemon Fetch goal state completed Oct 13 05:35:34.186895 waagent[2712]: 2025-10-13T05:35:34.186843Z INFO Daemon Daemon Starting provisioning Oct 13 05:35:34.188140 waagent[2712]: 2025-10-13T05:35:34.188047Z INFO Daemon Daemon Handle ovf-env.xml. Oct 13 05:35:34.188552 waagent[2712]: 2025-10-13T05:35:34.188526Z INFO Daemon Daemon Set hostname [ci-4487.0.0-a-8f52350bac] Oct 13 05:35:34.301785 waagent[2712]: 2025-10-13T05:35:34.301736Z INFO Daemon Daemon Publish hostname [ci-4487.0.0-a-8f52350bac] Oct 13 05:35:34.303460 waagent[2712]: 2025-10-13T05:35:34.303421Z INFO Daemon Daemon Examine /proc/net/route for primary interface Oct 13 05:35:34.305081 waagent[2712]: 2025-10-13T05:35:34.305050Z INFO Daemon Daemon Primary interface is [eth0] Oct 13 05:35:34.312878 systemd-networkd[2245]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Oct 13 05:35:34.312886 systemd-networkd[2245]: eth0: Reconfiguring with /usr/lib/systemd/network/zz-default.network. Oct 13 05:35:34.312946 systemd-networkd[2245]: eth0: DHCP lease lost Oct 13 05:35:34.330494 waagent[2712]: 2025-10-13T05:35:34.330445Z INFO Daemon Daemon Create user account if not exists Oct 13 05:35:34.331729 waagent[2712]: 2025-10-13T05:35:34.331153Z INFO Daemon Daemon User core already exists, skip useradd Oct 13 05:35:34.331729 waagent[2712]: 2025-10-13T05:35:34.331445Z INFO Daemon Daemon Configure sudoer Oct 13 05:35:34.339234 systemd-networkd[2245]: eth0: DHCPv4 address 10.200.8.43/24, gateway 10.200.8.1 acquired from 168.63.129.16 Oct 13 05:35:34.374453 waagent[2712]: 2025-10-13T05:35:34.374397Z INFO Daemon Daemon Configure sshd Oct 13 05:35:34.380581 waagent[2712]: 2025-10-13T05:35:34.380537Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Oct 13 05:35:34.381768 waagent[2712]: 2025-10-13T05:35:34.381045Z INFO Daemon Daemon Deploy ssh public key. Oct 13 05:35:35.494863 waagent[2712]: 2025-10-13T05:35:35.494793Z INFO Daemon Daemon Provisioning complete Oct 13 05:35:35.509364 waagent[2712]: 2025-10-13T05:35:35.509327Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Oct 13 05:35:35.510817 waagent[2712]: 2025-10-13T05:35:35.510784Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. 
Oct 13 05:35:35.512750 waagent[2712]: 2025-10-13T05:35:35.511620Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.12.0.4 is the most current agent Oct 13 05:35:35.620233 waagent[2798]: 2025-10-13T05:35:35.620164Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.12.0.4) Oct 13 05:35:35.620519 waagent[2798]: 2025-10-13T05:35:35.620277Z INFO ExtHandler ExtHandler OS: flatcar 4487.0.0 Oct 13 05:35:35.620519 waagent[2798]: 2025-10-13T05:35:35.620323Z INFO ExtHandler ExtHandler Python: 3.11.13 Oct 13 05:35:35.620519 waagent[2798]: 2025-10-13T05:35:35.620365Z INFO ExtHandler ExtHandler CPU Arch: x86_64 Oct 13 05:35:35.671530 waagent[2798]: 2025-10-13T05:35:35.671480Z INFO ExtHandler ExtHandler Distro: flatcar-4487.0.0; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.13; Arch: x86_64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.22.0; Oct 13 05:35:35.671659 waagent[2798]: 2025-10-13T05:35:35.671631Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Oct 13 05:35:35.671709 waagent[2798]: 2025-10-13T05:35:35.671688Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Oct 13 05:35:35.683887 waagent[2798]: 2025-10-13T05:35:35.683827Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Oct 13 05:35:35.689788 waagent[2798]: 2025-10-13T05:35:35.689756Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.177 Oct 13 05:35:35.690137 waagent[2798]: 2025-10-13T05:35:35.690106Z INFO ExtHandler Oct 13 05:35:35.690184 waagent[2798]: 2025-10-13T05:35:35.690160Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 7078b21d-5fa1-4a4c-a4fc-4e2d12628e01 eTag: 12389815929412194318 source: Fabric] Oct 13 05:35:35.690420 waagent[2798]: 2025-10-13T05:35:35.690392Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Oct 13 05:35:35.690770 waagent[2798]: 2025-10-13T05:35:35.690741Z INFO ExtHandler Oct 13 05:35:35.690810 waagent[2798]: 2025-10-13T05:35:35.690784Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Oct 13 05:35:35.695838 waagent[2798]: 2025-10-13T05:35:35.695807Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Oct 13 05:35:35.767536 waagent[2798]: 2025-10-13T05:35:35.767453Z INFO ExtHandler Downloaded certificate {'thumbprint': '0F3596C231C10AC51F4ED6907FC0966516A441CE', 'hasPrivateKey': True} Oct 13 05:35:35.767850 waagent[2798]: 2025-10-13T05:35:35.767820Z INFO ExtHandler Fetch goal state completed Oct 13 05:35:35.781707 waagent[2798]: 2025-10-13T05:35:35.781659Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.4.2 1 Jul 2025 (Library: OpenSSL 3.4.2 1 Jul 2025) Oct 13 05:35:35.785654 waagent[2798]: 2025-10-13T05:35:35.785614Z INFO ExtHandler ExtHandler WALinuxAgent-2.12.0.4 running as process 2798 Oct 13 05:35:35.785771 waagent[2798]: 2025-10-13T05:35:35.785747Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Oct 13 05:35:35.786003 waagent[2798]: 2025-10-13T05:35:35.785979Z INFO ExtHandler ExtHandler ******** AutoUpdate.UpdateToLatestVersion is set to False, not processing the operation ******** Oct 13 05:35:35.787087 waagent[2798]: 2025-10-13T05:35:35.787042Z INFO ExtHandler ExtHandler [CGI] Cgroup monitoring is not supported on ['flatcar', '4487.0.0', '', 'Flatcar Container Linux by Kinvolk'] Oct 13 05:35:35.787449 waagent[2798]: 2025-10-13T05:35:35.787420Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case distro: ['flatcar', '4487.0.0', '', 'Flatcar Container Linux by Kinvolk'] went from supported to unsupported Oct 13 05:35:35.787556 waagent[2798]: 2025-10-13T05:35:35.787534Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False Oct 13 05:35:35.787937 waagent[2798]: 2025-10-13T05:35:35.787912Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Oct 13 05:35:35.868266 waagent[2798]: 2025-10-13T05:35:35.868232Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Oct 13 05:35:35.868422 waagent[2798]: 2025-10-13T05:35:35.868397Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Oct 13 05:35:35.874186 waagent[2798]: 2025-10-13T05:35:35.873813Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Oct 13 05:35:35.879107 systemd[1]: Reload requested from client PID 2813 ('systemctl') (unit waagent.service)... Oct 13 05:35:35.879123 systemd[1]: Reloading... Oct 13 05:35:35.956332 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#178 cmd 0x4a status: scsi 0x0 srb 0x20 hv 0xc0000001 Oct 13 05:35:35.961229 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#181 cmd 0xa1 status: scsi 0x0 srb 0x20 hv 0xc0000001 Oct 13 05:35:35.967281 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#182 cmd 0x4a status: scsi 0x0 srb 0x20 hv 0xc0000001 Oct 13 05:35:35.967533 zram_generator::config[2854]: No configuration found. 
Oct 13 05:35:35.976268 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#184 cmd 0x4a status: scsi 0x0 srb 0x20 hv 0xc0000001 Oct 13 05:35:35.984227 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#187 cmd 0x4a status: scsi 0x0 srb 0x20 hv 0xc0000001 Oct 13 05:35:35.993181 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#190 cmd 0x4a status: scsi 0x0 srb 0x20 hv 0xc0000001 Oct 13 05:35:36.002226 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#128 cmd 0x4a status: scsi 0x0 srb 0x20 hv 0xc0000001 Oct 13 05:35:36.156513 systemd[1]: Reloading finished in 277 ms. Oct 13 05:35:36.168233 waagent[2798]: 2025-10-13T05:35:36.166798Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Oct 13 05:35:36.168233 waagent[2798]: 2025-10-13T05:35:36.166941Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Oct 13 05:35:36.868088 waagent[2798]: 2025-10-13T05:35:36.868012Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Oct 13 05:35:36.868451 waagent[2798]: 2025-10-13T05:35:36.868391Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True] Oct 13 05:35:36.869175 waagent[2798]: 2025-10-13T05:35:36.869137Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Oct 13 05:35:36.869353 waagent[2798]: 2025-10-13T05:35:36.869174Z INFO ExtHandler ExtHandler Starting env monitor service. Oct 13 05:35:36.869483 waagent[2798]: 2025-10-13T05:35:36.869448Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Oct 13 05:35:36.869660 waagent[2798]: 2025-10-13T05:35:36.869638Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Oct 13 05:35:36.869872 waagent[2798]: 2025-10-13T05:35:36.869838Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Oct 13 05:35:36.870097 waagent[2798]: 2025-10-13T05:35:36.870052Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Oct 13 05:35:36.870191 waagent[2798]: 2025-10-13T05:35:36.870102Z INFO ExtHandler ExtHandler Start Extension Telemetry service. 
Oct 13 05:35:36.870268 waagent[2798]: 2025-10-13T05:35:36.870242Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Oct 13 05:35:36.870533 waagent[2798]: 2025-10-13T05:35:36.870500Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Oct 13 05:35:36.870533 waagent[2798]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Oct 13 05:35:36.870533 waagent[2798]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0 Oct 13 05:35:36.870533 waagent[2798]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Oct 13 05:35:36.870533 waagent[2798]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Oct 13 05:35:36.870533 waagent[2798]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Oct 13 05:35:36.870533 waagent[2798]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Oct 13 05:35:36.870751 waagent[2798]: 2025-10-13T05:35:36.870619Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Oct 13 05:35:36.870779 waagent[2798]: 2025-10-13T05:35:36.870751Z INFO EnvHandler ExtHandler Configure routes Oct 13 05:35:36.870805 waagent[2798]: 2025-10-13T05:35:36.870793Z INFO EnvHandler ExtHandler Gateway:None Oct 13 05:35:36.870830 waagent[2798]: 2025-10-13T05:35:36.870821Z INFO EnvHandler ExtHandler Routes:None Oct 13 05:35:36.871001 waagent[2798]: 2025-10-13T05:35:36.870975Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Oct 13 05:35:36.871160 waagent[2798]: 2025-10-13T05:35:36.871117Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Oct 13 05:35:36.871679 waagent[2798]: 2025-10-13T05:35:36.871651Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Oct 13 05:35:36.877033 waagent[2798]: 2025-10-13T05:35:36.876997Z INFO ExtHandler ExtHandler Oct 13 05:35:36.877112 waagent[2798]: 2025-10-13T05:35:36.877059Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: b9fef885-65d0-476b-a1f1-780e3afd78ca correlation e8a22c32-3945-42b8-94e8-4921c0012096 created: 2025-10-13T05:34:08.603699Z] Oct 13 05:35:36.877384 waagent[2798]: 2025-10-13T05:35:36.877361Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Oct 13 05:35:36.877772 waagent[2798]: 2025-10-13T05:35:36.877749Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 0 ms] Oct 13 05:35:36.932528 waagent[2798]: 2025-10-13T05:35:36.932043Z WARNING ExtHandler ExtHandler Failed to get firewall packets: 'iptables -w -t security -L OUTPUT --zero OUTPUT -nxv' failed: 2 (iptables v1.8.11 (nf_tables): Illegal option `--numeric' with this command Oct 13 05:35:36.932528 waagent[2798]: Try `iptables -h' or 'iptables --help' for more information.) 
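The routing table the agent dumps above is read straight out of /proc/net/route, where each address is eight hex digits in host (little-endian) byte order: 0108C80A is 10.200.8.1 (the default gateway), 10813FA8 is 168.63.129.16 (the WireServer route tested earlier), and FEA9FEA9 is 169.254.169.254 (IMDS). A standalone decoding sketch, not waagent's own code:

    # Decode /proc/net/route hex addresses (host byte order on x86_64).
    import socket
    import struct

    def decode(hexaddr: str) -> str:
        # The kernel prints the 32-bit address in host order, so pack it
        # little-endian and let inet_ntoa render the dotted quad.
        return socket.inet_ntoa(struct.pack("<L", int(hexaddr, 16)))

    for value in ("0108C80A", "10813FA8", "FEA9FEA9"):
        print(value, "->", decode(value))
    # 0108C80A -> 10.200.8.1, 10813FA8 -> 168.63.129.16, FEA9FEA9 -> 169.254.169.254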
Oct 13 05:35:36.932528 waagent[2798]: 2025-10-13T05:35:36.932452Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.12.0.4 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: AA5BC525-23FB-4F3A-99E8-7928F59A89C6;DroppedPackets: -1;UpdateGSErrors: 0;AutoUpdate: 0;UpdateMode: SelfUpdate;] Oct 13 05:35:37.024965 waagent[2798]: 2025-10-13T05:35:37.024913Z INFO MonitorHandler ExtHandler Network interfaces: Oct 13 05:35:37.024965 waagent[2798]: Executing ['ip', '-a', '-o', 'link']: Oct 13 05:35:37.024965 waagent[2798]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Oct 13 05:35:37.024965 waagent[2798]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 7c:1e:52:36:0e:62 brd ff:ff:ff:ff:ff:ff\ alias Network Device\ altname enx7c1e52360e62 Oct 13 05:35:37.024965 waagent[2798]: 3: enP30832s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 7c:1e:52:36:0e:62 brd ff:ff:ff:ff:ff:ff\ altname enP30832p0s0 Oct 13 05:35:37.024965 waagent[2798]: Executing ['ip', '-4', '-a', '-o', 'address']: Oct 13 05:35:37.024965 waagent[2798]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Oct 13 05:35:37.024965 waagent[2798]: 2: eth0 inet 10.200.8.43/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever Oct 13 05:35:37.024965 waagent[2798]: Executing ['ip', '-6', '-a', '-o', 'address']: Oct 13 05:35:37.024965 waagent[2798]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Oct 13 05:35:37.024965 waagent[2798]: 2: eth0 inet6 fe80::7e1e:52ff:fe36:e62/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Oct 13 05:35:37.148441 waagent[2798]: 2025-10-13T05:35:37.148351Z INFO EnvHandler ExtHandler Created firewall rules for the Azure Fabric: Oct 13 05:35:37.148441 waagent[2798]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Oct 13 05:35:37.148441 waagent[2798]: pkts bytes target prot opt in out source destination Oct 13 05:35:37.148441 waagent[2798]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Oct 13 05:35:37.148441 waagent[2798]: pkts bytes target prot opt in out source destination Oct 13 05:35:37.148441 waagent[2798]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Oct 13 05:35:37.148441 waagent[2798]: pkts bytes target prot opt in out source destination Oct 13 05:35:37.148441 waagent[2798]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Oct 13 05:35:37.148441 waagent[2798]: 4 406 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Oct 13 05:35:37.148441 waagent[2798]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Oct 13 05:35:37.151029 waagent[2798]: 2025-10-13T05:35:37.150978Z INFO EnvHandler ExtHandler Current Firewall rules: Oct 13 05:35:37.151029 waagent[2798]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Oct 13 05:35:37.151029 waagent[2798]: pkts bytes target prot opt in out source destination Oct 13 05:35:37.151029 waagent[2798]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Oct 13 05:35:37.151029 waagent[2798]: pkts bytes target prot opt in out source destination Oct 13 05:35:37.151029 waagent[2798]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Oct 13 05:35:37.151029 waagent[2798]: pkts bytes target prot opt in out source destination Oct 13 05:35:37.151029 waagent[2798]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Oct 13 05:35:37.151029 
waagent[2798]: 4 406 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Oct 13 05:35:37.151029 waagent[2798]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Oct 13 05:35:42.418674 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Oct 13 05:35:42.420106 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 13 05:35:43.341274 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 13 05:35:43.350406 (kubelet)[2959]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 13 05:35:43.386536 kubelet[2959]: E1013 05:35:43.386501 2959 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 13 05:35:43.389640 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 13 05:35:43.389770 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 13 05:35:43.390069 systemd[1]: kubelet.service: Consumed 135ms CPU time, 109M memory peak. Oct 13 05:35:52.683698 chronyd[2564]: Selected source PHC0 Oct 13 05:35:53.418669 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Oct 13 05:35:53.420080 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 13 05:35:53.894059 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 13 05:35:53.896948 (kubelet)[2974]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 13 05:35:53.935181 kubelet[2974]: E1013 05:35:53.935148 2974 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 13 05:35:53.936818 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 13 05:35:53.936929 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 13 05:35:53.937235 systemd[1]: kubelet.service: Consumed 130ms CPU time, 109M memory peak. Oct 13 05:35:54.828825 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Oct 13 05:35:54.830054 systemd[1]: Started sshd@0-10.200.8.43:22-10.200.16.10:33648.service - OpenSSH per-connection server daemon (10.200.16.10:33648). Oct 13 05:35:55.727917 sshd[2981]: Accepted publickey for core from 10.200.16.10 port 33648 ssh2: RSA SHA256:AbXmOVBj6dCWUgMVoY4vqYS1kqBAdKFjDdmWPRs43lM Oct 13 05:35:55.729041 sshd-session[2981]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:35:55.733310 systemd-logind[2582]: New session 3 of user core. Oct 13 05:35:55.738338 systemd[1]: Started session-3.scope - Session 3 of User core. Oct 13 05:35:56.286875 systemd[1]: Started sshd@1-10.200.8.43:22-10.200.16.10:33658.service - OpenSSH per-connection server daemon (10.200.16.10:33658). 
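The three OUTPUT entries listed just above (in the security table, per the iptables -t security invocation the agent logged) implement the usual WireServer lockdown: allow DNS to 168.63.129.16, allow root-owned traffic to it, and drop new connections from everything else. A rough reconstruction of equivalent iptables commands, driven from Python purely for illustration; the flags are an approximation, not the agent's literal invocation:

    # Rough reconstruction of the three security-table OUTPUT rules shown
    # in the log; an approximation, not waagent's literal commands.
    import subprocess

    WIRESERVER = "168.63.129.16"
    RULES = [
        ["-p", "tcp", "-d", WIRESERVER, "--dport", "53", "-j", "ACCEPT"],
        ["-p", "tcp", "-d", WIRESERVER, "-m", "owner", "--uid-owner", "0", "-j", "ACCEPT"],
        ["-p", "tcp", "-d", WIRESERVER, "-m", "conntrack", "--ctstate", "INVALID,NEW", "-j", "DROP"],
    ]

    for rule in RULES:
        subprocess.run(["iptables", "-w", "-t", "security", "-A", "OUTPUT", *rule], check=True)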
Oct 13 05:35:56.928737 sshd[2987]: Accepted publickey for core from 10.200.16.10 port 33658 ssh2: RSA SHA256:AbXmOVBj6dCWUgMVoY4vqYS1kqBAdKFjDdmWPRs43lM Oct 13 05:35:56.929893 sshd-session[2987]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:35:56.934231 systemd-logind[2582]: New session 4 of user core. Oct 13 05:35:56.940365 systemd[1]: Started session-4.scope - Session 4 of User core. Oct 13 05:35:57.380370 sshd[2990]: Connection closed by 10.200.16.10 port 33658 Oct 13 05:35:57.381075 sshd-session[2987]: pam_unix(sshd:session): session closed for user core Oct 13 05:35:57.383832 systemd[1]: sshd@1-10.200.8.43:22-10.200.16.10:33658.service: Deactivated successfully. Oct 13 05:35:57.385320 systemd[1]: session-4.scope: Deactivated successfully. Oct 13 05:35:57.386798 systemd-logind[2582]: Session 4 logged out. Waiting for processes to exit. Oct 13 05:35:57.387508 systemd-logind[2582]: Removed session 4. Oct 13 05:35:57.497753 systemd[1]: Started sshd@2-10.200.8.43:22-10.200.16.10:33662.service - OpenSSH per-connection server daemon (10.200.16.10:33662). Oct 13 05:35:58.145785 sshd[2996]: Accepted publickey for core from 10.200.16.10 port 33662 ssh2: RSA SHA256:AbXmOVBj6dCWUgMVoY4vqYS1kqBAdKFjDdmWPRs43lM Oct 13 05:35:58.146924 sshd-session[2996]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:35:58.151287 systemd-logind[2582]: New session 5 of user core. Oct 13 05:35:58.161351 systemd[1]: Started session-5.scope - Session 5 of User core. Oct 13 05:35:58.592387 sshd[2999]: Connection closed by 10.200.16.10 port 33662 Oct 13 05:35:58.592935 sshd-session[2996]: pam_unix(sshd:session): session closed for user core Oct 13 05:35:58.595850 systemd[1]: sshd@2-10.200.8.43:22-10.200.16.10:33662.service: Deactivated successfully. Oct 13 05:35:58.597368 systemd[1]: session-5.scope: Deactivated successfully. Oct 13 05:35:58.598873 systemd-logind[2582]: Session 5 logged out. Waiting for processes to exit. Oct 13 05:35:58.599640 systemd-logind[2582]: Removed session 5. Oct 13 05:35:58.708684 systemd[1]: Started sshd@3-10.200.8.43:22-10.200.16.10:33672.service - OpenSSH per-connection server daemon (10.200.16.10:33672). Oct 13 05:35:59.358265 sshd[3005]: Accepted publickey for core from 10.200.16.10 port 33672 ssh2: RSA SHA256:AbXmOVBj6dCWUgMVoY4vqYS1kqBAdKFjDdmWPRs43lM Oct 13 05:35:59.359345 sshd-session[3005]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:35:59.363906 systemd-logind[2582]: New session 6 of user core. Oct 13 05:35:59.369359 systemd[1]: Started session-6.scope - Session 6 of User core. Oct 13 05:35:59.811699 sshd[3008]: Connection closed by 10.200.16.10 port 33672 Oct 13 05:35:59.812198 sshd-session[3005]: pam_unix(sshd:session): session closed for user core Oct 13 05:35:59.815299 systemd[1]: sshd@3-10.200.8.43:22-10.200.16.10:33672.service: Deactivated successfully. Oct 13 05:35:59.816790 systemd[1]: session-6.scope: Deactivated successfully. Oct 13 05:35:59.817493 systemd-logind[2582]: Session 6 logged out. Waiting for processes to exit. Oct 13 05:35:59.818447 systemd-logind[2582]: Removed session 6. Oct 13 05:35:59.923616 systemd[1]: Started sshd@4-10.200.8.43:22-10.200.16.10:33678.service - OpenSSH per-connection server daemon (10.200.16.10:33678). 
Oct 13 05:36:00.564069 sshd[3014]: Accepted publickey for core from 10.200.16.10 port 33678 ssh2: RSA SHA256:AbXmOVBj6dCWUgMVoY4vqYS1kqBAdKFjDdmWPRs43lM Oct 13 05:36:00.565122 sshd-session[3014]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:36:00.569281 systemd-logind[2582]: New session 7 of user core. Oct 13 05:36:00.575335 systemd[1]: Started session-7.scope - Session 7 of User core. Oct 13 05:36:01.077073 sudo[3018]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Oct 13 05:36:01.077313 sudo[3018]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 13 05:36:01.109258 sudo[3018]: pam_unix(sudo:session): session closed for user root Oct 13 05:36:01.214041 sshd[3017]: Connection closed by 10.200.16.10 port 33678 Oct 13 05:36:01.214651 sshd-session[3014]: pam_unix(sshd:session): session closed for user core Oct 13 05:36:01.217698 systemd[1]: sshd@4-10.200.8.43:22-10.200.16.10:33678.service: Deactivated successfully. Oct 13 05:36:01.219339 systemd[1]: session-7.scope: Deactivated successfully. Oct 13 05:36:01.220944 systemd-logind[2582]: Session 7 logged out. Waiting for processes to exit. Oct 13 05:36:01.221862 systemd-logind[2582]: Removed session 7. Oct 13 05:36:01.327029 systemd[1]: Started sshd@5-10.200.8.43:22-10.200.16.10:39690.service - OpenSSH per-connection server daemon (10.200.16.10:39690). Oct 13 05:36:01.970002 sshd[3024]: Accepted publickey for core from 10.200.16.10 port 39690 ssh2: RSA SHA256:AbXmOVBj6dCWUgMVoY4vqYS1kqBAdKFjDdmWPRs43lM Oct 13 05:36:01.971112 sshd-session[3024]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:36:01.975134 systemd-logind[2582]: New session 8 of user core. Oct 13 05:36:01.982334 systemd[1]: Started session-8.scope - Session 8 of User core. Oct 13 05:36:02.320400 sudo[3029]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Oct 13 05:36:02.320790 sudo[3029]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 13 05:36:02.358407 sudo[3029]: pam_unix(sudo:session): session closed for user root Oct 13 05:36:02.362908 sudo[3028]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Oct 13 05:36:02.363117 sudo[3028]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 13 05:36:02.370863 systemd[1]: Starting audit-rules.service - Load Audit Rules... Oct 13 05:36:02.401902 augenrules[3051]: No rules Oct 13 05:36:02.402808 systemd[1]: audit-rules.service: Deactivated successfully. Oct 13 05:36:02.402972 systemd[1]: Finished audit-rules.service - Load Audit Rules. Oct 13 05:36:02.403830 sudo[3028]: pam_unix(sudo:session): session closed for user root Oct 13 05:36:02.508683 sshd[3027]: Connection closed by 10.200.16.10 port 39690 Oct 13 05:36:02.509091 sshd-session[3024]: pam_unix(sshd:session): session closed for user core Oct 13 05:36:02.511675 systemd[1]: sshd@5-10.200.8.43:22-10.200.16.10:39690.service: Deactivated successfully. Oct 13 05:36:02.513134 systemd[1]: session-8.scope: Deactivated successfully. Oct 13 05:36:02.514322 systemd-logind[2582]: Session 8 logged out. Waiting for processes to exit. Oct 13 05:36:02.515510 systemd-logind[2582]: Removed session 8. Oct 13 05:36:02.641632 systemd[1]: Started sshd@6-10.200.8.43:22-10.200.16.10:39702.service - OpenSSH per-connection server daemon (10.200.16.10:39702). 
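The two sudo invocations above delete the default audit rule files and then restart audit-rules, after which augenrules reports "No rules". A sketch that mimics that outcome by listing the rules directory; the directory path is taken from the sudo commands above, and the check itself is illustrative rather than what augenrules actually runs.

# Sketch: with the rule files above removed, /etc/audit/rules.d is effectively
# empty, which is why augenrules logs "No rules".
from pathlib import Path

RULES_DIR = Path("/etc/audit/rules.d")

def summarize_audit_rules(rules_dir: Path = RULES_DIR) -> str:
    rule_files = sorted(rules_dir.glob("*.rules")) if rules_dir.is_dir() else []
    if not rule_files:
        return "No rules"
    return "\n".join(str(p) for p in rule_files)

if __name__ == "__main__":
    print(summarize_audit_rules())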
Oct 13 05:36:03.284300 sshd[3060]: Accepted publickey for core from 10.200.16.10 port 39702 ssh2: RSA SHA256:AbXmOVBj6dCWUgMVoY4vqYS1kqBAdKFjDdmWPRs43lM Oct 13 05:36:03.285380 sshd-session[3060]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:36:03.289290 systemd-logind[2582]: New session 9 of user core. Oct 13 05:36:03.296332 systemd[1]: Started session-9.scope - Session 9 of User core. Oct 13 05:36:03.634854 sudo[3064]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Oct 13 05:36:03.635066 sudo[3064]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 13 05:36:04.141537 kernel: hv_balloon: Max. dynamic memory size: 8192 MB Oct 13 05:36:04.168561 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Oct 13 05:36:04.169989 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 13 05:36:05.848383 systemd[1]: Starting docker.service - Docker Application Container Engine... Oct 13 05:36:05.860490 (dockerd)[3086]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Oct 13 05:36:06.963120 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 13 05:36:06.976392 (kubelet)[3096]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 13 05:36:07.015645 kubelet[3096]: E1013 05:36:07.015600 3096 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 13 05:36:07.017130 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 13 05:36:07.017262 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 13 05:36:07.017635 systemd[1]: kubelet.service: Consumed 130ms CPU time, 110M memory peak. Oct 13 05:36:07.247414 dockerd[3086]: time="2025-10-13T05:36:07.247308522Z" level=info msg="Starting up" Oct 13 05:36:07.248056 dockerd[3086]: time="2025-10-13T05:36:07.248027715Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Oct 13 05:36:07.257518 dockerd[3086]: time="2025-10-13T05:36:07.257453776Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Oct 13 05:36:07.503546 dockerd[3086]: time="2025-10-13T05:36:07.503340518Z" level=info msg="Loading containers: start." Oct 13 05:36:07.680376 kernel: Initializing XFRM netlink socket Oct 13 05:36:08.530593 systemd-networkd[2245]: docker0: Link UP Oct 13 05:36:08.550410 dockerd[3086]: time="2025-10-13T05:36:08.550371006Z" level=info msg="Loading containers: done." 
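Once dockerd finishes initializing (it logs "API listen on /run/docker.sock" just below), the daemon can be health-checked over its Unix socket with the Engine API's GET /_ping, which returns "OK" when the daemon is serving requests. A standard-library sketch; the socket path matches the log, the rest is illustrative.

# Sketch: ping the Docker daemon over /run/docker.sock after it starts serving.
import socket

DOCKER_SOCK = "/run/docker.sock"

def docker_ping(sock_path: str = DOCKER_SOCK, timeout: float = 2.0) -> bool:
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        s.connect(sock_path)
        s.sendall(b"GET /_ping HTTP/1.1\r\nHost: docker\r\nConnection: close\r\n\r\n")
        reply = b""
        while True:
            chunk = s.recv(4096)
            if not chunk:
                break
            reply += chunk
    # The daemon answers "HTTP/1.1 200 OK" followed by the body "OK".
    return b"200 OK" in reply.splitlines()[0] if reply else False

if __name__ == "__main__":
    print("docker daemon reachable:", docker_ping())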
Oct 13 05:36:08.607004 dockerd[3086]: time="2025-10-13T05:36:08.606959586Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Oct 13 05:36:08.607148 dockerd[3086]: time="2025-10-13T05:36:08.607048758Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Oct 13 05:36:08.607148 dockerd[3086]: time="2025-10-13T05:36:08.607116300Z" level=info msg="Initializing buildkit" Oct 13 05:36:08.661707 dockerd[3086]: time="2025-10-13T05:36:08.661672136Z" level=info msg="Completed buildkit initialization" Oct 13 05:36:08.667950 dockerd[3086]: time="2025-10-13T05:36:08.667907824Z" level=info msg="Daemon has completed initialization" Oct 13 05:36:08.668151 dockerd[3086]: time="2025-10-13T05:36:08.668118357Z" level=info msg="API listen on /run/docker.sock" Oct 13 05:36:08.668329 systemd[1]: Started docker.service - Docker Application Container Engine. Oct 13 05:36:10.035918 containerd[2611]: time="2025-10-13T05:36:10.035871482Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\"" Oct 13 05:36:10.825943 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2392300664.mount: Deactivated successfully. Oct 13 05:36:12.267899 containerd[2611]: time="2025-10-13T05:36:12.267851967Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:36:12.270842 containerd[2611]: time="2025-10-13T05:36:12.270647576Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.5: active requests=0, bytes read=30114901" Oct 13 05:36:12.276290 containerd[2611]: time="2025-10-13T05:36:12.276259336Z" level=info msg="ImageCreate event name:\"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:36:12.280243 containerd[2611]: time="2025-10-13T05:36:12.280214572Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:36:12.280938 containerd[2611]: time="2025-10-13T05:36:12.280914316Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.5\" with image id \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\", size \"30111492\" in 2.244994223s" Oct 13 05:36:12.281016 containerd[2611]: time="2025-10-13T05:36:12.281005541Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\" returns image reference \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\"" Oct 13 05:36:12.281692 containerd[2611]: time="2025-10-13T05:36:12.281667376Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\"" Oct 13 05:36:13.658508 containerd[2611]: time="2025-10-13T05:36:13.658459635Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:36:13.661513 containerd[2611]: time="2025-10-13T05:36:13.661472376Z" level=info msg="stop pulling image 
registry.k8s.io/kube-controller-manager:v1.33.5: active requests=0, bytes read=26020852" Oct 13 05:36:13.664964 containerd[2611]: time="2025-10-13T05:36:13.664918804Z" level=info msg="ImageCreate event name:\"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:36:13.669958 containerd[2611]: time="2025-10-13T05:36:13.669912923Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:36:13.670748 containerd[2611]: time="2025-10-13T05:36:13.670532397Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.5\" with image id \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\", size \"27681301\" in 1.388838645s" Oct 13 05:36:13.670748 containerd[2611]: time="2025-10-13T05:36:13.670564403Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\" returns image reference \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\"" Oct 13 05:36:13.671288 containerd[2611]: time="2025-10-13T05:36:13.671266923Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\"" Oct 13 05:36:14.771120 update_engine[2586]: I20251013 05:36:14.771021 2586 update_attempter.cc:509] Updating boot flags... Oct 13 05:36:14.836038 containerd[2611]: time="2025-10-13T05:36:14.835376963Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:36:14.854830 containerd[2611]: time="2025-10-13T05:36:14.854795357Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.5: active requests=0, bytes read=20155576" Oct 13 05:36:14.864803 containerd[2611]: time="2025-10-13T05:36:14.859318315Z" level=info msg="ImageCreate event name:\"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:36:14.874494 containerd[2611]: time="2025-10-13T05:36:14.872246024Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:36:14.874494 containerd[2611]: time="2025-10-13T05:36:14.873249795Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.5\" with image id \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\", size \"21816043\" in 1.201956531s" Oct 13 05:36:14.874494 containerd[2611]: time="2025-10-13T05:36:14.873280396Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\" returns image reference \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\"" Oct 13 05:36:14.874494 containerd[2611]: time="2025-10-13T05:36:14.874488693Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\"" Oct 13 05:36:15.934616 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount915509579.mount: Deactivated 
successfully. Oct 13 05:36:16.340760 containerd[2611]: time="2025-10-13T05:36:16.340640303Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:36:16.343532 containerd[2611]: time="2025-10-13T05:36:16.343507179Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.5: active requests=0, bytes read=31929477" Oct 13 05:36:16.350060 containerd[2611]: time="2025-10-13T05:36:16.350018999Z" level=info msg="ImageCreate event name:\"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:36:16.353184 containerd[2611]: time="2025-10-13T05:36:16.353143560Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:36:16.353931 containerd[2611]: time="2025-10-13T05:36:16.353546224Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.5\" with image id \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\", repo tag \"registry.k8s.io/kube-proxy:v1.33.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\", size \"31928488\" in 1.479033429s" Oct 13 05:36:16.353931 containerd[2611]: time="2025-10-13T05:36:16.353793409Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\" returns image reference \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\"" Oct 13 05:36:16.354315 containerd[2611]: time="2025-10-13T05:36:16.354286437Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Oct 13 05:36:16.965237 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1509409079.mount: Deactivated successfully. Oct 13 05:36:17.168600 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Oct 13 05:36:17.170353 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 13 05:36:17.652236 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 13 05:36:17.661429 (kubelet)[3436]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 13 05:36:17.692237 kubelet[3436]: E1013 05:36:17.692176 3436 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 13 05:36:17.693749 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 13 05:36:17.693878 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 13 05:36:17.694250 systemd[1]: kubelet.service: Consumed 136ms CPU time, 108M memory peak. 
Oct 13 05:36:18.770358 containerd[2611]: time="2025-10-13T05:36:18.770311915Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:36:18.780223 containerd[2611]: time="2025-10-13T05:36:18.780084196Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942246" Oct 13 05:36:18.784390 containerd[2611]: time="2025-10-13T05:36:18.784135439Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:36:18.788005 containerd[2611]: time="2025-10-13T05:36:18.787976299Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:36:18.788761 containerd[2611]: time="2025-10-13T05:36:18.788735362Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 2.434408296s" Oct 13 05:36:18.788813 containerd[2611]: time="2025-10-13T05:36:18.788770965Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Oct 13 05:36:18.789606 containerd[2611]: time="2025-10-13T05:36:18.789584201Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Oct 13 05:36:19.311514 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2092195719.mount: Deactivated successfully. 
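Each completed pull above reports an image size and wall-clock time ("size X in Ys"), which gives a rough effective throughput; the kube-apiserver pull, for example, works out to about 12.8 MiB/s. A small sketch of that arithmetic using the figures quoted in the journal.

# Sketch: effective pull throughput from the "size X in Ys" figures logged above.
# Sizes (bytes) and durations (seconds) are copied from the journal entries.
pulls = {
    "kube-apiserver:v1.33.5":          (30111492, 2.244994223),
    "kube-controller-manager:v1.33.5": (27681301, 1.388838645),
    "kube-scheduler:v1.33.5":          (21816043, 1.201956531),
    "kube-proxy:v1.33.5":              (31928488, 1.479033429),
    "coredns:v1.12.0":                 (20939036, 2.434408296),
}

for image, (size_bytes, seconds) in pulls.items():
    mib_per_s = size_bytes / seconds / (1 << 20)
    print(f"{image:35s} {mib_per_s:6.1f} MiB/s")
# kube-apiserver comes out around 12.8 MiB/s, kube-proxy around 20.6 MiB/s.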
Oct 13 05:36:19.332077 containerd[2611]: time="2025-10-13T05:36:19.332033748Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 13 05:36:19.334197 containerd[2611]: time="2025-10-13T05:36:19.334165444Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146" Oct 13 05:36:19.341295 containerd[2611]: time="2025-10-13T05:36:19.341256622Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 13 05:36:19.345081 containerd[2611]: time="2025-10-13T05:36:19.345026843Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 13 05:36:19.345869 containerd[2611]: time="2025-10-13T05:36:19.345507822Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 555.89741ms" Oct 13 05:36:19.345869 containerd[2611]: time="2025-10-13T05:36:19.345538660Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Oct 13 05:36:19.346187 containerd[2611]: time="2025-10-13T05:36:19.346157445Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Oct 13 05:36:19.795438 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1854126479.mount: Deactivated successfully. 
Oct 13 05:36:21.759178 containerd[2611]: time="2025-10-13T05:36:21.759128309Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:36:21.762416 containerd[2611]: time="2025-10-13T05:36:21.762379968Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58378441" Oct 13 05:36:21.765938 containerd[2611]: time="2025-10-13T05:36:21.765894417Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:36:21.772282 containerd[2611]: time="2025-10-13T05:36:21.772250466Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:36:21.773123 containerd[2611]: time="2025-10-13T05:36:21.773095860Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 2.426908181s" Oct 13 05:36:21.773168 containerd[2611]: time="2025-10-13T05:36:21.773124099Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Oct 13 05:36:24.810072 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 13 05:36:24.810335 systemd[1]: kubelet.service: Consumed 136ms CPU time, 108M memory peak. Oct 13 05:36:24.812387 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 13 05:36:24.836281 systemd[1]: Reload requested from client PID 3570 ('systemctl') (unit session-9.scope)... Oct 13 05:36:24.836301 systemd[1]: Reloading... Oct 13 05:36:24.931235 zram_generator::config[3621]: No configuration found. Oct 13 05:36:25.123212 systemd[1]: Reloading finished in 286 ms. Oct 13 05:36:25.214236 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Oct 13 05:36:25.214311 systemd[1]: kubelet.service: Failed with result 'signal'. Oct 13 05:36:25.214548 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 13 05:36:25.214593 systemd[1]: kubelet.service: Consumed 72ms CPU time, 69.9M memory peak. Oct 13 05:36:25.216458 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 13 05:36:25.751353 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 13 05:36:25.759467 (kubelet)[3685]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 13 05:36:25.792242 kubelet[3685]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 13 05:36:25.792242 kubelet[3685]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Oct 13 05:36:25.792242 kubelet[3685]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 13 05:36:25.792523 kubelet[3685]: I1013 05:36:25.792298 3685 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 13 05:36:26.029277 kubelet[3685]: I1013 05:36:26.029166 3685 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Oct 13 05:36:26.029277 kubelet[3685]: I1013 05:36:26.029195 3685 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 13 05:36:26.029533 kubelet[3685]: I1013 05:36:26.029513 3685 server.go:956] "Client rotation is on, will bootstrap in background" Oct 13 05:36:26.058583 kubelet[3685]: E1013 05:36:26.058186 3685 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.8.43:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.8.43:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Oct 13 05:36:26.058583 kubelet[3685]: I1013 05:36:26.058364 3685 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 13 05:36:26.069178 kubelet[3685]: I1013 05:36:26.069153 3685 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Oct 13 05:36:26.073159 kubelet[3685]: I1013 05:36:26.073138 3685 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Oct 13 05:36:26.073351 kubelet[3685]: I1013 05:36:26.073328 3685 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 13 05:36:26.073524 kubelet[3685]: I1013 05:36:26.073348 3685 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4487.0.0-a-8f52350bac","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Oct 13 05:36:26.073634 kubelet[3685]: I1013 05:36:26.073526 3685 
topology_manager.go:138] "Creating topology manager with none policy" Oct 13 05:36:26.073634 kubelet[3685]: I1013 05:36:26.073536 3685 container_manager_linux.go:303] "Creating device plugin manager" Oct 13 05:36:26.073691 kubelet[3685]: I1013 05:36:26.073639 3685 state_mem.go:36] "Initialized new in-memory state store" Oct 13 05:36:26.114042 kubelet[3685]: I1013 05:36:26.114017 3685 kubelet.go:480] "Attempting to sync node with API server" Oct 13 05:36:26.114042 kubelet[3685]: I1013 05:36:26.114048 3685 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 13 05:36:26.114157 kubelet[3685]: I1013 05:36:26.114078 3685 kubelet.go:386] "Adding apiserver pod source" Oct 13 05:36:26.114157 kubelet[3685]: I1013 05:36:26.114092 3685 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 13 05:36:26.144983 kubelet[3685]: E1013 05:36:26.144759 3685 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.8.43:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4487.0.0-a-8f52350bac&limit=500&resourceVersion=0\": dial tcp 10.200.8.43:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Oct 13 05:36:26.145072 kubelet[3685]: E1013 05:36:26.145002 3685 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.8.43:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.8.43:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Oct 13 05:36:26.145330 kubelet[3685]: I1013 05:36:26.145305 3685 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Oct 13 05:36:26.145879 kubelet[3685]: I1013 05:36:26.145847 3685 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Oct 13 05:36:26.147002 kubelet[3685]: W1013 05:36:26.146554 3685 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
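The container_manager_linux NodeConfig dumped above carries the hard eviction thresholds this kubelet will enforce: memory.available < 100Mi, nodefs.available < 10%, nodefs.inodesFree < 5%, imagefs.available < 15%, imagefs.inodesFree < 5%. A sketch of how such a threshold decision reads, using only the values from the log; the decision helper is illustrative, not kubelet code.

# Sketch: the HardEvictionThresholds from the NodeConfig logged above.
HARD_EVICTION = {
    "memory.available":   ("quantity", 100 * 1024 * 1024),  # 100Mi
    "nodefs.available":   ("percentage", 0.10),
    "nodefs.inodesFree":  ("percentage", 0.05),
    "imagefs.available":  ("percentage", 0.15),
    "imagefs.inodesFree": ("percentage", 0.05),
}

def breaches(signal: str, available: float, capacity: float | None = None) -> bool:
    kind, threshold = HARD_EVICTION[signal]
    if kind == "quantity":
        return available < threshold
    if capacity is None:
        raise ValueError("percentage thresholds need the resource capacity")
    return available / capacity < threshold

# 80 MiB of free memory is below the 100Mi hard threshold:
print(breaches("memory.available", 80 * 1024 * 1024))   # True
# 8% free space on the node filesystem is below the 10% threshold:
print(breaches("nodefs.available", 8, capacity=100))     # True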
Oct 13 05:36:26.150383 kubelet[3685]: I1013 05:36:26.150367 3685 watchdog_linux.go:99] "Systemd watchdog is not enabled" Oct 13 05:36:26.150458 kubelet[3685]: I1013 05:36:26.150434 3685 server.go:1289] "Started kubelet" Oct 13 05:36:26.152832 kubelet[3685]: I1013 05:36:26.152805 3685 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 13 05:36:26.157030 kubelet[3685]: E1013 05:36:26.154918 3685 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.8.43:6443/api/v1/namespaces/default/events\": dial tcp 10.200.8.43:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4487.0.0-a-8f52350bac.186df640799cda63 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4487.0.0-a-8f52350bac,UID:ci-4487.0.0-a-8f52350bac,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4487.0.0-a-8f52350bac,},FirstTimestamp:2025-10-13 05:36:26.150394467 +0000 UTC m=+0.388070264,LastTimestamp:2025-10-13 05:36:26.150394467 +0000 UTC m=+0.388070264,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4487.0.0-a-8f52350bac,}" Oct 13 05:36:26.158884 kubelet[3685]: I1013 05:36:26.157663 3685 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Oct 13 05:36:26.158884 kubelet[3685]: I1013 05:36:26.158266 3685 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Oct 13 05:36:26.158884 kubelet[3685]: I1013 05:36:26.158564 3685 server.go:317] "Adding debug handlers to kubelet server" Oct 13 05:36:26.161494 kubelet[3685]: I1013 05:36:26.161467 3685 volume_manager.go:297] "Starting Kubelet Volume Manager" Oct 13 05:36:26.162174 kubelet[3685]: E1013 05:36:26.161691 3685 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4487.0.0-a-8f52350bac\" not found" Oct 13 05:36:26.162358 kubelet[3685]: I1013 05:36:26.162319 3685 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 13 05:36:26.162586 kubelet[3685]: I1013 05:36:26.162576 3685 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 13 05:36:26.162819 kubelet[3685]: I1013 05:36:26.162806 3685 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 13 05:36:26.165381 kubelet[3685]: E1013 05:36:26.165354 3685 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4487.0.0-a-8f52350bac?timeout=10s\": dial tcp 10.200.8.43:6443: connect: connection refused" interval="200ms" Oct 13 05:36:26.165615 kubelet[3685]: I1013 05:36:26.165605 3685 factory.go:223] Registration of the systemd container factory successfully Oct 13 05:36:26.165736 kubelet[3685]: I1013 05:36:26.165724 3685 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 13 05:36:26.167464 kubelet[3685]: I1013 05:36:26.167030 3685 reconciler.go:26] "Reconciler: start to sync state" Oct 13 05:36:26.167464 kubelet[3685]: I1013 05:36:26.167068 3685 desired_state_of_world_populator.go:150] "Desired state populator 
starts to run" Oct 13 05:36:26.167999 kubelet[3685]: E1013 05:36:26.167978 3685 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.8.43:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.8.43:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Oct 13 05:36:26.168176 kubelet[3685]: E1013 05:36:26.168164 3685 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 13 05:36:26.168438 kubelet[3685]: I1013 05:36:26.168427 3685 factory.go:223] Registration of the containerd container factory successfully Oct 13 05:36:26.193540 kubelet[3685]: I1013 05:36:26.193524 3685 cpu_manager.go:221] "Starting CPU manager" policy="none" Oct 13 05:36:26.193540 kubelet[3685]: I1013 05:36:26.193537 3685 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Oct 13 05:36:26.193634 kubelet[3685]: I1013 05:36:26.193553 3685 state_mem.go:36] "Initialized new in-memory state store" Oct 13 05:36:26.220900 kubelet[3685]: I1013 05:36:26.220879 3685 policy_none.go:49] "None policy: Start" Oct 13 05:36:26.220900 kubelet[3685]: I1013 05:36:26.220897 3685 memory_manager.go:186] "Starting memorymanager" policy="None" Oct 13 05:36:26.220983 kubelet[3685]: I1013 05:36:26.220907 3685 state_mem.go:35] "Initializing new in-memory state store" Oct 13 05:36:26.231369 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Oct 13 05:36:26.240712 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Oct 13 05:36:26.244049 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Oct 13 05:36:26.249766 kubelet[3685]: E1013 05:36:26.249746 3685 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Oct 13 05:36:26.250008 kubelet[3685]: I1013 05:36:26.249994 3685 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 13 05:36:26.250048 kubelet[3685]: I1013 05:36:26.250007 3685 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 13 05:36:26.250607 kubelet[3685]: I1013 05:36:26.250544 3685 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 13 05:36:26.252402 kubelet[3685]: E1013 05:36:26.252388 3685 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Oct 13 05:36:26.252610 kubelet[3685]: E1013 05:36:26.252599 3685 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4487.0.0-a-8f52350bac\" not found" Oct 13 05:36:26.271226 kubelet[3685]: I1013 05:36:26.271185 3685 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Oct 13 05:36:26.271309 kubelet[3685]: I1013 05:36:26.271287 3685 status_manager.go:230] "Starting to sync pod status with apiserver" Oct 13 05:36:26.271309 kubelet[3685]: I1013 05:36:26.271305 3685 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Oct 13 05:36:26.271362 kubelet[3685]: I1013 05:36:26.271312 3685 kubelet.go:2436] "Starting kubelet main sync loop" Oct 13 05:36:26.271362 kubelet[3685]: E1013 05:36:26.271347 3685 kubelet.go:2460] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Oct 13 05:36:26.272292 kubelet[3685]: E1013 05:36:26.272191 3685 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.8.43:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.8.43:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Oct 13 05:36:26.352078 kubelet[3685]: I1013 05:36:26.351995 3685 kubelet_node_status.go:75] "Attempting to register node" node="ci-4487.0.0-a-8f52350bac" Oct 13 05:36:26.352877 kubelet[3685]: E1013 05:36:26.352839 3685 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.43:6443/api/v1/nodes\": dial tcp 10.200.8.43:6443: connect: connection refused" node="ci-4487.0.0-a-8f52350bac" Oct 13 05:36:26.366341 kubelet[3685]: E1013 05:36:26.366316 3685 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4487.0.0-a-8f52350bac?timeout=10s\": dial tcp 10.200.8.43:6443: connect: connection refused" interval="400ms" Oct 13 05:36:26.400399 systemd[1]: Created slice kubepods-burstable-pod01f98fa0dfe171d27a48d43e7718f9f9.slice - libcontainer container kubepods-burstable-pod01f98fa0dfe171d27a48d43e7718f9f9.slice. Oct 13 05:36:26.405776 kubelet[3685]: E1013 05:36:26.405756 3685 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4487.0.0-a-8f52350bac\" not found" node="ci-4487.0.0-a-8f52350bac" Oct 13 05:36:26.412261 systemd[1]: Created slice kubepods-burstable-pod58e44824f33d7be974371a504d1cdca5.slice - libcontainer container kubepods-burstable-pod58e44824f33d7be974371a504d1cdca5.slice. Oct 13 05:36:26.413826 kubelet[3685]: E1013 05:36:26.413800 3685 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4487.0.0-a-8f52350bac\" not found" node="ci-4487.0.0-a-8f52350bac" Oct 13 05:36:26.434139 systemd[1]: Created slice kubepods-burstable-pod0925e57568a7e89c4e5e71b12faeae12.slice - libcontainer container kubepods-burstable-pod0925e57568a7e89c4e5e71b12faeae12.slice. 
Oct 13 05:36:26.435775 kubelet[3685]: E1013 05:36:26.435657 3685 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4487.0.0-a-8f52350bac\" not found" node="ci-4487.0.0-a-8f52350bac" Oct 13 05:36:26.469195 kubelet[3685]: I1013 05:36:26.469162 3685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/58e44824f33d7be974371a504d1cdca5-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4487.0.0-a-8f52350bac\" (UID: \"58e44824f33d7be974371a504d1cdca5\") " pod="kube-system/kube-controller-manager-ci-4487.0.0-a-8f52350bac" Oct 13 05:36:26.469195 kubelet[3685]: I1013 05:36:26.469212 3685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0925e57568a7e89c4e5e71b12faeae12-kubeconfig\") pod \"kube-scheduler-ci-4487.0.0-a-8f52350bac\" (UID: \"0925e57568a7e89c4e5e71b12faeae12\") " pod="kube-system/kube-scheduler-ci-4487.0.0-a-8f52350bac" Oct 13 05:36:26.469195 kubelet[3685]: I1013 05:36:26.469234 3685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/01f98fa0dfe171d27a48d43e7718f9f9-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4487.0.0-a-8f52350bac\" (UID: \"01f98fa0dfe171d27a48d43e7718f9f9\") " pod="kube-system/kube-apiserver-ci-4487.0.0-a-8f52350bac" Oct 13 05:36:26.469442 kubelet[3685]: I1013 05:36:26.469252 3685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/58e44824f33d7be974371a504d1cdca5-flexvolume-dir\") pod \"kube-controller-manager-ci-4487.0.0-a-8f52350bac\" (UID: \"58e44824f33d7be974371a504d1cdca5\") " pod="kube-system/kube-controller-manager-ci-4487.0.0-a-8f52350bac" Oct 13 05:36:26.469442 kubelet[3685]: I1013 05:36:26.469277 3685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/58e44824f33d7be974371a504d1cdca5-k8s-certs\") pod \"kube-controller-manager-ci-4487.0.0-a-8f52350bac\" (UID: \"58e44824f33d7be974371a504d1cdca5\") " pod="kube-system/kube-controller-manager-ci-4487.0.0-a-8f52350bac" Oct 13 05:36:26.469442 kubelet[3685]: I1013 05:36:26.469296 3685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/01f98fa0dfe171d27a48d43e7718f9f9-ca-certs\") pod \"kube-apiserver-ci-4487.0.0-a-8f52350bac\" (UID: \"01f98fa0dfe171d27a48d43e7718f9f9\") " pod="kube-system/kube-apiserver-ci-4487.0.0-a-8f52350bac" Oct 13 05:36:26.469442 kubelet[3685]: I1013 05:36:26.469316 3685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/01f98fa0dfe171d27a48d43e7718f9f9-k8s-certs\") pod \"kube-apiserver-ci-4487.0.0-a-8f52350bac\" (UID: \"01f98fa0dfe171d27a48d43e7718f9f9\") " pod="kube-system/kube-apiserver-ci-4487.0.0-a-8f52350bac" Oct 13 05:36:26.469442 kubelet[3685]: I1013 05:36:26.469344 3685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/58e44824f33d7be974371a504d1cdca5-ca-certs\") pod 
\"kube-controller-manager-ci-4487.0.0-a-8f52350bac\" (UID: \"58e44824f33d7be974371a504d1cdca5\") " pod="kube-system/kube-controller-manager-ci-4487.0.0-a-8f52350bac" Oct 13 05:36:26.469532 kubelet[3685]: I1013 05:36:26.469371 3685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/58e44824f33d7be974371a504d1cdca5-kubeconfig\") pod \"kube-controller-manager-ci-4487.0.0-a-8f52350bac\" (UID: \"58e44824f33d7be974371a504d1cdca5\") " pod="kube-system/kube-controller-manager-ci-4487.0.0-a-8f52350bac" Oct 13 05:36:26.554380 kubelet[3685]: I1013 05:36:26.554355 3685 kubelet_node_status.go:75] "Attempting to register node" node="ci-4487.0.0-a-8f52350bac" Oct 13 05:36:26.554628 kubelet[3685]: E1013 05:36:26.554608 3685 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.43:6443/api/v1/nodes\": dial tcp 10.200.8.43:6443: connect: connection refused" node="ci-4487.0.0-a-8f52350bac" Oct 13 05:36:26.706913 containerd[2611]: time="2025-10-13T05:36:26.706667471Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4487.0.0-a-8f52350bac,Uid:01f98fa0dfe171d27a48d43e7718f9f9,Namespace:kube-system,Attempt:0,}" Oct 13 05:36:26.715128 containerd[2611]: time="2025-10-13T05:36:26.715097403Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4487.0.0-a-8f52350bac,Uid:58e44824f33d7be974371a504d1cdca5,Namespace:kube-system,Attempt:0,}" Oct 13 05:36:26.736907 containerd[2611]: time="2025-10-13T05:36:26.736876270Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4487.0.0-a-8f52350bac,Uid:0925e57568a7e89c4e5e71b12faeae12,Namespace:kube-system,Attempt:0,}" Oct 13 05:36:26.767498 kubelet[3685]: E1013 05:36:26.767453 3685 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4487.0.0-a-8f52350bac?timeout=10s\": dial tcp 10.200.8.43:6443: connect: connection refused" interval="800ms" Oct 13 05:36:26.956552 kubelet[3685]: I1013 05:36:26.956526 3685 kubelet_node_status.go:75] "Attempting to register node" node="ci-4487.0.0-a-8f52350bac" Oct 13 05:36:26.956881 kubelet[3685]: E1013 05:36:26.956848 3685 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.43:6443/api/v1/nodes\": dial tcp 10.200.8.43:6443: connect: connection refused" node="ci-4487.0.0-a-8f52350bac" Oct 13 05:36:26.972686 kubelet[3685]: E1013 05:36:26.972581 3685 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.8.43:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.8.43:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Oct 13 05:36:27.081757 containerd[2611]: time="2025-10-13T05:36:27.081684524Z" level=info msg="connecting to shim a1172352aa2e53a8f27a1309219ce3afd92775578a2a70ed1787308f3bf8f4e8" address="unix:///run/containerd/s/d5693c2b6d65d0322bc770b1d7dd203fda8ecb1a81df16bf69b1fa1b0ca7ec7c" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:36:27.097981 containerd[2611]: time="2025-10-13T05:36:27.097902078Z" level=info msg="connecting to shim 9000f121385cef24b4241ee3f2c5448bcdadeeab6006d85b961e1939fc8ddc1d" 
address="unix:///run/containerd/s/58e98bf908c3bdd9766042f0789336f82c8544c5eba55df4784d7d8006a1fc91" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:36:27.102801 containerd[2611]: time="2025-10-13T05:36:27.102363608Z" level=info msg="connecting to shim 8be30b92f98d1063d44fe1fadd4d2f5a3de8b33c227fbb25f7ea6759c74f2d2c" address="unix:///run/containerd/s/ac006659bbbcc24f30ee31f2073758a08ddc1861abc4937576ac18e7a1ed9ef9" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:36:27.128385 systemd[1]: Started cri-containerd-a1172352aa2e53a8f27a1309219ce3afd92775578a2a70ed1787308f3bf8f4e8.scope - libcontainer container a1172352aa2e53a8f27a1309219ce3afd92775578a2a70ed1787308f3bf8f4e8. Oct 13 05:36:27.135316 systemd[1]: Started cri-containerd-8be30b92f98d1063d44fe1fadd4d2f5a3de8b33c227fbb25f7ea6759c74f2d2c.scope - libcontainer container 8be30b92f98d1063d44fe1fadd4d2f5a3de8b33c227fbb25f7ea6759c74f2d2c. Oct 13 05:36:27.137169 systemd[1]: Started cri-containerd-9000f121385cef24b4241ee3f2c5448bcdadeeab6006d85b961e1939fc8ddc1d.scope - libcontainer container 9000f121385cef24b4241ee3f2c5448bcdadeeab6006d85b961e1939fc8ddc1d. Oct 13 05:36:27.199887 containerd[2611]: time="2025-10-13T05:36:27.199861967Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4487.0.0-a-8f52350bac,Uid:58e44824f33d7be974371a504d1cdca5,Namespace:kube-system,Attempt:0,} returns sandbox id \"8be30b92f98d1063d44fe1fadd4d2f5a3de8b33c227fbb25f7ea6759c74f2d2c\"" Oct 13 05:36:27.203936 containerd[2611]: time="2025-10-13T05:36:27.203910369Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4487.0.0-a-8f52350bac,Uid:01f98fa0dfe171d27a48d43e7718f9f9,Namespace:kube-system,Attempt:0,} returns sandbox id \"a1172352aa2e53a8f27a1309219ce3afd92775578a2a70ed1787308f3bf8f4e8\"" Oct 13 05:36:27.218848 containerd[2611]: time="2025-10-13T05:36:27.218811071Z" level=info msg="CreateContainer within sandbox \"8be30b92f98d1063d44fe1fadd4d2f5a3de8b33c227fbb25f7ea6759c74f2d2c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Oct 13 05:36:27.225907 containerd[2611]: time="2025-10-13T05:36:27.225350502Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4487.0.0-a-8f52350bac,Uid:0925e57568a7e89c4e5e71b12faeae12,Namespace:kube-system,Attempt:0,} returns sandbox id \"9000f121385cef24b4241ee3f2c5448bcdadeeab6006d85b961e1939fc8ddc1d\"" Oct 13 05:36:27.225907 containerd[2611]: time="2025-10-13T05:36:27.225485292Z" level=info msg="CreateContainer within sandbox \"a1172352aa2e53a8f27a1309219ce3afd92775578a2a70ed1787308f3bf8f4e8\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Oct 13 05:36:27.233373 containerd[2611]: time="2025-10-13T05:36:27.233354258Z" level=info msg="CreateContainer within sandbox \"9000f121385cef24b4241ee3f2c5448bcdadeeab6006d85b961e1939fc8ddc1d\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Oct 13 05:36:27.253281 containerd[2611]: time="2025-10-13T05:36:27.253255467Z" level=info msg="Container 3ee56de3e8ce4c8d69cf94edcc4f39e25ea2204f2a9c7dd675258e2b8f54edb6: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:36:27.260122 kubelet[3685]: E1013 05:36:27.260095 3685 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.8.43:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.8.43:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Oct 13 
05:36:27.262107 containerd[2611]: time="2025-10-13T05:36:27.262084260Z" level=info msg="Container 8cf487d991440d6de6331b3e9e1864494ccaf157fba96eb276047e632d70c4ab: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:36:27.284462 containerd[2611]: time="2025-10-13T05:36:27.284435708Z" level=info msg="CreateContainer within sandbox \"a1172352aa2e53a8f27a1309219ce3afd92775578a2a70ed1787308f3bf8f4e8\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"8cf487d991440d6de6331b3e9e1864494ccaf157fba96eb276047e632d70c4ab\"" Oct 13 05:36:27.285461 containerd[2611]: time="2025-10-13T05:36:27.285365873Z" level=info msg="StartContainer for \"8cf487d991440d6de6331b3e9e1864494ccaf157fba96eb276047e632d70c4ab\"" Oct 13 05:36:27.285849 containerd[2611]: time="2025-10-13T05:36:27.285827646Z" level=info msg="Container 731b22556deb8ceca7ff2f948b22fc7eafd5b961c5b8f209cdcf7f1e09e36058: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:36:27.286847 containerd[2611]: time="2025-10-13T05:36:27.286741626Z" level=info msg="connecting to shim 8cf487d991440d6de6331b3e9e1864494ccaf157fba96eb276047e632d70c4ab" address="unix:///run/containerd/s/d5693c2b6d65d0322bc770b1d7dd203fda8ecb1a81df16bf69b1fa1b0ca7ec7c" protocol=ttrpc version=3 Oct 13 05:36:27.298514 containerd[2611]: time="2025-10-13T05:36:27.298487186Z" level=info msg="CreateContainer within sandbox \"8be30b92f98d1063d44fe1fadd4d2f5a3de8b33c227fbb25f7ea6759c74f2d2c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"3ee56de3e8ce4c8d69cf94edcc4f39e25ea2204f2a9c7dd675258e2b8f54edb6\"" Oct 13 05:36:27.298950 containerd[2611]: time="2025-10-13T05:36:27.298917025Z" level=info msg="StartContainer for \"3ee56de3e8ce4c8d69cf94edcc4f39e25ea2204f2a9c7dd675258e2b8f54edb6\"" Oct 13 05:36:27.301364 containerd[2611]: time="2025-10-13T05:36:27.301145217Z" level=info msg="connecting to shim 3ee56de3e8ce4c8d69cf94edcc4f39e25ea2204f2a9c7dd675258e2b8f54edb6" address="unix:///run/containerd/s/ac006659bbbcc24f30ee31f2073758a08ddc1861abc4937576ac18e7a1ed9ef9" protocol=ttrpc version=3 Oct 13 05:36:27.306624 containerd[2611]: time="2025-10-13T05:36:27.306588615Z" level=info msg="CreateContainer within sandbox \"9000f121385cef24b4241ee3f2c5448bcdadeeab6006d85b961e1939fc8ddc1d\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"731b22556deb8ceca7ff2f948b22fc7eafd5b961c5b8f209cdcf7f1e09e36058\"" Oct 13 05:36:27.308395 systemd[1]: Started cri-containerd-8cf487d991440d6de6331b3e9e1864494ccaf157fba96eb276047e632d70c4ab.scope - libcontainer container 8cf487d991440d6de6331b3e9e1864494ccaf157fba96eb276047e632d70c4ab. Oct 13 05:36:27.308811 containerd[2611]: time="2025-10-13T05:36:27.308470883Z" level=info msg="StartContainer for \"731b22556deb8ceca7ff2f948b22fc7eafd5b961c5b8f209cdcf7f1e09e36058\"" Oct 13 05:36:27.310178 containerd[2611]: time="2025-10-13T05:36:27.310140955Z" level=info msg="connecting to shim 731b22556deb8ceca7ff2f948b22fc7eafd5b961c5b8f209cdcf7f1e09e36058" address="unix:///run/containerd/s/58e98bf908c3bdd9766042f0789336f82c8544c5eba55df4784d7d8006a1fc91" protocol=ttrpc version=3 Oct 13 05:36:27.323753 systemd[1]: Started cri-containerd-3ee56de3e8ce4c8d69cf94edcc4f39e25ea2204f2a9c7dd675258e2b8f54edb6.scope - libcontainer container 3ee56de3e8ce4c8d69cf94edcc4f39e25ea2204f2a9c7dd675258e2b8f54edb6. 
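The containerd entries carry RFC 3339 timestamps, so the time from RunPodSandbox returning a sandbox id to the matching StartContainer succeeding (the "returns successfully" lines appear just below) can be read straight off the log; for the controller-manager sandbox 8be30b92… it is roughly 0.22s. A sketch of that timestamp arithmetic using two values quoted in the journal; nanosecond fractions are truncated to microseconds for datetime.

# Sketch: sandbox-ready to container-start latency from containerd timestamps.
from datetime import datetime, timezone

def parse_ts(ts: str) -> datetime:
    base, frac = ts.rstrip("Z").split(".")
    return datetime.strptime(f"{base}.{frac[:6]}", "%Y-%m-%dT%H:%M:%S.%f").replace(tzinfo=timezone.utc)

sandbox_ready   = parse_ts("2025-10-13T05:36:27.199861967Z")  # RunPodSandbox returns (controller-manager)
container_start = parse_ts("2025-10-13T05:36:27.423077827Z")  # StartContainer returns successfully

print((container_start - sandbox_ready).total_seconds())  # ~0.223 s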
Oct 13 05:36:27.336337 systemd[1]: Started cri-containerd-731b22556deb8ceca7ff2f948b22fc7eafd5b961c5b8f209cdcf7f1e09e36058.scope - libcontainer container 731b22556deb8ceca7ff2f948b22fc7eafd5b961c5b8f209cdcf7f1e09e36058. Oct 13 05:36:27.413412 containerd[2611]: time="2025-10-13T05:36:27.413385676Z" level=info msg="StartContainer for \"731b22556deb8ceca7ff2f948b22fc7eafd5b961c5b8f209cdcf7f1e09e36058\" returns successfully" Oct 13 05:36:27.423098 containerd[2611]: time="2025-10-13T05:36:27.423077827Z" level=info msg="StartContainer for \"3ee56de3e8ce4c8d69cf94edcc4f39e25ea2204f2a9c7dd675258e2b8f54edb6\" returns successfully" Oct 13 05:36:27.431490 containerd[2611]: time="2025-10-13T05:36:27.431457599Z" level=info msg="StartContainer for \"8cf487d991440d6de6331b3e9e1864494ccaf157fba96eb276047e632d70c4ab\" returns successfully" Oct 13 05:36:27.439528 kubelet[3685]: E1013 05:36:27.439475 3685 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.8.43:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4487.0.0-a-8f52350bac&limit=500&resourceVersion=0\": dial tcp 10.200.8.43:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Oct 13 05:36:27.759224 kubelet[3685]: I1013 05:36:27.759143 3685 kubelet_node_status.go:75] "Attempting to register node" node="ci-4487.0.0-a-8f52350bac" Oct 13 05:36:28.286258 kubelet[3685]: E1013 05:36:28.285888 3685 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4487.0.0-a-8f52350bac\" not found" node="ci-4487.0.0-a-8f52350bac" Oct 13 05:36:28.290167 kubelet[3685]: E1013 05:36:28.289984 3685 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4487.0.0-a-8f52350bac\" not found" node="ci-4487.0.0-a-8f52350bac" Oct 13 05:36:28.292019 kubelet[3685]: E1013 05:36:28.291976 3685 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4487.0.0-a-8f52350bac\" not found" node="ci-4487.0.0-a-8f52350bac" Oct 13 05:36:29.295630 kubelet[3685]: E1013 05:36:29.295600 3685 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4487.0.0-a-8f52350bac\" not found" node="ci-4487.0.0-a-8f52350bac" Oct 13 05:36:29.295982 kubelet[3685]: E1013 05:36:29.295956 3685 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4487.0.0-a-8f52350bac\" not found" node="ci-4487.0.0-a-8f52350bac" Oct 13 05:36:29.298054 kubelet[3685]: E1013 05:36:29.298022 3685 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4487.0.0-a-8f52350bac\" not found" node="ci-4487.0.0-a-8f52350bac" Oct 13 05:36:29.299641 kubelet[3685]: E1013 05:36:29.299593 3685 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4487.0.0-a-8f52350bac\" not found" node="ci-4487.0.0-a-8f52350bac" Oct 13 05:36:29.332949 kubelet[3685]: I1013 05:36:29.332803 3685 kubelet_node_status.go:78] "Successfully registered node" node="ci-4487.0.0-a-8f52350bac" Oct 13 05:36:29.332949 kubelet[3685]: E1013 05:36:29.332831 3685 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4487.0.0-a-8f52350bac\": node \"ci-4487.0.0-a-8f52350bac\" not found" Oct 13 05:36:29.345401 kubelet[3685]: E1013 05:36:29.345381 
3685 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4487.0.0-a-8f52350bac\" not found" Oct 13 05:36:29.446369 kubelet[3685]: E1013 05:36:29.446341 3685 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4487.0.0-a-8f52350bac\" not found" Oct 13 05:36:29.546974 kubelet[3685]: E1013 05:36:29.546869 3685 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4487.0.0-a-8f52350bac\" not found" Oct 13 05:36:29.647810 kubelet[3685]: E1013 05:36:29.647778 3685 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4487.0.0-a-8f52350bac\" not found" Oct 13 05:36:29.748433 kubelet[3685]: E1013 05:36:29.748385 3685 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4487.0.0-a-8f52350bac\" not found" Oct 13 05:36:29.849158 kubelet[3685]: E1013 05:36:29.849044 3685 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4487.0.0-a-8f52350bac\" not found" Oct 13 05:36:29.949742 kubelet[3685]: E1013 05:36:29.949705 3685 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4487.0.0-a-8f52350bac\" not found" Oct 13 05:36:30.062538 kubelet[3685]: I1013 05:36:30.062513 3685 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4487.0.0-a-8f52350bac" Oct 13 05:36:30.067338 kubelet[3685]: E1013 05:36:30.067311 3685 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4487.0.0-a-8f52350bac\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4487.0.0-a-8f52350bac" Oct 13 05:36:30.067338 kubelet[3685]: I1013 05:36:30.067333 3685 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4487.0.0-a-8f52350bac" Oct 13 05:36:30.068742 kubelet[3685]: E1013 05:36:30.068713 3685 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4487.0.0-a-8f52350bac\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4487.0.0-a-8f52350bac" Oct 13 05:36:30.068742 kubelet[3685]: I1013 05:36:30.068737 3685 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4487.0.0-a-8f52350bac" Oct 13 05:36:30.070118 kubelet[3685]: E1013 05:36:30.070092 3685 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4487.0.0-a-8f52350bac\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4487.0.0-a-8f52350bac" Oct 13 05:36:30.120398 kubelet[3685]: I1013 05:36:30.120320 3685 apiserver.go:52] "Watching apiserver" Oct 13 05:36:30.167169 kubelet[3685]: I1013 05:36:30.167148 3685 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Oct 13 05:36:30.294602 kubelet[3685]: I1013 05:36:30.294396 3685 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4487.0.0-a-8f52350bac" Oct 13 05:36:30.294602 kubelet[3685]: I1013 05:36:30.294575 3685 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4487.0.0-a-8f52350bac" Oct 13 05:36:30.295026 kubelet[3685]: I1013 05:36:30.295005 3685 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4487.0.0-a-8f52350bac" Oct 13 05:36:30.302817 
kubelet[3685]: I1013 05:36:30.302770 3685 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Oct 13 05:36:30.311988 kubelet[3685]: I1013 05:36:30.311679 3685 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Oct 13 05:36:30.312342 kubelet[3685]: I1013 05:36:30.312327 3685 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Oct 13 05:36:31.240719 systemd[1]: Reload requested from client PID 3967 ('systemctl') (unit session-9.scope)... Oct 13 05:36:31.240733 systemd[1]: Reloading... Oct 13 05:36:31.325234 zram_generator::config[4011]: No configuration found. Oct 13 05:36:31.525695 systemd[1]: Reloading finished in 284 ms. Oct 13 05:36:31.557185 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Oct 13 05:36:31.577075 systemd[1]: kubelet.service: Deactivated successfully. Oct 13 05:36:31.577313 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 13 05:36:31.577364 systemd[1]: kubelet.service: Consumed 617ms CPU time, 128.6M memory peak. Oct 13 05:36:31.578762 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 13 05:36:32.675416 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 13 05:36:32.684476 (kubelet)[4082]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 13 05:36:32.724391 kubelet[4082]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 13 05:36:32.724391 kubelet[4082]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Oct 13 05:36:32.724391 kubelet[4082]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
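The kubelet restart above logs three deprecated flags (--container-runtime-endpoint, --pod-infra-container-image, --volume-plugin-dir) that the warnings say belong in the file passed via --config. As a minimal sketch (stdlib only; journal.txt is a hypothetical dump of this log, and the regex assumes the line format shown above), the following Go program lists which deprecated kubelet flags a saved journal mentions:

```go
// flagscan.go - sketch: list deprecated kubelet flags mentioned in a saved
// journal dump (format as in the log above). "journal.txt" is a hypothetical
// file name used only for illustration.
package main

import (
	"bufio"
	"fmt"
	"log"
	"os"
	"regexp"
)

func main() {
	f, err := os.Open("journal.txt") // hypothetical dump of the log above
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	// Matches e.g.: kubelet[4082]: Flag --volume-plugin-dir has been deprecated
	re := regexp.MustCompile(`kubelet\[\d+\]: Flag (--[\w-]+) has been deprecated`)
	seen := map[string]bool{}

	sc := bufio.NewScanner(f)
	sc.Buffer(make([]byte, 1024*1024), 1024*1024) // journal lines here are very long
	for sc.Scan() {
		for _, m := range re.FindAllStringSubmatch(sc.Text(), -1) {
			seen[m[1]] = true
		}
	}
	if err := sc.Err(); err != nil {
		log.Fatal(err)
	}
	for flag := range seen {
		fmt.Println("move into the --config KubeletConfiguration file:", flag)
	}
}
```

Per the warning text itself, the authoritative mapping from each flag to its KubeletConfiguration field is the kubelet-config-file documentation linked in those log entries.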
Oct 13 05:36:32.724727 kubelet[4082]: I1013 05:36:32.724451 4082 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 13 05:36:32.731778 kubelet[4082]: I1013 05:36:32.731754 4082 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Oct 13 05:36:32.731778 kubelet[4082]: I1013 05:36:32.731774 4082 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 13 05:36:32.731973 kubelet[4082]: I1013 05:36:32.731962 4082 server.go:956] "Client rotation is on, will bootstrap in background" Oct 13 05:36:32.732885 kubelet[4082]: I1013 05:36:32.732869 4082 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Oct 13 05:36:32.736887 kubelet[4082]: I1013 05:36:32.736293 4082 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 13 05:36:32.744147 kubelet[4082]: I1013 05:36:32.744130 4082 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Oct 13 05:36:32.746137 kubelet[4082]: I1013 05:36:32.746122 4082 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Oct 13 05:36:32.746387 kubelet[4082]: I1013 05:36:32.746371 4082 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 13 05:36:32.746536 kubelet[4082]: I1013 05:36:32.746424 4082 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4487.0.0-a-8f52350bac","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Oct 13 05:36:32.746621 kubelet[4082]: I1013 05:36:32.746616 4082 topology_manager.go:138] "Creating topology manager with none policy" Oct 13 05:36:32.746654 kubelet[4082]: I1013 05:36:32.746651 4082 container_manager_linux.go:303] "Creating device plugin manager" Oct 13 05:36:32.746703 kubelet[4082]: I1013 05:36:32.746700 4082 state_mem.go:36] "Initialized new in-memory state store" Oct 13 05:36:32.746825 kubelet[4082]: 
I1013 05:36:32.746819 4082 kubelet.go:480] "Attempting to sync node with API server" Oct 13 05:36:32.746858 kubelet[4082]: I1013 05:36:32.746854 4082 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 13 05:36:32.746902 kubelet[4082]: I1013 05:36:32.746898 4082 kubelet.go:386] "Adding apiserver pod source" Oct 13 05:36:32.746942 kubelet[4082]: I1013 05:36:32.746937 4082 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 13 05:36:32.749335 kubelet[4082]: I1013 05:36:32.749315 4082 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Oct 13 05:36:32.749824 kubelet[4082]: I1013 05:36:32.749811 4082 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Oct 13 05:36:32.753534 kubelet[4082]: I1013 05:36:32.753517 4082 watchdog_linux.go:99] "Systemd watchdog is not enabled" Oct 13 05:36:32.753619 kubelet[4082]: I1013 05:36:32.753552 4082 server.go:1289] "Started kubelet" Oct 13 05:36:32.757539 kubelet[4082]: I1013 05:36:32.756542 4082 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 13 05:36:32.768292 kubelet[4082]: I1013 05:36:32.768257 4082 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Oct 13 05:36:32.769073 kubelet[4082]: I1013 05:36:32.769054 4082 server.go:317] "Adding debug handlers to kubelet server" Oct 13 05:36:32.771944 kubelet[4082]: I1013 05:36:32.771897 4082 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 13 05:36:32.772073 kubelet[4082]: I1013 05:36:32.772061 4082 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 13 05:36:32.773729 kubelet[4082]: I1013 05:36:32.773702 4082 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 13 05:36:32.774609 kubelet[4082]: I1013 05:36:32.774576 4082 volume_manager.go:297] "Starting Kubelet Volume Manager" Oct 13 05:36:32.774880 kubelet[4082]: E1013 05:36:32.774847 4082 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4487.0.0-a-8f52350bac\" not found" Oct 13 05:36:32.776584 kubelet[4082]: I1013 05:36:32.776570 4082 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Oct 13 05:36:32.777005 kubelet[4082]: I1013 05:36:32.776670 4082 reconciler.go:26] "Reconciler: start to sync state" Oct 13 05:36:32.779911 kubelet[4082]: I1013 05:36:32.779895 4082 factory.go:223] Registration of the systemd container factory successfully Oct 13 05:36:32.780112 kubelet[4082]: I1013 05:36:32.780097 4082 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 13 05:36:32.784183 kubelet[4082]: E1013 05:36:32.783946 4082 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 13 05:36:32.786816 kubelet[4082]: I1013 05:36:32.786702 4082 factory.go:223] Registration of the containerd container factory successfully Oct 13 05:36:32.793399 kubelet[4082]: I1013 05:36:32.793373 4082 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv4" Oct 13 05:36:32.798614 kubelet[4082]: I1013 05:36:32.798588 4082 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Oct 13 05:36:32.798614 kubelet[4082]: I1013 05:36:32.798608 4082 status_manager.go:230] "Starting to sync pod status with apiserver" Oct 13 05:36:32.798713 kubelet[4082]: I1013 05:36:32.798625 4082 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Oct 13 05:36:32.798713 kubelet[4082]: I1013 05:36:32.798631 4082 kubelet.go:2436] "Starting kubelet main sync loop" Oct 13 05:36:32.798713 kubelet[4082]: E1013 05:36:32.798666 4082 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 13 05:36:32.827131 kubelet[4082]: I1013 05:36:32.827106 4082 cpu_manager.go:221] "Starting CPU manager" policy="none" Oct 13 05:36:32.827271 kubelet[4082]: I1013 05:36:32.827239 4082 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Oct 13 05:36:32.827271 kubelet[4082]: I1013 05:36:32.827261 4082 state_mem.go:36] "Initialized new in-memory state store" Oct 13 05:36:32.827380 kubelet[4082]: I1013 05:36:32.827368 4082 state_mem.go:88] "Updated default CPUSet" cpuSet="" Oct 13 05:36:32.827415 kubelet[4082]: I1013 05:36:32.827377 4082 state_mem.go:96] "Updated CPUSet assignments" assignments={} Oct 13 05:36:32.827415 kubelet[4082]: I1013 05:36:32.827392 4082 policy_none.go:49] "None policy: Start" Oct 13 05:36:32.827415 kubelet[4082]: I1013 05:36:32.827402 4082 memory_manager.go:186] "Starting memorymanager" policy="None" Oct 13 05:36:32.827415 kubelet[4082]: I1013 05:36:32.827410 4082 state_mem.go:35] "Initializing new in-memory state store" Oct 13 05:36:32.827514 kubelet[4082]: I1013 05:36:32.827493 4082 state_mem.go:75] "Updated machine memory state" Oct 13 05:36:32.830327 kubelet[4082]: E1013 05:36:32.830308 4082 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Oct 13 05:36:32.830432 kubelet[4082]: I1013 05:36:32.830421 4082 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 13 05:36:32.830470 kubelet[4082]: I1013 05:36:32.830432 4082 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 13 05:36:32.831514 kubelet[4082]: I1013 05:36:32.831191 4082 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 13 05:36:32.833336 kubelet[4082]: E1013 05:36:32.833316 4082 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Oct 13 05:36:32.900759 kubelet[4082]: I1013 05:36:32.899270 4082 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4487.0.0-a-8f52350bac" Oct 13 05:36:32.901051 kubelet[4082]: I1013 05:36:32.901024 4082 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4487.0.0-a-8f52350bac" Oct 13 05:36:32.901388 kubelet[4082]: I1013 05:36:32.901371 4082 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4487.0.0-a-8f52350bac" Oct 13 05:36:32.904784 kubelet[4082]: I1013 05:36:32.904572 4082 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Oct 13 05:36:32.904784 kubelet[4082]: E1013 05:36:32.904634 4082 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4487.0.0-a-8f52350bac\" already exists" pod="kube-system/kube-scheduler-ci-4487.0.0-a-8f52350bac" Oct 13 05:36:32.914241 kubelet[4082]: I1013 05:36:32.913606 4082 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Oct 13 05:36:32.914241 kubelet[4082]: E1013 05:36:32.913652 4082 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4487.0.0-a-8f52350bac\" already exists" pod="kube-system/kube-controller-manager-ci-4487.0.0-a-8f52350bac" Oct 13 05:36:32.914381 kubelet[4082]: I1013 05:36:32.914359 4082 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Oct 13 05:36:32.914429 kubelet[4082]: E1013 05:36:32.914397 4082 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4487.0.0-a-8f52350bac\" already exists" pod="kube-system/kube-apiserver-ci-4487.0.0-a-8f52350bac" Oct 13 05:36:32.937004 kubelet[4082]: I1013 05:36:32.936905 4082 kubelet_node_status.go:75] "Attempting to register node" node="ci-4487.0.0-a-8f52350bac" Oct 13 05:36:32.946743 kubelet[4082]: I1013 05:36:32.946674 4082 kubelet_node_status.go:124] "Node was previously registered" node="ci-4487.0.0-a-8f52350bac" Oct 13 05:36:32.946743 kubelet[4082]: I1013 05:36:32.946727 4082 kubelet_node_status.go:78] "Successfully registered node" node="ci-4487.0.0-a-8f52350bac" Oct 13 05:36:32.977937 kubelet[4082]: I1013 05:36:32.977909 4082 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/01f98fa0dfe171d27a48d43e7718f9f9-k8s-certs\") pod \"kube-apiserver-ci-4487.0.0-a-8f52350bac\" (UID: \"01f98fa0dfe171d27a48d43e7718f9f9\") " pod="kube-system/kube-apiserver-ci-4487.0.0-a-8f52350bac" Oct 13 05:36:32.977937 kubelet[4082]: I1013 05:36:32.977937 4082 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/58e44824f33d7be974371a504d1cdca5-ca-certs\") pod \"kube-controller-manager-ci-4487.0.0-a-8f52350bac\" (UID: \"58e44824f33d7be974371a504d1cdca5\") " pod="kube-system/kube-controller-manager-ci-4487.0.0-a-8f52350bac" Oct 13 05:36:32.978044 kubelet[4082]: I1013 05:36:32.977956 4082 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/58e44824f33d7be974371a504d1cdca5-flexvolume-dir\") pod \"kube-controller-manager-ci-4487.0.0-a-8f52350bac\" (UID: \"58e44824f33d7be974371a504d1cdca5\") " pod="kube-system/kube-controller-manager-ci-4487.0.0-a-8f52350bac" Oct 13 05:36:32.978044 kubelet[4082]: I1013 05:36:32.977972 4082 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/01f98fa0dfe171d27a48d43e7718f9f9-ca-certs\") pod \"kube-apiserver-ci-4487.0.0-a-8f52350bac\" (UID: \"01f98fa0dfe171d27a48d43e7718f9f9\") " pod="kube-system/kube-apiserver-ci-4487.0.0-a-8f52350bac" Oct 13 05:36:32.978044 kubelet[4082]: I1013 05:36:32.977999 4082 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/01f98fa0dfe171d27a48d43e7718f9f9-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4487.0.0-a-8f52350bac\" (UID: \"01f98fa0dfe171d27a48d43e7718f9f9\") " pod="kube-system/kube-apiserver-ci-4487.0.0-a-8f52350bac" Oct 13 05:36:32.978044 kubelet[4082]: I1013 05:36:32.978014 4082 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/58e44824f33d7be974371a504d1cdca5-k8s-certs\") pod \"kube-controller-manager-ci-4487.0.0-a-8f52350bac\" (UID: \"58e44824f33d7be974371a504d1cdca5\") " pod="kube-system/kube-controller-manager-ci-4487.0.0-a-8f52350bac" Oct 13 05:36:32.978044 kubelet[4082]: I1013 05:36:32.978036 4082 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/58e44824f33d7be974371a504d1cdca5-kubeconfig\") pod \"kube-controller-manager-ci-4487.0.0-a-8f52350bac\" (UID: \"58e44824f33d7be974371a504d1cdca5\") " pod="kube-system/kube-controller-manager-ci-4487.0.0-a-8f52350bac" Oct 13 05:36:32.978160 kubelet[4082]: I1013 05:36:32.978055 4082 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/58e44824f33d7be974371a504d1cdca5-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4487.0.0-a-8f52350bac\" (UID: \"58e44824f33d7be974371a504d1cdca5\") " pod="kube-system/kube-controller-manager-ci-4487.0.0-a-8f52350bac" Oct 13 05:36:32.978160 kubelet[4082]: I1013 05:36:32.978073 4082 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0925e57568a7e89c4e5e71b12faeae12-kubeconfig\") pod \"kube-scheduler-ci-4487.0.0-a-8f52350bac\" (UID: \"0925e57568a7e89c4e5e71b12faeae12\") " pod="kube-system/kube-scheduler-ci-4487.0.0-a-8f52350bac" Oct 13 05:36:33.754013 kubelet[4082]: I1013 05:36:33.753984 4082 apiserver.go:52] "Watching apiserver" Oct 13 05:36:33.777629 kubelet[4082]: I1013 05:36:33.777605 4082 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Oct 13 05:36:33.812518 kubelet[4082]: I1013 05:36:33.812500 4082 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4487.0.0-a-8f52350bac" Oct 13 05:36:33.812938 kubelet[4082]: I1013 05:36:33.812910 4082 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4487.0.0-a-8f52350bac" Oct 13 05:36:33.823292 kubelet[4082]: I1013 05:36:33.823271 4082 warnings.go:110] "Warning: 
metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Oct 13 05:36:33.823454 kubelet[4082]: E1013 05:36:33.823422 4082 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4487.0.0-a-8f52350bac\" already exists" pod="kube-system/kube-apiserver-ci-4487.0.0-a-8f52350bac" Oct 13 05:36:33.824978 kubelet[4082]: I1013 05:36:33.824953 4082 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Oct 13 05:36:33.825184 kubelet[4082]: E1013 05:36:33.825126 4082 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4487.0.0-a-8f52350bac\" already exists" pod="kube-system/kube-scheduler-ci-4487.0.0-a-8f52350bac" Oct 13 05:36:33.839597 kubelet[4082]: I1013 05:36:33.839109 4082 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4487.0.0-a-8f52350bac" podStartSLOduration=3.8390963830000002 podStartE2EDuration="3.839096383s" podCreationTimestamp="2025-10-13 05:36:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 05:36:33.829913707 +0000 UTC m=+1.140343685" watchObservedRunningTime="2025-10-13 05:36:33.839096383 +0000 UTC m=+1.149526346" Oct 13 05:36:33.847670 kubelet[4082]: I1013 05:36:33.847435 4082 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4487.0.0-a-8f52350bac" podStartSLOduration=3.847424417 podStartE2EDuration="3.847424417s" podCreationTimestamp="2025-10-13 05:36:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 05:36:33.83909424 +0000 UTC m=+1.149524193" watchObservedRunningTime="2025-10-13 05:36:33.847424417 +0000 UTC m=+1.157854484" Oct 13 05:36:33.856299 kubelet[4082]: I1013 05:36:33.856253 4082 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4487.0.0-a-8f52350bac" podStartSLOduration=3.856242334 podStartE2EDuration="3.856242334s" podCreationTimestamp="2025-10-13 05:36:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 05:36:33.84785183 +0000 UTC m=+1.158281789" watchObservedRunningTime="2025-10-13 05:36:33.856242334 +0000 UTC m=+1.166672289" Oct 13 05:36:37.871426 kubelet[4082]: I1013 05:36:37.871383 4082 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Oct 13 05:36:37.871865 containerd[2611]: time="2025-10-13T05:36:37.871818353Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Oct 13 05:36:37.872225 kubelet[4082]: I1013 05:36:37.872167 4082 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Oct 13 05:36:38.850663 systemd[1]: Created slice kubepods-besteffort-pod19d89149_6f33_424e_8779_acc60bfd50e1.slice - libcontainer container kubepods-besteffort-pod19d89149_6f33_424e_8779_acc60bfd50e1.slice. 
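The pod_startup_latency_tracker entries above expose a podStartSLOduration and a podStartE2EDuration for each static pod. A small sketch of pulling those values out; the sample string is copied from the kube-scheduler entry above, and Go's time.ParseDuration accepts the quoted duration directly:

```go
// startup_latency.go - sketch: extract pod name and podStartE2EDuration from an
// "Observed pod startup duration" entry like the ones above.
package main

import (
	"fmt"
	"regexp"
	"time"
)

func main() {
	entry := `pod="kube-system/kube-scheduler-ci-4487.0.0-a-8f52350bac" podStartSLOduration=3.8390963830000002 podStartE2EDuration="3.839096383s"`

	re := regexp.MustCompile(`pod="([^"]+)".*podStartE2EDuration="([^"]+)"`)
	m := re.FindStringSubmatch(entry)
	if m == nil {
		fmt.Println("no startup-duration entry found")
		return
	}
	d, err := time.ParseDuration(m[2]) // values like "3.839096383s" are valid Go durations
	if err != nil {
		fmt.Println("unparseable duration:", err)
		return
	}
	fmt.Printf("%s started end-to-end in %v\n", m[1], d.Round(time.Millisecond))
}
```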
Oct 13 05:36:38.920401 kubelet[4082]: I1013 05:36:38.920362 4082 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/19d89149-6f33-424e-8779-acc60bfd50e1-kube-proxy\") pod \"kube-proxy-zb49z\" (UID: \"19d89149-6f33-424e-8779-acc60bfd50e1\") " pod="kube-system/kube-proxy-zb49z" Oct 13 05:36:38.920737 kubelet[4082]: I1013 05:36:38.920394 4082 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/19d89149-6f33-424e-8779-acc60bfd50e1-xtables-lock\") pod \"kube-proxy-zb49z\" (UID: \"19d89149-6f33-424e-8779-acc60bfd50e1\") " pod="kube-system/kube-proxy-zb49z" Oct 13 05:36:38.920737 kubelet[4082]: I1013 05:36:38.920445 4082 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nkw8k\" (UniqueName: \"kubernetes.io/projected/19d89149-6f33-424e-8779-acc60bfd50e1-kube-api-access-nkw8k\") pod \"kube-proxy-zb49z\" (UID: \"19d89149-6f33-424e-8779-acc60bfd50e1\") " pod="kube-system/kube-proxy-zb49z" Oct 13 05:36:38.920737 kubelet[4082]: I1013 05:36:38.920468 4082 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/19d89149-6f33-424e-8779-acc60bfd50e1-lib-modules\") pod \"kube-proxy-zb49z\" (UID: \"19d89149-6f33-424e-8779-acc60bfd50e1\") " pod="kube-system/kube-proxy-zb49z" Oct 13 05:36:39.078964 systemd[1]: Created slice kubepods-besteffort-pod7511b5e4_ea17_4137_84a8_43a9c511528a.slice - libcontainer container kubepods-besteffort-pod7511b5e4_ea17_4137_84a8_43a9c511528a.slice. Oct 13 05:36:39.122524 kubelet[4082]: I1013 05:36:39.122437 4082 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/7511b5e4-ea17-4137-84a8-43a9c511528a-var-lib-calico\") pod \"tigera-operator-755d956888-tzbmt\" (UID: \"7511b5e4-ea17-4137-84a8-43a9c511528a\") " pod="tigera-operator/tigera-operator-755d956888-tzbmt" Oct 13 05:36:39.122524 kubelet[4082]: I1013 05:36:39.122467 4082 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bcm6j\" (UniqueName: \"kubernetes.io/projected/7511b5e4-ea17-4137-84a8-43a9c511528a-kube-api-access-bcm6j\") pod \"tigera-operator-755d956888-tzbmt\" (UID: \"7511b5e4-ea17-4137-84a8-43a9c511528a\") " pod="tigera-operator/tigera-operator-755d956888-tzbmt" Oct 13 05:36:39.157069 containerd[2611]: time="2025-10-13T05:36:39.157024816Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zb49z,Uid:19d89149-6f33-424e-8779-acc60bfd50e1,Namespace:kube-system,Attempt:0,}" Oct 13 05:36:39.201169 containerd[2611]: time="2025-10-13T05:36:39.201121377Z" level=info msg="connecting to shim 0a04d56729740b0e3ec17a2c1a57a4083b5407dc732a70b84ff2c8f376580639" address="unix:///run/containerd/s/dbf6ce339b1b51622d0a6f1dc179ef4fc0bb2fa351e88c35ee0ed307496e6dd9" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:36:39.226388 systemd[1]: Started cri-containerd-0a04d56729740b0e3ec17a2c1a57a4083b5407dc732a70b84ff2c8f376580639.scope - libcontainer container 0a04d56729740b0e3ec17a2c1a57a4083b5407dc732a70b84ff2c8f376580639. 
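The reconciler entries above show kube-proxy-zb49z's volumes being attached and its sandbox scope being started. To cross-check from the API side, a hedged client-go sketch that lists kube-system pods and their phases; the kubeconfig path /etc/kubernetes/admin.conf is an assumption for illustration, not something this log confirms:

```go
// list_pods.go - sketch: confirm via the API server that the kube-proxy and
// static pods seen in the log are Running. Kubeconfig path is assumed.
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf") // assumed path
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%-55s %s\n", p.Name, p.Status.Phase)
	}
}
```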
Oct 13 05:36:39.255143 containerd[2611]: time="2025-10-13T05:36:39.255119094Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zb49z,Uid:19d89149-6f33-424e-8779-acc60bfd50e1,Namespace:kube-system,Attempt:0,} returns sandbox id \"0a04d56729740b0e3ec17a2c1a57a4083b5407dc732a70b84ff2c8f376580639\"" Oct 13 05:36:39.265236 containerd[2611]: time="2025-10-13T05:36:39.265052264Z" level=info msg="CreateContainer within sandbox \"0a04d56729740b0e3ec17a2c1a57a4083b5407dc732a70b84ff2c8f376580639\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Oct 13 05:36:39.283771 containerd[2611]: time="2025-10-13T05:36:39.283697691Z" level=info msg="Container 6b1c0abfa54ccb2ef493415a1cfe58276e62c152d241c3f3da6503e97058a0d6: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:36:39.299614 containerd[2611]: time="2025-10-13T05:36:39.299586421Z" level=info msg="CreateContainer within sandbox \"0a04d56729740b0e3ec17a2c1a57a4083b5407dc732a70b84ff2c8f376580639\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"6b1c0abfa54ccb2ef493415a1cfe58276e62c152d241c3f3da6503e97058a0d6\"" Oct 13 05:36:39.300104 containerd[2611]: time="2025-10-13T05:36:39.300080059Z" level=info msg="StartContainer for \"6b1c0abfa54ccb2ef493415a1cfe58276e62c152d241c3f3da6503e97058a0d6\"" Oct 13 05:36:39.301482 containerd[2611]: time="2025-10-13T05:36:39.301457248Z" level=info msg="connecting to shim 6b1c0abfa54ccb2ef493415a1cfe58276e62c152d241c3f3da6503e97058a0d6" address="unix:///run/containerd/s/dbf6ce339b1b51622d0a6f1dc179ef4fc0bb2fa351e88c35ee0ed307496e6dd9" protocol=ttrpc version=3 Oct 13 05:36:39.319366 systemd[1]: Started cri-containerd-6b1c0abfa54ccb2ef493415a1cfe58276e62c152d241c3f3da6503e97058a0d6.scope - libcontainer container 6b1c0abfa54ccb2ef493415a1cfe58276e62c152d241c3f3da6503e97058a0d6. Oct 13 05:36:39.351780 containerd[2611]: time="2025-10-13T05:36:39.351732633Z" level=info msg="StartContainer for \"6b1c0abfa54ccb2ef493415a1cfe58276e62c152d241c3f3da6503e97058a0d6\" returns successfully" Oct 13 05:36:39.383740 containerd[2611]: time="2025-10-13T05:36:39.383532989Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-tzbmt,Uid:7511b5e4-ea17-4137-84a8-43a9c511528a,Namespace:tigera-operator,Attempt:0,}" Oct 13 05:36:39.425677 containerd[2611]: time="2025-10-13T05:36:39.425645447Z" level=info msg="connecting to shim 806cce1c54dd615c5d315121c4cc871a0b819e0226fbf055ec120c0503b99939" address="unix:///run/containerd/s/2e049eb130476f05b70390ae25220f19cc107fb48dbbde7defddbe816463f31a" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:36:39.453360 systemd[1]: Started cri-containerd-806cce1c54dd615c5d315121c4cc871a0b819e0226fbf055ec120c0503b99939.scope - libcontainer container 806cce1c54dd615c5d315121c4cc871a0b819e0226fbf055ec120c0503b99939. Oct 13 05:36:39.494381 containerd[2611]: time="2025-10-13T05:36:39.494354637Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-tzbmt,Uid:7511b5e4-ea17-4137-84a8-43a9c511528a,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"806cce1c54dd615c5d315121c4cc871a0b819e0226fbf055ec120c0503b99939\"" Oct 13 05:36:39.497227 containerd[2611]: time="2025-10-13T05:36:39.495913321Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\"" Oct 13 05:36:41.307602 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount542833049.mount: Deactivated successfully. 
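The sandbox and container creation above all happen in containerd's k8s.io namespace, with shims reached over unix:///run/containerd/s/… sockets. A sketch using the containerd Go client to list those containers and their images, e.g. to cross-check IDs such as 6b1c0abf…; the module path github.com/containerd/containerd and the default management socket /run/containerd/containerd.sock are assumptions, since the log only names the per-shim sockets:

```go
// list_containers.go - sketch: enumerate containers in containerd's k8s.io
// namespace to cross-check the container IDs appearing in the log above.
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock") // assumed default socket
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// The CRI plugin uses the "k8s.io" namespace, as the connecting-to-shim entries show.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
	containers, err := client.Containers(ctx)
	if err != nil {
		log.Fatal(err)
	}
	for _, c := range containers {
		img, err := c.Image(ctx)
		if err != nil {
			fmt.Println(c.ID(), "(no image info)")
			continue
		}
		fmt.Println(c.ID(), img.Name())
	}
}
```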
Oct 13 05:36:41.746098 containerd[2611]: time="2025-10-13T05:36:41.746059019Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:36:41.752144 containerd[2611]: time="2025-10-13T05:36:41.752107071Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=25062609" Oct 13 05:36:41.755121 containerd[2611]: time="2025-10-13T05:36:41.755079320Z" level=info msg="ImageCreate event name:\"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:36:41.759649 containerd[2611]: time="2025-10-13T05:36:41.759601000Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:36:41.760290 containerd[2611]: time="2025-10-13T05:36:41.760263046Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"25058604\" in 2.264309742s" Oct 13 05:36:41.760353 containerd[2611]: time="2025-10-13T05:36:41.760290648Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\"" Oct 13 05:36:41.766257 containerd[2611]: time="2025-10-13T05:36:41.766227640Z" level=info msg="CreateContainer within sandbox \"806cce1c54dd615c5d315121c4cc871a0b819e0226fbf055ec120c0503b99939\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Oct 13 05:36:41.786783 containerd[2611]: time="2025-10-13T05:36:41.786591869Z" level=info msg="Container 292b10713e33e7b5605bf2ba86bb50bfd30a28da96f6e9b10de5a8576d73b903: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:36:41.801054 containerd[2611]: time="2025-10-13T05:36:41.801026808Z" level=info msg="CreateContainer within sandbox \"806cce1c54dd615c5d315121c4cc871a0b819e0226fbf055ec120c0503b99939\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"292b10713e33e7b5605bf2ba86bb50bfd30a28da96f6e9b10de5a8576d73b903\"" Oct 13 05:36:41.801627 containerd[2611]: time="2025-10-13T05:36:41.801601980Z" level=info msg="StartContainer for \"292b10713e33e7b5605bf2ba86bb50bfd30a28da96f6e9b10de5a8576d73b903\"" Oct 13 05:36:41.802434 containerd[2611]: time="2025-10-13T05:36:41.802402661Z" level=info msg="connecting to shim 292b10713e33e7b5605bf2ba86bb50bfd30a28da96f6e9b10de5a8576d73b903" address="unix:///run/containerd/s/2e049eb130476f05b70390ae25220f19cc107fb48dbbde7defddbe816463f31a" protocol=ttrpc version=3 Oct 13 05:36:41.822372 systemd[1]: Started cri-containerd-292b10713e33e7b5605bf2ba86bb50bfd30a28da96f6e9b10de5a8576d73b903.scope - libcontainer container 292b10713e33e7b5605bf2ba86bb50bfd30a28da96f6e9b10de5a8576d73b903. 
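The "Pulled image" entry above reports the tag, resolved digest, byte size, and elapsed pull time in a single line. A compact sketch extracting those fields; the sample entry is abridged with an ellipsis, and the regex keys only on the fields kept:

```go
// pull_report.go - sketch: summarize a containerd "Pulled image ..." entry
// (image ref, reported size in bytes, elapsed pull time).
package main

import (
	"fmt"
	"regexp"
)

func main() {
	entry := `Pulled image "quay.io/tigera/operator:v1.38.6" … size "25058604" in 2.264309742s`

	re := regexp.MustCompile(`Pulled image "([^"]+)".*size "(\d+)" in ([0-9.]+s)`)
	if m := re.FindStringSubmatch(entry); m != nil {
		fmt.Printf("image=%s size=%s bytes elapsed=%s\n", m[1], m[2], m[3])
	}
}
```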
Oct 13 05:36:41.851939 containerd[2611]: time="2025-10-13T05:36:41.851884563Z" level=info msg="StartContainer for \"292b10713e33e7b5605bf2ba86bb50bfd30a28da96f6e9b10de5a8576d73b903\" returns successfully" Oct 13 05:36:42.855875 kubelet[4082]: I1013 05:36:42.855695 4082 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-zb49z" podStartSLOduration=4.855677024 podStartE2EDuration="4.855677024s" podCreationTimestamp="2025-10-13 05:36:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 05:36:39.850136684 +0000 UTC m=+7.160566633" watchObservedRunningTime="2025-10-13 05:36:42.855677024 +0000 UTC m=+10.166106975" Oct 13 05:36:42.857353 kubelet[4082]: I1013 05:36:42.855963 4082 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-755d956888-tzbmt" podStartSLOduration=1.5903596530000002 podStartE2EDuration="3.855955538s" podCreationTimestamp="2025-10-13 05:36:39 +0000 UTC" firstStartedPulling="2025-10-13 05:36:39.495319026 +0000 UTC m=+6.805748978" lastFinishedPulling="2025-10-13 05:36:41.760914906 +0000 UTC m=+9.071344863" observedRunningTime="2025-10-13 05:36:42.855312273 +0000 UTC m=+10.165742231" watchObservedRunningTime="2025-10-13 05:36:42.855955538 +0000 UTC m=+10.166385488" Oct 13 05:36:47.472628 sudo[3064]: pam_unix(sudo:session): session closed for user root Oct 13 05:36:47.579475 sshd[3063]: Connection closed by 10.200.16.10 port 39702 Oct 13 05:36:47.580012 sshd-session[3060]: pam_unix(sshd:session): session closed for user core Oct 13 05:36:47.585677 systemd-logind[2582]: Session 9 logged out. Waiting for processes to exit. Oct 13 05:36:47.586818 systemd[1]: sshd@6-10.200.8.43:22-10.200.16.10:39702.service: Deactivated successfully. Oct 13 05:36:47.591075 systemd[1]: session-9.scope: Deactivated successfully. Oct 13 05:36:47.591981 systemd[1]: session-9.scope: Consumed 4.154s CPU time, 230.4M memory peak. Oct 13 05:36:47.597261 systemd-logind[2582]: Removed session 9. Oct 13 05:36:51.106028 systemd[1]: Created slice kubepods-besteffort-pod2e02fd30_60ad_477a_837a_dca4eeb8ed63.slice - libcontainer container kubepods-besteffort-pod2e02fd30_60ad_477a_837a_dca4eeb8ed63.slice. 
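The "Created slice" entry above encodes the pod UID into the systemd unit name: kubepods-besteffort-pod<UID>.slice with the UID's dashes replaced by underscores (compare it with the dashed UID 2e02fd30-… in the calico-typha volume entries that follow). A tiny illustration of that observed mapping, not the kubelet's own implementation:

```go
// slice_name.go - sketch: reproduce the systemd slice name the log shows for a
// besteffort pod UID (dashes become underscores). Illustration of the observed
// naming only.
package main

import (
	"fmt"
	"strings"
)

func besteffortSlice(podUID string) string {
	return "kubepods-besteffort-pod" + strings.ReplaceAll(podUID, "-", "_") + ".slice"
}

func main() {
	// UID taken from the calico-typha volume entries in the log.
	fmt.Println(besteffortSlice("2e02fd30-60ad-477a-837a-dca4eeb8ed63"))
	// -> kubepods-besteffort-pod2e02fd30_60ad_477a_837a_dca4eeb8ed63.slice
}
```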
Oct 13 05:36:51.199529 kubelet[4082]: I1013 05:36:51.199488 4082 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2e02fd30-60ad-477a-837a-dca4eeb8ed63-tigera-ca-bundle\") pod \"calico-typha-5d7746cbd7-ksgt7\" (UID: \"2e02fd30-60ad-477a-837a-dca4eeb8ed63\") " pod="calico-system/calico-typha-5d7746cbd7-ksgt7" Oct 13 05:36:51.199844 kubelet[4082]: I1013 05:36:51.199541 4082 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/2e02fd30-60ad-477a-837a-dca4eeb8ed63-typha-certs\") pod \"calico-typha-5d7746cbd7-ksgt7\" (UID: \"2e02fd30-60ad-477a-837a-dca4eeb8ed63\") " pod="calico-system/calico-typha-5d7746cbd7-ksgt7" Oct 13 05:36:51.199844 kubelet[4082]: I1013 05:36:51.199563 4082 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwzlp\" (UniqueName: \"kubernetes.io/projected/2e02fd30-60ad-477a-837a-dca4eeb8ed63-kube-api-access-rwzlp\") pod \"calico-typha-5d7746cbd7-ksgt7\" (UID: \"2e02fd30-60ad-477a-837a-dca4eeb8ed63\") " pod="calico-system/calico-typha-5d7746cbd7-ksgt7" Oct 13 05:36:51.410500 containerd[2611]: time="2025-10-13T05:36:51.410394181Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5d7746cbd7-ksgt7,Uid:2e02fd30-60ad-477a-837a-dca4eeb8ed63,Namespace:calico-system,Attempt:0,}" Oct 13 05:36:51.463723 containerd[2611]: time="2025-10-13T05:36:51.463673883Z" level=info msg="connecting to shim 165255fca1dbae8a6ec62fd86f02997182472484a258e0f220004cdd1a81432f" address="unix:///run/containerd/s/49d4766fb8b2b6db2aada9a7888131736f4a6cc77753487eae2ba600f309eaa5" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:36:51.501728 kubelet[4082]: I1013 05:36:51.501694 4082 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/08ef3017-c6a1-4e2a-a60e-3688c08709ad-cni-net-dir\") pod \"calico-node-xqksf\" (UID: \"08ef3017-c6a1-4e2a-a60e-3688c08709ad\") " pod="calico-system/calico-node-xqksf" Oct 13 05:36:51.501728 kubelet[4082]: I1013 05:36:51.501731 4082 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/08ef3017-c6a1-4e2a-a60e-3688c08709ad-xtables-lock\") pod \"calico-node-xqksf\" (UID: \"08ef3017-c6a1-4e2a-a60e-3688c08709ad\") " pod="calico-system/calico-node-xqksf" Oct 13 05:36:51.501863 kubelet[4082]: I1013 05:36:51.501748 4082 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/08ef3017-c6a1-4e2a-a60e-3688c08709ad-policysync\") pod \"calico-node-xqksf\" (UID: \"08ef3017-c6a1-4e2a-a60e-3688c08709ad\") " pod="calico-system/calico-node-xqksf" Oct 13 05:36:51.501863 kubelet[4082]: I1013 05:36:51.501763 4082 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/08ef3017-c6a1-4e2a-a60e-3688c08709ad-tigera-ca-bundle\") pod \"calico-node-xqksf\" (UID: \"08ef3017-c6a1-4e2a-a60e-3688c08709ad\") " pod="calico-system/calico-node-xqksf" Oct 13 05:36:51.501863 kubelet[4082]: I1013 05:36:51.501778 4082 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: 
\"kubernetes.io/host-path/08ef3017-c6a1-4e2a-a60e-3688c08709ad-cni-bin-dir\") pod \"calico-node-xqksf\" (UID: \"08ef3017-c6a1-4e2a-a60e-3688c08709ad\") " pod="calico-system/calico-node-xqksf" Oct 13 05:36:51.501863 kubelet[4082]: I1013 05:36:51.501792 4082 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/08ef3017-c6a1-4e2a-a60e-3688c08709ad-node-certs\") pod \"calico-node-xqksf\" (UID: \"08ef3017-c6a1-4e2a-a60e-3688c08709ad\") " pod="calico-system/calico-node-xqksf" Oct 13 05:36:51.501863 kubelet[4082]: I1013 05:36:51.501812 4082 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/08ef3017-c6a1-4e2a-a60e-3688c08709ad-cni-log-dir\") pod \"calico-node-xqksf\" (UID: \"08ef3017-c6a1-4e2a-a60e-3688c08709ad\") " pod="calico-system/calico-node-xqksf" Oct 13 05:36:51.501981 kubelet[4082]: I1013 05:36:51.501834 4082 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/08ef3017-c6a1-4e2a-a60e-3688c08709ad-var-lib-calico\") pod \"calico-node-xqksf\" (UID: \"08ef3017-c6a1-4e2a-a60e-3688c08709ad\") " pod="calico-system/calico-node-xqksf" Oct 13 05:36:51.501981 kubelet[4082]: I1013 05:36:51.501849 4082 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/08ef3017-c6a1-4e2a-a60e-3688c08709ad-var-run-calico\") pod \"calico-node-xqksf\" (UID: \"08ef3017-c6a1-4e2a-a60e-3688c08709ad\") " pod="calico-system/calico-node-xqksf" Oct 13 05:36:51.501981 kubelet[4082]: I1013 05:36:51.501866 4082 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vcshw\" (UniqueName: \"kubernetes.io/projected/08ef3017-c6a1-4e2a-a60e-3688c08709ad-kube-api-access-vcshw\") pod \"calico-node-xqksf\" (UID: \"08ef3017-c6a1-4e2a-a60e-3688c08709ad\") " pod="calico-system/calico-node-xqksf" Oct 13 05:36:51.501981 kubelet[4082]: I1013 05:36:51.501883 4082 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/08ef3017-c6a1-4e2a-a60e-3688c08709ad-flexvol-driver-host\") pod \"calico-node-xqksf\" (UID: \"08ef3017-c6a1-4e2a-a60e-3688c08709ad\") " pod="calico-system/calico-node-xqksf" Oct 13 05:36:51.501981 kubelet[4082]: I1013 05:36:51.501899 4082 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/08ef3017-c6a1-4e2a-a60e-3688c08709ad-lib-modules\") pod \"calico-node-xqksf\" (UID: \"08ef3017-c6a1-4e2a-a60e-3688c08709ad\") " pod="calico-system/calico-node-xqksf" Oct 13 05:36:51.502831 systemd[1]: Created slice kubepods-besteffort-pod08ef3017_c6a1_4e2a_a60e_3688c08709ad.slice - libcontainer container kubepods-besteffort-pod08ef3017_c6a1_4e2a_a60e_3688c08709ad.slice. Oct 13 05:36:51.517670 systemd[1]: Started cri-containerd-165255fca1dbae8a6ec62fd86f02997182472484a258e0f220004cdd1a81432f.scope - libcontainer container 165255fca1dbae8a6ec62fd86f02997182472484a258e0f220004cdd1a81432f. 
Oct 13 05:36:51.579100 containerd[2611]: time="2025-10-13T05:36:51.579069552Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5d7746cbd7-ksgt7,Uid:2e02fd30-60ad-477a-837a-dca4eeb8ed63,Namespace:calico-system,Attempt:0,} returns sandbox id \"165255fca1dbae8a6ec62fd86f02997182472484a258e0f220004cdd1a81432f\"" Oct 13 05:36:51.580283 containerd[2611]: time="2025-10-13T05:36:51.580142683Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\"" Oct 13 05:36:51.604443 kubelet[4082]: E1013 05:36:51.604402 4082 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:36:51.604443 kubelet[4082]: W1013 05:36:51.604441 4082 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:36:51.604563 kubelet[4082]: E1013 05:36:51.604472 4082 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:36:51.605103 kubelet[4082]: E1013 05:36:51.604619 4082 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:36:51.605103 kubelet[4082]: W1013 05:36:51.604627 4082 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:36:51.605103 kubelet[4082]: E1013 05:36:51.604637 4082 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:36:51.605103 kubelet[4082]: E1013 05:36:51.604764 4082 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:36:51.605103 kubelet[4082]: W1013 05:36:51.604770 4082 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:36:51.605103 kubelet[4082]: E1013 05:36:51.604779 4082 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:36:51.605103 kubelet[4082]: E1013 05:36:51.604946 4082 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:36:51.605103 kubelet[4082]: W1013 05:36:51.604958 4082 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:36:51.605103 kubelet[4082]: E1013 05:36:51.604980 4082 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:36:51.612128 kubelet[4082]: E1013 05:36:51.612085 4082 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:36:51.612128 kubelet[4082]: W1013 05:36:51.612104 4082 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:36:51.612128 kubelet[4082]: E1013 05:36:51.612123 4082 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:36:51.616834 kubelet[4082]: E1013 05:36:51.616813 4082 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:36:51.616834 kubelet[4082]: W1013 05:36:51.616828 4082 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:36:51.616930 kubelet[4082]: E1013 05:36:51.616841 4082 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:36:51.734193 kubelet[4082]: E1013 05:36:51.733913 4082 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zl54c" podUID="bcbf5018-3ffe-4d13-9b7d-1211daeb0d5f" Oct 13 05:36:51.801640 kubelet[4082]: E1013 05:36:51.801560 4082 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:36:51.801640 kubelet[4082]: W1013 05:36:51.801580 4082 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:36:51.801640 kubelet[4082]: E1013 05:36:51.801596 4082 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:36:51.802247 kubelet[4082]: E1013 05:36:51.802234 4082 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:36:51.802468 kubelet[4082]: W1013 05:36:51.802393 4082 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:36:51.802468 kubelet[4082]: E1013 05:36:51.802414 4082 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:36:51.803163 kubelet[4082]: E1013 05:36:51.803136 4082 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:36:51.803163 kubelet[4082]: W1013 05:36:51.803153 4082 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:36:51.803320 kubelet[4082]: E1013 05:36:51.803176 4082 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:36:51.803431 kubelet[4082]: E1013 05:36:51.803408 4082 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:36:51.803431 kubelet[4082]: W1013 05:36:51.803430 4082 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:36:51.803529 kubelet[4082]: E1013 05:36:51.803439 4082 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:36:51.803608 kubelet[4082]: E1013 05:36:51.803604 4082 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:36:51.803685 kubelet[4082]: W1013 05:36:51.803611 4082 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:36:51.803685 kubelet[4082]: E1013 05:36:51.803619 4082 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:36:51.803808 kubelet[4082]: E1013 05:36:51.803755 4082 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:36:51.803808 kubelet[4082]: W1013 05:36:51.803761 4082 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:36:51.803808 kubelet[4082]: E1013 05:36:51.803768 4082 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:36:51.805295 kubelet[4082]: E1013 05:36:51.805275 4082 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:36:51.805295 kubelet[4082]: W1013 05:36:51.805291 4082 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:36:51.805408 kubelet[4082]: E1013 05:36:51.805304 4082 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:36:51.805525 kubelet[4082]: E1013 05:36:51.805516 4082 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:36:51.805559 kubelet[4082]: W1013 05:36:51.805526 4082 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:36:51.805559 kubelet[4082]: E1013 05:36:51.805536 4082 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:36:51.805700 kubelet[4082]: E1013 05:36:51.805688 4082 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:36:51.805700 kubelet[4082]: W1013 05:36:51.805697 4082 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:36:51.805768 kubelet[4082]: E1013 05:36:51.805705 4082 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:36:51.805865 kubelet[4082]: E1013 05:36:51.805854 4082 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:36:51.805865 kubelet[4082]: W1013 05:36:51.805862 4082 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:36:51.805986 kubelet[4082]: E1013 05:36:51.805869 4082 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:36:51.805986 kubelet[4082]: E1013 05:36:51.805968 4082 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:36:51.805986 kubelet[4082]: W1013 05:36:51.805973 4082 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:36:51.805986 kubelet[4082]: E1013 05:36:51.805979 4082 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:36:51.806124 kubelet[4082]: E1013 05:36:51.806090 4082 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:36:51.806124 kubelet[4082]: W1013 05:36:51.806095 4082 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:36:51.806124 kubelet[4082]: E1013 05:36:51.806101 4082 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:36:51.806314 kubelet[4082]: E1013 05:36:51.806301 4082 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:36:51.806314 kubelet[4082]: W1013 05:36:51.806310 4082 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:36:51.806421 kubelet[4082]: E1013 05:36:51.806318 4082 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:36:51.806448 kubelet[4082]: E1013 05:36:51.806428 4082 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:36:51.806448 kubelet[4082]: W1013 05:36:51.806433 4082 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:36:51.806448 kubelet[4082]: E1013 05:36:51.806439 4082 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:36:51.806580 kubelet[4082]: E1013 05:36:51.806527 4082 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:36:51.806580 kubelet[4082]: W1013 05:36:51.806534 4082 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:36:51.806580 kubelet[4082]: E1013 05:36:51.806540 4082 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:36:51.806702 kubelet[4082]: E1013 05:36:51.806655 4082 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:36:51.806702 kubelet[4082]: W1013 05:36:51.806661 4082 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:36:51.806702 kubelet[4082]: E1013 05:36:51.806667 4082 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:36:51.806805 kubelet[4082]: E1013 05:36:51.806790 4082 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:36:51.806805 kubelet[4082]: W1013 05:36:51.806799 4082 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:36:51.806907 kubelet[4082]: E1013 05:36:51.806806 4082 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:36:51.806932 kubelet[4082]: E1013 05:36:51.806912 4082 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:36:51.806932 kubelet[4082]: W1013 05:36:51.806918 4082 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:36:51.806932 kubelet[4082]: E1013 05:36:51.806924 4082 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:36:51.807044 kubelet[4082]: E1013 05:36:51.807025 4082 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:36:51.807044 kubelet[4082]: W1013 05:36:51.807030 4082 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:36:51.807044 kubelet[4082]: E1013 05:36:51.807036 4082 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:36:51.807144 kubelet[4082]: E1013 05:36:51.807124 4082 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:36:51.807144 kubelet[4082]: W1013 05:36:51.807128 4082 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:36:51.807144 kubelet[4082]: E1013 05:36:51.807135 4082 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:36:51.807768 containerd[2611]: time="2025-10-13T05:36:51.807699286Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-xqksf,Uid:08ef3017-c6a1-4e2a-a60e-3688c08709ad,Namespace:calico-system,Attempt:0,}" Oct 13 05:36:51.809072 kubelet[4082]: E1013 05:36:51.809045 4082 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:36:51.809072 kubelet[4082]: W1013 05:36:51.809067 4082 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:36:51.809183 kubelet[4082]: E1013 05:36:51.809081 4082 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
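The prober keeps retrying because a nodeagent~uds directory exists under the FlexVolume plugin root while the uds executable it should contain is missing (hence the "executable file not found in $PATH" in each record), typically because the component that ships that driver has not been installed or mounted on this node yet. For reference, a FlexVolume driver invoked with init is expected to print a small JSON status object and exit 0; a minimal stand-in, assuming the usual FlexVolume call convention rather than the real uds driver, would look like:

    package main

    import (
        "fmt"
        "os"
    )

    // A FlexVolume driver is invoked as "<driver> <command> [args...]".
    // For "init" it should emit a JSON status object on stdout.
    func main() {
        if len(os.Args) > 1 && os.Args[1] == "init" {
            // "attach": false tells the kubelet this driver has no
            // separate attach/detach phase.
            fmt.Println(`{"status": "Success", "capabilities": {"attach": false}}`)
            return
        }
        fmt.Println(`{"status": "Not supported"}`)
        os.Exit(1)
    }
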
Error: unexpected end of JSON input" Oct 13 05:36:51.809183 kubelet[4082]: I1013 05:36:51.809110 4082 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bcbf5018-3ffe-4d13-9b7d-1211daeb0d5f-kubelet-dir\") pod \"csi-node-driver-zl54c\" (UID: \"bcbf5018-3ffe-4d13-9b7d-1211daeb0d5f\") " pod="calico-system/csi-node-driver-zl54c" Oct 13 05:36:51.809367 kubelet[4082]: E1013 05:36:51.809293 4082 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:36:51.809491 kubelet[4082]: W1013 05:36:51.809367 4082 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:36:51.809491 kubelet[4082]: E1013 05:36:51.809385 4082 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:36:51.809491 kubelet[4082]: I1013 05:36:51.809410 4082 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/bcbf5018-3ffe-4d13-9b7d-1211daeb0d5f-registration-dir\") pod \"csi-node-driver-zl54c\" (UID: \"bcbf5018-3ffe-4d13-9b7d-1211daeb0d5f\") " pod="calico-system/csi-node-driver-zl54c" Oct 13 05:36:51.809653 kubelet[4082]: E1013 05:36:51.809560 4082 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:36:51.809653 kubelet[4082]: W1013 05:36:51.809567 4082 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:36:51.809653 kubelet[4082]: E1013 05:36:51.809575 4082 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:36:51.809653 kubelet[4082]: I1013 05:36:51.809599 4082 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/bcbf5018-3ffe-4d13-9b7d-1211daeb0d5f-varrun\") pod \"csi-node-driver-zl54c\" (UID: \"bcbf5018-3ffe-4d13-9b7d-1211daeb0d5f\") " pod="calico-system/csi-node-driver-zl54c" Oct 13 05:36:51.809823 kubelet[4082]: E1013 05:36:51.809741 4082 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:36:51.809823 kubelet[4082]: W1013 05:36:51.809747 4082 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:36:51.809823 kubelet[4082]: E1013 05:36:51.809754 4082 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:36:51.809823 kubelet[4082]: I1013 05:36:51.809774 4082 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62lvt\" (UniqueName: \"kubernetes.io/projected/bcbf5018-3ffe-4d13-9b7d-1211daeb0d5f-kube-api-access-62lvt\") pod \"csi-node-driver-zl54c\" (UID: \"bcbf5018-3ffe-4d13-9b7d-1211daeb0d5f\") " pod="calico-system/csi-node-driver-zl54c" Oct 13 05:36:51.810046 kubelet[4082]: E1013 05:36:51.809911 4082 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:36:51.810046 kubelet[4082]: W1013 05:36:51.809917 4082 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:36:51.810046 kubelet[4082]: E1013 05:36:51.809924 4082 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:36:51.810046 kubelet[4082]: I1013 05:36:51.809944 4082 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/bcbf5018-3ffe-4d13-9b7d-1211daeb0d5f-socket-dir\") pod \"csi-node-driver-zl54c\" (UID: \"bcbf5018-3ffe-4d13-9b7d-1211daeb0d5f\") " pod="calico-system/csi-node-driver-zl54c" Oct 13 05:36:51.810239 kubelet[4082]: E1013 05:36:51.810055 4082 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:36:51.810239 kubelet[4082]: W1013 05:36:51.810061 4082 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:36:51.810239 kubelet[4082]: E1013 05:36:51.810068 4082 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:36:51.810239 kubelet[4082]: E1013 05:36:51.810162 4082 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:36:51.810239 kubelet[4082]: W1013 05:36:51.810167 4082 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:36:51.810239 kubelet[4082]: E1013 05:36:51.810173 4082 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:36:51.810493 kubelet[4082]: E1013 05:36:51.810346 4082 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:36:51.810493 kubelet[4082]: W1013 05:36:51.810352 4082 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:36:51.810493 kubelet[4082]: E1013 05:36:51.810359 4082 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
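Between the FlexVolume retries, the reconciler_common.go:251 records enumerate the volumes being attached for the calico-system/csi-node-driver-zl54c pod: kubelet-dir, registration-dir, varrun and socket-dir as host paths, plus the projected kube-api-access-62lvt token. The log carries only the volume names; the snippet below reconstructs what such a CSI node-plugin volume list typically looks like using client-go types, with the host paths being illustrative assumptions rather than values taken from this boot:

    package main

    import corev1 "k8s.io/api/core/v1"

    // csiNodeDriverVolumes sketches the hostPath volumes a CSI node plugin
    // commonly mounts. Only the volume names appear in the kubelet records;
    // the paths here are assumptions for illustration.
    func csiNodeDriverVolumes() []corev1.Volume {
        hostPath := func(p string) corev1.VolumeSource {
            return corev1.VolumeSource{HostPath: &corev1.HostPathVolumeSource{Path: p}}
        }
        return []corev1.Volume{
            {Name: "kubelet-dir", VolumeSource: hostPath("/var/lib/kubelet")},
            {Name: "registration-dir", VolumeSource: hostPath("/var/lib/kubelet/plugins_registry")},
            {Name: "varrun", VolumeSource: hostPath("/var/run")},
            {Name: "socket-dir", VolumeSource: hostPath("/var/lib/kubelet/plugins/csi.tigera.io")},
        }
    }

    func main() { _ = csiNodeDriverVolumes() }
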
Error: unexpected end of JSON input" Oct 13 05:36:51.810943 kubelet[4082]: E1013 05:36:51.810923 4082 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:36:51.810943 kubelet[4082]: W1013 05:36:51.810941 4082 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:36:51.811036 kubelet[4082]: E1013 05:36:51.810954 4082 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:36:51.811417 kubelet[4082]: E1013 05:36:51.811401 4082 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:36:51.811417 kubelet[4082]: W1013 05:36:51.811417 4082 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:36:51.811755 kubelet[4082]: E1013 05:36:51.811432 4082 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:36:51.811962 kubelet[4082]: E1013 05:36:51.811948 4082 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:36:51.812030 kubelet[4082]: W1013 05:36:51.811963 4082 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:36:51.812030 kubelet[4082]: E1013 05:36:51.811976 4082 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:36:51.812643 kubelet[4082]: E1013 05:36:51.812593 4082 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:36:51.812853 kubelet[4082]: W1013 05:36:51.812728 4082 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:36:51.812853 kubelet[4082]: E1013 05:36:51.812744 4082 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:36:51.813472 kubelet[4082]: E1013 05:36:51.813374 4082 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:36:51.813886 kubelet[4082]: W1013 05:36:51.813802 4082 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:36:51.813886 kubelet[4082]: E1013 05:36:51.813825 4082 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:36:51.814422 kubelet[4082]: E1013 05:36:51.814338 4082 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:36:51.814422 kubelet[4082]: W1013 05:36:51.814387 4082 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:36:51.814422 kubelet[4082]: E1013 05:36:51.814400 4082 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:36:51.815424 kubelet[4082]: E1013 05:36:51.815144 4082 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:36:51.815424 kubelet[4082]: W1013 05:36:51.815170 4082 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:36:51.815424 kubelet[4082]: E1013 05:36:51.815182 4082 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:36:51.863786 containerd[2611]: time="2025-10-13T05:36:51.863754070Z" level=info msg="connecting to shim 15999be2cb539b1faef008ab19d7ec541243bce80722a9808e5527c4ae07e368" address="unix:///run/containerd/s/d59bfb3d330d691e0120718a2585c06c5cd810bd70c7b17ba7ca97c36234fb23" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:36:51.882647 systemd[1]: Started cri-containerd-15999be2cb539b1faef008ab19d7ec541243bce80722a9808e5527c4ae07e368.scope - libcontainer container 15999be2cb539b1faef008ab19d7ec541243bce80722a9808e5527c4ae07e368. Oct 13 05:36:51.910792 kubelet[4082]: E1013 05:36:51.910768 4082 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:36:51.910792 kubelet[4082]: W1013 05:36:51.910783 4082 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:36:51.910905 kubelet[4082]: E1013 05:36:51.910797 4082 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:36:51.911172 kubelet[4082]: E1013 05:36:51.911156 4082 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:36:51.911172 kubelet[4082]: W1013 05:36:51.911167 4082 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:36:51.911372 kubelet[4082]: E1013 05:36:51.911178 4082 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:36:51.911527 kubelet[4082]: E1013 05:36:51.911474 4082 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:36:51.911654 kubelet[4082]: W1013 05:36:51.911580 4082 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:36:51.911654 kubelet[4082]: E1013 05:36:51.911607 4082 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:36:51.911866 kubelet[4082]: E1013 05:36:51.911817 4082 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:36:51.911866 kubelet[4082]: W1013 05:36:51.911824 4082 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:36:51.911866 kubelet[4082]: E1013 05:36:51.911831 4082 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:36:51.912069 kubelet[4082]: E1013 05:36:51.912020 4082 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:36:51.912069 kubelet[4082]: W1013 05:36:51.912026 4082 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:36:51.912069 kubelet[4082]: E1013 05:36:51.912032 4082 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:36:51.912364 kubelet[4082]: E1013 05:36:51.912292 4082 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:36:51.912364 kubelet[4082]: W1013 05:36:51.912299 4082 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:36:51.912364 kubelet[4082]: E1013 05:36:51.912306 4082 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:36:51.912471 kubelet[4082]: E1013 05:36:51.912466 4082 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:36:51.912538 kubelet[4082]: W1013 05:36:51.912497 4082 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:36:51.912538 kubelet[4082]: E1013 05:36:51.912504 4082 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:36:51.912650 kubelet[4082]: E1013 05:36:51.912637 4082 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:36:51.912650 kubelet[4082]: W1013 05:36:51.912648 4082 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:36:51.912702 kubelet[4082]: E1013 05:36:51.912656 4082 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:36:51.912805 kubelet[4082]: E1013 05:36:51.912782 4082 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:36:51.912805 kubelet[4082]: W1013 05:36:51.912793 4082 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:36:51.912805 kubelet[4082]: E1013 05:36:51.912800 4082 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:36:51.912962 kubelet[4082]: E1013 05:36:51.912955 4082 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:36:51.913002 kubelet[4082]: W1013 05:36:51.912995 4082 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:36:51.913032 kubelet[4082]: E1013 05:36:51.913027 4082 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:36:51.913194 kubelet[4082]: E1013 05:36:51.913176 4082 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:36:51.913194 kubelet[4082]: W1013 05:36:51.913183 4082 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:36:51.913194 kubelet[4082]: E1013 05:36:51.913188 4082 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:36:51.913441 kubelet[4082]: E1013 05:36:51.913395 4082 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:36:51.913441 kubelet[4082]: W1013 05:36:51.913400 4082 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:36:51.913441 kubelet[4082]: E1013 05:36:51.913405 4082 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:36:51.913573 kubelet[4082]: E1013 05:36:51.913561 4082 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:36:51.913573 kubelet[4082]: W1013 05:36:51.913571 4082 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:36:51.913641 kubelet[4082]: E1013 05:36:51.913580 4082 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:36:51.913704 kubelet[4082]: E1013 05:36:51.913689 4082 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:36:51.913704 kubelet[4082]: W1013 05:36:51.913695 4082 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:36:51.913751 kubelet[4082]: E1013 05:36:51.913713 4082 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:36:51.913825 kubelet[4082]: E1013 05:36:51.913815 4082 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:36:51.913825 kubelet[4082]: W1013 05:36:51.913822 4082 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:36:51.913877 kubelet[4082]: E1013 05:36:51.913829 4082 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:36:51.913971 kubelet[4082]: E1013 05:36:51.913961 4082 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:36:51.913971 kubelet[4082]: W1013 05:36:51.913968 4082 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:36:51.914016 kubelet[4082]: E1013 05:36:51.913975 4082 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:36:51.914189 kubelet[4082]: E1013 05:36:51.914181 4082 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:36:51.914445 kubelet[4082]: W1013 05:36:51.914234 4082 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:36:51.914445 kubelet[4082]: E1013 05:36:51.914289 4082 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:36:51.914445 kubelet[4082]: E1013 05:36:51.914444 4082 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:36:51.914523 kubelet[4082]: W1013 05:36:51.914452 4082 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:36:51.914523 kubelet[4082]: E1013 05:36:51.914459 4082 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:36:51.915089 kubelet[4082]: E1013 05:36:51.914593 4082 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:36:51.915089 kubelet[4082]: W1013 05:36:51.914600 4082 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:36:51.915089 kubelet[4082]: E1013 05:36:51.914607 4082 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:36:51.915089 kubelet[4082]: E1013 05:36:51.914711 4082 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:36:51.915089 kubelet[4082]: W1013 05:36:51.914823 4082 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:36:51.915089 kubelet[4082]: E1013 05:36:51.914833 4082 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:36:51.915870 kubelet[4082]: E1013 05:36:51.915638 4082 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:36:51.915870 kubelet[4082]: W1013 05:36:51.915651 4082 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:36:51.915870 kubelet[4082]: E1013 05:36:51.915664 4082 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:36:51.916181 containerd[2611]: time="2025-10-13T05:36:51.915859933Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-xqksf,Uid:08ef3017-c6a1-4e2a-a60e-3688c08709ad,Namespace:calico-system,Attempt:0,} returns sandbox id \"15999be2cb539b1faef008ab19d7ec541243bce80722a9808e5527c4ae07e368\"" Oct 13 05:36:51.916885 kubelet[4082]: E1013 05:36:51.916784 4082 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:36:51.916885 kubelet[4082]: W1013 05:36:51.916795 4082 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:36:51.916885 kubelet[4082]: E1013 05:36:51.916807 4082 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:36:51.917301 kubelet[4082]: E1013 05:36:51.917073 4082 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:36:51.917301 kubelet[4082]: W1013 05:36:51.917081 4082 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:36:51.917301 kubelet[4082]: E1013 05:36:51.917091 4082 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:36:51.918313 kubelet[4082]: E1013 05:36:51.918230 4082 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:36:51.918729 kubelet[4082]: W1013 05:36:51.918407 4082 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:36:51.918729 kubelet[4082]: E1013 05:36:51.918423 4082 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:36:51.919456 kubelet[4082]: E1013 05:36:51.919357 4082 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:36:51.919504 kubelet[4082]: W1013 05:36:51.919495 4082 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:36:51.919964 kubelet[4082]: E1013 05:36:51.919947 4082 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:36:51.923166 kubelet[4082]: E1013 05:36:51.923125 4082 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:36:51.923166 kubelet[4082]: W1013 05:36:51.923135 4082 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:36:51.923166 kubelet[4082]: E1013 05:36:51.923145 4082 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:36:52.860281 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2978907918.mount: Deactivated successfully. Oct 13 05:36:53.499882 containerd[2611]: time="2025-10-13T05:36:53.499843172Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:36:53.503090 containerd[2611]: time="2025-10-13T05:36:53.502969564Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=35237389" Oct 13 05:36:53.505829 containerd[2611]: time="2025-10-13T05:36:53.505804231Z" level=info msg="ImageCreate event name:\"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:36:53.509877 containerd[2611]: time="2025-10-13T05:36:53.509810288Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:36:53.510286 containerd[2611]: time="2025-10-13T05:36:53.510250883Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"35237243\" in 1.929961758s" Oct 13 05:36:53.510353 containerd[2611]: time="2025-10-13T05:36:53.510286210Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\"" Oct 13 05:36:53.513280 containerd[2611]: time="2025-10-13T05:36:53.512369573Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\"" Oct 13 05:36:53.530086 containerd[2611]: time="2025-10-13T05:36:53.530057278Z" level=info msg="CreateContainer within sandbox \"165255fca1dbae8a6ec62fd86f02997182472484a258e0f220004cdd1a81432f\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Oct 13 05:36:53.552113 containerd[2611]: time="2025-10-13T05:36:53.551102276Z" level=info msg="Container 811cb3338a803fbfc20f64dca20611fdc4d277735091a66426d4056c651a6f46: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:36:53.565965 containerd[2611]: time="2025-10-13T05:36:53.565937407Z" level=info msg="CreateContainer within sandbox \"165255fca1dbae8a6ec62fd86f02997182472484a258e0f220004cdd1a81432f\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"811cb3338a803fbfc20f64dca20611fdc4d277735091a66426d4056c651a6f46\"" Oct 13 05:36:53.566486 containerd[2611]: time="2025-10-13T05:36:53.566451948Z" level=info 
msg="StartContainer for \"811cb3338a803fbfc20f64dca20611fdc4d277735091a66426d4056c651a6f46\"" Oct 13 05:36:53.567644 containerd[2611]: time="2025-10-13T05:36:53.567619919Z" level=info msg="connecting to shim 811cb3338a803fbfc20f64dca20611fdc4d277735091a66426d4056c651a6f46" address="unix:///run/containerd/s/49d4766fb8b2b6db2aada9a7888131736f4a6cc77753487eae2ba600f309eaa5" protocol=ttrpc version=3 Oct 13 05:36:53.586346 systemd[1]: Started cri-containerd-811cb3338a803fbfc20f64dca20611fdc4d277735091a66426d4056c651a6f46.scope - libcontainer container 811cb3338a803fbfc20f64dca20611fdc4d277735091a66426d4056c651a6f46. Oct 13 05:36:53.629917 containerd[2611]: time="2025-10-13T05:36:53.629835574Z" level=info msg="StartContainer for \"811cb3338a803fbfc20f64dca20611fdc4d277735091a66426d4056c651a6f46\" returns successfully" Oct 13 05:36:53.799505 kubelet[4082]: E1013 05:36:53.799130 4082 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zl54c" podUID="bcbf5018-3ffe-4d13-9b7d-1211daeb0d5f" Oct 13 05:36:53.924252 kubelet[4082]: E1013 05:36:53.923548 4082 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:36:53.924252 kubelet[4082]: W1013 05:36:53.923568 4082 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:36:53.924252 kubelet[4082]: E1013 05:36:53.923586 4082 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:36:53.924652 kubelet[4082]: E1013 05:36:53.924602 4082 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:36:53.924652 kubelet[4082]: W1013 05:36:53.924615 4082 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:36:53.924652 kubelet[4082]: E1013 05:36:53.924627 4082 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:36:53.924891 kubelet[4082]: E1013 05:36:53.924848 4082 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:36:53.924891 kubelet[4082]: W1013 05:36:53.924855 4082 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:36:53.924891 kubelet[4082]: E1013 05:36:53.924863 4082 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:36:53.926224 kubelet[4082]: E1013 05:36:53.925094 4082 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:36:53.926224 kubelet[4082]: W1013 05:36:53.925100 4082 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:36:53.926224 kubelet[4082]: E1013 05:36:53.925107 4082 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:36:53.926520 kubelet[4082]: E1013 05:36:53.926502 4082 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:36:53.926605 kubelet[4082]: W1013 05:36:53.926562 4082 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:36:53.926605 kubelet[4082]: E1013 05:36:53.926574 4082 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:36:53.926799 kubelet[4082]: E1013 05:36:53.926760 4082 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:36:53.926799 kubelet[4082]: W1013 05:36:53.926768 4082 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:36:53.926799 kubelet[4082]: E1013 05:36:53.926775 4082 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:36:53.927062 kubelet[4082]: E1013 05:36:53.927020 4082 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:36:53.927062 kubelet[4082]: W1013 05:36:53.927028 4082 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:36:53.927062 kubelet[4082]: E1013 05:36:53.927035 4082 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:36:53.927290 kubelet[4082]: E1013 05:36:53.927260 4082 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:36:53.927290 kubelet[4082]: W1013 05:36:53.927267 4082 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:36:53.927290 kubelet[4082]: E1013 05:36:53.927274 4082 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:36:53.927573 kubelet[4082]: E1013 05:36:53.927538 4082 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:36:53.927573 kubelet[4082]: W1013 05:36:53.927546 4082 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:36:53.927573 kubelet[4082]: E1013 05:36:53.927552 4082 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:36:53.927844 kubelet[4082]: E1013 05:36:53.927757 4082 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:36:53.927844 kubelet[4082]: W1013 05:36:53.927773 4082 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:36:53.927844 kubelet[4082]: E1013 05:36:53.927780 4082 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:36:53.929386 kubelet[4082]: E1013 05:36:53.929357 4082 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:36:53.929386 kubelet[4082]: W1013 05:36:53.929371 4082 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:36:53.929565 kubelet[4082]: E1013 05:36:53.929502 4082 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:36:53.929686 kubelet[4082]: E1013 05:36:53.929661 4082 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:36:53.929686 kubelet[4082]: W1013 05:36:53.929667 4082 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:36:53.929686 kubelet[4082]: E1013 05:36:53.929675 4082 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:36:53.929921 kubelet[4082]: E1013 05:36:53.929896 4082 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:36:53.929921 kubelet[4082]: W1013 05:36:53.929903 4082 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:36:53.930003 kubelet[4082]: E1013 05:36:53.929910 4082 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:36:53.930135 kubelet[4082]: E1013 05:36:53.930122 4082 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:36:53.930225 kubelet[4082]: W1013 05:36:53.930169 4082 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:36:53.930225 kubelet[4082]: E1013 05:36:53.930177 4082 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:36:53.930414 kubelet[4082]: E1013 05:36:53.930387 4082 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:36:53.930414 kubelet[4082]: W1013 05:36:53.930394 4082 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:36:53.930414 kubelet[4082]: E1013 05:36:53.930400 4082 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:36:53.930751 kubelet[4082]: E1013 05:36:53.930715 4082 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:36:53.930751 kubelet[4082]: W1013 05:36:53.930726 4082 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:36:53.930751 kubelet[4082]: E1013 05:36:53.930735 4082 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:36:53.931067 kubelet[4082]: E1013 05:36:53.931052 4082 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:36:53.931067 kubelet[4082]: W1013 05:36:53.931067 4082 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:36:53.931136 kubelet[4082]: E1013 05:36:53.931078 4082 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:36:53.932328 kubelet[4082]: E1013 05:36:53.932296 4082 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:36:53.932328 kubelet[4082]: W1013 05:36:53.932322 4082 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:36:53.932447 kubelet[4082]: E1013 05:36:53.932335 4082 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:36:53.932563 kubelet[4082]: E1013 05:36:53.932545 4082 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:36:53.932563 kubelet[4082]: W1013 05:36:53.932561 4082 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:36:53.932628 kubelet[4082]: E1013 05:36:53.932569 4082 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:36:53.932714 kubelet[4082]: E1013 05:36:53.932704 4082 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:36:53.932742 kubelet[4082]: W1013 05:36:53.932715 4082 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:36:53.932742 kubelet[4082]: E1013 05:36:53.932723 4082 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:36:53.932860 kubelet[4082]: E1013 05:36:53.932851 4082 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:36:53.932887 kubelet[4082]: W1013 05:36:53.932860 4082 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:36:53.932887 kubelet[4082]: E1013 05:36:53.932867 4082 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:36:53.932985 kubelet[4082]: E1013 05:36:53.932977 4082 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:36:53.933012 kubelet[4082]: W1013 05:36:53.932985 4082 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:36:53.933012 kubelet[4082]: E1013 05:36:53.932992 4082 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:36:53.933113 kubelet[4082]: E1013 05:36:53.933106 4082 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:36:53.933139 kubelet[4082]: W1013 05:36:53.933114 4082 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:36:53.933139 kubelet[4082]: E1013 05:36:53.933121 4082 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:36:53.933288 kubelet[4082]: E1013 05:36:53.933280 4082 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:36:53.933320 kubelet[4082]: W1013 05:36:53.933289 4082 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:36:53.933320 kubelet[4082]: E1013 05:36:53.933295 4082 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:36:53.933606 kubelet[4082]: E1013 05:36:53.933594 4082 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:36:53.933642 kubelet[4082]: W1013 05:36:53.933606 4082 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:36:53.933642 kubelet[4082]: E1013 05:36:53.933614 4082 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:36:53.933721 kubelet[4082]: E1013 05:36:53.933710 4082 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:36:53.933721 kubelet[4082]: W1013 05:36:53.933719 4082 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:36:53.933775 kubelet[4082]: E1013 05:36:53.933725 4082 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:36:53.933873 kubelet[4082]: E1013 05:36:53.933864 4082 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:36:53.933897 kubelet[4082]: W1013 05:36:53.933873 4082 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:36:53.933897 kubelet[4082]: E1013 05:36:53.933879 4082 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:36:53.934360 kubelet[4082]: E1013 05:36:53.934346 4082 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:36:53.934360 kubelet[4082]: W1013 05:36:53.934359 4082 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:36:53.934443 kubelet[4082]: E1013 05:36:53.934367 4082 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:36:53.934587 kubelet[4082]: E1013 05:36:53.934496 4082 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:36:53.934587 kubelet[4082]: W1013 05:36:53.934503 4082 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:36:53.934587 kubelet[4082]: E1013 05:36:53.934509 4082 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:36:53.934674 kubelet[4082]: E1013 05:36:53.934626 4082 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:36:53.934674 kubelet[4082]: W1013 05:36:53.934631 4082 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:36:53.934674 kubelet[4082]: E1013 05:36:53.934638 4082 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:36:53.935345 kubelet[4082]: E1013 05:36:53.935269 4082 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:36:53.935345 kubelet[4082]: W1013 05:36:53.935277 4082 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:36:53.935345 kubelet[4082]: E1013 05:36:53.935287 4082 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:36:53.935447 kubelet[4082]: E1013 05:36:53.935437 4082 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:36:53.935472 kubelet[4082]: W1013 05:36:53.935448 4082 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:36:53.935472 kubelet[4082]: E1013 05:36:53.935455 4082 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:36:53.935839 kubelet[4082]: E1013 05:36:53.935824 4082 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:36:53.935839 kubelet[4082]: W1013 05:36:53.935838 4082 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:36:53.935909 kubelet[4082]: E1013 05:36:53.935848 4082 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:36:54.868618 kubelet[4082]: I1013 05:36:54.868589 4082 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 13 05:36:54.876017 containerd[2611]: time="2025-10-13T05:36:54.875976242Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:36:54.878246 containerd[2611]: time="2025-10-13T05:36:54.878212979Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4446660" Oct 13 05:36:54.880723 containerd[2611]: time="2025-10-13T05:36:54.880667264Z" level=info msg="ImageCreate event name:\"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:36:54.888247 containerd[2611]: time="2025-10-13T05:36:54.888171558Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:36:54.888761 containerd[2611]: time="2025-10-13T05:36:54.888647898Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5939323\" in 1.376248985s" Oct 13 05:36:54.888761 containerd[2611]: time="2025-10-13T05:36:54.888679635Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\"" Oct 13 05:36:54.894979 containerd[2611]: time="2025-10-13T05:36:54.894950371Z" level=info msg="CreateContainer within sandbox \"15999be2cb539b1faef008ab19d7ec541243bce80722a9808e5527c4ae07e368\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Oct 13 05:36:54.911879 containerd[2611]: time="2025-10-13T05:36:54.908350405Z" level=info msg="Container f7122f9211499572814ac2aff7509c756425def1e063c4a81b30b3b00ba40d5e: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:36:54.927467 containerd[2611]: time="2025-10-13T05:36:54.927441901Z" level=info msg="CreateContainer within sandbox \"15999be2cb539b1faef008ab19d7ec541243bce80722a9808e5527c4ae07e368\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"f7122f9211499572814ac2aff7509c756425def1e063c4a81b30b3b00ba40d5e\"" Oct 13 05:36:54.927830 containerd[2611]: time="2025-10-13T05:36:54.927813638Z" level=info msg="StartContainer for \"f7122f9211499572814ac2aff7509c756425def1e063c4a81b30b3b00ba40d5e\"" Oct 13 05:36:54.929532 containerd[2611]: time="2025-10-13T05:36:54.929478747Z" level=info msg="connecting to shim f7122f9211499572814ac2aff7509c756425def1e063c4a81b30b3b00ba40d5e" address="unix:///run/containerd/s/d59bfb3d330d691e0120718a2585c06c5cd810bd70c7b17ba7ca97c36234fb23" protocol=ttrpc version=3 Oct 13 05:36:54.938565 kubelet[4082]: E1013 05:36:54.938546 4082 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:36:54.938730 kubelet[4082]: W1013 05:36:54.938656 4082 driver-call.go:149] FlexVolume: driver call failed: executable: 
/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 13 05:36:54.938730 kubelet[4082]: E1013 05:36:54.938680 4082 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 13 05:36:54.948320 kubelet[4082]: E1013 05:36:54.948189 4082 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 13 05:36:54.948320 kubelet[4082]: W1013 05:36:54.948240 4082 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 13 05:36:54.948320 kubelet[4082]: E1013 05:36:54.948251 4082 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 13 05:36:54.954363 systemd[1]: Started cri-containerd-f7122f9211499572814ac2aff7509c756425def1e063c4a81b30b3b00ba40d5e.scope - libcontainer container f7122f9211499572814ac2aff7509c756425def1e063c4a81b30b3b00ba40d5e.
Oct 13 05:36:54.986122 containerd[2611]: time="2025-10-13T05:36:54.986089463Z" level=info msg="StartContainer for \"f7122f9211499572814ac2aff7509c756425def1e063c4a81b30b3b00ba40d5e\" returns successfully"
Oct 13 05:36:54.991422 systemd[1]: cri-containerd-f7122f9211499572814ac2aff7509c756425def1e063c4a81b30b3b00ba40d5e.scope: Deactivated successfully.
Oct 13 05:36:54.993435 containerd[2611]: time="2025-10-13T05:36:54.993401889Z" level=info msg="received exit event container_id:\"f7122f9211499572814ac2aff7509c756425def1e063c4a81b30b3b00ba40d5e\" id:\"f7122f9211499572814ac2aff7509c756425def1e063c4a81b30b3b00ba40d5e\" pid:4797 exited_at:{seconds:1760333814 nanos:993112779}"
Oct 13 05:36:54.993620 containerd[2611]: time="2025-10-13T05:36:54.993598602Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f7122f9211499572814ac2aff7509c756425def1e063c4a81b30b3b00ba40d5e\" id:\"f7122f9211499572814ac2aff7509c756425def1e063c4a81b30b3b00ba40d5e\" pid:4797 exited_at:{seconds:1760333814 nanos:993112779}"
Oct 13 05:36:55.011377 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f7122f9211499572814ac2aff7509c756425def1e063c4a81b30b3b00ba40d5e-rootfs.mount: Deactivated successfully.
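The burst of kubelet errors above is the FlexVolume prober re-running /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the single argument init on every plugin-directory scan: the binary is absent, the call produces no stdout, and unmarshalling the empty output fails with "unexpected end of JSON input". For context, a FlexVolume driver's init call is expected to print a small JSON status object; the Go sketch below shows roughly what that output looks like and is an illustration based on the upstream FlexVolume examples, not a component of this system.

    // flexvol_init.go - rough sketch of the JSON a FlexVolume driver's
    // "init" call prints on success; field names follow the upstream
    // FlexVolume examples and are assumptions, not taken from this log.
    package main

    import (
        "encoding/json"
        "fmt"
    )

    type driverCapabilities struct {
        Attach bool `json:"attach"`
    }

    type driverStatus struct {
        Status       string              `json:"status"`
        Capabilities *driverCapabilities `json:"capabilities,omitempty"`
    }

    func main() {
        out, _ := json.Marshal(driverStatus{
            Status:       "Success",
            Capabilities: &driverCapabilities{Attach: false},
        })
        fmt.Println(string(out)) // {"status":"Success","capabilities":{"attach":false}}
    }

An empty stdout, as in the entries above, is exactly what makes the kubelet's JSON unmarshal fail and the nodeagent~uds directory get skipped.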
Oct 13 05:36:55.800320 kubelet[4082]: E1013 05:36:55.799414 4082 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zl54c" podUID="bcbf5018-3ffe-4d13-9b7d-1211daeb0d5f" Oct 13 05:36:55.886557 kubelet[4082]: I1013 05:36:55.886181 4082 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5d7746cbd7-ksgt7" podStartSLOduration=2.954266884 podStartE2EDuration="4.886166006s" podCreationTimestamp="2025-10-13 05:36:51 +0000 UTC" firstStartedPulling="2025-10-13 05:36:51.579920705 +0000 UTC m=+18.890350653" lastFinishedPulling="2025-10-13 05:36:53.511819828 +0000 UTC m=+20.822249775" observedRunningTime="2025-10-13 05:36:53.909368709 +0000 UTC m=+21.219798662" watchObservedRunningTime="2025-10-13 05:36:55.886166006 +0000 UTC m=+23.196595949" Oct 13 05:36:57.799814 kubelet[4082]: E1013 05:36:57.799766 4082 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zl54c" podUID="bcbf5018-3ffe-4d13-9b7d-1211daeb0d5f" Oct 13 05:36:57.878072 containerd[2611]: time="2025-10-13T05:36:57.877965653Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Oct 13 05:36:59.799843 kubelet[4082]: E1013 05:36:59.799799 4082 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zl54c" podUID="bcbf5018-3ffe-4d13-9b7d-1211daeb0d5f" Oct 13 05:37:01.494613 containerd[2611]: time="2025-10-13T05:37:01.494570538Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:37:01.498994 containerd[2611]: time="2025-10-13T05:37:01.498957576Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=70440613" Oct 13 05:37:01.504464 containerd[2611]: time="2025-10-13T05:37:01.504414458Z" level=info msg="ImageCreate event name:\"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:37:01.509093 containerd[2611]: time="2025-10-13T05:37:01.508662006Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:37:01.509093 containerd[2611]: time="2025-10-13T05:37:01.509007865Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"71933316\" in 3.631002685s" Oct 13 05:37:01.509093 containerd[2611]: time="2025-10-13T05:37:01.509030793Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\"" Oct 13 
05:37:01.520792 containerd[2611]: time="2025-10-13T05:37:01.520765255Z" level=info msg="CreateContainer within sandbox \"15999be2cb539b1faef008ab19d7ec541243bce80722a9808e5527c4ae07e368\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Oct 13 05:37:01.557782 containerd[2611]: time="2025-10-13T05:37:01.555027942Z" level=info msg="Container 8dadec4ad836590c439ef7900302e331f932be5ba03b991e45ea371108ad34d5: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:37:01.574483 containerd[2611]: time="2025-10-13T05:37:01.574458572Z" level=info msg="CreateContainer within sandbox \"15999be2cb539b1faef008ab19d7ec541243bce80722a9808e5527c4ae07e368\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"8dadec4ad836590c439ef7900302e331f932be5ba03b991e45ea371108ad34d5\"" Oct 13 05:37:01.577107 containerd[2611]: time="2025-10-13T05:37:01.574911071Z" level=info msg="StartContainer for \"8dadec4ad836590c439ef7900302e331f932be5ba03b991e45ea371108ad34d5\"" Oct 13 05:37:01.581283 containerd[2611]: time="2025-10-13T05:37:01.581251800Z" level=info msg="connecting to shim 8dadec4ad836590c439ef7900302e331f932be5ba03b991e45ea371108ad34d5" address="unix:///run/containerd/s/d59bfb3d330d691e0120718a2585c06c5cd810bd70c7b17ba7ca97c36234fb23" protocol=ttrpc version=3 Oct 13 05:37:01.606365 systemd[1]: Started cri-containerd-8dadec4ad836590c439ef7900302e331f932be5ba03b991e45ea371108ad34d5.scope - libcontainer container 8dadec4ad836590c439ef7900302e331f932be5ba03b991e45ea371108ad34d5. Oct 13 05:37:01.637962 containerd[2611]: time="2025-10-13T05:37:01.637939718Z" level=info msg="StartContainer for \"8dadec4ad836590c439ef7900302e331f932be5ba03b991e45ea371108ad34d5\" returns successfully" Oct 13 05:37:01.799055 kubelet[4082]: E1013 05:37:01.798949 4082 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zl54c" podUID="bcbf5018-3ffe-4d13-9b7d-1211daeb0d5f" Oct 13 05:37:02.887434 systemd[1]: cri-containerd-8dadec4ad836590c439ef7900302e331f932be5ba03b991e45ea371108ad34d5.scope: Deactivated successfully. Oct 13 05:37:02.888004 systemd[1]: cri-containerd-8dadec4ad836590c439ef7900302e331f932be5ba03b991e45ea371108ad34d5.scope: Consumed 424ms CPU time, 197.7M memory peak, 171.3M written to disk. Oct 13 05:37:02.890435 containerd[2611]: time="2025-10-13T05:37:02.890340838Z" level=info msg="received exit event container_id:\"8dadec4ad836590c439ef7900302e331f932be5ba03b991e45ea371108ad34d5\" id:\"8dadec4ad836590c439ef7900302e331f932be5ba03b991e45ea371108ad34d5\" pid:4859 exited_at:{seconds:1760333822 nanos:890137789}" Oct 13 05:37:02.890690 containerd[2611]: time="2025-10-13T05:37:02.890562681Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8dadec4ad836590c439ef7900302e331f932be5ba03b991e45ea371108ad34d5\" id:\"8dadec4ad836590c439ef7900302e331f932be5ba03b991e45ea371108ad34d5\" pid:4859 exited_at:{seconds:1760333822 nanos:890137789}" Oct 13 05:37:02.904502 kubelet[4082]: I1013 05:37:02.903753 4082 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Oct 13 05:37:02.914443 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8dadec4ad836590c439ef7900302e331f932be5ba03b991e45ea371108ad34d5-rootfs.mount: Deactivated successfully. 
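The pod_startup_latency_tracker entry above for calico-typha-5d7746cbd7-ksgt7 is internally consistent, and working the arithmetic makes the two durations easier to read: the E2E figure is the gap from pod creation to the observed running time, and the SLO figure appears to exclude the image-pull window (the last digit shifts by a nanosecond depending on rounding of the printed timestamps).

    podStartE2EDuration  = watchObservedRunningTime - podCreationTimestamp
                         = 05:36:55.886166006 - 05:36:51.000000000 = 4.886166006s
    image-pull window    = lastFinishedPulling - firstStartedPulling
                         = 05:36:53.511819828 - 05:36:51.579920705 = 1.931899123s
    podStartSLOduration  ≈ 4.886166006s - 1.931899123s ≈ 2.954266884s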
Oct 13 05:37:03.178608 systemd[1]: Created slice kubepods-burstable-pod3c2c8351_e421_4ee2_bd9d_c438fee1b10f.slice - libcontainer container kubepods-burstable-pod3c2c8351_e421_4ee2_bd9d_c438fee1b10f.slice. Oct 13 05:37:03.308256 kubelet[4082]: I1013 05:37:03.202197 4082 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9868w\" (UniqueName: \"kubernetes.io/projected/3c2c8351-e421-4ee2-bd9d-c438fee1b10f-kube-api-access-9868w\") pod \"coredns-674b8bbfcf-zlppj\" (UID: \"3c2c8351-e421-4ee2-bd9d-c438fee1b10f\") " pod="kube-system/coredns-674b8bbfcf-zlppj" Oct 13 05:37:03.308256 kubelet[4082]: I1013 05:37:03.202305 4082 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3c2c8351-e421-4ee2-bd9d-c438fee1b10f-config-volume\") pod \"coredns-674b8bbfcf-zlppj\" (UID: \"3c2c8351-e421-4ee2-bd9d-c438fee1b10f\") " pod="kube-system/coredns-674b8bbfcf-zlppj" Oct 13 05:37:03.367956 systemd[1]: Created slice kubepods-besteffort-pod3d3e3e5b_e8b4_465f_a46c_a3e4fc460834.slice - libcontainer container kubepods-besteffort-pod3d3e3e5b_e8b4_465f_a46c_a3e4fc460834.slice. Oct 13 05:37:03.403553 kubelet[4082]: I1013 05:37:03.403525 4082 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jsz9g\" (UniqueName: \"kubernetes.io/projected/3d3e3e5b-e8b4-465f-a46c-a3e4fc460834-kube-api-access-jsz9g\") pod \"calico-apiserver-679f59f4c7-nzrgh\" (UID: \"3d3e3e5b-e8b4-465f-a46c-a3e4fc460834\") " pod="calico-apiserver/calico-apiserver-679f59f4c7-nzrgh" Oct 13 05:37:03.403942 kubelet[4082]: I1013 05:37:03.403739 4082 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/3d3e3e5b-e8b4-465f-a46c-a3e4fc460834-calico-apiserver-certs\") pod \"calico-apiserver-679f59f4c7-nzrgh\" (UID: \"3d3e3e5b-e8b4-465f-a46c-a3e4fc460834\") " pod="calico-apiserver/calico-apiserver-679f59f4c7-nzrgh" Oct 13 05:37:03.610001 containerd[2611]: time="2025-10-13T05:37:03.609614175Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-zlppj,Uid:3c2c8351-e421-4ee2-bd9d-c438fee1b10f,Namespace:kube-system,Attempt:0,}" Oct 13 05:37:03.769818 containerd[2611]: time="2025-10-13T05:37:03.769652638Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-679f59f4c7-nzrgh,Uid:3d3e3e5b-e8b4-465f-a46c-a3e4fc460834,Namespace:calico-apiserver,Attempt:0,}" Oct 13 05:37:03.783360 systemd[1]: Created slice kubepods-besteffort-pod3bdb3ec5_b075_48e0_a6e5_da384dd68586.slice - libcontainer container kubepods-besteffort-pod3bdb3ec5_b075_48e0_a6e5_da384dd68586.slice. Oct 13 05:37:03.837502 systemd[1]: Created slice kubepods-besteffort-pod36f0260e_2ad0_4893_b347_1a72b5811844.slice - libcontainer container kubepods-besteffort-pod36f0260e_2ad0_4893_b347_1a72b5811844.slice. 
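The Created slice lines show how the kubelet names pod cgroups under systemd: the pod's QoS class is folded into the prefix (kubepods-burstable-…, kubepods-besteffort-…) and the pod UID is embedded with its dashes rewritten, since "-" denotes nesting in slice unit names. A rough Go sketch of that mapping for the two classes visible here (illustrative only, not the kubelet's actual code):

    // podslice.go - illustrative reconstruction of the slice names above;
    // guaranteed-QoS pods use a different prefix and are not covered here.
    package main

    import (
        "fmt"
        "strings"
    )

    // podSliceName builds names like
    // kubepods-burstable-pod3c2c8351_e421_4ee2_bd9d_c438fee1b10f.slice
    func podSliceName(qosClass, podUID string) string {
        return fmt.Sprintf("kubepods-%s-pod%s.slice", qosClass, strings.ReplaceAll(podUID, "-", "_"))
    }

    func main() {
        fmt.Println(podSliceName("burstable", "3c2c8351-e421-4ee2-bd9d-c438fee1b10f"))
        fmt.Println(podSliceName("besteffort", "3d3e3e5b-e8b4-465f-a46c-a3e4fc460834"))
    }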
Oct 13 05:37:03.838852 containerd[2611]: time="2025-10-13T05:37:03.838686393Z" level=error msg="Failed to destroy network for sandbox \"4ed1cc45f79dfbefb0b4ce0d3af5b32ba7317a196d3c81e589e032cbb9db975f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:37:03.846601 containerd[2611]: time="2025-10-13T05:37:03.846181985Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-zlppj,Uid:3c2c8351-e421-4ee2-bd9d-c438fee1b10f,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4ed1cc45f79dfbefb0b4ce0d3af5b32ba7317a196d3c81e589e032cbb9db975f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:37:03.848005 kubelet[4082]: E1013 05:37:03.846886 4082 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4ed1cc45f79dfbefb0b4ce0d3af5b32ba7317a196d3c81e589e032cbb9db975f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:37:03.848005 kubelet[4082]: E1013 05:37:03.846938 4082 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4ed1cc45f79dfbefb0b4ce0d3af5b32ba7317a196d3c81e589e032cbb9db975f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-zlppj" Oct 13 05:37:03.848005 kubelet[4082]: E1013 05:37:03.846959 4082 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4ed1cc45f79dfbefb0b4ce0d3af5b32ba7317a196d3c81e589e032cbb9db975f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-zlppj" Oct 13 05:37:03.848148 kubelet[4082]: E1013 05:37:03.847004 4082 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-zlppj_kube-system(3c2c8351-e421-4ee2-bd9d-c438fee1b10f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-zlppj_kube-system(3c2c8351-e421-4ee2-bd9d-c438fee1b10f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4ed1cc45f79dfbefb0b4ce0d3af5b32ba7317a196d3c81e589e032cbb9db975f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-zlppj" podUID="3c2c8351-e421-4ee2-bd9d-c438fee1b10f" Oct 13 05:37:03.870595 systemd[1]: Created slice kubepods-besteffort-pod03e171aa_6801_4329_80e3_ff8dedaab170.slice - libcontainer container kubepods-besteffort-pod03e171aa_6801_4329_80e3_ff8dedaab170.slice. 
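Every sandbox failure that follows has the same root cause spelled out in the error text: the calico CNI plugin stats /var/lib/calico/nodename, a file the calico/node container writes once it is running, and that container's image is only pulled further down in the log; until then every CNI ADD and DEL for these pods fails. A minimal sketch of that readiness condition, assuming only the path quoted in the error message (the helper itself is hypothetical):

    // nodename_check.go - sketch of the precondition described by the
    // error above; the path comes from the log, the function is illustrative.
    package main

    import (
        "fmt"
        "os"
    )

    // calicoNodeReady reports whether calico/node has written its node name,
    // which is what the CNI plugin checks before setting up pod networking.
    func calicoNodeReady() (bool, error) {
        _, err := os.Stat("/var/lib/calico/nodename")
        if os.IsNotExist(err) {
            return false, nil // same state as in the log: file not there yet
        }
        if err != nil {
            return false, err
        }
        return true, nil
    }

    func main() {
        ready, err := calicoNodeReady()
        fmt.Println(ready, err)
    }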
Oct 13 05:37:03.878435 systemd[1]: Created slice kubepods-burstable-podc6cf30b2_c7a0_46c2_9758_3e0cae4f8800.slice - libcontainer container kubepods-burstable-podc6cf30b2_c7a0_46c2_9758_3e0cae4f8800.slice. Oct 13 05:37:03.883246 containerd[2611]: time="2025-10-13T05:37:03.882846755Z" level=error msg="Failed to destroy network for sandbox \"631551eef644a8c394d5a6ea9c65464b056d90dc70defd7670bce5b147596d89\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:37:03.886266 containerd[2611]: time="2025-10-13T05:37:03.886158227Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-679f59f4c7-nzrgh,Uid:3d3e3e5b-e8b4-465f-a46c-a3e4fc460834,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"631551eef644a8c394d5a6ea9c65464b056d90dc70defd7670bce5b147596d89\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:37:03.892037 kubelet[4082]: E1013 05:37:03.889766 4082 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"631551eef644a8c394d5a6ea9c65464b056d90dc70defd7670bce5b147596d89\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:37:03.891673 systemd[1]: Created slice kubepods-besteffort-podbcbf5018_3ffe_4d13_9b7d_1211daeb0d5f.slice - libcontainer container kubepods-besteffort-podbcbf5018_3ffe_4d13_9b7d_1211daeb0d5f.slice. 
Oct 13 05:37:03.893809 kubelet[4082]: E1013 05:37:03.893768 4082 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"631551eef644a8c394d5a6ea9c65464b056d90dc70defd7670bce5b147596d89\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-679f59f4c7-nzrgh" Oct 13 05:37:03.893870 kubelet[4082]: E1013 05:37:03.893823 4082 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"631551eef644a8c394d5a6ea9c65464b056d90dc70defd7670bce5b147596d89\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-679f59f4c7-nzrgh" Oct 13 05:37:03.893921 kubelet[4082]: E1013 05:37:03.893894 4082 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-679f59f4c7-nzrgh_calico-apiserver(3d3e3e5b-e8b4-465f-a46c-a3e4fc460834)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-679f59f4c7-nzrgh_calico-apiserver(3d3e3e5b-e8b4-465f-a46c-a3e4fc460834)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"631551eef644a8c394d5a6ea9c65464b056d90dc70defd7670bce5b147596d89\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-679f59f4c7-nzrgh" podUID="3d3e3e5b-e8b4-465f-a46c-a3e4fc460834" Oct 13 05:37:03.904796 containerd[2611]: time="2025-10-13T05:37:03.903835773Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zl54c,Uid:bcbf5018-3ffe-4d13-9b7d-1211daeb0d5f,Namespace:calico-system,Attempt:0,}" Oct 13 05:37:03.908786 systemd[1]: Created slice kubepods-besteffort-pod93f709b6_d5d3_4324_9bd3_4718259e1d38.slice - libcontainer container kubepods-besteffort-pod93f709b6_d5d3_4324_9bd3_4718259e1d38.slice. 
Oct 13 05:37:03.912406 kubelet[4082]: I1013 05:37:03.912385 4082 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36f0260e-2ad0-4893-b347-1a72b5811844-config\") pod \"goldmane-54d579b49d-rjkcm\" (UID: \"36f0260e-2ad0-4893-b347-1a72b5811844\") " pod="calico-system/goldmane-54d579b49d-rjkcm" Oct 13 05:37:03.912842 kubelet[4082]: I1013 05:37:03.912686 4082 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/03e171aa-6801-4329-80e3-ff8dedaab170-calico-apiserver-certs\") pod \"calico-apiserver-679f59f4c7-gm94f\" (UID: \"03e171aa-6801-4329-80e3-ff8dedaab170\") " pod="calico-apiserver/calico-apiserver-679f59f4c7-gm94f" Oct 13 05:37:03.913152 kubelet[4082]: I1013 05:37:03.912929 4082 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3bdb3ec5-b075-48e0-a6e5-da384dd68586-whisker-ca-bundle\") pod \"whisker-6c9746f68b-74tj9\" (UID: \"3bdb3ec5-b075-48e0-a6e5-da384dd68586\") " pod="calico-system/whisker-6c9746f68b-74tj9" Oct 13 05:37:03.913264 kubelet[4082]: I1013 05:37:03.913252 4082 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/36f0260e-2ad0-4893-b347-1a72b5811844-goldmane-ca-bundle\") pod \"goldmane-54d579b49d-rjkcm\" (UID: \"36f0260e-2ad0-4893-b347-1a72b5811844\") " pod="calico-system/goldmane-54d579b49d-rjkcm" Oct 13 05:37:03.913524 kubelet[4082]: I1013 05:37:03.913356 4082 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-plg7z\" (UniqueName: \"kubernetes.io/projected/93f709b6-d5d3-4324-9bd3-4718259e1d38-kube-api-access-plg7z\") pod \"calico-kube-controllers-86b98dd68c-2ngs9\" (UID: \"93f709b6-d5d3-4324-9bd3-4718259e1d38\") " pod="calico-system/calico-kube-controllers-86b98dd68c-2ngs9" Oct 13 05:37:03.913618 kubelet[4082]: I1013 05:37:03.913607 4082 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/3bdb3ec5-b075-48e0-a6e5-da384dd68586-whisker-backend-key-pair\") pod \"whisker-6c9746f68b-74tj9\" (UID: \"3bdb3ec5-b075-48e0-a6e5-da384dd68586\") " pod="calico-system/whisker-6c9746f68b-74tj9" Oct 13 05:37:03.913773 kubelet[4082]: I1013 05:37:03.913738 4082 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxb5f\" (UniqueName: \"kubernetes.io/projected/3bdb3ec5-b075-48e0-a6e5-da384dd68586-kube-api-access-dxb5f\") pod \"whisker-6c9746f68b-74tj9\" (UID: \"3bdb3ec5-b075-48e0-a6e5-da384dd68586\") " pod="calico-system/whisker-6c9746f68b-74tj9" Oct 13 05:37:03.913914 kubelet[4082]: I1013 05:37:03.913903 4082 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6v67\" (UniqueName: \"kubernetes.io/projected/c6cf30b2-c7a0-46c2-9758-3e0cae4f8800-kube-api-access-s6v67\") pod \"coredns-674b8bbfcf-fff2d\" (UID: \"c6cf30b2-c7a0-46c2-9758-3e0cae4f8800\") " pod="kube-system/coredns-674b8bbfcf-fff2d" Oct 13 05:37:03.914055 kubelet[4082]: I1013 05:37:03.914004 4082 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kt5jz\" (UniqueName: 
\"kubernetes.io/projected/03e171aa-6801-4329-80e3-ff8dedaab170-kube-api-access-kt5jz\") pod \"calico-apiserver-679f59f4c7-gm94f\" (UID: \"03e171aa-6801-4329-80e3-ff8dedaab170\") " pod="calico-apiserver/calico-apiserver-679f59f4c7-gm94f" Oct 13 05:37:03.914265 kubelet[4082]: I1013 05:37:03.914253 4082 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/36f0260e-2ad0-4893-b347-1a72b5811844-goldmane-key-pair\") pod \"goldmane-54d579b49d-rjkcm\" (UID: \"36f0260e-2ad0-4893-b347-1a72b5811844\") " pod="calico-system/goldmane-54d579b49d-rjkcm" Oct 13 05:37:03.914494 kubelet[4082]: I1013 05:37:03.914483 4082 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/93f709b6-d5d3-4324-9bd3-4718259e1d38-tigera-ca-bundle\") pod \"calico-kube-controllers-86b98dd68c-2ngs9\" (UID: \"93f709b6-d5d3-4324-9bd3-4718259e1d38\") " pod="calico-system/calico-kube-controllers-86b98dd68c-2ngs9" Oct 13 05:37:03.914668 kubelet[4082]: I1013 05:37:03.914559 4082 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c6cf30b2-c7a0-46c2-9758-3e0cae4f8800-config-volume\") pod \"coredns-674b8bbfcf-fff2d\" (UID: \"c6cf30b2-c7a0-46c2-9758-3e0cae4f8800\") " pod="kube-system/coredns-674b8bbfcf-fff2d" Oct 13 05:37:03.914810 kubelet[4082]: I1013 05:37:03.914744 4082 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdz8k\" (UniqueName: \"kubernetes.io/projected/36f0260e-2ad0-4893-b347-1a72b5811844-kube-api-access-gdz8k\") pod \"goldmane-54d579b49d-rjkcm\" (UID: \"36f0260e-2ad0-4893-b347-1a72b5811844\") " pod="calico-system/goldmane-54d579b49d-rjkcm" Oct 13 05:37:03.918140 systemd[1]: run-netns-cni\x2d89c60a33\x2d201a\x2d8a05\x2d1f90\x2d588bf2865ffa.mount: Deactivated successfully. Oct 13 05:37:03.941393 containerd[2611]: time="2025-10-13T05:37:03.941320877Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Oct 13 05:37:03.984237 containerd[2611]: time="2025-10-13T05:37:03.984173996Z" level=error msg="Failed to destroy network for sandbox \"f38fd4ae592d82e328888b477ff4f06afc2cba66868e3e6fc22f69935474fd9c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:37:03.986746 systemd[1]: run-netns-cni\x2d54dac56f\x2d9c71\x2dee9c\x2ddbfd\x2defface79ed0b.mount: Deactivated successfully. 
Oct 13 05:37:03.988661 containerd[2611]: time="2025-10-13T05:37:03.988626691Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zl54c,Uid:bcbf5018-3ffe-4d13-9b7d-1211daeb0d5f,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f38fd4ae592d82e328888b477ff4f06afc2cba66868e3e6fc22f69935474fd9c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:37:03.988914 kubelet[4082]: E1013 05:37:03.988870 4082 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f38fd4ae592d82e328888b477ff4f06afc2cba66868e3e6fc22f69935474fd9c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:37:03.988980 kubelet[4082]: E1013 05:37:03.988926 4082 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f38fd4ae592d82e328888b477ff4f06afc2cba66868e3e6fc22f69935474fd9c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zl54c" Oct 13 05:37:03.988980 kubelet[4082]: E1013 05:37:03.988951 4082 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f38fd4ae592d82e328888b477ff4f06afc2cba66868e3e6fc22f69935474fd9c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zl54c" Oct 13 05:37:03.989033 kubelet[4082]: E1013 05:37:03.988992 4082 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-zl54c_calico-system(bcbf5018-3ffe-4d13-9b7d-1211daeb0d5f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-zl54c_calico-system(bcbf5018-3ffe-4d13-9b7d-1211daeb0d5f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f38fd4ae592d82e328888b477ff4f06afc2cba66868e3e6fc22f69935474fd9c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-zl54c" podUID="bcbf5018-3ffe-4d13-9b7d-1211daeb0d5f" Oct 13 05:37:04.086983 containerd[2611]: time="2025-10-13T05:37:04.086952144Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6c9746f68b-74tj9,Uid:3bdb3ec5-b075-48e0-a6e5-da384dd68586,Namespace:calico-system,Attempt:0,}" Oct 13 05:37:04.135180 containerd[2611]: time="2025-10-13T05:37:04.135033487Z" level=error msg="Failed to destroy network for sandbox \"0395c9f7887ecd74b299022386b46d3fee3d760d89b060203e4eb2bdb815f2c4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:37:04.138891 containerd[2611]: time="2025-10-13T05:37:04.138863862Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:whisker-6c9746f68b-74tj9,Uid:3bdb3ec5-b075-48e0-a6e5-da384dd68586,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0395c9f7887ecd74b299022386b46d3fee3d760d89b060203e4eb2bdb815f2c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:37:04.139132 kubelet[4082]: E1013 05:37:04.139103 4082 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0395c9f7887ecd74b299022386b46d3fee3d760d89b060203e4eb2bdb815f2c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:37:04.139190 kubelet[4082]: E1013 05:37:04.139161 4082 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0395c9f7887ecd74b299022386b46d3fee3d760d89b060203e4eb2bdb815f2c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6c9746f68b-74tj9" Oct 13 05:37:04.139190 kubelet[4082]: E1013 05:37:04.139182 4082 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0395c9f7887ecd74b299022386b46d3fee3d760d89b060203e4eb2bdb815f2c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6c9746f68b-74tj9" Oct 13 05:37:04.139273 kubelet[4082]: E1013 05:37:04.139251 4082 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6c9746f68b-74tj9_calico-system(3bdb3ec5-b075-48e0-a6e5-da384dd68586)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6c9746f68b-74tj9_calico-system(3bdb3ec5-b075-48e0-a6e5-da384dd68586)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0395c9f7887ecd74b299022386b46d3fee3d760d89b060203e4eb2bdb815f2c4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6c9746f68b-74tj9" podUID="3bdb3ec5-b075-48e0-a6e5-da384dd68586" Oct 13 05:37:04.161891 containerd[2611]: time="2025-10-13T05:37:04.161863400Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-rjkcm,Uid:36f0260e-2ad0-4893-b347-1a72b5811844,Namespace:calico-system,Attempt:0,}" Oct 13 05:37:04.194010 containerd[2611]: time="2025-10-13T05:37:04.193831042Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-679f59f4c7-gm94f,Uid:03e171aa-6801-4329-80e3-ff8dedaab170,Namespace:calico-apiserver,Attempt:0,}" Oct 13 05:37:04.194010 containerd[2611]: time="2025-10-13T05:37:04.193882311Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-fff2d,Uid:c6cf30b2-c7a0-46c2-9758-3e0cae4f8800,Namespace:kube-system,Attempt:0,}" Oct 13 05:37:04.204883 containerd[2611]: time="2025-10-13T05:37:04.204849765Z" level=error msg="Failed to destroy network 
for sandbox \"79a14bd39c68829350f2cd7be1d277f51359831251229d182a4e6b22af479380\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:37:04.229195 containerd[2611]: time="2025-10-13T05:37:04.229021256Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-rjkcm,Uid:36f0260e-2ad0-4893-b347-1a72b5811844,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"79a14bd39c68829350f2cd7be1d277f51359831251229d182a4e6b22af479380\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:37:04.230057 containerd[2611]: time="2025-10-13T05:37:04.229999107Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-86b98dd68c-2ngs9,Uid:93f709b6-d5d3-4324-9bd3-4718259e1d38,Namespace:calico-system,Attempt:0,}" Oct 13 05:37:04.230411 kubelet[4082]: E1013 05:37:04.230370 4082 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"79a14bd39c68829350f2cd7be1d277f51359831251229d182a4e6b22af479380\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:37:04.230641 kubelet[4082]: E1013 05:37:04.230543 4082 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"79a14bd39c68829350f2cd7be1d277f51359831251229d182a4e6b22af479380\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-rjkcm" Oct 13 05:37:04.230641 kubelet[4082]: E1013 05:37:04.230592 4082 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"79a14bd39c68829350f2cd7be1d277f51359831251229d182a4e6b22af479380\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-rjkcm" Oct 13 05:37:04.230761 kubelet[4082]: E1013 05:37:04.230740 4082 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-54d579b49d-rjkcm_calico-system(36f0260e-2ad0-4893-b347-1a72b5811844)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-54d579b49d-rjkcm_calico-system(36f0260e-2ad0-4893-b347-1a72b5811844)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"79a14bd39c68829350f2cd7be1d277f51359831251229d182a4e6b22af479380\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-rjkcm" podUID="36f0260e-2ad0-4893-b347-1a72b5811844" Oct 13 05:37:04.276809 containerd[2611]: time="2025-10-13T05:37:04.276733870Z" level=error msg="Failed to destroy network for sandbox \"8d06a6fff0915ab43f5defb8c5bcba751e6fb5981d71c7782f6ae6e2881f8730\"" 
error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:37:04.283151 containerd[2611]: time="2025-10-13T05:37:04.283110250Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-fff2d,Uid:c6cf30b2-c7a0-46c2-9758-3e0cae4f8800,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d06a6fff0915ab43f5defb8c5bcba751e6fb5981d71c7782f6ae6e2881f8730\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:37:04.283417 kubelet[4082]: E1013 05:37:04.283389 4082 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d06a6fff0915ab43f5defb8c5bcba751e6fb5981d71c7782f6ae6e2881f8730\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:37:04.283490 kubelet[4082]: E1013 05:37:04.283439 4082 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d06a6fff0915ab43f5defb8c5bcba751e6fb5981d71c7782f6ae6e2881f8730\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-fff2d" Oct 13 05:37:04.283490 kubelet[4082]: E1013 05:37:04.283457 4082 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d06a6fff0915ab43f5defb8c5bcba751e6fb5981d71c7782f6ae6e2881f8730\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-fff2d" Oct 13 05:37:04.283568 kubelet[4082]: E1013 05:37:04.283520 4082 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-fff2d_kube-system(c6cf30b2-c7a0-46c2-9758-3e0cae4f8800)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-fff2d_kube-system(c6cf30b2-c7a0-46c2-9758-3e0cae4f8800)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8d06a6fff0915ab43f5defb8c5bcba751e6fb5981d71c7782f6ae6e2881f8730\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-fff2d" podUID="c6cf30b2-c7a0-46c2-9758-3e0cae4f8800" Oct 13 05:37:04.295843 containerd[2611]: time="2025-10-13T05:37:04.295746249Z" level=error msg="Failed to destroy network for sandbox \"150906fb4aef2d62fb323317f581000655f07e23c4420aa2aa586d4626dec0e1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:37:04.300222 containerd[2611]: time="2025-10-13T05:37:04.300165775Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-679f59f4c7-gm94f,Uid:03e171aa-6801-4329-80e3-ff8dedaab170,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"150906fb4aef2d62fb323317f581000655f07e23c4420aa2aa586d4626dec0e1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:37:04.300998 kubelet[4082]: E1013 05:37:04.300348 4082 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"150906fb4aef2d62fb323317f581000655f07e23c4420aa2aa586d4626dec0e1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:37:04.300998 kubelet[4082]: E1013 05:37:04.300403 4082 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"150906fb4aef2d62fb323317f581000655f07e23c4420aa2aa586d4626dec0e1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-679f59f4c7-gm94f" Oct 13 05:37:04.300998 kubelet[4082]: E1013 05:37:04.300421 4082 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"150906fb4aef2d62fb323317f581000655f07e23c4420aa2aa586d4626dec0e1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-679f59f4c7-gm94f" Oct 13 05:37:04.301099 kubelet[4082]: E1013 05:37:04.300477 4082 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-679f59f4c7-gm94f_calico-apiserver(03e171aa-6801-4329-80e3-ff8dedaab170)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-679f59f4c7-gm94f_calico-apiserver(03e171aa-6801-4329-80e3-ff8dedaab170)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"150906fb4aef2d62fb323317f581000655f07e23c4420aa2aa586d4626dec0e1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-679f59f4c7-gm94f" podUID="03e171aa-6801-4329-80e3-ff8dedaab170" Oct 13 05:37:04.310031 containerd[2611]: time="2025-10-13T05:37:04.310004002Z" level=error msg="Failed to destroy network for sandbox \"00d4dddcfad1b1edb5267281c1cf43726be3829f7095e16b57fc68b5598ceeb4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:37:04.314655 containerd[2611]: time="2025-10-13T05:37:04.314623591Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-86b98dd68c-2ngs9,Uid:93f709b6-d5d3-4324-9bd3-4718259e1d38,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"00d4dddcfad1b1edb5267281c1cf43726be3829f7095e16b57fc68b5598ceeb4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:37:04.314864 kubelet[4082]: E1013 05:37:04.314842 4082 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"00d4dddcfad1b1edb5267281c1cf43726be3829f7095e16b57fc68b5598ceeb4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:37:04.314911 kubelet[4082]: E1013 05:37:04.314877 4082 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"00d4dddcfad1b1edb5267281c1cf43726be3829f7095e16b57fc68b5598ceeb4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-86b98dd68c-2ngs9" Oct 13 05:37:04.314963 kubelet[4082]: E1013 05:37:04.314948 4082 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"00d4dddcfad1b1edb5267281c1cf43726be3829f7095e16b57fc68b5598ceeb4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-86b98dd68c-2ngs9" Oct 13 05:37:04.315039 kubelet[4082]: E1013 05:37:04.315014 4082 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-86b98dd68c-2ngs9_calico-system(93f709b6-d5d3-4324-9bd3-4718259e1d38)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-86b98dd68c-2ngs9_calico-system(93f709b6-d5d3-4324-9bd3-4718259e1d38)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"00d4dddcfad1b1edb5267281c1cf43726be3829f7095e16b57fc68b5598ceeb4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-86b98dd68c-2ngs9" podUID="93f709b6-d5d3-4324-9bd3-4718259e1d38" Oct 13 05:37:10.855565 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2519983635.mount: Deactivated successfully. 
Oct 13 05:37:10.886738 containerd[2611]: time="2025-10-13T05:37:10.886693134Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:37:10.889583 containerd[2611]: time="2025-10-13T05:37:10.889545374Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=157078339" Oct 13 05:37:10.892399 containerd[2611]: time="2025-10-13T05:37:10.892357636Z" level=info msg="ImageCreate event name:\"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:37:10.896779 containerd[2611]: time="2025-10-13T05:37:10.896721268Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:37:10.897401 containerd[2611]: time="2025-10-13T05:37:10.897093449Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"157078201\" in 6.955742394s" Oct 13 05:37:10.897401 containerd[2611]: time="2025-10-13T05:37:10.897127666Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\"" Oct 13 05:37:10.922054 containerd[2611]: time="2025-10-13T05:37:10.922027571Z" level=info msg="CreateContainer within sandbox \"15999be2cb539b1faef008ab19d7ec541243bce80722a9808e5527c4ae07e368\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Oct 13 05:37:10.944569 containerd[2611]: time="2025-10-13T05:37:10.944536053Z" level=info msg="Container 07cc458a2b30a82d6d694328048f9dbbaae0f45f06359fbee2a9995a06610ebf: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:37:10.962808 containerd[2611]: time="2025-10-13T05:37:10.962774425Z" level=info msg="CreateContainer within sandbox \"15999be2cb539b1faef008ab19d7ec541243bce80722a9808e5527c4ae07e368\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"07cc458a2b30a82d6d694328048f9dbbaae0f45f06359fbee2a9995a06610ebf\"" Oct 13 05:37:10.963305 containerd[2611]: time="2025-10-13T05:37:10.963239619Z" level=info msg="StartContainer for \"07cc458a2b30a82d6d694328048f9dbbaae0f45f06359fbee2a9995a06610ebf\"" Oct 13 05:37:10.964971 containerd[2611]: time="2025-10-13T05:37:10.964934702Z" level=info msg="connecting to shim 07cc458a2b30a82d6d694328048f9dbbaae0f45f06359fbee2a9995a06610ebf" address="unix:///run/containerd/s/d59bfb3d330d691e0120718a2585c06c5cd810bd70c7b17ba7ca97c36234fb23" protocol=ttrpc version=3 Oct 13 05:37:10.986344 systemd[1]: Started cri-containerd-07cc458a2b30a82d6d694328048f9dbbaae0f45f06359fbee2a9995a06610ebf.scope - libcontainer container 07cc458a2b30a82d6d694328048f9dbbaae0f45f06359fbee2a9995a06610ebf. Oct 13 05:37:11.022368 containerd[2611]: time="2025-10-13T05:37:11.022329988Z" level=info msg="StartContainer for \"07cc458a2b30a82d6d694328048f9dbbaae0f45f06359fbee2a9995a06610ebf\" returns successfully" Oct 13 05:37:11.424940 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Oct 13 05:37:11.425068 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . 
All Rights Reserved. Oct 13 05:37:11.662541 kubelet[4082]: I1013 05:37:11.662500 4082 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3bdb3ec5-b075-48e0-a6e5-da384dd68586-whisker-ca-bundle\") pod \"3bdb3ec5-b075-48e0-a6e5-da384dd68586\" (UID: \"3bdb3ec5-b075-48e0-a6e5-da384dd68586\") " Oct 13 05:37:11.662541 kubelet[4082]: I1013 05:37:11.662543 4082 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dxb5f\" (UniqueName: \"kubernetes.io/projected/3bdb3ec5-b075-48e0-a6e5-da384dd68586-kube-api-access-dxb5f\") pod \"3bdb3ec5-b075-48e0-a6e5-da384dd68586\" (UID: \"3bdb3ec5-b075-48e0-a6e5-da384dd68586\") " Oct 13 05:37:11.665128 kubelet[4082]: I1013 05:37:11.662580 4082 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/3bdb3ec5-b075-48e0-a6e5-da384dd68586-whisker-backend-key-pair\") pod \"3bdb3ec5-b075-48e0-a6e5-da384dd68586\" (UID: \"3bdb3ec5-b075-48e0-a6e5-da384dd68586\") " Oct 13 05:37:11.665128 kubelet[4082]: I1013 05:37:11.663107 4082 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3bdb3ec5-b075-48e0-a6e5-da384dd68586-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "3bdb3ec5-b075-48e0-a6e5-da384dd68586" (UID: "3bdb3ec5-b075-48e0-a6e5-da384dd68586"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Oct 13 05:37:11.666498 kubelet[4082]: I1013 05:37:11.666419 4082 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3bdb3ec5-b075-48e0-a6e5-da384dd68586-kube-api-access-dxb5f" (OuterVolumeSpecName: "kube-api-access-dxb5f") pod "3bdb3ec5-b075-48e0-a6e5-da384dd68586" (UID: "3bdb3ec5-b075-48e0-a6e5-da384dd68586"). InnerVolumeSpecName "kube-api-access-dxb5f". PluginName "kubernetes.io/projected", VolumeGIDValue "" Oct 13 05:37:11.668133 kubelet[4082]: I1013 05:37:11.668103 4082 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3bdb3ec5-b075-48e0-a6e5-da384dd68586-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "3bdb3ec5-b075-48e0-a6e5-da384dd68586" (UID: "3bdb3ec5-b075-48e0-a6e5-da384dd68586"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Oct 13 05:37:11.763864 kubelet[4082]: I1013 05:37:11.763781 4082 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/3bdb3ec5-b075-48e0-a6e5-da384dd68586-whisker-backend-key-pair\") on node \"ci-4487.0.0-a-8f52350bac\" DevicePath \"\"" Oct 13 05:37:11.763864 kubelet[4082]: I1013 05:37:11.763805 4082 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3bdb3ec5-b075-48e0-a6e5-da384dd68586-whisker-ca-bundle\") on node \"ci-4487.0.0-a-8f52350bac\" DevicePath \"\"" Oct 13 05:37:11.763864 kubelet[4082]: I1013 05:37:11.763815 4082 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dxb5f\" (UniqueName: \"kubernetes.io/projected/3bdb3ec5-b075-48e0-a6e5-da384dd68586-kube-api-access-dxb5f\") on node \"ci-4487.0.0-a-8f52350bac\" DevicePath \"\"" Oct 13 05:37:11.855063 systemd[1]: var-lib-kubelet-pods-3bdb3ec5\x2db075\x2d48e0\x2da6e5\x2dda384dd68586-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ddxb5f.mount: Deactivated successfully. Oct 13 05:37:11.855176 systemd[1]: var-lib-kubelet-pods-3bdb3ec5\x2db075\x2d48e0\x2da6e5\x2dda384dd68586-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Oct 13 05:37:11.964372 systemd[1]: Removed slice kubepods-besteffort-pod3bdb3ec5_b075_48e0_a6e5_da384dd68586.slice - libcontainer container kubepods-besteffort-pod3bdb3ec5_b075_48e0_a6e5_da384dd68586.slice. Oct 13 05:37:11.976685 kubelet[4082]: I1013 05:37:11.976188 4082 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-xqksf" podStartSLOduration=1.996037429 podStartE2EDuration="20.976172022s" podCreationTimestamp="2025-10-13 05:36:51 +0000 UTC" firstStartedPulling="2025-10-13 05:36:51.917786112 +0000 UTC m=+19.228216081" lastFinishedPulling="2025-10-13 05:37:10.897920716 +0000 UTC m=+38.208350674" observedRunningTime="2025-10-13 05:37:11.975881027 +0000 UTC m=+39.286310979" watchObservedRunningTime="2025-10-13 05:37:11.976172022 +0000 UTC m=+39.286601974" Oct 13 05:37:12.047618 systemd[1]: Created slice kubepods-besteffort-pod37c4d886_169b_4a3a_907c_49cde1e8da10.slice - libcontainer container kubepods-besteffort-pod37c4d886_169b_4a3a_907c_49cde1e8da10.slice. 
Oct 13 05:37:12.065019 kubelet[4082]: I1013 05:37:12.064997 4082 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/37c4d886-169b-4a3a-907c-49cde1e8da10-whisker-backend-key-pair\") pod \"whisker-676db78f7-mlnzh\" (UID: \"37c4d886-169b-4a3a-907c-49cde1e8da10\") " pod="calico-system/whisker-676db78f7-mlnzh" Oct 13 05:37:12.065523 kubelet[4082]: I1013 05:37:12.065033 4082 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/37c4d886-169b-4a3a-907c-49cde1e8da10-whisker-ca-bundle\") pod \"whisker-676db78f7-mlnzh\" (UID: \"37c4d886-169b-4a3a-907c-49cde1e8da10\") " pod="calico-system/whisker-676db78f7-mlnzh" Oct 13 05:37:12.065523 kubelet[4082]: I1013 05:37:12.065056 4082 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hr5jd\" (UniqueName: \"kubernetes.io/projected/37c4d886-169b-4a3a-907c-49cde1e8da10-kube-api-access-hr5jd\") pod \"whisker-676db78f7-mlnzh\" (UID: \"37c4d886-169b-4a3a-907c-49cde1e8da10\") " pod="calico-system/whisker-676db78f7-mlnzh" Oct 13 05:37:12.351875 containerd[2611]: time="2025-10-13T05:37:12.351778213Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-676db78f7-mlnzh,Uid:37c4d886-169b-4a3a-907c-49cde1e8da10,Namespace:calico-system,Attempt:0,}" Oct 13 05:37:12.466042 systemd-networkd[2245]: cali444682add41: Link UP Oct 13 05:37:12.466266 systemd-networkd[2245]: cali444682add41: Gained carrier Oct 13 05:37:12.480067 containerd[2611]: 2025-10-13 05:37:12.380 [INFO][5187] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 13 05:37:12.480067 containerd[2611]: 2025-10-13 05:37:12.388 [INFO][5187] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4487.0.0--a--8f52350bac-k8s-whisker--676db78f7--mlnzh-eth0 whisker-676db78f7- calico-system 37c4d886-169b-4a3a-907c-49cde1e8da10 882 0 2025-10-13 05:37:12 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:676db78f7 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4487.0.0-a-8f52350bac whisker-676db78f7-mlnzh eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali444682add41 [] [] }} ContainerID="210d2fbd86bcb2595750a7be8727dd8583d916f537b4637154c9846fb67c924f" Namespace="calico-system" Pod="whisker-676db78f7-mlnzh" WorkloadEndpoint="ci--4487.0.0--a--8f52350bac-k8s-whisker--676db78f7--mlnzh-" Oct 13 05:37:12.480067 containerd[2611]: 2025-10-13 05:37:12.388 [INFO][5187] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="210d2fbd86bcb2595750a7be8727dd8583d916f537b4637154c9846fb67c924f" Namespace="calico-system" Pod="whisker-676db78f7-mlnzh" WorkloadEndpoint="ci--4487.0.0--a--8f52350bac-k8s-whisker--676db78f7--mlnzh-eth0" Oct 13 05:37:12.480067 containerd[2611]: 2025-10-13 05:37:12.409 [INFO][5199] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="210d2fbd86bcb2595750a7be8727dd8583d916f537b4637154c9846fb67c924f" HandleID="k8s-pod-network.210d2fbd86bcb2595750a7be8727dd8583d916f537b4637154c9846fb67c924f" Workload="ci--4487.0.0--a--8f52350bac-k8s-whisker--676db78f7--mlnzh-eth0" Oct 13 05:37:12.480312 containerd[2611]: 2025-10-13 05:37:12.409 [INFO][5199] ipam/ipam_plugin.go 265: Auto assigning 
IP ContainerID="210d2fbd86bcb2595750a7be8727dd8583d916f537b4637154c9846fb67c924f" HandleID="k8s-pod-network.210d2fbd86bcb2595750a7be8727dd8583d916f537b4637154c9846fb67c924f" Workload="ci--4487.0.0--a--8f52350bac-k8s-whisker--676db78f7--mlnzh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5040), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4487.0.0-a-8f52350bac", "pod":"whisker-676db78f7-mlnzh", "timestamp":"2025-10-13 05:37:12.40929471 +0000 UTC"}, Hostname:"ci-4487.0.0-a-8f52350bac", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 05:37:12.480312 containerd[2611]: 2025-10-13 05:37:12.409 [INFO][5199] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 05:37:12.480312 containerd[2611]: 2025-10-13 05:37:12.409 [INFO][5199] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Oct 13 05:37:12.480312 containerd[2611]: 2025-10-13 05:37:12.409 [INFO][5199] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4487.0.0-a-8f52350bac' Oct 13 05:37:12.480312 containerd[2611]: 2025-10-13 05:37:12.415 [INFO][5199] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.210d2fbd86bcb2595750a7be8727dd8583d916f537b4637154c9846fb67c924f" host="ci-4487.0.0-a-8f52350bac" Oct 13 05:37:12.480312 containerd[2611]: 2025-10-13 05:37:12.418 [INFO][5199] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4487.0.0-a-8f52350bac" Oct 13 05:37:12.480312 containerd[2611]: 2025-10-13 05:37:12.421 [INFO][5199] ipam/ipam.go 511: Trying affinity for 192.168.125.192/26 host="ci-4487.0.0-a-8f52350bac" Oct 13 05:37:12.480312 containerd[2611]: 2025-10-13 05:37:12.422 [INFO][5199] ipam/ipam.go 158: Attempting to load block cidr=192.168.125.192/26 host="ci-4487.0.0-a-8f52350bac" Oct 13 05:37:12.480312 containerd[2611]: 2025-10-13 05:37:12.424 [INFO][5199] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.125.192/26 host="ci-4487.0.0-a-8f52350bac" Oct 13 05:37:12.480524 containerd[2611]: 2025-10-13 05:37:12.424 [INFO][5199] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.125.192/26 handle="k8s-pod-network.210d2fbd86bcb2595750a7be8727dd8583d916f537b4637154c9846fb67c924f" host="ci-4487.0.0-a-8f52350bac" Oct 13 05:37:12.480524 containerd[2611]: 2025-10-13 05:37:12.425 [INFO][5199] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.210d2fbd86bcb2595750a7be8727dd8583d916f537b4637154c9846fb67c924f Oct 13 05:37:12.480524 containerd[2611]: 2025-10-13 05:37:12.429 [INFO][5199] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.125.192/26 handle="k8s-pod-network.210d2fbd86bcb2595750a7be8727dd8583d916f537b4637154c9846fb67c924f" host="ci-4487.0.0-a-8f52350bac" Oct 13 05:37:12.480524 containerd[2611]: 2025-10-13 05:37:12.438 [INFO][5199] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.125.193/26] block=192.168.125.192/26 handle="k8s-pod-network.210d2fbd86bcb2595750a7be8727dd8583d916f537b4637154c9846fb67c924f" host="ci-4487.0.0-a-8f52350bac" Oct 13 05:37:12.480524 containerd[2611]: 2025-10-13 05:37:12.438 [INFO][5199] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.125.193/26] handle="k8s-pod-network.210d2fbd86bcb2595750a7be8727dd8583d916f537b4637154c9846fb67c924f" host="ci-4487.0.0-a-8f52350bac" Oct 13 05:37:12.480524 containerd[2611]: 2025-10-13 05:37:12.438 
[INFO][5199] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Oct 13 05:37:12.480524 containerd[2611]: 2025-10-13 05:37:12.438 [INFO][5199] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.125.193/26] IPv6=[] ContainerID="210d2fbd86bcb2595750a7be8727dd8583d916f537b4637154c9846fb67c924f" HandleID="k8s-pod-network.210d2fbd86bcb2595750a7be8727dd8583d916f537b4637154c9846fb67c924f" Workload="ci--4487.0.0--a--8f52350bac-k8s-whisker--676db78f7--mlnzh-eth0" Oct 13 05:37:12.480681 containerd[2611]: 2025-10-13 05:37:12.441 [INFO][5187] cni-plugin/k8s.go 418: Populated endpoint ContainerID="210d2fbd86bcb2595750a7be8727dd8583d916f537b4637154c9846fb67c924f" Namespace="calico-system" Pod="whisker-676db78f7-mlnzh" WorkloadEndpoint="ci--4487.0.0--a--8f52350bac-k8s-whisker--676db78f7--mlnzh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487.0.0--a--8f52350bac-k8s-whisker--676db78f7--mlnzh-eth0", GenerateName:"whisker-676db78f7-", Namespace:"calico-system", SelfLink:"", UID:"37c4d886-169b-4a3a-907c-49cde1e8da10", ResourceVersion:"882", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 37, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"676db78f7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487.0.0-a-8f52350bac", ContainerID:"", Pod:"whisker-676db78f7-mlnzh", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.125.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali444682add41", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:37:12.480681 containerd[2611]: 2025-10-13 05:37:12.441 [INFO][5187] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.125.193/32] ContainerID="210d2fbd86bcb2595750a7be8727dd8583d916f537b4637154c9846fb67c924f" Namespace="calico-system" Pod="whisker-676db78f7-mlnzh" WorkloadEndpoint="ci--4487.0.0--a--8f52350bac-k8s-whisker--676db78f7--mlnzh-eth0" Oct 13 05:37:12.480760 containerd[2611]: 2025-10-13 05:37:12.441 [INFO][5187] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali444682add41 ContainerID="210d2fbd86bcb2595750a7be8727dd8583d916f537b4637154c9846fb67c924f" Namespace="calico-system" Pod="whisker-676db78f7-mlnzh" WorkloadEndpoint="ci--4487.0.0--a--8f52350bac-k8s-whisker--676db78f7--mlnzh-eth0" Oct 13 05:37:12.480760 containerd[2611]: 2025-10-13 05:37:12.466 [INFO][5187] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="210d2fbd86bcb2595750a7be8727dd8583d916f537b4637154c9846fb67c924f" Namespace="calico-system" Pod="whisker-676db78f7-mlnzh" WorkloadEndpoint="ci--4487.0.0--a--8f52350bac-k8s-whisker--676db78f7--mlnzh-eth0" Oct 13 05:37:12.480806 containerd[2611]: 2025-10-13 05:37:12.466 [INFO][5187] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="210d2fbd86bcb2595750a7be8727dd8583d916f537b4637154c9846fb67c924f" Namespace="calico-system" Pod="whisker-676db78f7-mlnzh" WorkloadEndpoint="ci--4487.0.0--a--8f52350bac-k8s-whisker--676db78f7--mlnzh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487.0.0--a--8f52350bac-k8s-whisker--676db78f7--mlnzh-eth0", GenerateName:"whisker-676db78f7-", Namespace:"calico-system", SelfLink:"", UID:"37c4d886-169b-4a3a-907c-49cde1e8da10", ResourceVersion:"882", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 37, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"676db78f7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487.0.0-a-8f52350bac", ContainerID:"210d2fbd86bcb2595750a7be8727dd8583d916f537b4637154c9846fb67c924f", Pod:"whisker-676db78f7-mlnzh", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.125.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali444682add41", MAC:"46:ff:16:b5:be:95", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:37:12.480863 containerd[2611]: 2025-10-13 05:37:12.477 [INFO][5187] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="210d2fbd86bcb2595750a7be8727dd8583d916f537b4637154c9846fb67c924f" Namespace="calico-system" Pod="whisker-676db78f7-mlnzh" WorkloadEndpoint="ci--4487.0.0--a--8f52350bac-k8s-whisker--676db78f7--mlnzh-eth0" Oct 13 05:37:12.518359 containerd[2611]: time="2025-10-13T05:37:12.518240760Z" level=info msg="connecting to shim 210d2fbd86bcb2595750a7be8727dd8583d916f537b4637154c9846fb67c924f" address="unix:///run/containerd/s/11f64fed64587e4af9fb32865745a24bb9032ce7f12dc832a9348af6b282663e" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:37:12.536359 systemd[1]: Started cri-containerd-210d2fbd86bcb2595750a7be8727dd8583d916f537b4637154c9846fb67c924f.scope - libcontainer container 210d2fbd86bcb2595750a7be8727dd8583d916f537b4637154c9846fb67c924f. 
Oct 13 05:37:12.579498 containerd[2611]: time="2025-10-13T05:37:12.579463676Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-676db78f7-mlnzh,Uid:37c4d886-169b-4a3a-907c-49cde1e8da10,Namespace:calico-system,Attempt:0,} returns sandbox id \"210d2fbd86bcb2595750a7be8727dd8583d916f537b4637154c9846fb67c924f\"" Oct 13 05:37:12.580982 containerd[2611]: time="2025-10-13T05:37:12.580936643Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Oct 13 05:37:12.801132 kubelet[4082]: I1013 05:37:12.801095 4082 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3bdb3ec5-b075-48e0-a6e5-da384dd68586" path="/var/lib/kubelet/pods/3bdb3ec5-b075-48e0-a6e5-da384dd68586/volumes" Oct 13 05:37:13.435162 systemd-networkd[2245]: vxlan.calico: Link UP Oct 13 05:37:13.435172 systemd-networkd[2245]: vxlan.calico: Gained carrier Oct 13 05:37:13.533314 systemd-networkd[2245]: cali444682add41: Gained IPv6LL Oct 13 05:37:14.027167 containerd[2611]: time="2025-10-13T05:37:14.027127827Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:37:14.030531 containerd[2611]: time="2025-10-13T05:37:14.030437196Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4661291" Oct 13 05:37:14.033196 containerd[2611]: time="2025-10-13T05:37:14.033171492Z" level=info msg="ImageCreate event name:\"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:37:14.037235 containerd[2611]: time="2025-10-13T05:37:14.037132008Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:37:14.037647 containerd[2611]: time="2025-10-13T05:37:14.037622749Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"6153986\" in 1.456655478s" Oct 13 05:37:14.037726 containerd[2611]: time="2025-10-13T05:37:14.037712570Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\"" Oct 13 05:37:14.044033 containerd[2611]: time="2025-10-13T05:37:14.044005499Z" level=info msg="CreateContainer within sandbox \"210d2fbd86bcb2595750a7be8727dd8583d916f537b4637154c9846fb67c924f\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Oct 13 05:37:14.071943 containerd[2611]: time="2025-10-13T05:37:14.070339862Z" level=info msg="Container f28f4019e230acfc3c4ad1463bdd385f1e9bacdba77f2bb16b6df0ed328381c6: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:37:14.086734 containerd[2611]: time="2025-10-13T05:37:14.086705646Z" level=info msg="CreateContainer within sandbox \"210d2fbd86bcb2595750a7be8727dd8583d916f537b4637154c9846fb67c924f\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"f28f4019e230acfc3c4ad1463bdd385f1e9bacdba77f2bb16b6df0ed328381c6\"" Oct 13 05:37:14.087231 containerd[2611]: time="2025-10-13T05:37:14.087122697Z" level=info msg="StartContainer for 
\"f28f4019e230acfc3c4ad1463bdd385f1e9bacdba77f2bb16b6df0ed328381c6\"" Oct 13 05:37:14.088252 containerd[2611]: time="2025-10-13T05:37:14.088189954Z" level=info msg="connecting to shim f28f4019e230acfc3c4ad1463bdd385f1e9bacdba77f2bb16b6df0ed328381c6" address="unix:///run/containerd/s/11f64fed64587e4af9fb32865745a24bb9032ce7f12dc832a9348af6b282663e" protocol=ttrpc version=3 Oct 13 05:37:14.112357 systemd[1]: Started cri-containerd-f28f4019e230acfc3c4ad1463bdd385f1e9bacdba77f2bb16b6df0ed328381c6.scope - libcontainer container f28f4019e230acfc3c4ad1463bdd385f1e9bacdba77f2bb16b6df0ed328381c6. Oct 13 05:37:14.157018 containerd[2611]: time="2025-10-13T05:37:14.156975666Z" level=info msg="StartContainer for \"f28f4019e230acfc3c4ad1463bdd385f1e9bacdba77f2bb16b6df0ed328381c6\" returns successfully" Oct 13 05:37:14.159451 containerd[2611]: time="2025-10-13T05:37:14.159393004Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Oct 13 05:37:14.493408 systemd-networkd[2245]: vxlan.calico: Gained IPv6LL Oct 13 05:37:15.799758 containerd[2611]: time="2025-10-13T05:37:15.799682616Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-679f59f4c7-nzrgh,Uid:3d3e3e5b-e8b4-465f-a46c-a3e4fc460834,Namespace:calico-apiserver,Attempt:0,}" Oct 13 05:37:15.800316 containerd[2611]: time="2025-10-13T05:37:15.799682614Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-rjkcm,Uid:36f0260e-2ad0-4893-b347-1a72b5811844,Namespace:calico-system,Attempt:0,}" Oct 13 05:37:15.930373 systemd-networkd[2245]: califdb0794998c: Link UP Oct 13 05:37:15.930677 systemd-networkd[2245]: califdb0794998c: Gained carrier Oct 13 05:37:15.947890 containerd[2611]: 2025-10-13 05:37:15.849 [INFO][5493] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4487.0.0--a--8f52350bac-k8s-calico--apiserver--679f59f4c7--nzrgh-eth0 calico-apiserver-679f59f4c7- calico-apiserver 3d3e3e5b-e8b4-465f-a46c-a3e4fc460834 805 0 2025-10-13 05:36:48 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:679f59f4c7 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4487.0.0-a-8f52350bac calico-apiserver-679f59f4c7-nzrgh eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] califdb0794998c [] [] }} ContainerID="f703f24b22636d1903d514a8880df743e269e0ad83c1ef5591b83b7ebf347a2b" Namespace="calico-apiserver" Pod="calico-apiserver-679f59f4c7-nzrgh" WorkloadEndpoint="ci--4487.0.0--a--8f52350bac-k8s-calico--apiserver--679f59f4c7--nzrgh-" Oct 13 05:37:15.947890 containerd[2611]: 2025-10-13 05:37:15.849 [INFO][5493] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f703f24b22636d1903d514a8880df743e269e0ad83c1ef5591b83b7ebf347a2b" Namespace="calico-apiserver" Pod="calico-apiserver-679f59f4c7-nzrgh" WorkloadEndpoint="ci--4487.0.0--a--8f52350bac-k8s-calico--apiserver--679f59f4c7--nzrgh-eth0" Oct 13 05:37:15.947890 containerd[2611]: 2025-10-13 05:37:15.888 [INFO][5518] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f703f24b22636d1903d514a8880df743e269e0ad83c1ef5591b83b7ebf347a2b" HandleID="k8s-pod-network.f703f24b22636d1903d514a8880df743e269e0ad83c1ef5591b83b7ebf347a2b" Workload="ci--4487.0.0--a--8f52350bac-k8s-calico--apiserver--679f59f4c7--nzrgh-eth0" Oct 13 05:37:15.948509 containerd[2611]: 
2025-10-13 05:37:15.888 [INFO][5518] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f703f24b22636d1903d514a8880df743e269e0ad83c1ef5591b83b7ebf347a2b" HandleID="k8s-pod-network.f703f24b22636d1903d514a8880df743e269e0ad83c1ef5591b83b7ebf347a2b" Workload="ci--4487.0.0--a--8f52350bac-k8s-calico--apiserver--679f59f4c7--nzrgh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cd7a0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4487.0.0-a-8f52350bac", "pod":"calico-apiserver-679f59f4c7-nzrgh", "timestamp":"2025-10-13 05:37:15.888417255 +0000 UTC"}, Hostname:"ci-4487.0.0-a-8f52350bac", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 05:37:15.948509 containerd[2611]: 2025-10-13 05:37:15.888 [INFO][5518] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 05:37:15.948509 containerd[2611]: 2025-10-13 05:37:15.888 [INFO][5518] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Oct 13 05:37:15.948509 containerd[2611]: 2025-10-13 05:37:15.888 [INFO][5518] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4487.0.0-a-8f52350bac' Oct 13 05:37:15.948509 containerd[2611]: 2025-10-13 05:37:15.894 [INFO][5518] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f703f24b22636d1903d514a8880df743e269e0ad83c1ef5591b83b7ebf347a2b" host="ci-4487.0.0-a-8f52350bac" Oct 13 05:37:15.948509 containerd[2611]: 2025-10-13 05:37:15.897 [INFO][5518] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4487.0.0-a-8f52350bac" Oct 13 05:37:15.948509 containerd[2611]: 2025-10-13 05:37:15.900 [INFO][5518] ipam/ipam.go 511: Trying affinity for 192.168.125.192/26 host="ci-4487.0.0-a-8f52350bac" Oct 13 05:37:15.948509 containerd[2611]: 2025-10-13 05:37:15.901 [INFO][5518] ipam/ipam.go 158: Attempting to load block cidr=192.168.125.192/26 host="ci-4487.0.0-a-8f52350bac" Oct 13 05:37:15.948509 containerd[2611]: 2025-10-13 05:37:15.902 [INFO][5518] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.125.192/26 host="ci-4487.0.0-a-8f52350bac" Oct 13 05:37:15.948741 containerd[2611]: 2025-10-13 05:37:15.903 [INFO][5518] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.125.192/26 handle="k8s-pod-network.f703f24b22636d1903d514a8880df743e269e0ad83c1ef5591b83b7ebf347a2b" host="ci-4487.0.0-a-8f52350bac" Oct 13 05:37:15.948741 containerd[2611]: 2025-10-13 05:37:15.904 [INFO][5518] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.f703f24b22636d1903d514a8880df743e269e0ad83c1ef5591b83b7ebf347a2b Oct 13 05:37:15.948741 containerd[2611]: 2025-10-13 05:37:15.908 [INFO][5518] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.125.192/26 handle="k8s-pod-network.f703f24b22636d1903d514a8880df743e269e0ad83c1ef5591b83b7ebf347a2b" host="ci-4487.0.0-a-8f52350bac" Oct 13 05:37:15.948741 containerd[2611]: 2025-10-13 05:37:15.917 [INFO][5518] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.125.194/26] block=192.168.125.192/26 handle="k8s-pod-network.f703f24b22636d1903d514a8880df743e269e0ad83c1ef5591b83b7ebf347a2b" host="ci-4487.0.0-a-8f52350bac" Oct 13 05:37:15.948741 containerd[2611]: 2025-10-13 05:37:15.918 [INFO][5518] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.125.194/26] handle="k8s-pod-network.f703f24b22636d1903d514a8880df743e269e0ad83c1ef5591b83b7ebf347a2b" 
host="ci-4487.0.0-a-8f52350bac" Oct 13 05:37:15.948741 containerd[2611]: 2025-10-13 05:37:15.918 [INFO][5518] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Oct 13 05:37:15.948741 containerd[2611]: 2025-10-13 05:37:15.918 [INFO][5518] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.125.194/26] IPv6=[] ContainerID="f703f24b22636d1903d514a8880df743e269e0ad83c1ef5591b83b7ebf347a2b" HandleID="k8s-pod-network.f703f24b22636d1903d514a8880df743e269e0ad83c1ef5591b83b7ebf347a2b" Workload="ci--4487.0.0--a--8f52350bac-k8s-calico--apiserver--679f59f4c7--nzrgh-eth0" Oct 13 05:37:15.948891 containerd[2611]: 2025-10-13 05:37:15.921 [INFO][5493] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f703f24b22636d1903d514a8880df743e269e0ad83c1ef5591b83b7ebf347a2b" Namespace="calico-apiserver" Pod="calico-apiserver-679f59f4c7-nzrgh" WorkloadEndpoint="ci--4487.0.0--a--8f52350bac-k8s-calico--apiserver--679f59f4c7--nzrgh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487.0.0--a--8f52350bac-k8s-calico--apiserver--679f59f4c7--nzrgh-eth0", GenerateName:"calico-apiserver-679f59f4c7-", Namespace:"calico-apiserver", SelfLink:"", UID:"3d3e3e5b-e8b4-465f-a46c-a3e4fc460834", ResourceVersion:"805", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 36, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"679f59f4c7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487.0.0-a-8f52350bac", ContainerID:"", Pod:"calico-apiserver-679f59f4c7-nzrgh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.125.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califdb0794998c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:37:15.948958 containerd[2611]: 2025-10-13 05:37:15.922 [INFO][5493] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.125.194/32] ContainerID="f703f24b22636d1903d514a8880df743e269e0ad83c1ef5591b83b7ebf347a2b" Namespace="calico-apiserver" Pod="calico-apiserver-679f59f4c7-nzrgh" WorkloadEndpoint="ci--4487.0.0--a--8f52350bac-k8s-calico--apiserver--679f59f4c7--nzrgh-eth0" Oct 13 05:37:15.948958 containerd[2611]: 2025-10-13 05:37:15.922 [INFO][5493] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califdb0794998c ContainerID="f703f24b22636d1903d514a8880df743e269e0ad83c1ef5591b83b7ebf347a2b" Namespace="calico-apiserver" Pod="calico-apiserver-679f59f4c7-nzrgh" WorkloadEndpoint="ci--4487.0.0--a--8f52350bac-k8s-calico--apiserver--679f59f4c7--nzrgh-eth0" Oct 13 05:37:15.948958 containerd[2611]: 2025-10-13 05:37:15.930 [INFO][5493] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f703f24b22636d1903d514a8880df743e269e0ad83c1ef5591b83b7ebf347a2b" Namespace="calico-apiserver" 
Pod="calico-apiserver-679f59f4c7-nzrgh" WorkloadEndpoint="ci--4487.0.0--a--8f52350bac-k8s-calico--apiserver--679f59f4c7--nzrgh-eth0" Oct 13 05:37:15.949032 containerd[2611]: 2025-10-13 05:37:15.932 [INFO][5493] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f703f24b22636d1903d514a8880df743e269e0ad83c1ef5591b83b7ebf347a2b" Namespace="calico-apiserver" Pod="calico-apiserver-679f59f4c7-nzrgh" WorkloadEndpoint="ci--4487.0.0--a--8f52350bac-k8s-calico--apiserver--679f59f4c7--nzrgh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487.0.0--a--8f52350bac-k8s-calico--apiserver--679f59f4c7--nzrgh-eth0", GenerateName:"calico-apiserver-679f59f4c7-", Namespace:"calico-apiserver", SelfLink:"", UID:"3d3e3e5b-e8b4-465f-a46c-a3e4fc460834", ResourceVersion:"805", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 36, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"679f59f4c7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487.0.0-a-8f52350bac", ContainerID:"f703f24b22636d1903d514a8880df743e269e0ad83c1ef5591b83b7ebf347a2b", Pod:"calico-apiserver-679f59f4c7-nzrgh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.125.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califdb0794998c", MAC:"aa:f0:f4:a1:93:df", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:37:15.949095 containerd[2611]: 2025-10-13 05:37:15.944 [INFO][5493] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f703f24b22636d1903d514a8880df743e269e0ad83c1ef5591b83b7ebf347a2b" Namespace="calico-apiserver" Pod="calico-apiserver-679f59f4c7-nzrgh" WorkloadEndpoint="ci--4487.0.0--a--8f52350bac-k8s-calico--apiserver--679f59f4c7--nzrgh-eth0" Oct 13 05:37:16.027396 systemd-networkd[2245]: calie898961b60a: Link UP Oct 13 05:37:16.028788 systemd-networkd[2245]: calie898961b60a: Gained carrier Oct 13 05:37:16.044946 containerd[2611]: 2025-10-13 05:37:15.854 [INFO][5504] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4487.0.0--a--8f52350bac-k8s-goldmane--54d579b49d--rjkcm-eth0 goldmane-54d579b49d- calico-system 36f0260e-2ad0-4893-b347-1a72b5811844 809 0 2025-10-13 05:36:51 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:54d579b49d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4487.0.0-a-8f52350bac goldmane-54d579b49d-rjkcm eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calie898961b60a [] [] }} ContainerID="b2d064c9103840d337cf684152124b5fa6a89262ff5a0f6b9d65849496ee30b1" Namespace="calico-system" Pod="goldmane-54d579b49d-rjkcm" 
WorkloadEndpoint="ci--4487.0.0--a--8f52350bac-k8s-goldmane--54d579b49d--rjkcm-" Oct 13 05:37:16.044946 containerd[2611]: 2025-10-13 05:37:15.854 [INFO][5504] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b2d064c9103840d337cf684152124b5fa6a89262ff5a0f6b9d65849496ee30b1" Namespace="calico-system" Pod="goldmane-54d579b49d-rjkcm" WorkloadEndpoint="ci--4487.0.0--a--8f52350bac-k8s-goldmane--54d579b49d--rjkcm-eth0" Oct 13 05:37:16.044946 containerd[2611]: 2025-10-13 05:37:15.889 [INFO][5523] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b2d064c9103840d337cf684152124b5fa6a89262ff5a0f6b9d65849496ee30b1" HandleID="k8s-pod-network.b2d064c9103840d337cf684152124b5fa6a89262ff5a0f6b9d65849496ee30b1" Workload="ci--4487.0.0--a--8f52350bac-k8s-goldmane--54d579b49d--rjkcm-eth0" Oct 13 05:37:16.045124 containerd[2611]: 2025-10-13 05:37:15.889 [INFO][5523] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b2d064c9103840d337cf684152124b5fa6a89262ff5a0f6b9d65849496ee30b1" HandleID="k8s-pod-network.b2d064c9103840d337cf684152124b5fa6a89262ff5a0f6b9d65849496ee30b1" Workload="ci--4487.0.0--a--8f52350bac-k8s-goldmane--54d579b49d--rjkcm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5860), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4487.0.0-a-8f52350bac", "pod":"goldmane-54d579b49d-rjkcm", "timestamp":"2025-10-13 05:37:15.889332376 +0000 UTC"}, Hostname:"ci-4487.0.0-a-8f52350bac", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 05:37:16.045124 containerd[2611]: 2025-10-13 05:37:15.889 [INFO][5523] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 05:37:16.045124 containerd[2611]: 2025-10-13 05:37:15.918 [INFO][5523] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Oct 13 05:37:16.045124 containerd[2611]: 2025-10-13 05:37:15.918 [INFO][5523] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4487.0.0-a-8f52350bac' Oct 13 05:37:16.045124 containerd[2611]: 2025-10-13 05:37:15.994 [INFO][5523] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b2d064c9103840d337cf684152124b5fa6a89262ff5a0f6b9d65849496ee30b1" host="ci-4487.0.0-a-8f52350bac" Oct 13 05:37:16.045124 containerd[2611]: 2025-10-13 05:37:15.997 [INFO][5523] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4487.0.0-a-8f52350bac" Oct 13 05:37:16.045124 containerd[2611]: 2025-10-13 05:37:16.000 [INFO][5523] ipam/ipam.go 511: Trying affinity for 192.168.125.192/26 host="ci-4487.0.0-a-8f52350bac" Oct 13 05:37:16.045124 containerd[2611]: 2025-10-13 05:37:16.002 [INFO][5523] ipam/ipam.go 158: Attempting to load block cidr=192.168.125.192/26 host="ci-4487.0.0-a-8f52350bac" Oct 13 05:37:16.045124 containerd[2611]: 2025-10-13 05:37:16.003 [INFO][5523] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.125.192/26 host="ci-4487.0.0-a-8f52350bac" Oct 13 05:37:16.045677 containerd[2611]: 2025-10-13 05:37:16.003 [INFO][5523] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.125.192/26 handle="k8s-pod-network.b2d064c9103840d337cf684152124b5fa6a89262ff5a0f6b9d65849496ee30b1" host="ci-4487.0.0-a-8f52350bac" Oct 13 05:37:16.045677 containerd[2611]: 2025-10-13 05:37:16.005 [INFO][5523] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.b2d064c9103840d337cf684152124b5fa6a89262ff5a0f6b9d65849496ee30b1 Oct 13 05:37:16.045677 containerd[2611]: 2025-10-13 05:37:16.009 [INFO][5523] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.125.192/26 handle="k8s-pod-network.b2d064c9103840d337cf684152124b5fa6a89262ff5a0f6b9d65849496ee30b1" host="ci-4487.0.0-a-8f52350bac" Oct 13 05:37:16.045677 containerd[2611]: 2025-10-13 05:37:16.018 [INFO][5523] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.125.195/26] block=192.168.125.192/26 handle="k8s-pod-network.b2d064c9103840d337cf684152124b5fa6a89262ff5a0f6b9d65849496ee30b1" host="ci-4487.0.0-a-8f52350bac" Oct 13 05:37:16.045677 containerd[2611]: 2025-10-13 05:37:16.018 [INFO][5523] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.125.195/26] handle="k8s-pod-network.b2d064c9103840d337cf684152124b5fa6a89262ff5a0f6b9d65849496ee30b1" host="ci-4487.0.0-a-8f52350bac" Oct 13 05:37:16.045677 containerd[2611]: 2025-10-13 05:37:16.018 [INFO][5523] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Oct 13 05:37:16.045677 containerd[2611]: 2025-10-13 05:37:16.018 [INFO][5523] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.125.195/26] IPv6=[] ContainerID="b2d064c9103840d337cf684152124b5fa6a89262ff5a0f6b9d65849496ee30b1" HandleID="k8s-pod-network.b2d064c9103840d337cf684152124b5fa6a89262ff5a0f6b9d65849496ee30b1" Workload="ci--4487.0.0--a--8f52350bac-k8s-goldmane--54d579b49d--rjkcm-eth0" Oct 13 05:37:16.045830 containerd[2611]: 2025-10-13 05:37:16.019 [INFO][5504] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b2d064c9103840d337cf684152124b5fa6a89262ff5a0f6b9d65849496ee30b1" Namespace="calico-system" Pod="goldmane-54d579b49d-rjkcm" WorkloadEndpoint="ci--4487.0.0--a--8f52350bac-k8s-goldmane--54d579b49d--rjkcm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487.0.0--a--8f52350bac-k8s-goldmane--54d579b49d--rjkcm-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"36f0260e-2ad0-4893-b347-1a72b5811844", ResourceVersion:"809", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 36, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487.0.0-a-8f52350bac", ContainerID:"", Pod:"goldmane-54d579b49d-rjkcm", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.125.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calie898961b60a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:37:16.045899 containerd[2611]: 2025-10-13 05:37:16.020 [INFO][5504] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.125.195/32] ContainerID="b2d064c9103840d337cf684152124b5fa6a89262ff5a0f6b9d65849496ee30b1" Namespace="calico-system" Pod="goldmane-54d579b49d-rjkcm" WorkloadEndpoint="ci--4487.0.0--a--8f52350bac-k8s-goldmane--54d579b49d--rjkcm-eth0" Oct 13 05:37:16.045899 containerd[2611]: 2025-10-13 05:37:16.020 [INFO][5504] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie898961b60a ContainerID="b2d064c9103840d337cf684152124b5fa6a89262ff5a0f6b9d65849496ee30b1" Namespace="calico-system" Pod="goldmane-54d579b49d-rjkcm" WorkloadEndpoint="ci--4487.0.0--a--8f52350bac-k8s-goldmane--54d579b49d--rjkcm-eth0" Oct 13 05:37:16.045899 containerd[2611]: 2025-10-13 05:37:16.031 [INFO][5504] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b2d064c9103840d337cf684152124b5fa6a89262ff5a0f6b9d65849496ee30b1" Namespace="calico-system" Pod="goldmane-54d579b49d-rjkcm" WorkloadEndpoint="ci--4487.0.0--a--8f52350bac-k8s-goldmane--54d579b49d--rjkcm-eth0" Oct 13 05:37:16.045975 containerd[2611]: 2025-10-13 05:37:16.032 [INFO][5504] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b2d064c9103840d337cf684152124b5fa6a89262ff5a0f6b9d65849496ee30b1" 
Namespace="calico-system" Pod="goldmane-54d579b49d-rjkcm" WorkloadEndpoint="ci--4487.0.0--a--8f52350bac-k8s-goldmane--54d579b49d--rjkcm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487.0.0--a--8f52350bac-k8s-goldmane--54d579b49d--rjkcm-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"36f0260e-2ad0-4893-b347-1a72b5811844", ResourceVersion:"809", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 36, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487.0.0-a-8f52350bac", ContainerID:"b2d064c9103840d337cf684152124b5fa6a89262ff5a0f6b9d65849496ee30b1", Pod:"goldmane-54d579b49d-rjkcm", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.125.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calie898961b60a", MAC:"fa:18:39:1a:dc:56", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:37:16.046034 containerd[2611]: 2025-10-13 05:37:16.042 [INFO][5504] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b2d064c9103840d337cf684152124b5fa6a89262ff5a0f6b9d65849496ee30b1" Namespace="calico-system" Pod="goldmane-54d579b49d-rjkcm" WorkloadEndpoint="ci--4487.0.0--a--8f52350bac-k8s-goldmane--54d579b49d--rjkcm-eth0" Oct 13 05:37:16.240111 containerd[2611]: time="2025-10-13T05:37:16.239565212Z" level=info msg="connecting to shim f703f24b22636d1903d514a8880df743e269e0ad83c1ef5591b83b7ebf347a2b" address="unix:///run/containerd/s/cec42b9484f92184fba59f92caa04552daa53f341f3ca8db706aa7b6e6b470c2" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:37:16.245636 containerd[2611]: time="2025-10-13T05:37:16.245605464Z" level=info msg="connecting to shim b2d064c9103840d337cf684152124b5fa6a89262ff5a0f6b9d65849496ee30b1" address="unix:///run/containerd/s/9353bce66a6f82a5ee1ea1db6117b919ff56d7e202a8c1e8fc93709b06717d48" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:37:16.286350 systemd[1]: Started cri-containerd-f703f24b22636d1903d514a8880df743e269e0ad83c1ef5591b83b7ebf347a2b.scope - libcontainer container f703f24b22636d1903d514a8880df743e269e0ad83c1ef5591b83b7ebf347a2b. Oct 13 05:37:16.289510 systemd[1]: Started cri-containerd-b2d064c9103840d337cf684152124b5fa6a89262ff5a0f6b9d65849496ee30b1.scope - libcontainer container b2d064c9103840d337cf684152124b5fa6a89262ff5a0f6b9d65849496ee30b1. 
Oct 13 05:37:16.369451 containerd[2611]: time="2025-10-13T05:37:16.369367766Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-rjkcm,Uid:36f0260e-2ad0-4893-b347-1a72b5811844,Namespace:calico-system,Attempt:0,} returns sandbox id \"b2d064c9103840d337cf684152124b5fa6a89262ff5a0f6b9d65849496ee30b1\"" Oct 13 05:37:16.382474 containerd[2611]: time="2025-10-13T05:37:16.382448560Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-679f59f4c7-nzrgh,Uid:3d3e3e5b-e8b4-465f-a46c-a3e4fc460834,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"f703f24b22636d1903d514a8880df743e269e0ad83c1ef5591b83b7ebf347a2b\"" Oct 13 05:37:16.703692 containerd[2611]: time="2025-10-13T05:37:16.703656940Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:37:16.706232 containerd[2611]: time="2025-10-13T05:37:16.706175659Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=33085545" Oct 13 05:37:16.709221 containerd[2611]: time="2025-10-13T05:37:16.708879643Z" level=info msg="ImageCreate event name:\"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:37:16.712592 containerd[2611]: time="2025-10-13T05:37:16.712563464Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:37:16.713164 containerd[2611]: time="2025-10-13T05:37:16.713141037Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"33085375\" in 2.553696201s" Oct 13 05:37:16.713262 containerd[2611]: time="2025-10-13T05:37:16.713249238Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\"" Oct 13 05:37:16.714484 containerd[2611]: time="2025-10-13T05:37:16.714137038Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Oct 13 05:37:16.721002 containerd[2611]: time="2025-10-13T05:37:16.720955681Z" level=info msg="CreateContainer within sandbox \"210d2fbd86bcb2595750a7be8727dd8583d916f537b4637154c9846fb67c924f\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Oct 13 05:37:16.739231 containerd[2611]: time="2025-10-13T05:37:16.738490477Z" level=info msg="Container 1d9336cd636f4d42f4d2243d57723eff7e92c8f2487690be384163df214b8313: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:37:16.757900 containerd[2611]: time="2025-10-13T05:37:16.757872052Z" level=info msg="CreateContainer within sandbox \"210d2fbd86bcb2595750a7be8727dd8583d916f537b4637154c9846fb67c924f\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"1d9336cd636f4d42f4d2243d57723eff7e92c8f2487690be384163df214b8313\"" Oct 13 05:37:16.758540 containerd[2611]: time="2025-10-13T05:37:16.758477576Z" level=info msg="StartContainer for \"1d9336cd636f4d42f4d2243d57723eff7e92c8f2487690be384163df214b8313\"" Oct 13 
05:37:16.759725 containerd[2611]: time="2025-10-13T05:37:16.759698031Z" level=info msg="connecting to shim 1d9336cd636f4d42f4d2243d57723eff7e92c8f2487690be384163df214b8313" address="unix:///run/containerd/s/11f64fed64587e4af9fb32865745a24bb9032ce7f12dc832a9348af6b282663e" protocol=ttrpc version=3 Oct 13 05:37:16.781383 systemd[1]: Started cri-containerd-1d9336cd636f4d42f4d2243d57723eff7e92c8f2487690be384163df214b8313.scope - libcontainer container 1d9336cd636f4d42f4d2243d57723eff7e92c8f2487690be384163df214b8313. Oct 13 05:37:16.801534 containerd[2611]: time="2025-10-13T05:37:16.800399061Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-fff2d,Uid:c6cf30b2-c7a0-46c2-9758-3e0cae4f8800,Namespace:kube-system,Attempt:0,}" Oct 13 05:37:16.802483 containerd[2611]: time="2025-10-13T05:37:16.802434587Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-679f59f4c7-gm94f,Uid:03e171aa-6801-4329-80e3-ff8dedaab170,Namespace:calico-apiserver,Attempt:0,}" Oct 13 05:37:16.815664 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2134785535.mount: Deactivated successfully. Oct 13 05:37:16.863760 containerd[2611]: time="2025-10-13T05:37:16.862608049Z" level=info msg="StartContainer for \"1d9336cd636f4d42f4d2243d57723eff7e92c8f2487690be384163df214b8313\" returns successfully" Oct 13 05:37:16.953358 systemd-networkd[2245]: cali1cba19a7b5b: Link UP Oct 13 05:37:16.953908 systemd-networkd[2245]: cali1cba19a7b5b: Gained carrier Oct 13 05:37:16.965231 containerd[2611]: 2025-10-13 05:37:16.884 [INFO][5687] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4487.0.0--a--8f52350bac-k8s-calico--apiserver--679f59f4c7--gm94f-eth0 calico-apiserver-679f59f4c7- calico-apiserver 03e171aa-6801-4329-80e3-ff8dedaab170 818 0 2025-10-13 05:36:48 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:679f59f4c7 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4487.0.0-a-8f52350bac calico-apiserver-679f59f4c7-gm94f eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali1cba19a7b5b [] [] }} ContainerID="731211d4c0d095f9c66b310345bc0ce9af401662eae879583cf828b61c25e9e3" Namespace="calico-apiserver" Pod="calico-apiserver-679f59f4c7-gm94f" WorkloadEndpoint="ci--4487.0.0--a--8f52350bac-k8s-calico--apiserver--679f59f4c7--gm94f-" Oct 13 05:37:16.965231 containerd[2611]: 2025-10-13 05:37:16.884 [INFO][5687] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="731211d4c0d095f9c66b310345bc0ce9af401662eae879583cf828b61c25e9e3" Namespace="calico-apiserver" Pod="calico-apiserver-679f59f4c7-gm94f" WorkloadEndpoint="ci--4487.0.0--a--8f52350bac-k8s-calico--apiserver--679f59f4c7--gm94f-eth0" Oct 13 05:37:16.965231 containerd[2611]: 2025-10-13 05:37:16.910 [INFO][5717] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="731211d4c0d095f9c66b310345bc0ce9af401662eae879583cf828b61c25e9e3" HandleID="k8s-pod-network.731211d4c0d095f9c66b310345bc0ce9af401662eae879583cf828b61c25e9e3" Workload="ci--4487.0.0--a--8f52350bac-k8s-calico--apiserver--679f59f4c7--gm94f-eth0" Oct 13 05:37:16.965423 containerd[2611]: 2025-10-13 05:37:16.910 [INFO][5717] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="731211d4c0d095f9c66b310345bc0ce9af401662eae879583cf828b61c25e9e3" 
HandleID="k8s-pod-network.731211d4c0d095f9c66b310345bc0ce9af401662eae879583cf828b61c25e9e3" Workload="ci--4487.0.0--a--8f52350bac-k8s-calico--apiserver--679f59f4c7--gm94f-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ad4a0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4487.0.0-a-8f52350bac", "pod":"calico-apiserver-679f59f4c7-gm94f", "timestamp":"2025-10-13 05:37:16.910673788 +0000 UTC"}, Hostname:"ci-4487.0.0-a-8f52350bac", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 05:37:16.965423 containerd[2611]: 2025-10-13 05:37:16.910 [INFO][5717] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 05:37:16.965423 containerd[2611]: 2025-10-13 05:37:16.911 [INFO][5717] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Oct 13 05:37:16.965423 containerd[2611]: 2025-10-13 05:37:16.911 [INFO][5717] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4487.0.0-a-8f52350bac' Oct 13 05:37:16.965423 containerd[2611]: 2025-10-13 05:37:16.917 [INFO][5717] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.731211d4c0d095f9c66b310345bc0ce9af401662eae879583cf828b61c25e9e3" host="ci-4487.0.0-a-8f52350bac" Oct 13 05:37:16.965423 containerd[2611]: 2025-10-13 05:37:16.921 [INFO][5717] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4487.0.0-a-8f52350bac" Oct 13 05:37:16.965423 containerd[2611]: 2025-10-13 05:37:16.925 [INFO][5717] ipam/ipam.go 511: Trying affinity for 192.168.125.192/26 host="ci-4487.0.0-a-8f52350bac" Oct 13 05:37:16.965423 containerd[2611]: 2025-10-13 05:37:16.927 [INFO][5717] ipam/ipam.go 158: Attempting to load block cidr=192.168.125.192/26 host="ci-4487.0.0-a-8f52350bac" Oct 13 05:37:16.965423 containerd[2611]: 2025-10-13 05:37:16.929 [INFO][5717] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.125.192/26 host="ci-4487.0.0-a-8f52350bac" Oct 13 05:37:16.965636 containerd[2611]: 2025-10-13 05:37:16.929 [INFO][5717] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.125.192/26 handle="k8s-pod-network.731211d4c0d095f9c66b310345bc0ce9af401662eae879583cf828b61c25e9e3" host="ci-4487.0.0-a-8f52350bac" Oct 13 05:37:16.965636 containerd[2611]: 2025-10-13 05:37:16.931 [INFO][5717] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.731211d4c0d095f9c66b310345bc0ce9af401662eae879583cf828b61c25e9e3 Oct 13 05:37:16.965636 containerd[2611]: 2025-10-13 05:37:16.937 [INFO][5717] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.125.192/26 handle="k8s-pod-network.731211d4c0d095f9c66b310345bc0ce9af401662eae879583cf828b61c25e9e3" host="ci-4487.0.0-a-8f52350bac" Oct 13 05:37:16.965636 containerd[2611]: 2025-10-13 05:37:16.944 [INFO][5717] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.125.196/26] block=192.168.125.192/26 handle="k8s-pod-network.731211d4c0d095f9c66b310345bc0ce9af401662eae879583cf828b61c25e9e3" host="ci-4487.0.0-a-8f52350bac" Oct 13 05:37:16.965636 containerd[2611]: 2025-10-13 05:37:16.944 [INFO][5717] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.125.196/26] handle="k8s-pod-network.731211d4c0d095f9c66b310345bc0ce9af401662eae879583cf828b61c25e9e3" host="ci-4487.0.0-a-8f52350bac" Oct 13 05:37:16.965636 containerd[2611]: 2025-10-13 05:37:16.944 [INFO][5717] ipam/ipam_plugin.go 374: Released host-wide IPAM 
lock. Oct 13 05:37:16.965636 containerd[2611]: 2025-10-13 05:37:16.945 [INFO][5717] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.125.196/26] IPv6=[] ContainerID="731211d4c0d095f9c66b310345bc0ce9af401662eae879583cf828b61c25e9e3" HandleID="k8s-pod-network.731211d4c0d095f9c66b310345bc0ce9af401662eae879583cf828b61c25e9e3" Workload="ci--4487.0.0--a--8f52350bac-k8s-calico--apiserver--679f59f4c7--gm94f-eth0" Oct 13 05:37:16.967629 containerd[2611]: 2025-10-13 05:37:16.947 [INFO][5687] cni-plugin/k8s.go 418: Populated endpoint ContainerID="731211d4c0d095f9c66b310345bc0ce9af401662eae879583cf828b61c25e9e3" Namespace="calico-apiserver" Pod="calico-apiserver-679f59f4c7-gm94f" WorkloadEndpoint="ci--4487.0.0--a--8f52350bac-k8s-calico--apiserver--679f59f4c7--gm94f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487.0.0--a--8f52350bac-k8s-calico--apiserver--679f59f4c7--gm94f-eth0", GenerateName:"calico-apiserver-679f59f4c7-", Namespace:"calico-apiserver", SelfLink:"", UID:"03e171aa-6801-4329-80e3-ff8dedaab170", ResourceVersion:"818", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 36, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"679f59f4c7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487.0.0-a-8f52350bac", ContainerID:"", Pod:"calico-apiserver-679f59f4c7-gm94f", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.125.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1cba19a7b5b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:37:16.967712 containerd[2611]: 2025-10-13 05:37:16.948 [INFO][5687] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.125.196/32] ContainerID="731211d4c0d095f9c66b310345bc0ce9af401662eae879583cf828b61c25e9e3" Namespace="calico-apiserver" Pod="calico-apiserver-679f59f4c7-gm94f" WorkloadEndpoint="ci--4487.0.0--a--8f52350bac-k8s-calico--apiserver--679f59f4c7--gm94f-eth0" Oct 13 05:37:16.967712 containerd[2611]: 2025-10-13 05:37:16.948 [INFO][5687] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1cba19a7b5b ContainerID="731211d4c0d095f9c66b310345bc0ce9af401662eae879583cf828b61c25e9e3" Namespace="calico-apiserver" Pod="calico-apiserver-679f59f4c7-gm94f" WorkloadEndpoint="ci--4487.0.0--a--8f52350bac-k8s-calico--apiserver--679f59f4c7--gm94f-eth0" Oct 13 05:37:16.967712 containerd[2611]: 2025-10-13 05:37:16.952 [INFO][5687] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="731211d4c0d095f9c66b310345bc0ce9af401662eae879583cf828b61c25e9e3" Namespace="calico-apiserver" Pod="calico-apiserver-679f59f4c7-gm94f" WorkloadEndpoint="ci--4487.0.0--a--8f52350bac-k8s-calico--apiserver--679f59f4c7--gm94f-eth0" Oct 13 05:37:16.967786 containerd[2611]: 2025-10-13 
05:37:16.952 [INFO][5687] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="731211d4c0d095f9c66b310345bc0ce9af401662eae879583cf828b61c25e9e3" Namespace="calico-apiserver" Pod="calico-apiserver-679f59f4c7-gm94f" WorkloadEndpoint="ci--4487.0.0--a--8f52350bac-k8s-calico--apiserver--679f59f4c7--gm94f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487.0.0--a--8f52350bac-k8s-calico--apiserver--679f59f4c7--gm94f-eth0", GenerateName:"calico-apiserver-679f59f4c7-", Namespace:"calico-apiserver", SelfLink:"", UID:"03e171aa-6801-4329-80e3-ff8dedaab170", ResourceVersion:"818", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 36, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"679f59f4c7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487.0.0-a-8f52350bac", ContainerID:"731211d4c0d095f9c66b310345bc0ce9af401662eae879583cf828b61c25e9e3", Pod:"calico-apiserver-679f59f4c7-gm94f", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.125.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1cba19a7b5b", MAC:"c6:fd:e7:c9:4d:2a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:37:16.967844 containerd[2611]: 2025-10-13 05:37:16.963 [INFO][5687] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="731211d4c0d095f9c66b310345bc0ce9af401662eae879583cf828b61c25e9e3" Namespace="calico-apiserver" Pod="calico-apiserver-679f59f4c7-gm94f" WorkloadEndpoint="ci--4487.0.0--a--8f52350bac-k8s-calico--apiserver--679f59f4c7--gm94f-eth0" Oct 13 05:37:17.022328 containerd[2611]: time="2025-10-13T05:37:17.022292542Z" level=info msg="connecting to shim 731211d4c0d095f9c66b310345bc0ce9af401662eae879583cf828b61c25e9e3" address="unix:///run/containerd/s/e24b637c86f114fd028b7d4bd37acd67d8470510b0e50178c6eff3cf583d0321" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:37:17.052388 systemd[1]: Started cri-containerd-731211d4c0d095f9c66b310345bc0ce9af401662eae879583cf828b61c25e9e3.scope - libcontainer container 731211d4c0d095f9c66b310345bc0ce9af401662eae879583cf828b61c25e9e3. 
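[Editor's note] The same IPAM round then repeats for calico-apiserver-679f59f4c7-gm94f, which receives 192.168.125.196 on cali1cba19a7b5b, and for the pods below. The "Calico CNI IPAM assigned addresses" entries carry both the IPv4 and the ContainerID, so a quick tally is easy to script; the filter below is a stdlib-only Go sketch, and its regular expressions are my own approximation of the line format, not anything shipped with Calico or containerd.

```go
// Reads journal text on stdin and prints one line per successful Calico IPAM
// assignment: a shortened container ID and the IPv4 it was given.
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

var (
	// Approximations of the fields seen in the log excerpt above.
	assignedRE  = regexp.MustCompile(`IPAM assigned addresses IPv4=\[([^\]]+)\]`)
	containerRE = regexp.MustCompile(`ContainerID="([0-9a-f]+)"`)
)

func main() {
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 64*1024), 4*1024*1024) // these journal lines are very long
	for sc.Scan() {
		line := sc.Text()
		ip := assignedRE.FindStringSubmatch(line)
		id := containerRE.FindStringSubmatch(line)
		if ip != nil && id != nil {
			fmt.Printf("%.12s  %s\n", id[1], ip[1])
		}
	}
	if err := sc.Err(); err != nil {
		fmt.Fprintln(os.Stderr, "read:", err)
		os.Exit(1)
	}
}
```

Fed this boot's journal (for example `journalctl -b | go run main.go`, assuming journalctl is available), it would list the five assignments visible in this excerpt, 192.168.125.195 through .199.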
Oct 13 05:37:17.071817 systemd-networkd[2245]: caliee882def45d: Link UP Oct 13 05:37:17.073006 systemd-networkd[2245]: caliee882def45d: Gained carrier Oct 13 05:37:17.088364 kubelet[4082]: I1013 05:37:17.088317 4082 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-676db78f7-mlnzh" podStartSLOduration=0.954788948 podStartE2EDuration="5.088298836s" podCreationTimestamp="2025-10-13 05:37:12 +0000 UTC" firstStartedPulling="2025-10-13 05:37:12.580512081 +0000 UTC m=+39.890942041" lastFinishedPulling="2025-10-13 05:37:16.714021966 +0000 UTC m=+44.024451929" observedRunningTime="2025-10-13 05:37:16.991498516 +0000 UTC m=+44.301928472" watchObservedRunningTime="2025-10-13 05:37:17.088298836 +0000 UTC m=+44.398728791" Oct 13 05:37:17.092074 containerd[2611]: 2025-10-13 05:37:16.862 [INFO][5676] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4487.0.0--a--8f52350bac-k8s-coredns--674b8bbfcf--fff2d-eth0 coredns-674b8bbfcf- kube-system c6cf30b2-c7a0-46c2-9758-3e0cae4f8800 820 0 2025-10-13 05:36:39 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4487.0.0-a-8f52350bac coredns-674b8bbfcf-fff2d eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] caliee882def45d [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="0d5bd63d4aa413f6d38e0e3d400ea1ad00fb17cfda805d118adfa84c7604b368" Namespace="kube-system" Pod="coredns-674b8bbfcf-fff2d" WorkloadEndpoint="ci--4487.0.0--a--8f52350bac-k8s-coredns--674b8bbfcf--fff2d-" Oct 13 05:37:17.092074 containerd[2611]: 2025-10-13 05:37:16.862 [INFO][5676] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0d5bd63d4aa413f6d38e0e3d400ea1ad00fb17cfda805d118adfa84c7604b368" Namespace="kube-system" Pod="coredns-674b8bbfcf-fff2d" WorkloadEndpoint="ci--4487.0.0--a--8f52350bac-k8s-coredns--674b8bbfcf--fff2d-eth0" Oct 13 05:37:17.092074 containerd[2611]: 2025-10-13 05:37:16.915 [INFO][5709] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0d5bd63d4aa413f6d38e0e3d400ea1ad00fb17cfda805d118adfa84c7604b368" HandleID="k8s-pod-network.0d5bd63d4aa413f6d38e0e3d400ea1ad00fb17cfda805d118adfa84c7604b368" Workload="ci--4487.0.0--a--8f52350bac-k8s-coredns--674b8bbfcf--fff2d-eth0" Oct 13 05:37:17.092251 containerd[2611]: 2025-10-13 05:37:16.916 [INFO][5709] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0d5bd63d4aa413f6d38e0e3d400ea1ad00fb17cfda805d118adfa84c7604b368" HandleID="k8s-pod-network.0d5bd63d4aa413f6d38e0e3d400ea1ad00fb17cfda805d118adfa84c7604b368" Workload="ci--4487.0.0--a--8f52350bac-k8s-coredns--674b8bbfcf--fff2d-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5710), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4487.0.0-a-8f52350bac", "pod":"coredns-674b8bbfcf-fff2d", "timestamp":"2025-10-13 05:37:16.91575437 +0000 UTC"}, Hostname:"ci-4487.0.0-a-8f52350bac", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 05:37:17.092251 containerd[2611]: 2025-10-13 05:37:16.916 [INFO][5709] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Oct 13 05:37:17.092251 containerd[2611]: 2025-10-13 05:37:16.945 [INFO][5709] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Oct 13 05:37:17.092251 containerd[2611]: 2025-10-13 05:37:16.945 [INFO][5709] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4487.0.0-a-8f52350bac' Oct 13 05:37:17.092251 containerd[2611]: 2025-10-13 05:37:17.019 [INFO][5709] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0d5bd63d4aa413f6d38e0e3d400ea1ad00fb17cfda805d118adfa84c7604b368" host="ci-4487.0.0-a-8f52350bac" Oct 13 05:37:17.092251 containerd[2611]: 2025-10-13 05:37:17.027 [INFO][5709] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4487.0.0-a-8f52350bac" Oct 13 05:37:17.092251 containerd[2611]: 2025-10-13 05:37:17.038 [INFO][5709] ipam/ipam.go 511: Trying affinity for 192.168.125.192/26 host="ci-4487.0.0-a-8f52350bac" Oct 13 05:37:17.092251 containerd[2611]: 2025-10-13 05:37:17.040 [INFO][5709] ipam/ipam.go 158: Attempting to load block cidr=192.168.125.192/26 host="ci-4487.0.0-a-8f52350bac" Oct 13 05:37:17.092251 containerd[2611]: 2025-10-13 05:37:17.042 [INFO][5709] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.125.192/26 host="ci-4487.0.0-a-8f52350bac" Oct 13 05:37:17.092485 containerd[2611]: 2025-10-13 05:37:17.042 [INFO][5709] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.125.192/26 handle="k8s-pod-network.0d5bd63d4aa413f6d38e0e3d400ea1ad00fb17cfda805d118adfa84c7604b368" host="ci-4487.0.0-a-8f52350bac" Oct 13 05:37:17.092485 containerd[2611]: 2025-10-13 05:37:17.044 [INFO][5709] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.0d5bd63d4aa413f6d38e0e3d400ea1ad00fb17cfda805d118adfa84c7604b368 Oct 13 05:37:17.092485 containerd[2611]: 2025-10-13 05:37:17.051 [INFO][5709] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.125.192/26 handle="k8s-pod-network.0d5bd63d4aa413f6d38e0e3d400ea1ad00fb17cfda805d118adfa84c7604b368" host="ci-4487.0.0-a-8f52350bac" Oct 13 05:37:17.092485 containerd[2611]: 2025-10-13 05:37:17.063 [INFO][5709] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.125.197/26] block=192.168.125.192/26 handle="k8s-pod-network.0d5bd63d4aa413f6d38e0e3d400ea1ad00fb17cfda805d118adfa84c7604b368" host="ci-4487.0.0-a-8f52350bac" Oct 13 05:37:17.092485 containerd[2611]: 2025-10-13 05:37:17.063 [INFO][5709] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.125.197/26] handle="k8s-pod-network.0d5bd63d4aa413f6d38e0e3d400ea1ad00fb17cfda805d118adfa84c7604b368" host="ci-4487.0.0-a-8f52350bac" Oct 13 05:37:17.092485 containerd[2611]: 2025-10-13 05:37:17.063 [INFO][5709] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Oct 13 05:37:17.092485 containerd[2611]: 2025-10-13 05:37:17.063 [INFO][5709] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.125.197/26] IPv6=[] ContainerID="0d5bd63d4aa413f6d38e0e3d400ea1ad00fb17cfda805d118adfa84c7604b368" HandleID="k8s-pod-network.0d5bd63d4aa413f6d38e0e3d400ea1ad00fb17cfda805d118adfa84c7604b368" Workload="ci--4487.0.0--a--8f52350bac-k8s-coredns--674b8bbfcf--fff2d-eth0" Oct 13 05:37:17.092639 containerd[2611]: 2025-10-13 05:37:17.066 [INFO][5676] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0d5bd63d4aa413f6d38e0e3d400ea1ad00fb17cfda805d118adfa84c7604b368" Namespace="kube-system" Pod="coredns-674b8bbfcf-fff2d" WorkloadEndpoint="ci--4487.0.0--a--8f52350bac-k8s-coredns--674b8bbfcf--fff2d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487.0.0--a--8f52350bac-k8s-coredns--674b8bbfcf--fff2d-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"c6cf30b2-c7a0-46c2-9758-3e0cae4f8800", ResourceVersion:"820", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 36, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487.0.0-a-8f52350bac", ContainerID:"", Pod:"coredns-674b8bbfcf-fff2d", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.125.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliee882def45d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:37:17.092639 containerd[2611]: 2025-10-13 05:37:17.067 [INFO][5676] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.125.197/32] ContainerID="0d5bd63d4aa413f6d38e0e3d400ea1ad00fb17cfda805d118adfa84c7604b368" Namespace="kube-system" Pod="coredns-674b8bbfcf-fff2d" WorkloadEndpoint="ci--4487.0.0--a--8f52350bac-k8s-coredns--674b8bbfcf--fff2d-eth0" Oct 13 05:37:17.092639 containerd[2611]: 2025-10-13 05:37:17.067 [INFO][5676] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliee882def45d ContainerID="0d5bd63d4aa413f6d38e0e3d400ea1ad00fb17cfda805d118adfa84c7604b368" Namespace="kube-system" Pod="coredns-674b8bbfcf-fff2d" WorkloadEndpoint="ci--4487.0.0--a--8f52350bac-k8s-coredns--674b8bbfcf--fff2d-eth0" Oct 13 05:37:17.092639 containerd[2611]: 2025-10-13 05:37:17.074 [INFO][5676] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0d5bd63d4aa413f6d38e0e3d400ea1ad00fb17cfda805d118adfa84c7604b368" Namespace="kube-system" 
Pod="coredns-674b8bbfcf-fff2d" WorkloadEndpoint="ci--4487.0.0--a--8f52350bac-k8s-coredns--674b8bbfcf--fff2d-eth0" Oct 13 05:37:17.092639 containerd[2611]: 2025-10-13 05:37:17.075 [INFO][5676] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0d5bd63d4aa413f6d38e0e3d400ea1ad00fb17cfda805d118adfa84c7604b368" Namespace="kube-system" Pod="coredns-674b8bbfcf-fff2d" WorkloadEndpoint="ci--4487.0.0--a--8f52350bac-k8s-coredns--674b8bbfcf--fff2d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487.0.0--a--8f52350bac-k8s-coredns--674b8bbfcf--fff2d-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"c6cf30b2-c7a0-46c2-9758-3e0cae4f8800", ResourceVersion:"820", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 36, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487.0.0-a-8f52350bac", ContainerID:"0d5bd63d4aa413f6d38e0e3d400ea1ad00fb17cfda805d118adfa84c7604b368", Pod:"coredns-674b8bbfcf-fff2d", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.125.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliee882def45d", MAC:"7a:83:50:96:48:65", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:37:17.092639 containerd[2611]: 2025-10-13 05:37:17.089 [INFO][5676] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0d5bd63d4aa413f6d38e0e3d400ea1ad00fb17cfda805d118adfa84c7604b368" Namespace="kube-system" Pod="coredns-674b8bbfcf-fff2d" WorkloadEndpoint="ci--4487.0.0--a--8f52350bac-k8s-coredns--674b8bbfcf--fff2d-eth0" Oct 13 05:37:17.133036 containerd[2611]: time="2025-10-13T05:37:17.133008491Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-679f59f4c7-gm94f,Uid:03e171aa-6801-4329-80e3-ff8dedaab170,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"731211d4c0d095f9c66b310345bc0ce9af401662eae879583cf828b61c25e9e3\"" Oct 13 05:37:17.145269 containerd[2611]: time="2025-10-13T05:37:17.145044982Z" level=info msg="connecting to shim 0d5bd63d4aa413f6d38e0e3d400ea1ad00fb17cfda805d118adfa84c7604b368" address="unix:///run/containerd/s/491d8c709caa1a30962eaaf766d3095a88aec567fa57fea4e529227497e80476" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:37:17.163348 systemd[1]: Started cri-containerd-0d5bd63d4aa413f6d38e0e3d400ea1ad00fb17cfda805d118adfa84c7604b368.scope - libcontainer container 
0d5bd63d4aa413f6d38e0e3d400ea1ad00fb17cfda805d118adfa84c7604b368. Oct 13 05:37:17.202475 containerd[2611]: time="2025-10-13T05:37:17.202449776Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-fff2d,Uid:c6cf30b2-c7a0-46c2-9758-3e0cae4f8800,Namespace:kube-system,Attempt:0,} returns sandbox id \"0d5bd63d4aa413f6d38e0e3d400ea1ad00fb17cfda805d118adfa84c7604b368\"" Oct 13 05:37:17.212899 containerd[2611]: time="2025-10-13T05:37:17.212835293Z" level=info msg="CreateContainer within sandbox \"0d5bd63d4aa413f6d38e0e3d400ea1ad00fb17cfda805d118adfa84c7604b368\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 13 05:37:17.230695 containerd[2611]: time="2025-10-13T05:37:17.230670490Z" level=info msg="Container ae6c5e510bafcd70d59bdb1b9fadaf0329b00b01eef9aca13c7d21192796c46b: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:37:17.243091 containerd[2611]: time="2025-10-13T05:37:17.243066379Z" level=info msg="CreateContainer within sandbox \"0d5bd63d4aa413f6d38e0e3d400ea1ad00fb17cfda805d118adfa84c7604b368\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ae6c5e510bafcd70d59bdb1b9fadaf0329b00b01eef9aca13c7d21192796c46b\"" Oct 13 05:37:17.243632 containerd[2611]: time="2025-10-13T05:37:17.243513278Z" level=info msg="StartContainer for \"ae6c5e510bafcd70d59bdb1b9fadaf0329b00b01eef9aca13c7d21192796c46b\"" Oct 13 05:37:17.244522 containerd[2611]: time="2025-10-13T05:37:17.244493183Z" level=info msg="connecting to shim ae6c5e510bafcd70d59bdb1b9fadaf0329b00b01eef9aca13c7d21192796c46b" address="unix:///run/containerd/s/491d8c709caa1a30962eaaf766d3095a88aec567fa57fea4e529227497e80476" protocol=ttrpc version=3 Oct 13 05:37:17.245426 systemd-networkd[2245]: calie898961b60a: Gained IPv6LL Oct 13 05:37:17.265377 systemd[1]: Started cri-containerd-ae6c5e510bafcd70d59bdb1b9fadaf0329b00b01eef9aca13c7d21192796c46b.scope - libcontainer container ae6c5e510bafcd70d59bdb1b9fadaf0329b00b01eef9aca13c7d21192796c46b. 
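[Editor's note] The kubelet pod_startup_latency_tracker entry above for whisker-676db78f7-mlnzh reports podStartSLOduration=0.954788948 next to podStartE2EDuration="5.088298836s", firstStartedPulling at m=+39.890942041 and lastFinishedPulling at m=+44.024451929. Those values are consistent with the SLO duration being the end-to-end startup time minus the time spent pulling images; the snippet below only re-derives the logged number from the logged monotonic offsets and is not kubelet code.

```go
// Re-derives the whisker pod's podStartSLOduration from the values in the
// kubelet log entry: SLO = end-to-end startup time - image pull time.
package main

import (
	"fmt"
	"time"
)

func main() {
	// Monotonic offsets (m=+…) and the E2E duration copied from the log entry.
	firstStartedPulling := 39890942041 * time.Nanosecond // m=+39.890942041
	lastFinishedPulling := 44024451929 * time.Nanosecond // m=+44.024451929
	podStartE2E := 5088298836 * time.Nanosecond          // podStartE2EDuration="5.088298836s"

	pull := lastFinishedPulling - firstStartedPulling
	slo := podStartE2E - pull
	fmt.Println("image pull:", pull) // 4.133509888s
	fmt.Println("SLO:       ", slo)  // 954.788948ms == podStartSLOduration=0.954788948
}
```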
Oct 13 05:37:17.294722 containerd[2611]: time="2025-10-13T05:37:17.294695100Z" level=info msg="StartContainer for \"ae6c5e510bafcd70d59bdb1b9fadaf0329b00b01eef9aca13c7d21192796c46b\" returns successfully" Oct 13 05:37:17.437334 systemd-networkd[2245]: califdb0794998c: Gained IPv6LL Oct 13 05:37:17.799836 containerd[2611]: time="2025-10-13T05:37:17.799797660Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-86b98dd68c-2ngs9,Uid:93f709b6-d5d3-4324-9bd3-4718259e1d38,Namespace:calico-system,Attempt:0,}" Oct 13 05:37:17.902296 systemd-networkd[2245]: cali426d1087dc1: Link UP Oct 13 05:37:17.904251 systemd-networkd[2245]: cali426d1087dc1: Gained carrier Oct 13 05:37:17.935232 containerd[2611]: 2025-10-13 05:37:17.838 [INFO][5873] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4487.0.0--a--8f52350bac-k8s-calico--kube--controllers--86b98dd68c--2ngs9-eth0 calico-kube-controllers-86b98dd68c- calico-system 93f709b6-d5d3-4324-9bd3-4718259e1d38 822 0 2025-10-13 05:36:51 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:86b98dd68c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4487.0.0-a-8f52350bac calico-kube-controllers-86b98dd68c-2ngs9 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali426d1087dc1 [] [] }} ContainerID="5da164f35e86f953074b97df41b5adc85388e79fab20826016b937c5bd7f2577" Namespace="calico-system" Pod="calico-kube-controllers-86b98dd68c-2ngs9" WorkloadEndpoint="ci--4487.0.0--a--8f52350bac-k8s-calico--kube--controllers--86b98dd68c--2ngs9-" Oct 13 05:37:17.935232 containerd[2611]: 2025-10-13 05:37:17.838 [INFO][5873] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5da164f35e86f953074b97df41b5adc85388e79fab20826016b937c5bd7f2577" Namespace="calico-system" Pod="calico-kube-controllers-86b98dd68c-2ngs9" WorkloadEndpoint="ci--4487.0.0--a--8f52350bac-k8s-calico--kube--controllers--86b98dd68c--2ngs9-eth0" Oct 13 05:37:17.935232 containerd[2611]: 2025-10-13 05:37:17.861 [INFO][5884] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5da164f35e86f953074b97df41b5adc85388e79fab20826016b937c5bd7f2577" HandleID="k8s-pod-network.5da164f35e86f953074b97df41b5adc85388e79fab20826016b937c5bd7f2577" Workload="ci--4487.0.0--a--8f52350bac-k8s-calico--kube--controllers--86b98dd68c--2ngs9-eth0" Oct 13 05:37:17.935232 containerd[2611]: 2025-10-13 05:37:17.862 [INFO][5884] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5da164f35e86f953074b97df41b5adc85388e79fab20826016b937c5bd7f2577" HandleID="k8s-pod-network.5da164f35e86f953074b97df41b5adc85388e79fab20826016b937c5bd7f2577" Workload="ci--4487.0.0--a--8f52350bac-k8s-calico--kube--controllers--86b98dd68c--2ngs9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ad570), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4487.0.0-a-8f52350bac", "pod":"calico-kube-controllers-86b98dd68c-2ngs9", "timestamp":"2025-10-13 05:37:17.861943132 +0000 UTC"}, Hostname:"ci-4487.0.0-a-8f52350bac", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 05:37:17.935232 containerd[2611]: 2025-10-13 05:37:17.862 
[INFO][5884] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 05:37:17.935232 containerd[2611]: 2025-10-13 05:37:17.862 [INFO][5884] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Oct 13 05:37:17.935232 containerd[2611]: 2025-10-13 05:37:17.862 [INFO][5884] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4487.0.0-a-8f52350bac' Oct 13 05:37:17.935232 containerd[2611]: 2025-10-13 05:37:17.867 [INFO][5884] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5da164f35e86f953074b97df41b5adc85388e79fab20826016b937c5bd7f2577" host="ci-4487.0.0-a-8f52350bac" Oct 13 05:37:17.935232 containerd[2611]: 2025-10-13 05:37:17.870 [INFO][5884] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4487.0.0-a-8f52350bac" Oct 13 05:37:17.935232 containerd[2611]: 2025-10-13 05:37:17.874 [INFO][5884] ipam/ipam.go 511: Trying affinity for 192.168.125.192/26 host="ci-4487.0.0-a-8f52350bac" Oct 13 05:37:17.935232 containerd[2611]: 2025-10-13 05:37:17.876 [INFO][5884] ipam/ipam.go 158: Attempting to load block cidr=192.168.125.192/26 host="ci-4487.0.0-a-8f52350bac" Oct 13 05:37:17.935232 containerd[2611]: 2025-10-13 05:37:17.878 [INFO][5884] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.125.192/26 host="ci-4487.0.0-a-8f52350bac" Oct 13 05:37:17.935232 containerd[2611]: 2025-10-13 05:37:17.878 [INFO][5884] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.125.192/26 handle="k8s-pod-network.5da164f35e86f953074b97df41b5adc85388e79fab20826016b937c5bd7f2577" host="ci-4487.0.0-a-8f52350bac" Oct 13 05:37:17.935232 containerd[2611]: 2025-10-13 05:37:17.879 [INFO][5884] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.5da164f35e86f953074b97df41b5adc85388e79fab20826016b937c5bd7f2577 Oct 13 05:37:17.935232 containerd[2611]: 2025-10-13 05:37:17.884 [INFO][5884] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.125.192/26 handle="k8s-pod-network.5da164f35e86f953074b97df41b5adc85388e79fab20826016b937c5bd7f2577" host="ci-4487.0.0-a-8f52350bac" Oct 13 05:37:17.935232 containerd[2611]: 2025-10-13 05:37:17.894 [INFO][5884] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.125.198/26] block=192.168.125.192/26 handle="k8s-pod-network.5da164f35e86f953074b97df41b5adc85388e79fab20826016b937c5bd7f2577" host="ci-4487.0.0-a-8f52350bac" Oct 13 05:37:17.935232 containerd[2611]: 2025-10-13 05:37:17.894 [INFO][5884] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.125.198/26] handle="k8s-pod-network.5da164f35e86f953074b97df41b5adc85388e79fab20826016b937c5bd7f2577" host="ci-4487.0.0-a-8f52350bac" Oct 13 05:37:17.935232 containerd[2611]: 2025-10-13 05:37:17.894 [INFO][5884] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Oct 13 05:37:17.935232 containerd[2611]: 2025-10-13 05:37:17.894 [INFO][5884] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.125.198/26] IPv6=[] ContainerID="5da164f35e86f953074b97df41b5adc85388e79fab20826016b937c5bd7f2577" HandleID="k8s-pod-network.5da164f35e86f953074b97df41b5adc85388e79fab20826016b937c5bd7f2577" Workload="ci--4487.0.0--a--8f52350bac-k8s-calico--kube--controllers--86b98dd68c--2ngs9-eth0" Oct 13 05:37:17.936122 containerd[2611]: 2025-10-13 05:37:17.897 [INFO][5873] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5da164f35e86f953074b97df41b5adc85388e79fab20826016b937c5bd7f2577" Namespace="calico-system" Pod="calico-kube-controllers-86b98dd68c-2ngs9" WorkloadEndpoint="ci--4487.0.0--a--8f52350bac-k8s-calico--kube--controllers--86b98dd68c--2ngs9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487.0.0--a--8f52350bac-k8s-calico--kube--controllers--86b98dd68c--2ngs9-eth0", GenerateName:"calico-kube-controllers-86b98dd68c-", Namespace:"calico-system", SelfLink:"", UID:"93f709b6-d5d3-4324-9bd3-4718259e1d38", ResourceVersion:"822", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 36, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"86b98dd68c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487.0.0-a-8f52350bac", ContainerID:"", Pod:"calico-kube-controllers-86b98dd68c-2ngs9", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.125.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali426d1087dc1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:37:17.936122 containerd[2611]: 2025-10-13 05:37:17.897 [INFO][5873] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.125.198/32] ContainerID="5da164f35e86f953074b97df41b5adc85388e79fab20826016b937c5bd7f2577" Namespace="calico-system" Pod="calico-kube-controllers-86b98dd68c-2ngs9" WorkloadEndpoint="ci--4487.0.0--a--8f52350bac-k8s-calico--kube--controllers--86b98dd68c--2ngs9-eth0" Oct 13 05:37:17.936122 containerd[2611]: 2025-10-13 05:37:17.897 [INFO][5873] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali426d1087dc1 ContainerID="5da164f35e86f953074b97df41b5adc85388e79fab20826016b937c5bd7f2577" Namespace="calico-system" Pod="calico-kube-controllers-86b98dd68c-2ngs9" WorkloadEndpoint="ci--4487.0.0--a--8f52350bac-k8s-calico--kube--controllers--86b98dd68c--2ngs9-eth0" Oct 13 05:37:17.936122 containerd[2611]: 2025-10-13 05:37:17.906 [INFO][5873] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5da164f35e86f953074b97df41b5adc85388e79fab20826016b937c5bd7f2577" Namespace="calico-system" Pod="calico-kube-controllers-86b98dd68c-2ngs9" 
WorkloadEndpoint="ci--4487.0.0--a--8f52350bac-k8s-calico--kube--controllers--86b98dd68c--2ngs9-eth0" Oct 13 05:37:17.936122 containerd[2611]: 2025-10-13 05:37:17.906 [INFO][5873] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5da164f35e86f953074b97df41b5adc85388e79fab20826016b937c5bd7f2577" Namespace="calico-system" Pod="calico-kube-controllers-86b98dd68c-2ngs9" WorkloadEndpoint="ci--4487.0.0--a--8f52350bac-k8s-calico--kube--controllers--86b98dd68c--2ngs9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487.0.0--a--8f52350bac-k8s-calico--kube--controllers--86b98dd68c--2ngs9-eth0", GenerateName:"calico-kube-controllers-86b98dd68c-", Namespace:"calico-system", SelfLink:"", UID:"93f709b6-d5d3-4324-9bd3-4718259e1d38", ResourceVersion:"822", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 36, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"86b98dd68c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487.0.0-a-8f52350bac", ContainerID:"5da164f35e86f953074b97df41b5adc85388e79fab20826016b937c5bd7f2577", Pod:"calico-kube-controllers-86b98dd68c-2ngs9", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.125.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali426d1087dc1", MAC:"4e:16:ca:f3:75:5a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:37:17.936122 containerd[2611]: 2025-10-13 05:37:17.931 [INFO][5873] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5da164f35e86f953074b97df41b5adc85388e79fab20826016b937c5bd7f2577" Namespace="calico-system" Pod="calico-kube-controllers-86b98dd68c-2ngs9" WorkloadEndpoint="ci--4487.0.0--a--8f52350bac-k8s-calico--kube--controllers--86b98dd68c--2ngs9-eth0" Oct 13 05:37:18.013401 kubelet[4082]: I1013 05:37:18.012915 4082 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-fff2d" podStartSLOduration=39.012898682 podStartE2EDuration="39.012898682s" podCreationTimestamp="2025-10-13 05:36:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 05:37:18.007619685 +0000 UTC m=+45.318049663" watchObservedRunningTime="2025-10-13 05:37:18.012898682 +0000 UTC m=+45.323328630" Oct 13 05:37:18.027701 containerd[2611]: time="2025-10-13T05:37:18.027594803Z" level=info msg="connecting to shim 5da164f35e86f953074b97df41b5adc85388e79fab20826016b937c5bd7f2577" address="unix:///run/containerd/s/2aef21f262243496fc2171894f199afc21cc56374800522f36c26d3e180b913f" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:37:18.085607 systemd[1]: Started cri-containerd-5da164f35e86f953074b97df41b5adc85388e79fab20826016b937c5bd7f2577.scope 
- libcontainer container 5da164f35e86f953074b97df41b5adc85388e79fab20826016b937c5bd7f2577. Oct 13 05:37:18.171996 containerd[2611]: time="2025-10-13T05:37:18.171957837Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-86b98dd68c-2ngs9,Uid:93f709b6-d5d3-4324-9bd3-4718259e1d38,Namespace:calico-system,Attempt:0,} returns sandbox id \"5da164f35e86f953074b97df41b5adc85388e79fab20826016b937c5bd7f2577\"" Oct 13 05:37:18.270358 systemd-networkd[2245]: caliee882def45d: Gained IPv6LL Oct 13 05:37:18.800964 containerd[2611]: time="2025-10-13T05:37:18.800821837Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-zlppj,Uid:3c2c8351-e421-4ee2-bd9d-c438fee1b10f,Namespace:kube-system,Attempt:0,}" Oct 13 05:37:18.957330 systemd-networkd[2245]: calief98e96d29d: Link UP Oct 13 05:37:18.958570 systemd-networkd[2245]: calief98e96d29d: Gained carrier Oct 13 05:37:18.973630 systemd-networkd[2245]: cali1cba19a7b5b: Gained IPv6LL Oct 13 05:37:18.979954 containerd[2611]: 2025-10-13 05:37:18.876 [INFO][5963] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4487.0.0--a--8f52350bac-k8s-coredns--674b8bbfcf--zlppj-eth0 coredns-674b8bbfcf- kube-system 3c2c8351-e421-4ee2-bd9d-c438fee1b10f 804 0 2025-10-13 05:36:39 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4487.0.0-a-8f52350bac coredns-674b8bbfcf-zlppj eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calief98e96d29d [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="67a50dd3aaff0812fb656e91832200064d3c2007e304cadfdad6aa4600e69581" Namespace="kube-system" Pod="coredns-674b8bbfcf-zlppj" WorkloadEndpoint="ci--4487.0.0--a--8f52350bac-k8s-coredns--674b8bbfcf--zlppj-" Oct 13 05:37:18.979954 containerd[2611]: 2025-10-13 05:37:18.877 [INFO][5963] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="67a50dd3aaff0812fb656e91832200064d3c2007e304cadfdad6aa4600e69581" Namespace="kube-system" Pod="coredns-674b8bbfcf-zlppj" WorkloadEndpoint="ci--4487.0.0--a--8f52350bac-k8s-coredns--674b8bbfcf--zlppj-eth0" Oct 13 05:37:18.979954 containerd[2611]: 2025-10-13 05:37:18.915 [INFO][5974] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="67a50dd3aaff0812fb656e91832200064d3c2007e304cadfdad6aa4600e69581" HandleID="k8s-pod-network.67a50dd3aaff0812fb656e91832200064d3c2007e304cadfdad6aa4600e69581" Workload="ci--4487.0.0--a--8f52350bac-k8s-coredns--674b8bbfcf--zlppj-eth0" Oct 13 05:37:18.979954 containerd[2611]: 2025-10-13 05:37:18.915 [INFO][5974] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="67a50dd3aaff0812fb656e91832200064d3c2007e304cadfdad6aa4600e69581" HandleID="k8s-pod-network.67a50dd3aaff0812fb656e91832200064d3c2007e304cadfdad6aa4600e69581" Workload="ci--4487.0.0--a--8f52350bac-k8s-coredns--674b8bbfcf--zlppj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002dfce0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4487.0.0-a-8f52350bac", "pod":"coredns-674b8bbfcf-zlppj", "timestamp":"2025-10-13 05:37:18.915407592 +0000 UTC"}, Hostname:"ci-4487.0.0-a-8f52350bac", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 
05:37:18.979954 containerd[2611]: 2025-10-13 05:37:18.915 [INFO][5974] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 05:37:18.979954 containerd[2611]: 2025-10-13 05:37:18.915 [INFO][5974] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Oct 13 05:37:18.979954 containerd[2611]: 2025-10-13 05:37:18.915 [INFO][5974] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4487.0.0-a-8f52350bac' Oct 13 05:37:18.979954 containerd[2611]: 2025-10-13 05:37:18.922 [INFO][5974] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.67a50dd3aaff0812fb656e91832200064d3c2007e304cadfdad6aa4600e69581" host="ci-4487.0.0-a-8f52350bac" Oct 13 05:37:18.979954 containerd[2611]: 2025-10-13 05:37:18.926 [INFO][5974] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4487.0.0-a-8f52350bac" Oct 13 05:37:18.979954 containerd[2611]: 2025-10-13 05:37:18.929 [INFO][5974] ipam/ipam.go 511: Trying affinity for 192.168.125.192/26 host="ci-4487.0.0-a-8f52350bac" Oct 13 05:37:18.979954 containerd[2611]: 2025-10-13 05:37:18.931 [INFO][5974] ipam/ipam.go 158: Attempting to load block cidr=192.168.125.192/26 host="ci-4487.0.0-a-8f52350bac" Oct 13 05:37:18.979954 containerd[2611]: 2025-10-13 05:37:18.933 [INFO][5974] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.125.192/26 host="ci-4487.0.0-a-8f52350bac" Oct 13 05:37:18.979954 containerd[2611]: 2025-10-13 05:37:18.933 [INFO][5974] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.125.192/26 handle="k8s-pod-network.67a50dd3aaff0812fb656e91832200064d3c2007e304cadfdad6aa4600e69581" host="ci-4487.0.0-a-8f52350bac" Oct 13 05:37:18.979954 containerd[2611]: 2025-10-13 05:37:18.935 [INFO][5974] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.67a50dd3aaff0812fb656e91832200064d3c2007e304cadfdad6aa4600e69581 Oct 13 05:37:18.979954 containerd[2611]: 2025-10-13 05:37:18.942 [INFO][5974] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.125.192/26 handle="k8s-pod-network.67a50dd3aaff0812fb656e91832200064d3c2007e304cadfdad6aa4600e69581" host="ci-4487.0.0-a-8f52350bac" Oct 13 05:37:18.979954 containerd[2611]: 2025-10-13 05:37:18.951 [INFO][5974] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.125.199/26] block=192.168.125.192/26 handle="k8s-pod-network.67a50dd3aaff0812fb656e91832200064d3c2007e304cadfdad6aa4600e69581" host="ci-4487.0.0-a-8f52350bac" Oct 13 05:37:18.979954 containerd[2611]: 2025-10-13 05:37:18.952 [INFO][5974] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.125.199/26] handle="k8s-pod-network.67a50dd3aaff0812fb656e91832200064d3c2007e304cadfdad6aa4600e69581" host="ci-4487.0.0-a-8f52350bac" Oct 13 05:37:18.979954 containerd[2611]: 2025-10-13 05:37:18.952 [INFO][5974] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Oct 13 05:37:18.979954 containerd[2611]: 2025-10-13 05:37:18.952 [INFO][5974] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.125.199/26] IPv6=[] ContainerID="67a50dd3aaff0812fb656e91832200064d3c2007e304cadfdad6aa4600e69581" HandleID="k8s-pod-network.67a50dd3aaff0812fb656e91832200064d3c2007e304cadfdad6aa4600e69581" Workload="ci--4487.0.0--a--8f52350bac-k8s-coredns--674b8bbfcf--zlppj-eth0" Oct 13 05:37:18.982540 containerd[2611]: 2025-10-13 05:37:18.954 [INFO][5963] cni-plugin/k8s.go 418: Populated endpoint ContainerID="67a50dd3aaff0812fb656e91832200064d3c2007e304cadfdad6aa4600e69581" Namespace="kube-system" Pod="coredns-674b8bbfcf-zlppj" WorkloadEndpoint="ci--4487.0.0--a--8f52350bac-k8s-coredns--674b8bbfcf--zlppj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487.0.0--a--8f52350bac-k8s-coredns--674b8bbfcf--zlppj-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"3c2c8351-e421-4ee2-bd9d-c438fee1b10f", ResourceVersion:"804", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 36, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487.0.0-a-8f52350bac", ContainerID:"", Pod:"coredns-674b8bbfcf-zlppj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.125.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calief98e96d29d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:37:18.982540 containerd[2611]: 2025-10-13 05:37:18.954 [INFO][5963] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.125.199/32] ContainerID="67a50dd3aaff0812fb656e91832200064d3c2007e304cadfdad6aa4600e69581" Namespace="kube-system" Pod="coredns-674b8bbfcf-zlppj" WorkloadEndpoint="ci--4487.0.0--a--8f52350bac-k8s-coredns--674b8bbfcf--zlppj-eth0" Oct 13 05:37:18.982540 containerd[2611]: 2025-10-13 05:37:18.954 [INFO][5963] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calief98e96d29d ContainerID="67a50dd3aaff0812fb656e91832200064d3c2007e304cadfdad6aa4600e69581" Namespace="kube-system" Pod="coredns-674b8bbfcf-zlppj" WorkloadEndpoint="ci--4487.0.0--a--8f52350bac-k8s-coredns--674b8bbfcf--zlppj-eth0" Oct 13 05:37:18.982540 containerd[2611]: 2025-10-13 05:37:18.959 [INFO][5963] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="67a50dd3aaff0812fb656e91832200064d3c2007e304cadfdad6aa4600e69581" Namespace="kube-system" 
Pod="coredns-674b8bbfcf-zlppj" WorkloadEndpoint="ci--4487.0.0--a--8f52350bac-k8s-coredns--674b8bbfcf--zlppj-eth0" Oct 13 05:37:18.982540 containerd[2611]: 2025-10-13 05:37:18.960 [INFO][5963] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="67a50dd3aaff0812fb656e91832200064d3c2007e304cadfdad6aa4600e69581" Namespace="kube-system" Pod="coredns-674b8bbfcf-zlppj" WorkloadEndpoint="ci--4487.0.0--a--8f52350bac-k8s-coredns--674b8bbfcf--zlppj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487.0.0--a--8f52350bac-k8s-coredns--674b8bbfcf--zlppj-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"3c2c8351-e421-4ee2-bd9d-c438fee1b10f", ResourceVersion:"804", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 36, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487.0.0-a-8f52350bac", ContainerID:"67a50dd3aaff0812fb656e91832200064d3c2007e304cadfdad6aa4600e69581", Pod:"coredns-674b8bbfcf-zlppj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.125.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calief98e96d29d", MAC:"0e:ff:7f:f2:dc:3e", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:37:18.982540 containerd[2611]: 2025-10-13 05:37:18.977 [INFO][5963] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="67a50dd3aaff0812fb656e91832200064d3c2007e304cadfdad6aa4600e69581" Namespace="kube-system" Pod="coredns-674b8bbfcf-zlppj" WorkloadEndpoint="ci--4487.0.0--a--8f52350bac-k8s-coredns--674b8bbfcf--zlppj-eth0" Oct 13 05:37:19.024106 containerd[2611]: time="2025-10-13T05:37:19.023497005Z" level=info msg="connecting to shim 67a50dd3aaff0812fb656e91832200064d3c2007e304cadfdad6aa4600e69581" address="unix:///run/containerd/s/a3240036cb404a98aa9dd61455b0b24c3335d4bc55d981bf4df2971f1076235a" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:37:19.052529 systemd[1]: Started cri-containerd-67a50dd3aaff0812fb656e91832200064d3c2007e304cadfdad6aa4600e69581.scope - libcontainer container 67a50dd3aaff0812fb656e91832200064d3c2007e304cadfdad6aa4600e69581. 
Oct 13 05:37:19.141504 containerd[2611]: time="2025-10-13T05:37:19.141451773Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-zlppj,Uid:3c2c8351-e421-4ee2-bd9d-c438fee1b10f,Namespace:kube-system,Attempt:0,} returns sandbox id \"67a50dd3aaff0812fb656e91832200064d3c2007e304cadfdad6aa4600e69581\"" Oct 13 05:37:19.156361 containerd[2611]: time="2025-10-13T05:37:19.156329810Z" level=info msg="CreateContainer within sandbox \"67a50dd3aaff0812fb656e91832200064d3c2007e304cadfdad6aa4600e69581\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 13 05:37:19.183009 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1281417057.mount: Deactivated successfully. Oct 13 05:37:19.190072 containerd[2611]: time="2025-10-13T05:37:19.189950340Z" level=info msg="Container ac571e47357302680d00d37280262a3fff8e71d08667447a4f5216d208de32c8: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:37:19.217025 containerd[2611]: time="2025-10-13T05:37:19.217000663Z" level=info msg="CreateContainer within sandbox \"67a50dd3aaff0812fb656e91832200064d3c2007e304cadfdad6aa4600e69581\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ac571e47357302680d00d37280262a3fff8e71d08667447a4f5216d208de32c8\"" Oct 13 05:37:19.217623 containerd[2611]: time="2025-10-13T05:37:19.217600741Z" level=info msg="StartContainer for \"ac571e47357302680d00d37280262a3fff8e71d08667447a4f5216d208de32c8\"" Oct 13 05:37:19.218647 containerd[2611]: time="2025-10-13T05:37:19.218570071Z" level=info msg="connecting to shim ac571e47357302680d00d37280262a3fff8e71d08667447a4f5216d208de32c8" address="unix:///run/containerd/s/a3240036cb404a98aa9dd61455b0b24c3335d4bc55d981bf4df2971f1076235a" protocol=ttrpc version=3 Oct 13 05:37:19.238521 systemd[1]: Started cri-containerd-ac571e47357302680d00d37280262a3fff8e71d08667447a4f5216d208de32c8.scope - libcontainer container ac571e47357302680d00d37280262a3fff8e71d08667447a4f5216d208de32c8. 
Oct 13 05:37:19.280721 containerd[2611]: time="2025-10-13T05:37:19.280636171Z" level=info msg="StartContainer for \"ac571e47357302680d00d37280262a3fff8e71d08667447a4f5216d208de32c8\" returns successfully" Oct 13 05:37:19.716602 containerd[2611]: time="2025-10-13T05:37:19.716564041Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:37:19.719369 containerd[2611]: time="2025-10-13T05:37:19.719329258Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=66357526" Oct 13 05:37:19.725189 containerd[2611]: time="2025-10-13T05:37:19.725134537Z" level=info msg="ImageCreate event name:\"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:37:19.734596 containerd[2611]: time="2025-10-13T05:37:19.734553716Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:37:19.735187 containerd[2611]: time="2025-10-13T05:37:19.735035163Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"66357372\" in 3.020871472s" Oct 13 05:37:19.735293 containerd[2611]: time="2025-10-13T05:37:19.735280523Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\"" Oct 13 05:37:19.738014 containerd[2611]: time="2025-10-13T05:37:19.737988978Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Oct 13 05:37:19.743125 containerd[2611]: time="2025-10-13T05:37:19.743091817Z" level=info msg="CreateContainer within sandbox \"b2d064c9103840d337cf684152124b5fa6a89262ff5a0f6b9d65849496ee30b1\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Oct 13 05:37:19.763336 containerd[2611]: time="2025-10-13T05:37:19.763312310Z" level=info msg="Container d8dec742f1ce40706b4e3932ddd2bf9dd29b606c74e96d6c9bec4960ffda6f88: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:37:19.780424 containerd[2611]: time="2025-10-13T05:37:19.780395301Z" level=info msg="CreateContainer within sandbox \"b2d064c9103840d337cf684152124b5fa6a89262ff5a0f6b9d65849496ee30b1\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"d8dec742f1ce40706b4e3932ddd2bf9dd29b606c74e96d6c9bec4960ffda6f88\"" Oct 13 05:37:19.780976 containerd[2611]: time="2025-10-13T05:37:19.780949694Z" level=info msg="StartContainer for \"d8dec742f1ce40706b4e3932ddd2bf9dd29b606c74e96d6c9bec4960ffda6f88\"" Oct 13 05:37:19.782349 containerd[2611]: time="2025-10-13T05:37:19.782318970Z" level=info msg="connecting to shim d8dec742f1ce40706b4e3932ddd2bf9dd29b606c74e96d6c9bec4960ffda6f88" address="unix:///run/containerd/s/9353bce66a6f82a5ee1ea1db6117b919ff56d7e202a8c1e8fc93709b06717d48" protocol=ttrpc version=3 Oct 13 05:37:19.797384 systemd[1]: Started cri-containerd-d8dec742f1ce40706b4e3932ddd2bf9dd29b606c74e96d6c9bec4960ffda6f88.scope - libcontainer container d8dec742f1ce40706b4e3932ddd2bf9dd29b606c74e96d6c9bec4960ffda6f88. 
Oct 13 05:37:19.800849 containerd[2611]: time="2025-10-13T05:37:19.800822858Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zl54c,Uid:bcbf5018-3ffe-4d13-9b7d-1211daeb0d5f,Namespace:calico-system,Attempt:0,}" Oct 13 05:37:19.805373 systemd-networkd[2245]: cali426d1087dc1: Gained IPv6LL Oct 13 05:37:19.872121 containerd[2611]: time="2025-10-13T05:37:19.872091878Z" level=info msg="StartContainer for \"d8dec742f1ce40706b4e3932ddd2bf9dd29b606c74e96d6c9bec4960ffda6f88\" returns successfully" Oct 13 05:37:19.917016 systemd-networkd[2245]: cali6a51063b508: Link UP Oct 13 05:37:19.917836 systemd-networkd[2245]: cali6a51063b508: Gained carrier Oct 13 05:37:19.942214 containerd[2611]: 2025-10-13 05:37:19.848 [INFO][6099] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4487.0.0--a--8f52350bac-k8s-csi--node--driver--zl54c-eth0 csi-node-driver- calico-system bcbf5018-3ffe-4d13-9b7d-1211daeb0d5f 699 0 2025-10-13 05:36:51 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6c96d95cc7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4487.0.0-a-8f52350bac csi-node-driver-zl54c eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali6a51063b508 [] [] }} ContainerID="f19182dc2417b742e2e159ae3f7a6e2534f843462193f1d8cd9b368d2b510b87" Namespace="calico-system" Pod="csi-node-driver-zl54c" WorkloadEndpoint="ci--4487.0.0--a--8f52350bac-k8s-csi--node--driver--zl54c-" Oct 13 05:37:19.942214 containerd[2611]: 2025-10-13 05:37:19.848 [INFO][6099] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f19182dc2417b742e2e159ae3f7a6e2534f843462193f1d8cd9b368d2b510b87" Namespace="calico-system" Pod="csi-node-driver-zl54c" WorkloadEndpoint="ci--4487.0.0--a--8f52350bac-k8s-csi--node--driver--zl54c-eth0" Oct 13 05:37:19.942214 containerd[2611]: 2025-10-13 05:37:19.877 [INFO][6111] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f19182dc2417b742e2e159ae3f7a6e2534f843462193f1d8cd9b368d2b510b87" HandleID="k8s-pod-network.f19182dc2417b742e2e159ae3f7a6e2534f843462193f1d8cd9b368d2b510b87" Workload="ci--4487.0.0--a--8f52350bac-k8s-csi--node--driver--zl54c-eth0" Oct 13 05:37:19.942214 containerd[2611]: 2025-10-13 05:37:19.877 [INFO][6111] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f19182dc2417b742e2e159ae3f7a6e2534f843462193f1d8cd9b368d2b510b87" HandleID="k8s-pod-network.f19182dc2417b742e2e159ae3f7a6e2534f843462193f1d8cd9b368d2b510b87" Workload="ci--4487.0.0--a--8f52350bac-k8s-csi--node--driver--zl54c-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5640), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4487.0.0-a-8f52350bac", "pod":"csi-node-driver-zl54c", "timestamp":"2025-10-13 05:37:19.877433587 +0000 UTC"}, Hostname:"ci-4487.0.0-a-8f52350bac", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 05:37:19.942214 containerd[2611]: 2025-10-13 05:37:19.877 [INFO][6111] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 05:37:19.942214 containerd[2611]: 2025-10-13 05:37:19.877 [INFO][6111] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Oct 13 05:37:19.942214 containerd[2611]: 2025-10-13 05:37:19.877 [INFO][6111] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4487.0.0-a-8f52350bac' Oct 13 05:37:19.942214 containerd[2611]: 2025-10-13 05:37:19.884 [INFO][6111] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f19182dc2417b742e2e159ae3f7a6e2534f843462193f1d8cd9b368d2b510b87" host="ci-4487.0.0-a-8f52350bac" Oct 13 05:37:19.942214 containerd[2611]: 2025-10-13 05:37:19.888 [INFO][6111] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4487.0.0-a-8f52350bac" Oct 13 05:37:19.942214 containerd[2611]: 2025-10-13 05:37:19.891 [INFO][6111] ipam/ipam.go 511: Trying affinity for 192.168.125.192/26 host="ci-4487.0.0-a-8f52350bac" Oct 13 05:37:19.942214 containerd[2611]: 2025-10-13 05:37:19.893 [INFO][6111] ipam/ipam.go 158: Attempting to load block cidr=192.168.125.192/26 host="ci-4487.0.0-a-8f52350bac" Oct 13 05:37:19.942214 containerd[2611]: 2025-10-13 05:37:19.895 [INFO][6111] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.125.192/26 host="ci-4487.0.0-a-8f52350bac" Oct 13 05:37:19.942214 containerd[2611]: 2025-10-13 05:37:19.895 [INFO][6111] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.125.192/26 handle="k8s-pod-network.f19182dc2417b742e2e159ae3f7a6e2534f843462193f1d8cd9b368d2b510b87" host="ci-4487.0.0-a-8f52350bac" Oct 13 05:37:19.942214 containerd[2611]: 2025-10-13 05:37:19.896 [INFO][6111] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.f19182dc2417b742e2e159ae3f7a6e2534f843462193f1d8cd9b368d2b510b87 Oct 13 05:37:19.942214 containerd[2611]: 2025-10-13 05:37:19.901 [INFO][6111] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.125.192/26 handle="k8s-pod-network.f19182dc2417b742e2e159ae3f7a6e2534f843462193f1d8cd9b368d2b510b87" host="ci-4487.0.0-a-8f52350bac" Oct 13 05:37:19.942214 containerd[2611]: 2025-10-13 05:37:19.912 [INFO][6111] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.125.200/26] block=192.168.125.192/26 handle="k8s-pod-network.f19182dc2417b742e2e159ae3f7a6e2534f843462193f1d8cd9b368d2b510b87" host="ci-4487.0.0-a-8f52350bac" Oct 13 05:37:19.942214 containerd[2611]: 2025-10-13 05:37:19.912 [INFO][6111] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.125.200/26] handle="k8s-pod-network.f19182dc2417b742e2e159ae3f7a6e2534f843462193f1d8cd9b368d2b510b87" host="ci-4487.0.0-a-8f52350bac" Oct 13 05:37:19.942214 containerd[2611]: 2025-10-13 05:37:19.912 [INFO][6111] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Oct 13 05:37:19.942214 containerd[2611]: 2025-10-13 05:37:19.912 [INFO][6111] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.125.200/26] IPv6=[] ContainerID="f19182dc2417b742e2e159ae3f7a6e2534f843462193f1d8cd9b368d2b510b87" HandleID="k8s-pod-network.f19182dc2417b742e2e159ae3f7a6e2534f843462193f1d8cd9b368d2b510b87" Workload="ci--4487.0.0--a--8f52350bac-k8s-csi--node--driver--zl54c-eth0" Oct 13 05:37:19.942945 containerd[2611]: 2025-10-13 05:37:19.914 [INFO][6099] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f19182dc2417b742e2e159ae3f7a6e2534f843462193f1d8cd9b368d2b510b87" Namespace="calico-system" Pod="csi-node-driver-zl54c" WorkloadEndpoint="ci--4487.0.0--a--8f52350bac-k8s-csi--node--driver--zl54c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487.0.0--a--8f52350bac-k8s-csi--node--driver--zl54c-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"bcbf5018-3ffe-4d13-9b7d-1211daeb0d5f", ResourceVersion:"699", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 36, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487.0.0-a-8f52350bac", ContainerID:"", Pod:"csi-node-driver-zl54c", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.125.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali6a51063b508", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:37:19.942945 containerd[2611]: 2025-10-13 05:37:19.914 [INFO][6099] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.125.200/32] ContainerID="f19182dc2417b742e2e159ae3f7a6e2534f843462193f1d8cd9b368d2b510b87" Namespace="calico-system" Pod="csi-node-driver-zl54c" WorkloadEndpoint="ci--4487.0.0--a--8f52350bac-k8s-csi--node--driver--zl54c-eth0" Oct 13 05:37:19.942945 containerd[2611]: 2025-10-13 05:37:19.914 [INFO][6099] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6a51063b508 ContainerID="f19182dc2417b742e2e159ae3f7a6e2534f843462193f1d8cd9b368d2b510b87" Namespace="calico-system" Pod="csi-node-driver-zl54c" WorkloadEndpoint="ci--4487.0.0--a--8f52350bac-k8s-csi--node--driver--zl54c-eth0" Oct 13 05:37:19.942945 containerd[2611]: 2025-10-13 05:37:19.919 [INFO][6099] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f19182dc2417b742e2e159ae3f7a6e2534f843462193f1d8cd9b368d2b510b87" Namespace="calico-system" Pod="csi-node-driver-zl54c" WorkloadEndpoint="ci--4487.0.0--a--8f52350bac-k8s-csi--node--driver--zl54c-eth0" Oct 13 05:37:19.942945 containerd[2611]: 2025-10-13 05:37:19.922 [INFO][6099] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="f19182dc2417b742e2e159ae3f7a6e2534f843462193f1d8cd9b368d2b510b87" Namespace="calico-system" Pod="csi-node-driver-zl54c" WorkloadEndpoint="ci--4487.0.0--a--8f52350bac-k8s-csi--node--driver--zl54c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487.0.0--a--8f52350bac-k8s-csi--node--driver--zl54c-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"bcbf5018-3ffe-4d13-9b7d-1211daeb0d5f", ResourceVersion:"699", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 36, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487.0.0-a-8f52350bac", ContainerID:"f19182dc2417b742e2e159ae3f7a6e2534f843462193f1d8cd9b368d2b510b87", Pod:"csi-node-driver-zl54c", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.125.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali6a51063b508", MAC:"02:d9:3c:4a:ff:e4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:37:19.942945 containerd[2611]: 2025-10-13 05:37:19.940 [INFO][6099] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f19182dc2417b742e2e159ae3f7a6e2534f843462193f1d8cd9b368d2b510b87" Namespace="calico-system" Pod="csi-node-driver-zl54c" WorkloadEndpoint="ci--4487.0.0--a--8f52350bac-k8s-csi--node--driver--zl54c-eth0" Oct 13 05:37:19.996270 containerd[2611]: time="2025-10-13T05:37:19.995482608Z" level=info msg="connecting to shim f19182dc2417b742e2e159ae3f7a6e2534f843462193f1d8cd9b368d2b510b87" address="unix:///run/containerd/s/dc53e1b8eb6b710a7ac2496b333df1dd8fa91a80647b6685df6daf4ee9021e4a" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:37:20.027393 systemd[1]: Started cri-containerd-f19182dc2417b742e2e159ae3f7a6e2534f843462193f1d8cd9b368d2b510b87.scope - libcontainer container f19182dc2417b742e2e159ae3f7a6e2534f843462193f1d8cd9b368d2b510b87. 
Oct 13 05:37:20.045879 kubelet[4082]: I1013 05:37:20.045763 4082 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-zlppj" podStartSLOduration=41.045746579 podStartE2EDuration="41.045746579s" podCreationTimestamp="2025-10-13 05:36:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 05:37:20.023016805 +0000 UTC m=+47.333446760" watchObservedRunningTime="2025-10-13 05:37:20.045746579 +0000 UTC m=+47.356176541" Oct 13 05:37:20.063846 kubelet[4082]: I1013 05:37:20.063801 4082 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-54d579b49d-rjkcm" podStartSLOduration=25.697913813 podStartE2EDuration="29.06378557s" podCreationTimestamp="2025-10-13 05:36:51 +0000 UTC" firstStartedPulling="2025-10-13 05:37:16.370572877 +0000 UTC m=+43.681002832" lastFinishedPulling="2025-10-13 05:37:19.736444643 +0000 UTC m=+47.046874589" observedRunningTime="2025-10-13 05:37:20.045419767 +0000 UTC m=+47.355849720" watchObservedRunningTime="2025-10-13 05:37:20.06378557 +0000 UTC m=+47.374215520" Oct 13 05:37:20.095545 containerd[2611]: time="2025-10-13T05:37:20.095515861Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zl54c,Uid:bcbf5018-3ffe-4d13-9b7d-1211daeb0d5f,Namespace:calico-system,Attempt:0,} returns sandbox id \"f19182dc2417b742e2e159ae3f7a6e2534f843462193f1d8cd9b368d2b510b87\"" Oct 13 05:37:20.317338 systemd-networkd[2245]: calief98e96d29d: Gained IPv6LL Oct 13 05:37:21.103670 containerd[2611]: time="2025-10-13T05:37:21.103606319Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d8dec742f1ce40706b4e3932ddd2bf9dd29b606c74e96d6c9bec4960ffda6f88\" id:\"44e42254bc17b3df3671123e73cdb22195a59470dcf2eaacd5782c75f5fb70f5\" pid:6204 exit_status:1 exited_at:{seconds:1760333841 nanos:102932179}" Oct 13 05:37:21.277452 systemd-networkd[2245]: cali6a51063b508: Gained IPv6LL Oct 13 05:37:22.118756 containerd[2611]: time="2025-10-13T05:37:22.118717765Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d8dec742f1ce40706b4e3932ddd2bf9dd29b606c74e96d6c9bec4960ffda6f88\" id:\"2f1914d906cc09c6f3a362d15dcadea5e4d16e135e766fa8b541dcfec497e14e\" pid:6234 exit_status:1 exited_at:{seconds:1760333842 nanos:118440666}" Oct 13 05:37:22.368584 containerd[2611]: time="2025-10-13T05:37:22.368541065Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:37:22.371095 containerd[2611]: time="2025-10-13T05:37:22.370896216Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=47333864" Oct 13 05:37:22.375308 containerd[2611]: time="2025-10-13T05:37:22.375246184Z" level=info msg="ImageCreate event name:\"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:37:22.379807 containerd[2611]: time="2025-10-13T05:37:22.379651583Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:37:22.380136 containerd[2611]: time="2025-10-13T05:37:22.380111522Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id 
\"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 2.641984213s" Oct 13 05:37:22.380178 containerd[2611]: time="2025-10-13T05:37:22.380144920Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Oct 13 05:37:22.381459 containerd[2611]: time="2025-10-13T05:37:22.381415623Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Oct 13 05:37:22.414250 containerd[2611]: time="2025-10-13T05:37:22.414194046Z" level=info msg="CreateContainer within sandbox \"f703f24b22636d1903d514a8880df743e269e0ad83c1ef5591b83b7ebf347a2b\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Oct 13 05:37:22.435836 containerd[2611]: time="2025-10-13T05:37:22.435801615Z" level=info msg="Container 3a4f5164142d0dcf40f23bafc2d198b2769480909f49803b8d40765dd5330c13: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:37:22.451823 containerd[2611]: time="2025-10-13T05:37:22.451797823Z" level=info msg="CreateContainer within sandbox \"f703f24b22636d1903d514a8880df743e269e0ad83c1ef5591b83b7ebf347a2b\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"3a4f5164142d0dcf40f23bafc2d198b2769480909f49803b8d40765dd5330c13\"" Oct 13 05:37:22.453221 containerd[2611]: time="2025-10-13T05:37:22.452193117Z" level=info msg="StartContainer for \"3a4f5164142d0dcf40f23bafc2d198b2769480909f49803b8d40765dd5330c13\"" Oct 13 05:37:22.453474 containerd[2611]: time="2025-10-13T05:37:22.453440830Z" level=info msg="connecting to shim 3a4f5164142d0dcf40f23bafc2d198b2769480909f49803b8d40765dd5330c13" address="unix:///run/containerd/s/cec42b9484f92184fba59f92caa04552daa53f341f3ca8db706aa7b6e6b470c2" protocol=ttrpc version=3 Oct 13 05:37:22.478339 systemd[1]: Started cri-containerd-3a4f5164142d0dcf40f23bafc2d198b2769480909f49803b8d40765dd5330c13.scope - libcontainer container 3a4f5164142d0dcf40f23bafc2d198b2769480909f49803b8d40765dd5330c13. 
Oct 13 05:37:22.534196 containerd[2611]: time="2025-10-13T05:37:22.534172265Z" level=info msg="StartContainer for \"3a4f5164142d0dcf40f23bafc2d198b2769480909f49803b8d40765dd5330c13\" returns successfully" Oct 13 05:37:22.772854 containerd[2611]: time="2025-10-13T05:37:22.772270658Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:37:22.780363 containerd[2611]: time="2025-10-13T05:37:22.780338682Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Oct 13 05:37:22.782109 containerd[2611]: time="2025-10-13T05:37:22.782087931Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 400.647072ms" Oct 13 05:37:22.782194 containerd[2611]: time="2025-10-13T05:37:22.782183957Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Oct 13 05:37:22.783824 containerd[2611]: time="2025-10-13T05:37:22.783801144Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Oct 13 05:37:22.789921 containerd[2611]: time="2025-10-13T05:37:22.789900490Z" level=info msg="CreateContainer within sandbox \"731211d4c0d095f9c66b310345bc0ce9af401662eae879583cf828b61c25e9e3\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Oct 13 05:37:22.812352 containerd[2611]: time="2025-10-13T05:37:22.812327738Z" level=info msg="Container 3d24ae227f614da2978b2c6566c3714be692baba5a9d682ad1ece84e2a224ae8: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:37:22.829639 containerd[2611]: time="2025-10-13T05:37:22.829600960Z" level=info msg="CreateContainer within sandbox \"731211d4c0d095f9c66b310345bc0ce9af401662eae879583cf828b61c25e9e3\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"3d24ae227f614da2978b2c6566c3714be692baba5a9d682ad1ece84e2a224ae8\"" Oct 13 05:37:22.830992 containerd[2611]: time="2025-10-13T05:37:22.830791141Z" level=info msg="StartContainer for \"3d24ae227f614da2978b2c6566c3714be692baba5a9d682ad1ece84e2a224ae8\"" Oct 13 05:37:22.832728 containerd[2611]: time="2025-10-13T05:37:22.832704459Z" level=info msg="connecting to shim 3d24ae227f614da2978b2c6566c3714be692baba5a9d682ad1ece84e2a224ae8" address="unix:///run/containerd/s/e24b637c86f114fd028b7d4bd37acd67d8470510b0e50178c6eff3cf583d0321" protocol=ttrpc version=3 Oct 13 05:37:22.857346 systemd[1]: Started cri-containerd-3d24ae227f614da2978b2c6566c3714be692baba5a9d682ad1ece84e2a224ae8.scope - libcontainer container 3d24ae227f614da2978b2c6566c3714be692baba5a9d682ad1ece84e2a224ae8. 
Oct 13 05:37:22.917080 containerd[2611]: time="2025-10-13T05:37:22.917045006Z" level=info msg="StartContainer for \"3d24ae227f614da2978b2c6566c3714be692baba5a9d682ad1ece84e2a224ae8\" returns successfully" Oct 13 05:37:23.062250 kubelet[4082]: I1013 05:37:23.062109 4082 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-679f59f4c7-nzrgh" podStartSLOduration=29.065009533 podStartE2EDuration="35.062092476s" podCreationTimestamp="2025-10-13 05:36:48 +0000 UTC" firstStartedPulling="2025-10-13 05:37:16.384217412 +0000 UTC m=+43.694647363" lastFinishedPulling="2025-10-13 05:37:22.381300363 +0000 UTC m=+49.691730306" observedRunningTime="2025-10-13 05:37:23.042767685 +0000 UTC m=+50.353197641" watchObservedRunningTime="2025-10-13 05:37:23.062092476 +0000 UTC m=+50.372522440" Oct 13 05:37:23.063477 kubelet[4082]: I1013 05:37:23.062837 4082 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-679f59f4c7-gm94f" podStartSLOduration=29.413796505 podStartE2EDuration="35.062814749s" podCreationTimestamp="2025-10-13 05:36:48 +0000 UTC" firstStartedPulling="2025-10-13 05:37:17.133903839 +0000 UTC m=+44.444333795" lastFinishedPulling="2025-10-13 05:37:22.782922086 +0000 UTC m=+50.093352039" observedRunningTime="2025-10-13 05:37:23.062347828 +0000 UTC m=+50.372777785" watchObservedRunningTime="2025-10-13 05:37:23.062814749 +0000 UTC m=+50.373244706" Oct 13 05:37:26.484027 containerd[2611]: time="2025-10-13T05:37:26.483981310Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:37:26.488696 containerd[2611]: time="2025-10-13T05:37:26.488622406Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=51277746" Oct 13 05:37:26.497252 containerd[2611]: time="2025-10-13T05:37:26.496585691Z" level=info msg="ImageCreate event name:\"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:37:26.502806 containerd[2611]: time="2025-10-13T05:37:26.502768977Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:37:26.503123 containerd[2611]: time="2025-10-13T05:37:26.503082076Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"52770417\" in 3.719076235s" Oct 13 05:37:26.503123 containerd[2611]: time="2025-10-13T05:37:26.503112323Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\"" Oct 13 05:37:26.505003 containerd[2611]: time="2025-10-13T05:37:26.504982534Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Oct 13 05:37:26.530735 containerd[2611]: time="2025-10-13T05:37:26.530711915Z" level=info msg="CreateContainer within sandbox \"5da164f35e86f953074b97df41b5adc85388e79fab20826016b937c5bd7f2577\" for container 
&ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Oct 13 05:37:26.549397 containerd[2611]: time="2025-10-13T05:37:26.549373355Z" level=info msg="Container 135691c9cb1d686383fd0cc3dc3965a39707ad8c7a3044ca73e46a162bee55ac: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:37:26.552555 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2969976850.mount: Deactivated successfully. Oct 13 05:37:26.575781 containerd[2611]: time="2025-10-13T05:37:26.575755780Z" level=info msg="CreateContainer within sandbox \"5da164f35e86f953074b97df41b5adc85388e79fab20826016b937c5bd7f2577\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"135691c9cb1d686383fd0cc3dc3965a39707ad8c7a3044ca73e46a162bee55ac\"" Oct 13 05:37:26.576450 containerd[2611]: time="2025-10-13T05:37:26.576359595Z" level=info msg="StartContainer for \"135691c9cb1d686383fd0cc3dc3965a39707ad8c7a3044ca73e46a162bee55ac\"" Oct 13 05:37:26.577549 containerd[2611]: time="2025-10-13T05:37:26.577516372Z" level=info msg="connecting to shim 135691c9cb1d686383fd0cc3dc3965a39707ad8c7a3044ca73e46a162bee55ac" address="unix:///run/containerd/s/2aef21f262243496fc2171894f199afc21cc56374800522f36c26d3e180b913f" protocol=ttrpc version=3 Oct 13 05:37:26.606343 systemd[1]: Started cri-containerd-135691c9cb1d686383fd0cc3dc3965a39707ad8c7a3044ca73e46a162bee55ac.scope - libcontainer container 135691c9cb1d686383fd0cc3dc3965a39707ad8c7a3044ca73e46a162bee55ac. Oct 13 05:37:26.651687 containerd[2611]: time="2025-10-13T05:37:26.651663959Z" level=info msg="StartContainer for \"135691c9cb1d686383fd0cc3dc3965a39707ad8c7a3044ca73e46a162bee55ac\" returns successfully" Oct 13 05:37:27.088536 containerd[2611]: time="2025-10-13T05:37:27.088456401Z" level=info msg="TaskExit event in podsandbox handler container_id:\"135691c9cb1d686383fd0cc3dc3965a39707ad8c7a3044ca73e46a162bee55ac\" id:\"d05e2b7e7be7020be43aecff226d6d9f2571ba6f66d793563c9c4be0f6ef6853\" pid:6397 exited_at:{seconds:1760333847 nanos:87409668}" Oct 13 05:37:27.103677 kubelet[4082]: I1013 05:37:27.103615 4082 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-86b98dd68c-2ngs9" podStartSLOduration=27.774234368 podStartE2EDuration="36.103596676s" podCreationTimestamp="2025-10-13 05:36:51 +0000 UTC" firstStartedPulling="2025-10-13 05:37:18.17463408 +0000 UTC m=+45.485064039" lastFinishedPulling="2025-10-13 05:37:26.503996401 +0000 UTC m=+53.814426347" observedRunningTime="2025-10-13 05:37:27.058982007 +0000 UTC m=+54.369411958" watchObservedRunningTime="2025-10-13 05:37:27.103596676 +0000 UTC m=+54.414026628" Oct 13 05:37:27.831725 containerd[2611]: time="2025-10-13T05:37:27.831680378Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:37:27.834225 containerd[2611]: time="2025-10-13T05:37:27.834099411Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8760527" Oct 13 05:37:27.870035 containerd[2611]: time="2025-10-13T05:37:27.869151606Z" level=info msg="ImageCreate event name:\"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:37:27.873155 containerd[2611]: time="2025-10-13T05:37:27.873127671Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:37:27.873768 containerd[2611]: time="2025-10-13T05:37:27.873744366Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"10253230\" in 1.368624743s" Oct 13 05:37:27.873847 containerd[2611]: time="2025-10-13T05:37:27.873835918Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\"" Oct 13 05:37:27.880427 containerd[2611]: time="2025-10-13T05:37:27.880400446Z" level=info msg="CreateContainer within sandbox \"f19182dc2417b742e2e159ae3f7a6e2534f843462193f1d8cd9b368d2b510b87\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Oct 13 05:37:27.901225 containerd[2611]: time="2025-10-13T05:37:27.899323397Z" level=info msg="Container 34be7a74f5fb2d77b8faced60684703f21d4431cd055c416710296ee8fcc1a3d: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:37:27.970582 containerd[2611]: time="2025-10-13T05:37:27.970555819Z" level=info msg="CreateContainer within sandbox \"f19182dc2417b742e2e159ae3f7a6e2534f843462193f1d8cd9b368d2b510b87\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"34be7a74f5fb2d77b8faced60684703f21d4431cd055c416710296ee8fcc1a3d\"" Oct 13 05:37:27.970957 containerd[2611]: time="2025-10-13T05:37:27.970918447Z" level=info msg="StartContainer for \"34be7a74f5fb2d77b8faced60684703f21d4431cd055c416710296ee8fcc1a3d\"" Oct 13 05:37:27.972603 containerd[2611]: time="2025-10-13T05:37:27.972484665Z" level=info msg="connecting to shim 34be7a74f5fb2d77b8faced60684703f21d4431cd055c416710296ee8fcc1a3d" address="unix:///run/containerd/s/dc53e1b8eb6b710a7ac2496b333df1dd8fa91a80647b6685df6daf4ee9021e4a" protocol=ttrpc version=3 Oct 13 05:37:27.996372 systemd[1]: Started cri-containerd-34be7a74f5fb2d77b8faced60684703f21d4431cd055c416710296ee8fcc1a3d.scope - libcontainer container 34be7a74f5fb2d77b8faced60684703f21d4431cd055c416710296ee8fcc1a3d. 
Oct 13 05:37:28.026831 containerd[2611]: time="2025-10-13T05:37:28.026807979Z" level=info msg="StartContainer for \"34be7a74f5fb2d77b8faced60684703f21d4431cd055c416710296ee8fcc1a3d\" returns successfully" Oct 13 05:37:28.027758 containerd[2611]: time="2025-10-13T05:37:28.027738919Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\"" Oct 13 05:37:29.663903 containerd[2611]: time="2025-10-13T05:37:29.663854227Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:37:29.669291 containerd[2611]: time="2025-10-13T05:37:29.669257954Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=14698542" Oct 13 05:37:29.673414 containerd[2611]: time="2025-10-13T05:37:29.673364556Z" level=info msg="ImageCreate event name:\"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:37:29.677794 containerd[2611]: time="2025-10-13T05:37:29.677733869Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:37:29.678315 containerd[2611]: time="2025-10-13T05:37:29.678287778Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"16191197\" in 1.650393904s" Oct 13 05:37:29.678358 containerd[2611]: time="2025-10-13T05:37:29.678320360Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\"" Oct 13 05:37:29.686628 containerd[2611]: time="2025-10-13T05:37:29.686596375Z" level=info msg="CreateContainer within sandbox \"f19182dc2417b742e2e159ae3f7a6e2534f843462193f1d8cd9b368d2b510b87\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Oct 13 05:37:29.709824 containerd[2611]: time="2025-10-13T05:37:29.707286719Z" level=info msg="Container a9972b4b1bb1a707e3a0d1453c2b78ec1ef3ca2eb34ad76f5e73a3aa98c3d339: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:37:29.714606 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount205183086.mount: Deactivated successfully. 
Oct 13 05:37:29.734395 containerd[2611]: time="2025-10-13T05:37:29.734368297Z" level=info msg="CreateContainer within sandbox \"f19182dc2417b742e2e159ae3f7a6e2534f843462193f1d8cd9b368d2b510b87\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"a9972b4b1bb1a707e3a0d1453c2b78ec1ef3ca2eb34ad76f5e73a3aa98c3d339\"" Oct 13 05:37:29.735273 containerd[2611]: time="2025-10-13T05:37:29.735250473Z" level=info msg="StartContainer for \"a9972b4b1bb1a707e3a0d1453c2b78ec1ef3ca2eb34ad76f5e73a3aa98c3d339\"" Oct 13 05:37:29.737720 containerd[2611]: time="2025-10-13T05:37:29.737671436Z" level=info msg="connecting to shim a9972b4b1bb1a707e3a0d1453c2b78ec1ef3ca2eb34ad76f5e73a3aa98c3d339" address="unix:///run/containerd/s/dc53e1b8eb6b710a7ac2496b333df1dd8fa91a80647b6685df6daf4ee9021e4a" protocol=ttrpc version=3 Oct 13 05:37:29.763348 systemd[1]: Started cri-containerd-a9972b4b1bb1a707e3a0d1453c2b78ec1ef3ca2eb34ad76f5e73a3aa98c3d339.scope - libcontainer container a9972b4b1bb1a707e3a0d1453c2b78ec1ef3ca2eb34ad76f5e73a3aa98c3d339. Oct 13 05:37:29.799620 containerd[2611]: time="2025-10-13T05:37:29.799597309Z" level=info msg="StartContainer for \"a9972b4b1bb1a707e3a0d1453c2b78ec1ef3ca2eb34ad76f5e73a3aa98c3d339\" returns successfully" Oct 13 05:37:29.867788 kubelet[4082]: I1013 05:37:29.867760 4082 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Oct 13 05:37:29.868094 kubelet[4082]: I1013 05:37:29.867796 4082 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Oct 13 05:37:31.077758 containerd[2611]: time="2025-10-13T05:37:31.077712951Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d8dec742f1ce40706b4e3932ddd2bf9dd29b606c74e96d6c9bec4960ffda6f88\" id:\"11c3c2f2b980cd7eeab241d870de49d7f3f9b31da8dfd8c170edb69051790f01\" pid:6490 exited_at:{seconds:1760333851 nanos:77324253}" Oct 13 05:37:42.025171 containerd[2611]: time="2025-10-13T05:37:42.024946513Z" level=info msg="TaskExit event in podsandbox handler container_id:\"07cc458a2b30a82d6d694328048f9dbbaae0f45f06359fbee2a9995a06610ebf\" id:\"6727de02cdf41131e0e48fd76893e3aab35ac1766109e5f69b0aeda3c16e1b59\" pid:6527 exited_at:{seconds:1760333862 nanos:24646974}" Oct 13 05:37:42.047218 kubelet[4082]: I1013 05:37:42.046991 4082 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-zl54c" podStartSLOduration=41.464751595 podStartE2EDuration="51.046974648s" podCreationTimestamp="2025-10-13 05:36:51 +0000 UTC" firstStartedPulling="2025-10-13 05:37:20.096725098 +0000 UTC m=+47.407155049" lastFinishedPulling="2025-10-13 05:37:29.678948155 +0000 UTC m=+56.989378102" observedRunningTime="2025-10-13 05:37:30.066058236 +0000 UTC m=+57.376488189" watchObservedRunningTime="2025-10-13 05:37:42.046974648 +0000 UTC m=+69.357404603" Oct 13 05:37:42.095965 containerd[2611]: time="2025-10-13T05:37:42.095883795Z" level=info msg="TaskExit event in podsandbox handler container_id:\"07cc458a2b30a82d6d694328048f9dbbaae0f45f06359fbee2a9995a06610ebf\" id:\"4028257e626d11f25d100bb15490b49b70f5588a99d859c5cc12c242b9793511\" pid:6550 exited_at:{seconds:1760333862 nanos:95666439}" Oct 13 05:37:52.221051 containerd[2611]: time="2025-10-13T05:37:52.220951107Z" level=info msg="TaskExit event in podsandbox handler 
container_id:\"d8dec742f1ce40706b4e3932ddd2bf9dd29b606c74e96d6c9bec4960ffda6f88\" id:\"ef92e2a64a093f3cdff0bbd507d748bd80fa0b78acfef22d3d9f7c8e67d4f25b\" pid:6574 exited_at:{seconds:1760333872 nanos:220307346}" Oct 13 05:37:57.079365 containerd[2611]: time="2025-10-13T05:37:57.079281560Z" level=info msg="TaskExit event in podsandbox handler container_id:\"135691c9cb1d686383fd0cc3dc3965a39707ad8c7a3044ca73e46a162bee55ac\" id:\"9ecf2e6246466071fbf9f1c99ff3145aec12c30c289085405a143a76770c6397\" pid:6605 exited_at:{seconds:1760333877 nanos:78804957}" Oct 13 05:38:06.994095 containerd[2611]: time="2025-10-13T05:38:06.994036216Z" level=info msg="TaskExit event in podsandbox handler container_id:\"135691c9cb1d686383fd0cc3dc3965a39707ad8c7a3044ca73e46a162bee55ac\" id:\"d01ed4b61399f12b86c31eb697475adc6bd65b579d4e359606f543951dcde607\" pid:6629 exited_at:{seconds:1760333886 nanos:993655168}" Oct 13 05:38:12.095889 containerd[2611]: time="2025-10-13T05:38:12.095826522Z" level=info msg="TaskExit event in podsandbox handler container_id:\"07cc458a2b30a82d6d694328048f9dbbaae0f45f06359fbee2a9995a06610ebf\" id:\"a567fac8aa96ecf968f3dea93347cb901ba09955107ef32a78830d58baf42d6f\" pid:6652 exited_at:{seconds:1760333892 nanos:95566150}" Oct 13 05:38:22.352853 containerd[2611]: time="2025-10-13T05:38:22.352755541Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d8dec742f1ce40706b4e3932ddd2bf9dd29b606c74e96d6c9bec4960ffda6f88\" id:\"0c0229494b6f03decf0416f2f0f25b5e9de058e822d19d00c035625c82c23c86\" pid:6675 exited_at:{seconds:1760333902 nanos:352404238}" Oct 13 05:38:22.643148 systemd[1]: Started sshd@7-10.200.8.43:22-10.200.16.10:56874.service - OpenSSH per-connection server daemon (10.200.16.10:56874). Oct 13 05:38:23.296672 sshd[6690]: Accepted publickey for core from 10.200.16.10 port 56874 ssh2: RSA SHA256:AbXmOVBj6dCWUgMVoY4vqYS1kqBAdKFjDdmWPRs43lM Oct 13 05:38:23.297136 sshd-session[6690]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:38:23.301139 systemd-logind[2582]: New session 10 of user core. Oct 13 05:38:23.307373 systemd[1]: Started session-10.scope - Session 10 of User core. Oct 13 05:38:23.884031 sshd[6693]: Connection closed by 10.200.16.10 port 56874 Oct 13 05:38:23.885573 sshd-session[6690]: pam_unix(sshd:session): session closed for user core Oct 13 05:38:23.889166 systemd-logind[2582]: Session 10 logged out. Waiting for processes to exit. Oct 13 05:38:23.892494 systemd[1]: sshd@7-10.200.8.43:22-10.200.16.10:56874.service: Deactivated successfully. Oct 13 05:38:23.896396 systemd[1]: session-10.scope: Deactivated successfully. Oct 13 05:38:23.899719 systemd-logind[2582]: Removed session 10. Oct 13 05:38:27.114155 containerd[2611]: time="2025-10-13T05:38:27.114109111Z" level=info msg="TaskExit event in podsandbox handler container_id:\"135691c9cb1d686383fd0cc3dc3965a39707ad8c7a3044ca73e46a162bee55ac\" id:\"f975295ede3fadb85059895ac184f67def263c2bdd57bdbafdd21053cab594bb\" pid:6719 exited_at:{seconds:1760333907 nanos:113163690}" Oct 13 05:38:29.002965 systemd[1]: Started sshd@8-10.200.8.43:22-10.200.16.10:56886.service - OpenSSH per-connection server daemon (10.200.16.10:56886). Oct 13 05:38:29.651709 sshd[6729]: Accepted publickey for core from 10.200.16.10 port 56886 ssh2: RSA SHA256:AbXmOVBj6dCWUgMVoY4vqYS1kqBAdKFjDdmWPRs43lM Oct 13 05:38:29.652855 sshd-session[6729]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:38:29.657363 systemd-logind[2582]: New session 11 of user core. 
Oct 13 05:38:29.664333 systemd[1]: Started session-11.scope - Session 11 of User core. Oct 13 05:38:30.221635 sshd[6732]: Connection closed by 10.200.16.10 port 56886 Oct 13 05:38:30.223412 sshd-session[6729]: pam_unix(sshd:session): session closed for user core Oct 13 05:38:30.227722 systemd[1]: sshd@8-10.200.8.43:22-10.200.16.10:56886.service: Deactivated successfully. Oct 13 05:38:30.228013 systemd-logind[2582]: Session 11 logged out. Waiting for processes to exit. Oct 13 05:38:30.231920 systemd[1]: session-11.scope: Deactivated successfully. Oct 13 05:38:30.235174 systemd-logind[2582]: Removed session 11. Oct 13 05:38:31.083888 containerd[2611]: time="2025-10-13T05:38:31.083842271Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d8dec742f1ce40706b4e3932ddd2bf9dd29b606c74e96d6c9bec4960ffda6f88\" id:\"f7a42277a05f52823d45d6badc75aca41a0d55ebc20b8d67ba11e65199ccbb86\" pid:6756 exited_at:{seconds:1760333911 nanos:83579241}" Oct 13 05:38:35.336991 systemd[1]: Started sshd@9-10.200.8.43:22-10.200.16.10:39832.service - OpenSSH per-connection server daemon (10.200.16.10:39832). Oct 13 05:38:35.988175 sshd[6776]: Accepted publickey for core from 10.200.16.10 port 39832 ssh2: RSA SHA256:AbXmOVBj6dCWUgMVoY4vqYS1kqBAdKFjDdmWPRs43lM Oct 13 05:38:35.989378 sshd-session[6776]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:38:35.993027 systemd-logind[2582]: New session 12 of user core. Oct 13 05:38:35.997364 systemd[1]: Started session-12.scope - Session 12 of User core. Oct 13 05:38:36.489579 sshd[6779]: Connection closed by 10.200.16.10 port 39832 Oct 13 05:38:36.490125 sshd-session[6776]: pam_unix(sshd:session): session closed for user core Oct 13 05:38:36.493630 systemd[1]: sshd@9-10.200.8.43:22-10.200.16.10:39832.service: Deactivated successfully. Oct 13 05:38:36.495674 systemd[1]: session-12.scope: Deactivated successfully. Oct 13 05:38:36.496538 systemd-logind[2582]: Session 12 logged out. Waiting for processes to exit. Oct 13 05:38:36.497737 systemd-logind[2582]: Removed session 12. Oct 13 05:38:36.603347 systemd[1]: Started sshd@10-10.200.8.43:22-10.200.16.10:39842.service - OpenSSH per-connection server daemon (10.200.16.10:39842). Oct 13 05:38:37.254631 sshd[6792]: Accepted publickey for core from 10.200.16.10 port 39842 ssh2: RSA SHA256:AbXmOVBj6dCWUgMVoY4vqYS1kqBAdKFjDdmWPRs43lM Oct 13 05:38:37.255886 sshd-session[6792]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:38:37.260478 systemd-logind[2582]: New session 13 of user core. Oct 13 05:38:37.266455 systemd[1]: Started session-13.scope - Session 13 of User core. Oct 13 05:38:37.785924 sshd[6795]: Connection closed by 10.200.16.10 port 39842 Oct 13 05:38:37.786484 sshd-session[6792]: pam_unix(sshd:session): session closed for user core Oct 13 05:38:37.789963 systemd[1]: sshd@10-10.200.8.43:22-10.200.16.10:39842.service: Deactivated successfully. Oct 13 05:38:37.792111 systemd[1]: session-13.scope: Deactivated successfully. Oct 13 05:38:37.793340 systemd-logind[2582]: Session 13 logged out. Waiting for processes to exit. Oct 13 05:38:37.794589 systemd-logind[2582]: Removed session 13. Oct 13 05:38:37.898792 systemd[1]: Started sshd@11-10.200.8.43:22-10.200.16.10:39844.service - OpenSSH per-connection server daemon (10.200.16.10:39844). 
Oct 13 05:38:38.542936 sshd[6804]: Accepted publickey for core from 10.200.16.10 port 39844 ssh2: RSA SHA256:AbXmOVBj6dCWUgMVoY4vqYS1kqBAdKFjDdmWPRs43lM Oct 13 05:38:38.544562 sshd-session[6804]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:38:38.550734 systemd-logind[2582]: New session 14 of user core. Oct 13 05:38:38.556400 systemd[1]: Started session-14.scope - Session 14 of User core. Oct 13 05:38:39.040306 sshd[6810]: Connection closed by 10.200.16.10 port 39844 Oct 13 05:38:39.040835 sshd-session[6804]: pam_unix(sshd:session): session closed for user core Oct 13 05:38:39.043840 systemd[1]: sshd@11-10.200.8.43:22-10.200.16.10:39844.service: Deactivated successfully. Oct 13 05:38:39.045793 systemd[1]: session-14.scope: Deactivated successfully. Oct 13 05:38:39.047470 systemd-logind[2582]: Session 14 logged out. Waiting for processes to exit. Oct 13 05:38:39.048119 systemd-logind[2582]: Removed session 14. Oct 13 05:38:42.088642 containerd[2611]: time="2025-10-13T05:38:42.088596505Z" level=info msg="TaskExit event in podsandbox handler container_id:\"07cc458a2b30a82d6d694328048f9dbbaae0f45f06359fbee2a9995a06610ebf\" id:\"a6345fbd9e43d141d40a8ef36b72ec6c7f67867024568eb20cfec2e3b2de7990\" pid:6838 exit_status:1 exited_at:{seconds:1760333922 nanos:88356246}" Oct 13 05:38:44.156679 systemd[1]: Started sshd@12-10.200.8.43:22-10.200.16.10:56238.service - OpenSSH per-connection server daemon (10.200.16.10:56238). Oct 13 05:38:50.278885 sshd[6850]: Accepted publickey for core from 10.200.16.10 port 56238 ssh2: RSA SHA256:AbXmOVBj6dCWUgMVoY4vqYS1kqBAdKFjDdmWPRs43lM Oct 13 05:38:50.279255 sshd-session[6850]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:38:50.284250 systemd-logind[2582]: New session 15 of user core. Oct 13 05:38:50.289374 systemd[1]: Started session-15.scope - Session 15 of User core. Oct 13 05:38:50.696279 sshd[6867]: Connection closed by 10.200.16.10 port 56238 Oct 13 05:38:50.696885 sshd-session[6850]: pam_unix(sshd:session): session closed for user core Oct 13 05:38:50.700415 systemd[1]: sshd@12-10.200.8.43:22-10.200.16.10:56238.service: Deactivated successfully. Oct 13 05:38:50.702297 systemd[1]: session-15.scope: Deactivated successfully. Oct 13 05:38:50.703053 systemd-logind[2582]: Session 15 logged out. Waiting for processes to exit. Oct 13 05:38:50.704360 systemd-logind[2582]: Removed session 15. Oct 13 05:38:52.083874 containerd[2611]: time="2025-10-13T05:38:52.083748899Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d8dec742f1ce40706b4e3932ddd2bf9dd29b606c74e96d6c9bec4960ffda6f88\" id:\"8b398f91f94286697a6cf499c1118a2789a9d6685e0f4f6ae6e54bcb0d3c3ae2\" pid:6898 exited_at:{seconds:1760333932 nanos:83294443}" Oct 13 05:38:55.810605 systemd[1]: Started sshd@13-10.200.8.43:22-10.200.16.10:58430.service - OpenSSH per-connection server daemon (10.200.16.10:58430). Oct 13 05:38:56.458738 sshd[6909]: Accepted publickey for core from 10.200.16.10 port 58430 ssh2: RSA SHA256:AbXmOVBj6dCWUgMVoY4vqYS1kqBAdKFjDdmWPRs43lM Oct 13 05:38:56.459922 sshd-session[6909]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:38:56.464267 systemd-logind[2582]: New session 16 of user core. Oct 13 05:38:56.469343 systemd[1]: Started session-16.scope - Session 16 of User core. 
Oct 13 05:38:56.959069 sshd[6912]: Connection closed by 10.200.16.10 port 58430
Oct 13 05:38:56.959642 sshd-session[6909]: pam_unix(sshd:session): session closed for user core
Oct 13 05:38:56.963322 systemd[1]: sshd@13-10.200.8.43:22-10.200.16.10:58430.service: Deactivated successfully.
Oct 13 05:38:56.965291 systemd[1]: session-16.scope: Deactivated successfully.
Oct 13 05:38:56.966280 systemd-logind[2582]: Session 16 logged out. Waiting for processes to exit.
Oct 13 05:38:56.967390 systemd-logind[2582]: Removed session 16.
Oct 13 05:38:57.079571 containerd[2611]: time="2025-10-13T05:38:57.079524918Z" level=info msg="TaskExit event in podsandbox handler container_id:\"135691c9cb1d686383fd0cc3dc3965a39707ad8c7a3044ca73e46a162bee55ac\" id:\"3116c51e564641105021a1787e55bbcad30e0739b3194e70f64f7db95ce4ac56\" pid:6935 exited_at:{seconds:1760333937 nanos:79302194}"
Oct 13 05:39:02.079534 systemd[1]: Started sshd@14-10.200.8.43:22-10.200.16.10:54232.service - OpenSSH per-connection server daemon (10.200.16.10:54232).
Oct 13 05:39:02.739054 sshd[6947]: Accepted publickey for core from 10.200.16.10 port 54232 ssh2: RSA SHA256:AbXmOVBj6dCWUgMVoY4vqYS1kqBAdKFjDdmWPRs43lM
Oct 13 05:39:02.740108 sshd-session[6947]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 13 05:39:02.744508 systemd-logind[2582]: New session 17 of user core.
Oct 13 05:39:02.749363 systemd[1]: Started session-17.scope - Session 17 of User core.
Oct 13 05:39:03.248976 sshd[6950]: Connection closed by 10.200.16.10 port 54232
Oct 13 05:39:03.249540 sshd-session[6947]: pam_unix(sshd:session): session closed for user core
Oct 13 05:39:03.254502 systemd-logind[2582]: Session 17 logged out. Waiting for processes to exit.
Oct 13 05:39:03.254886 systemd[1]: sshd@14-10.200.8.43:22-10.200.16.10:54232.service: Deactivated successfully.
Oct 13 05:39:03.257545 systemd[1]: session-17.scope: Deactivated successfully.
Oct 13 05:39:03.260907 systemd-logind[2582]: Removed session 17.
Oct 13 05:39:03.363996 systemd[1]: Started sshd@15-10.200.8.43:22-10.200.16.10:54238.service - OpenSSH per-connection server daemon (10.200.16.10:54238).
Oct 13 05:39:04.015282 sshd[6961]: Accepted publickey for core from 10.200.16.10 port 54238 ssh2: RSA SHA256:AbXmOVBj6dCWUgMVoY4vqYS1kqBAdKFjDdmWPRs43lM
Oct 13 05:39:04.017746 sshd-session[6961]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 13 05:39:04.026859 systemd-logind[2582]: New session 18 of user core.
Oct 13 05:39:04.031388 systemd[1]: Started session-18.scope - Session 18 of User core.
Oct 13 05:39:04.584653 sshd[6964]: Connection closed by 10.200.16.10 port 54238
Oct 13 05:39:04.585264 sshd-session[6961]: pam_unix(sshd:session): session closed for user core
Oct 13 05:39:04.588570 systemd[1]: sshd@15-10.200.8.43:22-10.200.16.10:54238.service: Deactivated successfully.
Oct 13 05:39:04.590629 systemd[1]: session-18.scope: Deactivated successfully.
Oct 13 05:39:04.591763 systemd-logind[2582]: Session 18 logged out. Waiting for processes to exit.
Oct 13 05:39:04.592820 systemd-logind[2582]: Removed session 18.
Oct 13 05:39:04.709435 systemd[1]: Started sshd@16-10.200.8.43:22-10.200.16.10:54246.service - OpenSSH per-connection server daemon (10.200.16.10:54246).
Oct 13 05:39:05.353922 sshd[6974]: Accepted publickey for core from 10.200.16.10 port 54246 ssh2: RSA SHA256:AbXmOVBj6dCWUgMVoY4vqYS1kqBAdKFjDdmWPRs43lM
Oct 13 05:39:05.355074 sshd-session[6974]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 13 05:39:05.359443 systemd-logind[2582]: New session 19 of user core.
Oct 13 05:39:05.367342 systemd[1]: Started session-19.scope - Session 19 of User core.
Oct 13 05:39:06.537330 sshd[6977]: Connection closed by 10.200.16.10 port 54246
Oct 13 05:39:06.654567 systemd[1]: Started sshd@17-10.200.8.43:22-10.200.16.10:54248.service - OpenSSH per-connection server daemon (10.200.16.10:54248).
Oct 13 05:39:06.757468 sshd-session[6974]: pam_unix(sshd:session): session closed for user core
Oct 13 05:39:06.761025 systemd-logind[2582]: Session 19 logged out. Waiting for processes to exit.
Oct 13 05:39:06.761135 systemd[1]: sshd@16-10.200.8.43:22-10.200.16.10:54246.service: Deactivated successfully.
Oct 13 05:39:06.762983 systemd[1]: session-19.scope: Deactivated successfully.
Oct 13 05:39:06.764698 systemd-logind[2582]: Removed session 19.
Oct 13 05:39:06.889958 containerd[2611]: time="2025-10-13T05:39:06.889677424Z" level=info msg="TaskExit event in podsandbox handler container_id:\"135691c9cb1d686383fd0cc3dc3965a39707ad8c7a3044ca73e46a162bee55ac\" id:\"eef12a38e792546fc7431cfaf16d180eee53932fa6b7a79c502566c93945d9cf\" pid:7009 exited_at:{seconds:1760333946 nanos:889431070}"
Oct 13 05:39:07.299528 sshd[6991]: Accepted publickey for core from 10.200.16.10 port 54248 ssh2: RSA SHA256:AbXmOVBj6dCWUgMVoY4vqYS1kqBAdKFjDdmWPRs43lM
Oct 13 05:39:07.300603 sshd-session[6991]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 13 05:39:07.305319 systemd-logind[2582]: New session 20 of user core.
Oct 13 05:39:07.313370 systemd[1]: Started session-20.scope - Session 20 of User core.
Oct 13 05:39:07.898025 sshd[7018]: Connection closed by 10.200.16.10 port 54248
Oct 13 05:39:07.898631 sshd-session[6991]: pam_unix(sshd:session): session closed for user core
Oct 13 05:39:07.902083 systemd[1]: sshd@17-10.200.8.43:22-10.200.16.10:54248.service: Deactivated successfully.
Oct 13 05:39:07.903964 systemd[1]: session-20.scope: Deactivated successfully.
Oct 13 05:39:07.904883 systemd-logind[2582]: Session 20 logged out. Waiting for processes to exit.
Oct 13 05:39:07.906196 systemd-logind[2582]: Removed session 20.
Oct 13 05:39:08.010545 systemd[1]: Started sshd@18-10.200.8.43:22-10.200.16.10:54260.service - OpenSSH per-connection server daemon (10.200.16.10:54260).
Oct 13 05:39:08.657750 sshd[7027]: Accepted publickey for core from 10.200.16.10 port 54260 ssh2: RSA SHA256:AbXmOVBj6dCWUgMVoY4vqYS1kqBAdKFjDdmWPRs43lM
Oct 13 05:39:08.658970 sshd-session[7027]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 13 05:39:08.663696 systemd-logind[2582]: New session 21 of user core.
Oct 13 05:39:08.668385 systemd[1]: Started session-21.scope - Session 21 of User core.
Oct 13 05:39:09.159335 sshd[7030]: Connection closed by 10.200.16.10 port 54260
Oct 13 05:39:09.159963 sshd-session[7027]: pam_unix(sshd:session): session closed for user core
Oct 13 05:39:09.163484 systemd[1]: sshd@18-10.200.8.43:22-10.200.16.10:54260.service: Deactivated successfully.
Oct 13 05:39:09.165392 systemd[1]: session-21.scope: Deactivated successfully.
Oct 13 05:39:09.166241 systemd-logind[2582]: Session 21 logged out. Waiting for processes to exit.
Oct 13 05:39:09.167671 systemd-logind[2582]: Removed session 21.
Oct 13 05:39:12.120825 containerd[2611]: time="2025-10-13T05:39:12.120600232Z" level=info msg="TaskExit event in podsandbox handler container_id:\"07cc458a2b30a82d6d694328048f9dbbaae0f45f06359fbee2a9995a06610ebf\" id:\"25f24b2846801ca18988b5fdfe8a07a552a7402ccb5f54f0ea2b5ca5fc76ab20\" pid:7056 exited_at:{seconds:1760333952 nanos:120360362}"
Oct 13 05:39:14.277499 systemd[1]: Started sshd@19-10.200.8.43:22-10.200.16.10:34652.service - OpenSSH per-connection server daemon (10.200.16.10:34652).
Oct 13 05:39:14.935774 sshd[7070]: Accepted publickey for core from 10.200.16.10 port 34652 ssh2: RSA SHA256:AbXmOVBj6dCWUgMVoY4vqYS1kqBAdKFjDdmWPRs43lM
Oct 13 05:39:14.936903 sshd-session[7070]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 13 05:39:14.941356 systemd-logind[2582]: New session 22 of user core.
Oct 13 05:39:14.947410 systemd[1]: Started session-22.scope - Session 22 of User core.
Oct 13 05:39:15.462299 sshd[7073]: Connection closed by 10.200.16.10 port 34652
Oct 13 05:39:15.465398 sshd-session[7070]: pam_unix(sshd:session): session closed for user core
Oct 13 05:39:15.470104 systemd-logind[2582]: Session 22 logged out. Waiting for processes to exit.
Oct 13 05:39:15.471457 systemd[1]: sshd@19-10.200.8.43:22-10.200.16.10:34652.service: Deactivated successfully.
Oct 13 05:39:15.474679 systemd[1]: session-22.scope: Deactivated successfully.
Oct 13 05:39:15.477644 systemd-logind[2582]: Removed session 22.
Oct 13 05:39:20.578525 systemd[1]: Started sshd@20-10.200.8.43:22-10.200.16.10:39442.service - OpenSSH per-connection server daemon (10.200.16.10:39442).
Oct 13 05:39:21.223442 sshd[7085]: Accepted publickey for core from 10.200.16.10 port 39442 ssh2: RSA SHA256:AbXmOVBj6dCWUgMVoY4vqYS1kqBAdKFjDdmWPRs43lM
Oct 13 05:39:21.224610 sshd-session[7085]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 13 05:39:21.229230 systemd-logind[2582]: New session 23 of user core.
Oct 13 05:39:21.234331 systemd[1]: Started session-23.scope - Session 23 of User core.
Oct 13 05:39:21.719524 sshd[7088]: Connection closed by 10.200.16.10 port 39442
Oct 13 05:39:21.720072 sshd-session[7085]: pam_unix(sshd:session): session closed for user core
Oct 13 05:39:21.723433 systemd[1]: sshd@20-10.200.8.43:22-10.200.16.10:39442.service: Deactivated successfully.
Oct 13 05:39:21.725150 systemd[1]: session-23.scope: Deactivated successfully.
Oct 13 05:39:21.726235 systemd-logind[2582]: Session 23 logged out. Waiting for processes to exit.
Oct 13 05:39:21.727318 systemd-logind[2582]: Removed session 23.
Oct 13 05:39:22.088179 containerd[2611]: time="2025-10-13T05:39:22.087847400Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d8dec742f1ce40706b4e3932ddd2bf9dd29b606c74e96d6c9bec4960ffda6f88\" id:\"9cdbc7f57b8ddabf6e7efd553294526d2d562e2cbc59b0c1687193f4ea3941de\" pid:7111 exited_at:{seconds:1760333962 nanos:87441589}"
Oct 13 05:39:26.838538 systemd[1]: Started sshd@21-10.200.8.43:22-10.200.16.10:39444.service - OpenSSH per-connection server daemon (10.200.16.10:39444).
Oct 13 05:39:27.080660 containerd[2611]: time="2025-10-13T05:39:27.080614362Z" level=info msg="TaskExit event in podsandbox handler container_id:\"135691c9cb1d686383fd0cc3dc3965a39707ad8c7a3044ca73e46a162bee55ac\" id:\"02be6ccc8974d4e5320ad75345901bbbce392a18ec14efe4ece94c7ff32c4656\" pid:7138 exited_at:{seconds:1760333967 nanos:80421004}"
Oct 13 05:39:27.482866 sshd[7123]: Accepted publickey for core from 10.200.16.10 port 39444 ssh2: RSA SHA256:AbXmOVBj6dCWUgMVoY4vqYS1kqBAdKFjDdmWPRs43lM
Oct 13 05:39:27.484037 sshd-session[7123]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 13 05:39:27.488312 systemd-logind[2582]: New session 24 of user core.
Oct 13 05:39:27.494297 systemd[1]: Started session-24.scope - Session 24 of User core.
Oct 13 05:39:27.983359 sshd[7146]: Connection closed by 10.200.16.10 port 39444
Oct 13 05:39:27.983639 sshd-session[7123]: pam_unix(sshd:session): session closed for user core
Oct 13 05:39:27.986727 systemd[1]: sshd@21-10.200.8.43:22-10.200.16.10:39444.service: Deactivated successfully.
Oct 13 05:39:27.989054 systemd[1]: session-24.scope: Deactivated successfully.
Oct 13 05:39:27.990339 systemd-logind[2582]: Session 24 logged out. Waiting for processes to exit.
Oct 13 05:39:27.992103 systemd-logind[2582]: Removed session 24.
Oct 13 05:39:31.077729 containerd[2611]: time="2025-10-13T05:39:31.077677155Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d8dec742f1ce40706b4e3932ddd2bf9dd29b606c74e96d6c9bec4960ffda6f88\" id:\"a94dfd2e563f45a5c140c07df70ce848144a0d2eee8173762512ddca488ccc0e\" pid:7169 exited_at:{seconds:1760333971 nanos:77403242}"
Oct 13 05:39:33.104916 systemd[1]: Started sshd@22-10.200.8.43:22-10.200.16.10:39238.service - OpenSSH per-connection server daemon (10.200.16.10:39238).
Oct 13 05:39:33.759233 sshd[7183]: Accepted publickey for core from 10.200.16.10 port 39238 ssh2: RSA SHA256:AbXmOVBj6dCWUgMVoY4vqYS1kqBAdKFjDdmWPRs43lM
Oct 13 05:39:33.759787 sshd-session[7183]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 13 05:39:33.764735 systemd-logind[2582]: New session 25 of user core.
Oct 13 05:39:33.770353 systemd[1]: Started session-25.scope - Session 25 of User core.
Oct 13 05:39:34.271249 sshd[7186]: Connection closed by 10.200.16.10 port 39238
Oct 13 05:39:34.272408 sshd-session[7183]: pam_unix(sshd:session): session closed for user core
Oct 13 05:39:34.276311 systemd-logind[2582]: Session 25 logged out. Waiting for processes to exit.
Oct 13 05:39:34.276574 systemd[1]: sshd@22-10.200.8.43:22-10.200.16.10:39238.service: Deactivated successfully.
Oct 13 05:39:34.278552 systemd[1]: session-25.scope: Deactivated successfully.
Oct 13 05:39:34.280188 systemd-logind[2582]: Removed session 25.
Oct 13 05:39:39.385623 systemd[1]: Started sshd@23-10.200.8.43:22-10.200.16.10:39242.service - OpenSSH per-connection server daemon (10.200.16.10:39242).
Oct 13 05:39:40.035737 sshd[7198]: Accepted publickey for core from 10.200.16.10 port 39242 ssh2: RSA SHA256:AbXmOVBj6dCWUgMVoY4vqYS1kqBAdKFjDdmWPRs43lM
Oct 13 05:39:40.037689 sshd-session[7198]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 13 05:39:40.043011 systemd-logind[2582]: New session 26 of user core.
Oct 13 05:39:40.047361 systemd[1]: Started session-26.scope - Session 26 of User core.
Oct 13 05:39:40.536268 sshd[7203]: Connection closed by 10.200.16.10 port 39242
Oct 13 05:39:40.536839 sshd-session[7198]: pam_unix(sshd:session): session closed for user core
Oct 13 05:39:40.540122 systemd[1]: sshd@23-10.200.8.43:22-10.200.16.10:39242.service: Deactivated successfully.
Oct 13 05:39:40.542045 systemd[1]: session-26.scope: Deactivated successfully.
Oct 13 05:39:40.543269 systemd-logind[2582]: Session 26 logged out. Waiting for processes to exit.
Oct 13 05:39:40.544345 systemd-logind[2582]: Removed session 26.
Oct 13 05:39:42.089064 containerd[2611]: time="2025-10-13T05:39:42.088966559Z" level=info msg="TaskExit event in podsandbox handler container_id:\"07cc458a2b30a82d6d694328048f9dbbaae0f45f06359fbee2a9995a06610ebf\" id:\"b4ade8e0169aefb639a707f8653bad743435ee3f7cfe9d994d1a7dca2f6eb8e8\" pid:7226 exited_at:{seconds:1760333982 nanos:88587749}"