Oct 13 05:34:50.501594 kernel: Linux version 6.12.51-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Mon Oct 13 03:31:29 -00 2025
Oct 13 05:34:50.501624 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=4919840803704517a91afcb9d57d99e9935244ff049349c54216d9a31bc1da5d
Oct 13 05:34:50.501635 kernel: BIOS-provided physical RAM map:
Oct 13 05:34:50.501643 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Oct 13 05:34:50.501650 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved
Oct 13 05:34:50.501658 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000044fdfff] usable
Oct 13 05:34:50.501668 kernel: BIOS-e820: [mem 0x00000000044fe000-0x00000000048fdfff] reserved
Oct 13 05:34:50.501676 kernel: BIOS-e820: [mem 0x00000000048fe000-0x000000003ff1efff] usable
Oct 13 05:34:50.501683 kernel: BIOS-e820: [mem 0x000000003ff1f000-0x000000003ffc8fff] reserved
Oct 13 05:34:50.501690 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data
Oct 13 05:34:50.501698 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS
Oct 13 05:34:50.501705 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable
Oct 13 05:34:50.501713 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable
Oct 13 05:34:50.501720 kernel: printk: legacy bootconsole [earlyser0] enabled
Oct 13 05:34:50.501732 kernel: NX (Execute Disable) protection: active
Oct 13 05:34:50.501739 kernel: APIC: Static calls initialized
Oct 13 05:34:50.501747 kernel: efi: EFI v2.7 by Microsoft
Oct 13 05:34:50.501755 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff88000 SMBIOS 3.0=0x3ff86000 MEMATTR=0x3f437518 RNG=0x3ffd2018
Oct 13 05:34:50.501763 kernel: random: crng init done
Oct 13 05:34:50.501771 kernel: secureboot: Secure boot disabled
Oct 13 05:34:50.501780 kernel: SMBIOS 3.1.0 present.
Oct 13 05:34:50.501789 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 01/28/2025
Oct 13 05:34:50.501797 kernel: DMI: Memory slots populated: 2/2
Oct 13 05:34:50.501804 kernel: Hypervisor detected: Microsoft Hyper-V
Oct 13 05:34:50.501812 kernel: Hyper-V: privilege flags low 0xae7f, high 0x3b8030, hints 0x9e4e24, misc 0xe0bed7b2
Oct 13 05:34:50.501820 kernel: Hyper-V: Nested features: 0x3e0101
Oct 13 05:34:50.501828 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40
Oct 13 05:34:50.501835 kernel: Hyper-V: Using hypercall for remote TLB flush
Oct 13 05:34:50.501843 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Oct 13 05:34:50.501851 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Oct 13 05:34:50.501859 kernel: tsc: Detected 2299.998 MHz processor
Oct 13 05:34:50.501868 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Oct 13 05:34:50.501877 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Oct 13 05:34:50.501886 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x10000000000
Oct 13 05:34:50.501895 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Oct 13 05:34:50.501904 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Oct 13 05:34:50.501913 kernel: e820: update [mem 0x48000000-0xffffffff] usable ==> reserved
Oct 13 05:34:50.501921 kernel: last_pfn = 0x40000 max_arch_pfn = 0x10000000000
Oct 13 05:34:50.501930 kernel: Using GB pages for direct mapping
Oct 13 05:34:50.501939 kernel: ACPI: Early table checksum verification disabled
Oct 13 05:34:50.501951 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL)
Oct 13 05:34:50.501959 kernel: ACPI: XSDT 0x000000003FFF90E8 00005C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Oct 13 05:34:50.501969 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Oct 13 05:34:50.501979 kernel: ACPI: DSDT 0x000000003FFD6000 01E27A (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Oct 13 05:34:50.501988 kernel: ACPI: FACS 0x000000003FFFE000 000040
Oct 13 05:34:50.501997 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Oct 13 05:34:50.502005 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Oct 13 05:34:50.502014 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Oct 13 05:34:50.502022 kernel: ACPI: APIC 0x000000003FFD5000 000052 (v05 HVLITE HVLITETB 00000000 MSHV 00000000)
Oct 13 05:34:50.502032 kernel: ACPI: SRAT 0x000000003FFD4000 0000A0 (v03 HVLITE HVLITETB 00000000 MSHV 00000000)
Oct 13 05:34:50.502041 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Oct 13 05:34:50.502050 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113]
Oct 13 05:34:50.502059 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4279]
Oct 13 05:34:50.502067 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f]
Oct 13 05:34:50.502076 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063]
Oct 13 05:34:50.502085 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f]
Oct 13 05:34:50.502095 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027]
Oct 13 05:34:50.502104 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5051]
Oct 13 05:34:50.502112 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd409f]
Oct 13 05:34:50.502120 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037]
Oct 13 05:34:50.502129 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff]
Oct 13 05:34:50.502138 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff]
Oct 13 05:34:50.502147 kernel: NUMA: Node 0 [mem 0x00001000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00001000-0x2bfffffff]
Oct 13 05:34:50.502158 kernel: NODE_DATA(0) allocated [mem 0x2bfff8dc0-0x2bfffffff]
Oct 13 05:34:50.502166 kernel: Zone ranges:
Oct 13 05:34:50.502175 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Oct 13 05:34:50.502183 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Oct 13 05:34:50.502192 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff]
Oct 13 05:34:50.502200 kernel: Device empty
Oct 13 05:34:50.502208 kernel: Movable zone start for each node
Oct 13 05:34:50.502217 kernel: Early memory node ranges
Oct 13 05:34:50.502228 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Oct 13 05:34:50.502237 kernel: node 0: [mem 0x0000000000100000-0x00000000044fdfff]
Oct 13 05:34:50.502245 kernel: node 0: [mem 0x00000000048fe000-0x000000003ff1efff]
Oct 13 05:34:50.502254 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff]
Oct 13 05:34:50.502262 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff]
Oct 13 05:34:50.502270 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff]
Oct 13 05:34:50.502279 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Oct 13 05:34:50.502289 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Oct 13 05:34:50.502298 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges
Oct 13 05:34:50.502307 kernel: On node 0, zone DMA32: 224 pages in unavailable ranges
Oct 13 05:34:50.502315 kernel: ACPI: PM-Timer IO Port: 0x408
Oct 13 05:34:50.502324 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Oct 13 05:34:50.502333 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Oct 13 05:34:50.502342 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Oct 13 05:34:50.502352 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200
Oct 13 05:34:50.502361 kernel: TSC deadline timer available
Oct 13 05:34:50.502380 kernel: CPU topo: Max. logical packages: 1
Oct 13 05:34:50.502389 kernel: CPU topo: Max. logical dies: 1
Oct 13 05:34:50.502397 kernel: CPU topo: Max. dies per package: 1
Oct 13 05:34:50.502406 kernel: CPU topo: Max. threads per core: 2
Oct 13 05:34:50.502414 kernel: CPU topo: Num. cores per package: 1
Oct 13 05:34:50.502425 kernel: CPU topo: Num. threads per package: 2
Oct 13 05:34:50.502434 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
Oct 13 05:34:50.502445 kernel: [mem 0x40000000-0xffffffff] available for PCI devices
Oct 13 05:34:50.502453 kernel: Booting paravirtualized kernel on Hyper-V
Oct 13 05:34:50.502463 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Oct 13 05:34:50.502472 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Oct 13 05:34:50.502482 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
Oct 13 05:34:50.502495 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
Oct 13 05:34:50.502506 kernel: pcpu-alloc: [0] 0 1
Oct 13 05:34:50.502514 kernel: Hyper-V: PV spinlocks enabled
Oct 13 05:34:50.502523 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Oct 13 05:34:50.502534 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=4919840803704517a91afcb9d57d99e9935244ff049349c54216d9a31bc1da5d
Oct 13 05:34:50.502543 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Oct 13 05:34:50.502553 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Oct 13 05:34:50.502564 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Oct 13 05:34:50.502574 kernel: Fallback order for Node 0: 0
Oct 13 05:34:50.502583 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2095807
Oct 13 05:34:50.502592 kernel: Policy zone: Normal
Oct 13 05:34:50.502601 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Oct 13 05:34:50.502609 kernel: software IO TLB: area num 2.
Oct 13 05:34:50.502619 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Oct 13 05:34:50.502631 kernel: ftrace: allocating 40210 entries in 158 pages
Oct 13 05:34:50.502640 kernel: ftrace: allocated 158 pages with 5 groups
Oct 13 05:34:50.502649 kernel: Dynamic Preempt: voluntary
Oct 13 05:34:50.502659 kernel: rcu: Preemptible hierarchical RCU implementation.
Oct 13 05:34:50.502670 kernel: rcu: RCU event tracing is enabled.
Oct 13 05:34:50.502690 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Oct 13 05:34:50.502703 kernel: Trampoline variant of Tasks RCU enabled.
Oct 13 05:34:50.502713 kernel: Rude variant of Tasks RCU enabled.
Oct 13 05:34:50.502722 kernel: Tracing variant of Tasks RCU enabled.
Oct 13 05:34:50.502735 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Oct 13 05:34:50.502744 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Oct 13 05:34:50.502754 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Oct 13 05:34:50.502764 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Oct 13 05:34:50.502773 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Oct 13 05:34:50.502783 kernel: Using NULL legacy PIC
Oct 13 05:34:50.502796 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0
Oct 13 05:34:50.502806 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Oct 13 05:34:50.502816 kernel: Console: colour dummy device 80x25
Oct 13 05:34:50.502826 kernel: printk: legacy console [tty1] enabled
Oct 13 05:34:50.502835 kernel: printk: legacy console [ttyS0] enabled
Oct 13 05:34:50.502847 kernel: printk: legacy bootconsole [earlyser0] disabled
Oct 13 05:34:50.502858 kernel: ACPI: Core revision 20240827
Oct 13 05:34:50.502867 kernel: Failed to register legacy timer interrupt
Oct 13 05:34:50.502877 kernel: APIC: Switch to symmetric I/O mode setup
Oct 13 05:34:50.502886 kernel: x2apic enabled
Oct 13 05:34:50.502895 kernel: APIC: Switched APIC routing to: physical x2apic
Oct 13 05:34:50.502905 kernel: Hyper-V: Host Build 10.0.26100.1381-1-0
Oct 13 05:34:50.502917 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Oct 13 05:34:50.502927 kernel: Hyper-V: Disabling IBT because of Hyper-V bug
Oct 13 05:34:50.502938 kernel: Hyper-V: Using IPI hypercalls
Oct 13 05:34:50.502948 kernel: APIC: send_IPI() replaced with hv_send_ipi()
Oct 13 05:34:50.502958 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask()
Oct 13 05:34:50.502967 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself()
Oct 13 05:34:50.502978 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself()
Oct 13 05:34:50.502990 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all()
Oct 13 05:34:50.503000 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self()
Oct 13 05:34:50.503010 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Oct 13 05:34:50.503019 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 4599.99 BogoMIPS (lpj=2299998)
Oct 13 05:34:50.503028 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Oct 13 05:34:50.503038 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Oct 13 05:34:50.503048 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Oct 13 05:34:50.503058 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Oct 13 05:34:50.503069 kernel: Spectre V2 : Mitigation: Retpolines
Oct 13 05:34:50.503078 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Oct 13 05:34:50.503088 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Oct 13 05:34:50.503098 kernel: RETBleed: Vulnerable
Oct 13 05:34:50.503107 kernel: Speculative Store Bypass: Vulnerable
Oct 13 05:34:50.503116 kernel: active return thunk: its_return_thunk
Oct 13 05:34:50.503126 kernel: ITS: Mitigation: Aligned branch/return thunks
Oct 13 05:34:50.503135 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Oct 13 05:34:50.503145 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Oct 13 05:34:50.503154 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Oct 13 05:34:50.503166 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Oct 13 05:34:50.503175 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Oct 13 05:34:50.503185 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Oct 13 05:34:50.503195 kernel: x86/fpu: Supporting XSAVE feature 0x800: 'Control-flow User registers'
Oct 13 05:34:50.503204 kernel: x86/fpu: Supporting XSAVE feature 0x20000: 'AMX Tile config'
Oct 13 05:34:50.503214 kernel: x86/fpu: Supporting XSAVE feature 0x40000: 'AMX Tile data'
Oct 13 05:34:50.503223 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Oct 13 05:34:50.503233 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Oct 13 05:34:50.503242 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Oct 13 05:34:50.503251 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Oct 13 05:34:50.503263 kernel: x86/fpu: xstate_offset[11]: 2432, xstate_sizes[11]: 16
Oct 13 05:34:50.503272 kernel: x86/fpu: xstate_offset[17]: 2496, xstate_sizes[17]: 64
Oct 13 05:34:50.503282 kernel: x86/fpu: xstate_offset[18]: 2560, xstate_sizes[18]: 8192
Oct 13 05:34:50.503291 kernel: x86/fpu: Enabled xstate features 0x608e7, context size is 10752 bytes, using 'compacted' format.
Oct 13 05:34:50.503301 kernel: Freeing SMP alternatives memory: 32K
Oct 13 05:34:50.503310 kernel: pid_max: default: 32768 minimum: 301
Oct 13 05:34:50.503320 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Oct 13 05:34:50.503329 kernel: landlock: Up and running.
Oct 13 05:34:50.503339 kernel: SELinux: Initializing.
Oct 13 05:34:50.503348 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Oct 13 05:34:50.503358 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Oct 13 05:34:50.503379 kernel: smpboot: CPU0: Intel INTEL(R) XEON(R) PLATINUM 8573C (family: 0x6, model: 0xcf, stepping: 0x2)
Oct 13 05:34:50.503390 kernel: Performance Events: unsupported p6 CPU model 207 no PMU driver, software events only.
Oct 13 05:34:50.503399 kernel: signal: max sigframe size: 11952
Oct 13 05:34:50.503410 kernel: rcu: Hierarchical SRCU implementation.
Oct 13 05:34:50.503420 kernel: rcu: Max phase no-delay instances is 400.
Oct 13 05:34:50.503430 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Oct 13 05:34:50.503439 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Oct 13 05:34:50.503449 kernel: smp: Bringing up secondary CPUs ...
Oct 13 05:34:50.503461 kernel: smpboot: x86: Booting SMP configuration:
Oct 13 05:34:50.503471 kernel: .... node #0, CPUs: #1
Oct 13 05:34:50.503481 kernel: smp: Brought up 1 node, 2 CPUs
Oct 13 05:34:50.503490 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS)
Oct 13 05:34:50.503500 kernel: Memory: 8108748K/8383228K available (14336K kernel code, 2450K rwdata, 10012K rodata, 24532K init, 1684K bss, 269268K reserved, 0K cma-reserved)
Oct 13 05:34:50.503509 kernel: devtmpfs: initialized
Oct 13 05:34:50.503539 kernel: x86/mm: Memory block size: 128MB
Oct 13 05:34:50.503551 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes)
Oct 13 05:34:50.503561 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Oct 13 05:34:50.503571 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Oct 13 05:34:50.503581 kernel: pinctrl core: initialized pinctrl subsystem
Oct 13 05:34:50.503591 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Oct 13 05:34:50.503601 kernel: audit: initializing netlink subsys (disabled)
Oct 13 05:34:50.503611 kernel: audit: type=2000 audit(1760333684.030:1): state=initialized audit_enabled=0 res=1
Oct 13 05:34:50.503623 kernel: thermal_sys: Registered thermal governor 'step_wise'
Oct 13 05:34:50.503634 kernel: thermal_sys: Registered thermal governor 'user_space'
Oct 13 05:34:50.503644 kernel: cpuidle: using governor menu
Oct 13 05:34:50.503653 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Oct 13 05:34:50.503663 kernel: dca service started, version 1.12.1
Oct 13 05:34:50.503673 kernel: e820: reserve RAM buffer [mem 0x044fe000-0x07ffffff]
Oct 13 05:34:50.503683 kernel: e820: reserve RAM buffer [mem 0x3ff1f000-0x3fffffff]
Oct 13 05:34:50.503694 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Oct 13 05:34:50.503705 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Oct 13 05:34:50.503715 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Oct 13 05:34:50.503724 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Oct 13 05:34:50.503734 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Oct 13 05:34:50.503745 kernel: ACPI: Added _OSI(Module Device)
Oct 13 05:34:50.503755 kernel: ACPI: Added _OSI(Processor Device)
Oct 13 05:34:50.503766 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Oct 13 05:34:50.503776 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Oct 13 05:34:50.503786 kernel: ACPI: Interpreter enabled
Oct 13 05:34:50.503796 kernel: ACPI: PM: (supports S0 S5)
Oct 13 05:34:50.503806 kernel: ACPI: Using IOAPIC for interrupt routing
Oct 13 05:34:50.503816 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Oct 13 05:34:50.503826 kernel: PCI: Ignoring E820 reservations for host bridge windows
Oct 13 05:34:50.503836 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F
Oct 13 05:34:50.503845 kernel: iommu: Default domain type: Translated
Oct 13 05:34:50.503860 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Oct 13 05:34:50.503876 kernel: efivars: Registered efivars operations
Oct 13 05:34:50.503894 kernel: PCI: Using ACPI for IRQ routing
Oct 13 05:34:50.503912 kernel: PCI: System does not support PCI
Oct 13 05:34:50.503929 kernel: vgaarb: loaded
Oct 13 05:34:50.503951 kernel: clocksource: Switched to clocksource tsc-early
Oct 13 05:34:50.503962 kernel: VFS: Disk quotas dquot_6.6.0
Oct 13 05:34:50.503972 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Oct 13 05:34:50.503982 kernel: pnp: PnP ACPI init
Oct 13 05:34:50.503992 kernel: pnp: PnP ACPI: found 3 devices
Oct 13 05:34:50.504002 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Oct 13 05:34:50.504012 kernel: NET: Registered PF_INET protocol family
Oct 13 05:34:50.504024 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Oct 13 05:34:50.504033 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Oct 13 05:34:50.504043 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Oct 13 05:34:50.504053 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Oct 13 05:34:50.504063 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Oct 13 05:34:50.504073 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Oct 13 05:34:50.504083 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Oct 13 05:34:50.504094 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Oct 13 05:34:50.504104 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Oct 13 05:34:50.504114 kernel: NET: Registered PF_XDP protocol family
Oct 13 05:34:50.504124 kernel: PCI: CLS 0 bytes, default 64
Oct 13 05:34:50.504133 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Oct 13 05:34:50.504143 kernel: software IO TLB: mapped [mem 0x000000003a9c7000-0x000000003e9c7000] (64MB)
Oct 13 05:34:50.504153 kernel: RAPL PMU: API unit is 2^-32 Joules, 1 fixed counters, 10737418240 ms ovfl timer
Oct 13 05:34:50.504165 kernel: RAPL PMU: hw unit of domain psys 2^-0 Joules
Oct 13 05:34:50.504175 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Oct 13 05:34:50.504185 kernel: clocksource: Switched to clocksource tsc
Oct 13 05:34:50.504195 kernel: Initialise system trusted keyrings
Oct 13 05:34:50.504204 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Oct 13 05:34:50.504214 kernel: Key type asymmetric registered
Oct 13 05:34:50.504224 kernel: Asymmetric key parser 'x509' registered
Oct 13 05:34:50.504235 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Oct 13 05:34:50.504245 kernel: io scheduler mq-deadline registered
Oct 13 05:34:50.504254 kernel: io scheduler kyber registered
Oct 13 05:34:50.504264 kernel: io scheduler bfq registered
Oct 13 05:34:50.504274 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Oct 13 05:34:50.504284 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Oct 13 05:34:50.504294 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Oct 13 05:34:50.504304 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Oct 13 05:34:50.504315 kernel: serial8250: ttyS2 at I/O 0x3e8 (irq = 4, base_baud = 115200) is a 16550A
Oct 13 05:34:50.504325 kernel: i8042: PNP: No PS/2 controller found.
Oct 13 05:34:50.504535 kernel: rtc_cmos 00:02: registered as rtc0
Oct 13 05:34:50.504646 kernel: rtc_cmos 00:02: setting system clock to 2025-10-13T05:34:45 UTC (1760333685)
Oct 13 05:34:50.504748 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram
Oct 13 05:34:50.504761 kernel: intel_pstate: Intel P-state driver initializing
Oct 13 05:34:50.504771 kernel: efifb: probing for efifb
Oct 13 05:34:50.504781 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Oct 13 05:34:50.504791 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Oct 13 05:34:50.504801 kernel: efifb: scrolling: redraw
Oct 13 05:34:50.504811 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Oct 13 05:34:50.504820 kernel: Console: switching to colour frame buffer device 128x48
Oct 13 05:34:50.504830 kernel: fb0: EFI VGA frame buffer device
Oct 13 05:34:50.504841 kernel: pstore: Using crash dump compression: deflate
Oct 13 05:34:50.504851 kernel: pstore: Registered efi_pstore as persistent store backend
Oct 13 05:34:50.504861 kernel: NET: Registered PF_INET6 protocol family
Oct 13 05:34:50.504870 kernel: Segment Routing with IPv6
Oct 13 05:34:50.504880 kernel: In-situ OAM (IOAM) with IPv6
Oct 13 05:34:50.504890 kernel: NET: Registered PF_PACKET protocol family
Oct 13 05:34:50.504899 kernel: Key type dns_resolver registered
Oct 13 05:34:50.504910 kernel: IPI shorthand broadcast: enabled
Oct 13 05:34:50.504920 kernel: sched_clock: Marking stable (1697069598, 109982622)->(2185221473, -378169253)
Oct 13 05:34:50.504930 kernel: registered taskstats version 1
Oct 13 05:34:50.504939 kernel: Loading compiled-in X.509 certificates
Oct 13 05:34:50.504949 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.51-flatcar: 9f1258ccc510afd4f2a37f4774c4b2e958d823b7'
Oct 13 05:34:50.504959 kernel: Demotion targets for Node 0: null
Oct 13 05:34:50.504969 kernel: Key type .fscrypt registered
Oct 13 05:34:50.504980 kernel: Key type fscrypt-provisioning registered
Oct 13 05:34:50.504990 kernel: ima: No TPM chip found, activating TPM-bypass!
Oct 13 05:34:50.505000 kernel: ima: Allocated hash algorithm: sha1
Oct 13 05:34:50.505010 kernel: ima: No architecture policies found
Oct 13 05:34:50.505019 kernel: clk: Disabling unused clocks
Oct 13 05:34:50.505029 kernel: Freeing unused kernel image (initmem) memory: 24532K
Oct 13 05:34:50.505039 kernel: Write protecting the kernel read-only data: 24576k
Oct 13 05:34:50.505050 kernel: Freeing unused kernel image (rodata/data gap) memory: 228K
Oct 13 05:34:50.505060 kernel: Run /init as init process
Oct 13 05:34:50.505070 kernel: with arguments:
Oct 13 05:34:50.505080 kernel: /init
Oct 13 05:34:50.505089 kernel: with environment:
Oct 13 05:34:50.505099 kernel: HOME=/
Oct 13 05:34:50.505108 kernel: TERM=linux
Oct 13 05:34:50.505120 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Oct 13 05:34:50.505129 kernel: hv_vmbus: Vmbus version:5.3
Oct 13 05:34:50.505140 kernel: Invalid ELF header magic: != \u007fELF
Oct 13 05:34:50.505149 kernel: pps_core: LinuxPPS API ver. 1 registered
Oct 13 05:34:50.505159 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Oct 13 05:34:50.505169 kernel: Invalid ELF header magic: != \u007fELF
Oct 13 05:34:50.505179 kernel: PTP clock support registered
Oct 13 05:34:50.505188 kernel: hv_utils: Registering HyperV Utility Driver
Oct 13 05:34:50.505200 kernel: hv_vmbus: registering driver hv_utils
Oct 13 05:34:50.505210 kernel: hv_utils: Shutdown IC version 3.2
Oct 13 05:34:50.505219 kernel: hv_utils: Heartbeat IC version 3.0
Oct 13 05:34:50.505229 kernel: hv_utils: TimeSync IC version 4.0
Oct 13 05:34:50.505238 kernel: hv_vmbus: registering driver hv_pci
Oct 13 05:34:50.505393 kernel: hv_pci 7ad35d50-c05b-47ab-b3a0-56a9a845852b: PCI VMBus probing: Using version 0x10004
Oct 13 05:34:50.505512 kernel: hv_pci 7ad35d50-c05b-47ab-b3a0-56a9a845852b: PCI host bridge to bus c05b:00
Oct 13 05:34:50.505639 kernel: pci_bus c05b:00: root bus resource [mem 0xfc0000000-0xfc007ffff window]
Oct 13 05:34:50.505752 kernel: pci_bus c05b:00: No busn resource found for root bus, will use [bus 00-ff]
Oct 13 05:34:50.505893 kernel: pci c05b:00:00.0: [1414:00a9] type 00 class 0x010802 PCIe Endpoint
Oct 13 05:34:50.506019 kernel: pci c05b:00:00.0: BAR 0 [mem 0xfc0000000-0xfc007ffff 64bit]
Oct 13 05:34:50.506032 kernel: SCSI subsystem initialized
Oct 13 05:34:50.506042 kernel: Invalid ELF header magic: != \u007fELF
Oct 13 05:34:50.506155 kernel: pci_bus c05b:00: busn_res: [bus 00-ff] end is updated to 00
Oct 13 05:34:50.506277 kernel: pci c05b:00:00.0: BAR 0 [mem 0xfc0000000-0xfc007ffff 64bit]: assigned
Oct 13 05:34:50.506288 kernel: hv_vmbus: registering driver hv_storvsc
Oct 13 05:34:50.506427 kernel: scsi host0: storvsc_host_t
Oct 13 05:34:50.506567 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5
Oct 13 05:34:50.506579 kernel: hid: raw HID events driver (C) Jiri Kosina
Oct 13 05:34:50.506589 kernel: hv_vmbus: registering driver hid_hyperv
Oct 13 05:34:50.506599 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0
Oct 13 05:34:50.506715 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Oct 13 05:34:50.506729 kernel: hv_vmbus: registering driver hyperv_keyboard
Oct 13 05:34:50.506740 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1
Oct 13 05:34:50.506848 kernel: nvme nvme0: pci function c05b:00:00.0
Oct 13 05:34:50.506975 kernel: nvme c05b:00:00.0: enabling device (0000 -> 0002)
Oct 13 05:34:50.507068 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Oct 13 05:34:50.507080 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Oct 13 05:34:50.507203 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Oct 13 05:34:50.507216 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Oct 13 05:34:50.507338 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Oct 13 05:34:50.507364 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Oct 13 05:34:50.507383 kernel: device-mapper: uevent: version 1.0.3
Oct 13 05:34:50.507393 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Oct 13 05:34:50.507404 kernel: device-mapper: verity: sha256 using shash "sha256-generic"
Oct 13 05:34:50.507416 kernel: Invalid ELF header magic: != \u007fELF
Oct 13 05:34:50.507425 kernel: Invalid ELF header magic: != \u007fELF
Oct 13 05:34:50.507435 kernel: raid6: avx512x4 gen() 42424 MB/s
Oct 13 05:34:50.507445 kernel: raid6: avx512x2 gen() 41912 MB/s
Oct 13 05:34:50.507455 kernel: raid6: avx512x1 gen() 25992 MB/s
Oct 13 05:34:50.507465 kernel: raid6: avx2x4 gen() 35368 MB/s
Oct 13 05:34:50.507475 kernel: raid6: avx2x2 gen() 36259 MB/s
Oct 13 05:34:50.507484 kernel: raid6: avx2x1 gen() 29671 MB/s
Oct 13 05:34:50.507496 kernel: raid6: using algorithm avx512x4 gen() 42424 MB/s
Oct 13 05:34:50.507506 kernel: raid6: .... xor() 7445 MB/s, rmw enabled
Oct 13 05:34:50.507516 kernel: raid6: using avx512x2 recovery algorithm
Oct 13 05:34:50.507526 kernel: Invalid ELF header magic: != \u007fELF
Oct 13 05:34:50.507536 kernel: Invalid ELF header magic: != \u007fELF
Oct 13 05:34:50.507546 kernel: Invalid ELF header magic: != \u007fELF
Oct 13 05:34:50.507555 kernel: xor: automatically using best checksumming function avx
Oct 13 05:34:50.507567 kernel: Invalid ELF header magic: != \u007fELF
Oct 13 05:34:50.507583 kernel: Btrfs loaded, zoned=no, fsverity=no
Oct 13 05:34:50.507593 kernel: BTRFS: device fsid e87b15e9-127c-40e2-bae7-d0ea05b4f2e3 devid 1 transid 36 /dev/mapper/usr (254:0) scanned by mount (883)
Oct 13 05:34:50.507604 kernel: BTRFS info (device dm-0): first mount of filesystem e87b15e9-127c-40e2-bae7-d0ea05b4f2e3
Oct 13 05:34:50.507614 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Oct 13 05:34:50.507624 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Oct 13 05:34:50.507634 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Oct 13 05:34:50.507644 kernel: BTRFS info (device dm-0): enabling free space tree
Oct 13 05:34:50.507656 kernel: Invalid ELF header magic: != \u007fELF
Oct 13 05:34:50.507665 kernel: loop: module loaded
Oct 13 05:34:50.507675 kernel: loop0: detected capacity change from 0 to 100048
Oct 13 05:34:50.507685 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Oct 13 05:34:50.507697 systemd[1]: Successfully made /usr/ read-only.
Oct 13 05:34:50.507710 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Oct 13 05:34:50.507723 systemd[1]: Detected virtualization microsoft.
Oct 13 05:34:50.507733 systemd[1]: Detected architecture x86-64. Oct 13 05:34:50.507743 systemd[1]: Running in initrd. Oct 13 05:34:50.507754 systemd[1]: No hostname configured, using default hostname. Oct 13 05:34:50.507764 systemd[1]: Hostname set to . Oct 13 05:34:50.507775 systemd[1]: Initializing machine ID from random generator. Oct 13 05:34:50.507787 systemd[1]: Queued start job for default target initrd.target. Oct 13 05:34:50.507798 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Oct 13 05:34:50.507808 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 13 05:34:50.507818 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 13 05:34:50.507829 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Oct 13 05:34:50.507840 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Oct 13 05:34:50.507853 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Oct 13 05:34:50.507864 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Oct 13 05:34:50.507876 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 13 05:34:50.507887 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Oct 13 05:34:50.507899 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Oct 13 05:34:50.507910 systemd[1]: Reached target paths.target - Path Units. Oct 13 05:34:50.507920 systemd[1]: Reached target slices.target - Slice Units. Oct 13 05:34:50.507931 systemd[1]: Reached target swap.target - Swaps. Oct 13 05:34:50.507941 systemd[1]: Reached target timers.target - Timer Units. Oct 13 05:34:50.507952 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. 
Oct 13 05:34:50.507962 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Oct 13 05:34:50.507975 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Oct 13 05:34:50.507985 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Oct 13 05:34:50.507996 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Oct 13 05:34:50.508006 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Oct 13 05:34:50.508017 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Oct 13 05:34:50.508027 systemd[1]: Reached target sockets.target - Socket Units. Oct 13 05:34:50.508038 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Oct 13 05:34:50.508050 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Oct 13 05:34:50.508061 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Oct 13 05:34:50.508071 systemd[1]: Finished network-cleanup.service - Network Cleanup. Oct 13 05:34:50.508082 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Oct 13 05:34:50.508093 systemd[1]: Starting systemd-fsck-usr.service... Oct 13 05:34:50.508104 systemd[1]: Starting systemd-journald.service - Journal Service... Oct 13 05:34:50.508116 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Oct 13 05:34:50.508126 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 13 05:34:50.508138 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Oct 13 05:34:50.508149 systemd[1]: Finished systemd-fsck-usr.service. Oct 13 05:34:50.508161 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. 
Oct 13 05:34:50.508172 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Oct 13 05:34:50.508199 systemd-journald[1016]: Collecting audit messages is disabled. Oct 13 05:34:50.508225 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Oct 13 05:34:50.508237 systemd-journald[1016]: Journal started Oct 13 05:34:50.508261 systemd-journald[1016]: Runtime Journal (/run/log/journal/4be7674ca27e4748b7059b0a487bb02a) is 8M, max 158.9M, 150.9M free. Oct 13 05:34:50.511387 systemd[1]: Started systemd-journald.service - Journal Service. Oct 13 05:34:50.514502 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Oct 13 05:34:50.525483 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Oct 13 05:34:50.533936 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Oct 13 05:34:50.569161 systemd-tmpfiles[1034]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Oct 13 05:34:50.571054 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 13 05:34:50.657173 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 13 05:34:50.663248 systemd-modules-load[1017]: Inserted module 'br_netfilter' Oct 13 05:34:50.664848 kernel: Bridge firewalling registered Oct 13 05:34:50.663987 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Oct 13 05:34:50.666197 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Oct 13 05:34:50.697250 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 13 05:34:50.701447 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Oct 13 05:34:50.704491 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Oct 13 05:34:50.707854 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Oct 13 05:34:50.734474 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 13 05:34:50.741571 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Oct 13 05:34:50.755874 systemd-resolved[1049]: Positive Trust Anchors: Oct 13 05:34:50.755889 systemd-resolved[1049]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 13 05:34:50.755893 systemd-resolved[1049]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Oct 13 05:34:50.755926 systemd-resolved[1049]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Oct 13 05:34:50.776159 systemd-resolved[1049]: Defaulting to hostname 'linux'. Oct 13 05:34:50.777009 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Oct 13 05:34:50.782593 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. 
Oct 13 05:34:50.810055 dracut-cmdline[1061]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=4919840803704517a91afcb9d57d99e9935244ff049349c54216d9a31bc1da5d Oct 13 05:34:50.945405 kernel: Loading iSCSI transport class v2.0-870. Oct 13 05:34:51.033393 kernel: iscsi: registered transport (tcp) Oct 13 05:34:51.102878 kernel: iscsi: registered transport (qla4xxx) Oct 13 05:34:51.102946 kernel: QLogic iSCSI HBA Driver Oct 13 05:34:51.170741 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Oct 13 05:34:51.190183 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Oct 13 05:34:51.199804 systemd[1]: Reached target network-pre.target - Preparation for Network. Oct 13 05:34:51.230579 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Oct 13 05:34:51.235535 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Oct 13 05:34:51.237552 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Oct 13 05:34:51.268908 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Oct 13 05:34:51.275499 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 13 05:34:51.304113 systemd-udevd[1308]: Using default interface naming scheme 'v257'. Oct 13 05:34:51.315956 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 13 05:34:51.320155 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... 
Oct 13 05:34:51.341808 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Oct 13 05:34:51.346485 systemd[1]: Starting systemd-networkd.service - Network Configuration... Oct 13 05:34:51.353752 dracut-pre-trigger[1380]: rd.md=0: removing MD RAID activation Oct 13 05:34:51.380950 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Oct 13 05:34:51.383797 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Oct 13 05:34:51.402590 systemd-networkd[1403]: lo: Link UP Oct 13 05:34:51.402802 systemd-networkd[1403]: lo: Gained carrier Oct 13 05:34:51.403537 systemd[1]: Started systemd-networkd.service - Network Configuration. Oct 13 05:34:51.407414 systemd[1]: Reached target network.target - Network. Oct 13 05:34:51.436411 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Oct 13 05:34:51.441484 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Oct 13 05:34:51.508926 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 13 05:34:51.509030 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 13 05:34:51.510815 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Oct 13 05:34:51.524450 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 13 05:34:51.556518 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 13 05:34:51.558568 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 13 05:34:51.564483 kernel: cryptd: max_cpu_qlen set to 1000 Oct 13 05:34:51.569388 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#238 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Oct 13 05:34:51.572492 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Oct 13 05:34:51.602397 kernel: hv_vmbus: registering driver hv_netvsc Oct 13 05:34:51.611541 kernel: hv_netvsc f8615163-0000-1000-2000-7c1e52886fc2 (unnamed net_device) (uninitialized): VF slot 1 added Oct 13 05:34:51.633810 systemd-networkd[1403]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Oct 13 05:34:51.633817 systemd-networkd[1403]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 13 05:34:51.634516 systemd-networkd[1403]: eth0: Link UP Oct 13 05:34:51.634658 systemd-networkd[1403]: eth0: Gained carrier Oct 13 05:34:51.634668 systemd-networkd[1403]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Oct 13 05:34:51.652136 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 13 05:34:51.658258 kernel: AES CTR mode by8 optimization enabled Oct 13 05:34:51.656470 systemd-networkd[1403]: eth0: DHCPv4 address 10.200.8.45/24, gateway 10.200.8.1 acquired from 168.63.129.16 Oct 13 05:34:51.797444 kernel: nvme nvme0: using unchecked data buffer Oct 13 05:34:51.922146 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - MSFT NVMe Accelerator v1.0 USR-A. Oct 13 05:34:51.925912 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Oct 13 05:34:52.053573 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - MSFT NVMe Accelerator v1.0 EFI-SYSTEM. Oct 13 05:34:52.081197 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - MSFT NVMe Accelerator v1.0 ROOT. Oct 13 05:34:52.092567 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - MSFT NVMe Accelerator v1.0 OEM. Oct 13 05:34:52.188874 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Oct 13 05:34:52.191684 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. 
Oct 13 05:34:52.196678 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 13 05:34:52.198711 systemd[1]: Reached target remote-fs.target - Remote File Systems. Oct 13 05:34:52.201603 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Oct 13 05:34:52.223797 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Oct 13 05:34:52.638470 kernel: hv_pci 00000001-7870-47b5-b203-907d12ca697e: PCI VMBus probing: Using version 0x10004 Oct 13 05:34:52.638760 kernel: hv_pci 00000001-7870-47b5-b203-907d12ca697e: PCI host bridge to bus 7870:00 Oct 13 05:34:52.641413 kernel: pci_bus 7870:00: root bus resource [mem 0xfc2000000-0xfc4007fff window] Oct 13 05:34:52.644186 kernel: pci_bus 7870:00: No busn resource found for root bus, will use [bus 00-ff] Oct 13 05:34:52.649579 kernel: pci 7870:00:00.0: [1414:00ba] type 00 class 0x020000 PCIe Endpoint Oct 13 05:34:52.653440 kernel: pci 7870:00:00.0: BAR 0 [mem 0xfc2000000-0xfc3ffffff 64bit pref] Oct 13 05:34:52.658490 kernel: pci 7870:00:00.0: BAR 4 [mem 0xfc4000000-0xfc4007fff 64bit pref] Oct 13 05:34:52.658572 kernel: pci 7870:00:00.0: enabling Extended Tags Oct 13 05:34:52.674390 kernel: pci_bus 7870:00: busn_res: [bus 00-ff] end is updated to 00 Oct 13 05:34:52.674638 kernel: pci 7870:00:00.0: BAR 0 [mem 0xfc2000000-0xfc3ffffff 64bit pref]: assigned Oct 13 05:34:52.678668 kernel: pci 7870:00:00.0: BAR 4 [mem 0xfc4000000-0xfc4007fff 64bit pref]: assigned Oct 13 05:34:52.702191 kernel: mana 7870:00:00.0: enabling device (0000 -> 0002) Oct 13 05:34:52.712455 kernel: mana 7870:00:00.0: Microsoft Azure Network Adapter protocol version: 0.1.1 Oct 13 05:34:52.712704 kernel: hv_netvsc f8615163-0000-1000-2000-7c1e52886fc2 eth0: VF registering: eth1 Oct 13 05:34:52.714778 kernel: mana 7870:00:00.0 eth1: joined to eth0 Oct 13 05:34:52.718983 systemd-networkd[1403]: eth1: Interface name change detected, renamed to enP30832s1. 
Oct 13 05:34:52.721586 kernel: mana 7870:00:00.0 enP30832s1: renamed from eth1 Oct 13 05:34:52.822386 kernel: mana 7870:00:00.0 enP30832s1: Configured vPort 0 PD 18 DB 16 Oct 13 05:34:52.825395 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64 Oct 13 05:34:52.827610 systemd-networkd[1403]: enP30832s1: Link UP Oct 13 05:34:52.830464 kernel: hv_netvsc f8615163-0000-1000-2000-7c1e52886fc2 eth0: Data path switched to VF: enP30832s1 Oct 13 05:34:52.828970 systemd-networkd[1403]: enP30832s1: Gained carrier Oct 13 05:34:53.216790 disk-uuid[1582]: Warning: The kernel is still using the old partition table. Oct 13 05:34:53.216790 disk-uuid[1582]: The new table will be used at the next reboot or after you Oct 13 05:34:53.216790 disk-uuid[1582]: run partprobe(8) or kpartx(8) Oct 13 05:34:53.216790 disk-uuid[1582]: The operation has completed successfully. Oct 13 05:34:53.227212 systemd[1]: disk-uuid.service: Deactivated successfully. Oct 13 05:34:53.227315 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Oct 13 05:34:53.228687 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Oct 13 05:34:53.296386 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:5) scanned by mount (1629) Oct 13 05:34:53.296431 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 56bbaf92-79f4-4948-a1fd-5992c383eba8 Oct 13 05:34:53.298525 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Oct 13 05:34:53.397822 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Oct 13 05:34:53.397947 kernel: BTRFS info (device nvme0n1p6): turning on async discard Oct 13 05:34:53.398029 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Oct 13 05:34:53.404428 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 56bbaf92-79f4-4948-a1fd-5992c383eba8 Oct 13 05:34:53.405498 systemd[1]: Finished ignition-setup.service - Ignition (setup). 
Oct 13 05:34:53.411283 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Oct 13 05:34:53.515482 systemd-networkd[1403]: eth0: Gained IPv6LL Oct 13 05:34:54.526264 ignition[1648]: Ignition 2.22.0 Oct 13 05:34:54.526276 ignition[1648]: Stage: fetch-offline Oct 13 05:34:54.528189 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Oct 13 05:34:54.526415 ignition[1648]: no configs at "/usr/lib/ignition/base.d" Oct 13 05:34:54.526424 ignition[1648]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Oct 13 05:34:54.535502 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Oct 13 05:34:54.526516 ignition[1648]: parsed url from cmdline: "" Oct 13 05:34:54.526518 ignition[1648]: no config URL provided Oct 13 05:34:54.526527 ignition[1648]: reading system config file "/usr/lib/ignition/user.ign" Oct 13 05:34:54.526536 ignition[1648]: no config at "/usr/lib/ignition/user.ign" Oct 13 05:34:54.526542 ignition[1648]: failed to fetch config: resource requires networking Oct 13 05:34:54.526771 ignition[1648]: Ignition finished successfully Oct 13 05:34:54.570036 ignition[1654]: Ignition 2.22.0 Oct 13 05:34:54.570046 ignition[1654]: Stage: fetch Oct 13 05:34:54.570261 ignition[1654]: no configs at "/usr/lib/ignition/base.d" Oct 13 05:34:54.570269 ignition[1654]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Oct 13 05:34:54.570351 ignition[1654]: parsed url from cmdline: "" Oct 13 05:34:54.570354 ignition[1654]: no config URL provided Oct 13 05:34:54.570359 ignition[1654]: reading system config file "/usr/lib/ignition/user.ign" Oct 13 05:34:54.570364 ignition[1654]: no config at "/usr/lib/ignition/user.ign" Oct 13 05:34:54.570407 ignition[1654]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Oct 13 05:34:54.628613 ignition[1654]: GET result: OK Oct 13 05:34:54.628685 ignition[1654]: config has been read from IMDS userdata Oct 
13 05:34:54.628724 ignition[1654]: parsing config with SHA512: 14d2f33905a56308cd6d2e644df56337c801aa2075c1359ab2b7de12895d52a433f4b2a52b3ce683ab0b5939d821dba5fdfdb5eb1177584231283acdb78abba3 Oct 13 05:34:54.634862 unknown[1654]: fetched base config from "system" Oct 13 05:34:54.634872 unknown[1654]: fetched base config from "system" Oct 13 05:34:54.635208 ignition[1654]: fetch: fetch complete Oct 13 05:34:54.634877 unknown[1654]: fetched user config from "azure" Oct 13 05:34:54.635213 ignition[1654]: fetch: fetch passed Oct 13 05:34:54.637523 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Oct 13 05:34:54.635249 ignition[1654]: Ignition finished successfully Oct 13 05:34:54.645399 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Oct 13 05:34:54.671874 ignition[1660]: Ignition 2.22.0 Oct 13 05:34:54.671885 ignition[1660]: Stage: kargs Oct 13 05:34:54.674605 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Oct 13 05:34:54.672114 ignition[1660]: no configs at "/usr/lib/ignition/base.d" Oct 13 05:34:54.677858 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Oct 13 05:34:54.672122 ignition[1660]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Oct 13 05:34:54.672999 ignition[1660]: kargs: kargs passed Oct 13 05:34:54.673045 ignition[1660]: Ignition finished successfully Oct 13 05:34:54.703096 ignition[1667]: Ignition 2.22.0 Oct 13 05:34:54.703107 ignition[1667]: Stage: disks Oct 13 05:34:54.705449 systemd[1]: Finished ignition-disks.service - Ignition (disks). Oct 13 05:34:54.703328 ignition[1667]: no configs at "/usr/lib/ignition/base.d" Oct 13 05:34:54.708089 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Oct 13 05:34:54.703339 ignition[1667]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Oct 13 05:34:54.711427 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. 
Oct 13 05:34:54.704226 ignition[1667]: disks: disks passed Oct 13 05:34:54.714311 systemd[1]: Reached target local-fs.target - Local File Systems. Oct 13 05:34:54.704258 ignition[1667]: Ignition finished successfully Oct 13 05:34:54.716778 systemd[1]: Reached target sysinit.target - System Initialization. Oct 13 05:34:54.717275 systemd[1]: Reached target basic.target - Basic System. Oct 13 05:34:54.718053 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Oct 13 05:34:54.948947 systemd-fsck[1676]: ROOT: clean, 15/7340400 files, 470001/7359488 blocks Oct 13 05:34:54.952314 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Oct 13 05:34:54.959750 systemd[1]: Mounting sysroot.mount - /sysroot... Oct 13 05:34:57.916477 kernel: EXT4-fs (nvme0n1p9): mounted filesystem c7d6ef00-6dd1-40b4-91f2-c4c5965e3cac r/w with ordered data mode. Quota mode: none. Oct 13 05:34:57.917090 systemd[1]: Mounted sysroot.mount - /sysroot. Oct 13 05:34:57.919973 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Oct 13 05:34:57.971984 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Oct 13 05:34:58.002456 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Oct 13 05:34:58.003655 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Oct 13 05:34:58.011144 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Oct 13 05:34:58.012242 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Oct 13 05:34:58.019446 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Oct 13 05:34:58.023076 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Oct 13 05:34:58.061384 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:5) scanned by mount (1686) Oct 13 05:34:58.064331 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 56bbaf92-79f4-4948-a1fd-5992c383eba8 Oct 13 05:34:58.064364 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Oct 13 05:34:58.070691 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Oct 13 05:34:58.070729 kernel: BTRFS info (device nvme0n1p6): turning on async discard Oct 13 05:34:58.070742 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Oct 13 05:34:58.072843 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Oct 13 05:34:58.670472 coreos-metadata[1688]: Oct 13 05:34:58.670 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Oct 13 05:34:58.674546 coreos-metadata[1688]: Oct 13 05:34:58.674 INFO Fetch successful Oct 13 05:34:58.676154 coreos-metadata[1688]: Oct 13 05:34:58.675 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Oct 13 05:34:58.686817 coreos-metadata[1688]: Oct 13 05:34:58.686 INFO Fetch successful Oct 13 05:34:58.702724 coreos-metadata[1688]: Oct 13 05:34:58.702 INFO wrote hostname ci-4487.0.0-a-dfb3332019 to /sysroot/etc/hostname Oct 13 05:34:58.706661 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Oct 13 05:34:59.002170 initrd-setup-root[1717]: cut: /sysroot/etc/passwd: No such file or directory Oct 13 05:34:59.049905 initrd-setup-root[1724]: cut: /sysroot/etc/group: No such file or directory Oct 13 05:34:59.054616 initrd-setup-root[1731]: cut: /sysroot/etc/shadow: No such file or directory Oct 13 05:34:59.110857 initrd-setup-root[1738]: cut: /sysroot/etc/gshadow: No such file or directory Oct 13 05:35:01.393726 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. 
Oct 13 05:35:01.397473 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Oct 13 05:35:01.412549 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Oct 13 05:35:01.438833 systemd[1]: sysroot-oem.mount: Deactivated successfully. Oct 13 05:35:01.446046 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 56bbaf92-79f4-4948-a1fd-5992c383eba8 Oct 13 05:35:01.471694 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Oct 13 05:35:01.480302 ignition[1810]: INFO : Ignition 2.22.0 Oct 13 05:35:01.480302 ignition[1810]: INFO : Stage: mount Oct 13 05:35:01.483617 ignition[1810]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 13 05:35:01.483617 ignition[1810]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Oct 13 05:35:01.483617 ignition[1810]: INFO : mount: mount passed Oct 13 05:35:01.483617 ignition[1810]: INFO : Ignition finished successfully Oct 13 05:35:01.483982 systemd[1]: Finished ignition-mount.service - Ignition (mount). Oct 13 05:35:01.488443 systemd[1]: Starting ignition-files.service - Ignition (files)... Oct 13 05:35:01.506751 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Oct 13 05:35:01.522386 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:5) scanned by mount (1820) Oct 13 05:35:01.525873 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 56bbaf92-79f4-4948-a1fd-5992c383eba8 Oct 13 05:35:01.525906 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Oct 13 05:35:01.530473 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Oct 13 05:35:01.530509 kernel: BTRFS info (device nvme0n1p6): turning on async discard Oct 13 05:35:01.530591 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Oct 13 05:35:01.532933 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Oct 13 05:35:01.559514 ignition[1837]: INFO : Ignition 2.22.0 Oct 13 05:35:01.559514 ignition[1837]: INFO : Stage: files Oct 13 05:35:01.563417 ignition[1837]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 13 05:35:01.563417 ignition[1837]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Oct 13 05:35:01.563417 ignition[1837]: DEBUG : files: compiled without relabeling support, skipping Oct 13 05:35:01.593789 ignition[1837]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Oct 13 05:35:01.593789 ignition[1837]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Oct 13 05:35:01.668637 ignition[1837]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Oct 13 05:35:01.671270 ignition[1837]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Oct 13 05:35:01.671270 ignition[1837]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Oct 13 05:35:01.668919 unknown[1837]: wrote ssh authorized keys file for user: core Oct 13 05:35:01.785944 ignition[1837]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Oct 13 05:35:01.788831 ignition[1837]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Oct 13 05:35:01.835028 ignition[1837]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Oct 13 05:35:01.880799 ignition[1837]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Oct 13 05:35:01.884014 ignition[1837]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Oct 13 05:35:01.884014 ignition[1837]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" 
Oct 13 05:35:01.884014 ignition[1837]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Oct 13 05:35:01.884014 ignition[1837]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Oct 13 05:35:01.884014 ignition[1837]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Oct 13 05:35:01.884014 ignition[1837]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Oct 13 05:35:01.884014 ignition[1837]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Oct 13 05:35:01.884014 ignition[1837]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Oct 13 05:35:01.908404 ignition[1837]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Oct 13 05:35:01.908404 ignition[1837]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Oct 13 05:35:01.908404 ignition[1837]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Oct 13 05:35:01.908404 ignition[1837]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Oct 13 05:35:01.908404 ignition[1837]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Oct 13 05:35:01.908404 ignition[1837]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET 
https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-x86-64.raw: attempt #1 Oct 13 05:35:02.163294 ignition[1837]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Oct 13 05:35:02.782258 ignition[1837]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Oct 13 05:35:02.782258 ignition[1837]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Oct 13 05:35:02.882608 ignition[1837]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Oct 13 05:35:02.955198 ignition[1837]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Oct 13 05:35:02.955198 ignition[1837]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Oct 13 05:35:02.955198 ignition[1837]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Oct 13 05:35:02.966466 ignition[1837]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Oct 13 05:35:02.966466 ignition[1837]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Oct 13 05:35:02.966466 ignition[1837]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Oct 13 05:35:02.966466 ignition[1837]: INFO : files: files passed Oct 13 05:35:02.966466 ignition[1837]: INFO : Ignition finished successfully Oct 13 05:35:02.961823 systemd[1]: Finished ignition-files.service - Ignition (files). Oct 13 05:35:02.966363 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Oct 13 05:35:02.983038 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... 
Oct 13 05:35:03.022302 systemd[1]: ignition-quench.service: Deactivated successfully.
Oct 13 05:35:03.022429 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Oct 13 05:35:03.038397 initrd-setup-root-after-ignition[1867]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Oct 13 05:35:03.042880 initrd-setup-root-after-ignition[1871]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Oct 13 05:35:03.040311 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Oct 13 05:35:03.054624 initrd-setup-root-after-ignition[1867]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Oct 13 05:35:03.043825 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Oct 13 05:35:03.051499 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Oct 13 05:35:03.097755 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Oct 13 05:35:03.097841 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Oct 13 05:35:03.101655 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Oct 13 05:35:03.105018 systemd[1]: Reached target initrd.target - Initrd Default Target.
Oct 13 05:35:03.108413 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Oct 13 05:35:03.109024 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Oct 13 05:35:03.138500 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Oct 13 05:35:03.141496 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Oct 13 05:35:03.155662 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Oct 13 05:35:03.155883 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Oct 13 05:35:03.164560 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 13 05:35:03.167863 systemd[1]: Stopped target timers.target - Timer Units.
Oct 13 05:35:03.172533 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Oct 13 05:35:03.172668 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Oct 13 05:35:03.183843 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Oct 13 05:35:03.187151 systemd[1]: Stopped target basic.target - Basic System.
Oct 13 05:35:03.193536 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Oct 13 05:35:03.197506 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Oct 13 05:35:03.201512 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Oct 13 05:35:03.203814 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Oct 13 05:35:03.204147 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Oct 13 05:35:03.209458 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Oct 13 05:35:03.212635 systemd[1]: Stopped target sysinit.target - System Initialization.
Oct 13 05:35:03.218480 systemd[1]: Stopped target local-fs.target - Local File Systems.
Oct 13 05:35:03.221130 systemd[1]: Stopped target swap.target - Swaps.
Oct 13 05:35:03.223087 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Oct 13 05:35:03.223483 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Oct 13 05:35:03.229901 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Oct 13 05:35:03.233819 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 13 05:35:03.238513 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Oct 13 05:35:03.238815 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 13 05:35:03.243426 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Oct 13 05:35:03.243560 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Oct 13 05:35:03.250789 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Oct 13 05:35:03.250913 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Oct 13 05:35:03.257476 systemd[1]: ignition-files.service: Deactivated successfully.
Oct 13 05:35:03.257579 systemd[1]: Stopped ignition-files.service - Ignition (files).
Oct 13 05:35:03.262953 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Oct 13 05:35:03.263106 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Oct 13 05:35:03.271792 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Oct 13 05:35:03.277918 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Oct 13 05:35:03.279591 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 13 05:35:03.285959 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Oct 13 05:35:03.288806 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Oct 13 05:35:03.288951 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Oct 13 05:35:03.293609 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Oct 13 05:35:03.293719 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 13 05:35:03.302880 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Oct 13 05:35:03.302991 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Oct 13 05:35:03.317591 ignition[1891]: INFO : Ignition 2.22.0
Oct 13 05:35:03.317591 ignition[1891]: INFO : Stage: umount
Oct 13 05:35:03.327092 ignition[1891]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 13 05:35:03.327092 ignition[1891]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Oct 13 05:35:03.327092 ignition[1891]: INFO : umount: umount passed
Oct 13 05:35:03.327092 ignition[1891]: INFO : Ignition finished successfully
Oct 13 05:35:03.322975 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Oct 13 05:35:03.323058 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Oct 13 05:35:03.325921 systemd[1]: ignition-mount.service: Deactivated successfully.
Oct 13 05:35:03.326005 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Oct 13 05:35:03.338072 systemd[1]: ignition-disks.service: Deactivated successfully.
Oct 13 05:35:03.338144 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Oct 13 05:35:03.338977 systemd[1]: ignition-kargs.service: Deactivated successfully.
Oct 13 05:35:03.339023 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Oct 13 05:35:03.339277 systemd[1]: ignition-fetch.service: Deactivated successfully.
Oct 13 05:35:03.339307 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Oct 13 05:35:03.339660 systemd[1]: Stopped target network.target - Network.
Oct 13 05:35:03.339691 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Oct 13 05:35:03.339728 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Oct 13 05:35:03.340018 systemd[1]: Stopped target paths.target - Path Units.
Oct 13 05:35:03.340039 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Oct 13 05:35:03.341430 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 13 05:35:03.349426 systemd[1]: Stopped target slices.target - Slice Units.
Oct 13 05:35:03.352160 systemd[1]: Stopped target sockets.target - Socket Units.
Oct 13 05:35:03.354444 systemd[1]: iscsid.socket: Deactivated successfully.
Oct 13 05:35:03.354481 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Oct 13 05:35:03.356964 systemd[1]: iscsiuio.socket: Deactivated successfully.
Oct 13 05:35:03.356992 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Oct 13 05:35:03.358290 systemd[1]: ignition-setup.service: Deactivated successfully.
Oct 13 05:35:03.358333 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Oct 13 05:35:03.362435 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Oct 13 05:35:03.362473 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Oct 13 05:35:03.365218 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Oct 13 05:35:03.368314 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Oct 13 05:35:03.371982 systemd[1]: systemd-networkd.service: Deactivated successfully.
Oct 13 05:35:03.372072 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Oct 13 05:35:03.376865 systemd[1]: systemd-resolved.service: Deactivated successfully.
Oct 13 05:35:03.376963 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Oct 13 05:35:03.382014 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Oct 13 05:35:03.385398 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Oct 13 05:35:03.385436 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Oct 13 05:35:03.391962 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Oct 13 05:35:03.395689 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Oct 13 05:35:03.397266 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Oct 13 05:35:03.399858 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Oct 13 05:35:03.399907 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Oct 13 05:35:03.401873 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Oct 13 05:35:03.401920 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Oct 13 05:35:03.404948 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 13 05:35:03.408532 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Oct 13 05:35:03.426939 systemd[1]: systemd-udevd.service: Deactivated successfully.
Oct 13 05:35:03.427076 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 13 05:35:03.462778 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Oct 13 05:35:03.462835 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Oct 13 05:35:03.463185 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Oct 13 05:35:03.463211 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 13 05:35:03.467295 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Oct 13 05:35:03.467343 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Oct 13 05:35:03.478840 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Oct 13 05:35:03.480186 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Oct 13 05:35:03.483284 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Oct 13 05:35:03.484663 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 13 05:35:03.488363 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Oct 13 05:35:03.488763 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Oct 13 05:35:03.488805 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Oct 13 05:35:03.488855 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Oct 13 05:35:03.488888 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 13 05:35:03.489112 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 13 05:35:03.489142 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 13 05:35:03.503977 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Oct 13 05:35:03.504045 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Oct 13 05:35:03.602605 kernel: hv_netvsc f8615163-0000-1000-2000-7c1e52886fc2 eth0: Data path switched from VF: enP30832s1
Oct 13 05:35:03.602890 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64
Oct 13 05:35:03.604941 systemd[1]: network-cleanup.service: Deactivated successfully.
Oct 13 05:35:03.606428 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Oct 13 05:35:03.936978 systemd[1]: sysroot-boot.service: Deactivated successfully.
Oct 13 05:35:03.937089 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Oct 13 05:35:03.944034 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Oct 13 05:35:03.944406 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Oct 13 05:35:03.944461 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Oct 13 05:35:03.947509 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Oct 13 05:35:03.982623 systemd[1]: Switching root.
Oct 13 05:35:04.062224 systemd-journald[1016]: Journal stopped
Oct 13 05:35:13.266313 systemd-journald[1016]: Received SIGTERM from PID 1 (systemd).
Oct 13 05:35:13.266335 kernel: SELinux: policy capability network_peer_controls=1
Oct 13 05:35:13.266345 kernel: SELinux: policy capability open_perms=1
Oct 13 05:35:13.266352 kernel: SELinux: policy capability extended_socket_class=1
Oct 13 05:35:13.266359 kernel: SELinux: policy capability always_check_network=0
Oct 13 05:35:13.266375 kernel: SELinux: policy capability cgroup_seclabel=1
Oct 13 05:35:13.266384 kernel: SELinux: policy capability nnp_nosuid_transition=1
Oct 13 05:35:13.266391 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Oct 13 05:35:13.266397 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Oct 13 05:35:13.266403 kernel: SELinux: policy capability userspace_initial_context=0
Oct 13 05:35:13.266409 kernel: audit: type=1403 audit(1760333705.421:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Oct 13 05:35:13.266417 systemd[1]: Successfully loaded SELinux policy in 310.075ms.
Oct 13 05:35:13.266426 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 4.415ms.
Oct 13 05:35:13.266434 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Oct 13 05:35:13.266442 systemd[1]: Detected virtualization microsoft.
Oct 13 05:35:13.266451 systemd[1]: Detected architecture x86-64.
Oct 13 05:35:13.266458 systemd[1]: Detected first boot.
Oct 13 05:35:13.266465 systemd[1]: Hostname set to .
Oct 13 05:35:13.266472 systemd[1]: Initializing machine ID from random generator.
Oct 13 05:35:13.266479 zram_generator::config[1934]: No configuration found.
Oct 13 05:35:13.266486 kernel: Guest personality initialized and is inactive
Oct 13 05:35:13.266494 kernel: VMCI host device registered (name=vmci, major=10, minor=124)
Oct 13 05:35:13.266501 kernel: Initialized host personality
Oct 13 05:35:13.266507 kernel: NET: Registered PF_VSOCK protocol family
Oct 13 05:35:13.266514 systemd[1]: Populated /etc with preset unit settings.
Oct 13 05:35:13.266523 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Oct 13 05:35:13.266530 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Oct 13 05:35:13.266539 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Oct 13 05:35:13.266547 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Oct 13 05:35:13.266553 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Oct 13 05:35:13.266560 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Oct 13 05:35:13.266567 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Oct 13 05:35:13.266574 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Oct 13 05:35:13.266582 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Oct 13 05:35:13.266589 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Oct 13 05:35:13.266596 systemd[1]: Created slice user.slice - User and Session Slice.
Oct 13 05:35:13.266603 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 13 05:35:13.266610 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 13 05:35:13.266616 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Oct 13 05:35:13.266625 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Oct 13 05:35:13.266633 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Oct 13 05:35:13.266641 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Oct 13 05:35:13.266648 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Oct 13 05:35:13.266655 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 13 05:35:13.266662 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Oct 13 05:35:13.266671 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Oct 13 05:35:13.266679 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Oct 13 05:35:13.266686 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Oct 13 05:35:13.266694 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Oct 13 05:35:13.266701 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 13 05:35:13.266708 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Oct 13 05:35:13.266715 systemd[1]: Reached target slices.target - Slice Units.
Oct 13 05:35:13.266724 systemd[1]: Reached target swap.target - Swaps.
Oct 13 05:35:13.266731 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Oct 13 05:35:13.266738 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Oct 13 05:35:13.266745 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Oct 13 05:35:13.266753 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Oct 13 05:35:13.266761 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Oct 13 05:35:13.266769 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 13 05:35:13.266776 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Oct 13 05:35:13.266783 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Oct 13 05:35:13.266790 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Oct 13 05:35:13.266797 systemd[1]: Mounting media.mount - External Media Directory...
Oct 13 05:35:13.266806 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 13 05:35:13.266813 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Oct 13 05:35:13.266821 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Oct 13 05:35:13.266828 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Oct 13 05:35:13.266836 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Oct 13 05:35:13.266844 systemd[1]: Reached target machines.target - Containers.
Oct 13 05:35:13.266853 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Oct 13 05:35:13.266860 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 13 05:35:13.266868 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Oct 13 05:35:13.266875 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Oct 13 05:35:13.266882 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 13 05:35:13.266890 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Oct 13 05:35:13.266897 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 13 05:35:13.266906 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Oct 13 05:35:13.266913 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 13 05:35:13.266920 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Oct 13 05:35:13.266927 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Oct 13 05:35:13.266935 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Oct 13 05:35:13.266942 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Oct 13 05:35:13.266949 systemd[1]: Stopped systemd-fsck-usr.service.
Oct 13 05:35:13.266958 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Oct 13 05:35:13.266966 systemd[1]: Starting systemd-journald.service - Journal Service...
Oct 13 05:35:13.266972 kernel: fuse: init (API version 7.41)
Oct 13 05:35:13.266988 systemd-journald[2017]: Collecting audit messages is disabled.
Oct 13 05:35:13.267006 systemd-journald[2017]: Journal started
Oct 13 05:35:13.267025 systemd-journald[2017]: Runtime Journal (/run/log/journal/9cc1a1f42b074b90aa82915840dc7799) is 8M, max 158.9M, 150.9M free.
Oct 13 05:35:12.664039 systemd[1]: Queued start job for default target multi-user.target.
Oct 13 05:35:12.674903 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Oct 13 05:35:12.675344 systemd[1]: systemd-journald.service: Deactivated successfully.
Oct 13 05:35:13.275394 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Oct 13 05:35:13.280426 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Oct 13 05:35:13.292994 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Oct 13 05:35:13.298706 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Oct 13 05:35:13.308399 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Oct 13 05:35:13.308440 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 13 05:35:13.312472 systemd[1]: Started systemd-journald.service - Journal Service.
Oct 13 05:35:13.318266 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Oct 13 05:35:13.322151 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Oct 13 05:35:13.325578 systemd[1]: Mounted media.mount - External Media Directory.
Oct 13 05:35:13.327391 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Oct 13 05:35:13.329023 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Oct 13 05:35:13.330814 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Oct 13 05:35:13.332617 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 13 05:35:13.336950 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct 13 05:35:13.337209 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Oct 13 05:35:13.340708 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 13 05:35:13.340969 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 13 05:35:13.344723 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 13 05:35:13.344974 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 13 05:35:13.346975 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Oct 13 05:35:13.347220 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Oct 13 05:35:13.349267 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 13 05:35:13.349649 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 13 05:35:13.353905 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Oct 13 05:35:13.357424 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Oct 13 05:35:13.371627 systemd[1]: Reached target network-pre.target - Preparation for Network.
Oct 13 05:35:13.393936 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket.
Oct 13 05:35:13.398130 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Oct 13 05:35:13.401759 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Oct 13 05:35:13.403766 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Oct 13 05:35:13.404452 systemd[1]: Reached target local-fs.target - Local File Systems.
Oct 13 05:35:13.408705 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Oct 13 05:35:13.411307 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 13 05:35:13.431443 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Oct 13 05:35:13.436678 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Oct 13 05:35:13.441484 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 13 05:35:13.443574 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Oct 13 05:35:13.446101 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Oct 13 05:35:13.448500 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Oct 13 05:35:13.455494 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Oct 13 05:35:13.458335 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Oct 13 05:35:13.461285 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Oct 13 05:35:13.464176 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Oct 13 05:35:13.472767 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Oct 13 05:35:13.476607 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Oct 13 05:35:13.484273 systemd-journald[2017]: Time spent on flushing to /var/log/journal/9cc1a1f42b074b90aa82915840dc7799 is 17.333ms for 979 entries.
Oct 13 05:35:13.484273 systemd-journald[2017]: System Journal (/var/log/journal/9cc1a1f42b074b90aa82915840dc7799) is 8M, max 2.6G, 2.6G free.
Oct 13 05:35:13.652598 systemd-journald[2017]: Received client request to flush runtime journal.
Oct 13 05:35:13.652657 kernel: ACPI: bus type drm_connector registered
Oct 13 05:35:13.652682 kernel: loop1: detected capacity change from 0 to 219144
Oct 13 05:35:13.495448 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Oct 13 05:35:13.504337 systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct 13 05:35:13.505477 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Oct 13 05:35:13.528163 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 13 05:35:13.564687 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Oct 13 05:35:13.567845 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Oct 13 05:35:13.572591 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Oct 13 05:35:13.653555 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Oct 13 05:35:13.673564 kernel: loop2: detected capacity change from 0 to 27752
Oct 13 05:35:13.772159 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Oct 13 05:35:13.773179 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Oct 13 05:35:13.838604 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Oct 13 05:35:14.532402 kernel: loop3: detected capacity change from 0 to 128048
Oct 13 05:35:14.720299 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Oct 13 05:35:14.724177 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Oct 13 05:35:14.730505 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Oct 13 05:35:14.833090 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Oct 13 05:35:14.878491 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Oct 13 05:35:14.928880 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Oct 13 05:35:14.953398 systemd-tmpfiles[2093]: ACLs are not supported, ignoring.
Oct 13 05:35:14.953415 systemd-tmpfiles[2093]: ACLs are not supported, ignoring.
Oct 13 05:35:14.957219 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 13 05:35:14.965584 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 13 05:35:14.995891 systemd-udevd[2104]: Using default interface naming scheme 'v257'.
Oct 13 05:35:15.093624 systemd-resolved[2092]: Positive Trust Anchors:
Oct 13 05:35:15.093641 systemd-resolved[2092]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Oct 13 05:35:15.093645 systemd-resolved[2092]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Oct 13 05:35:15.093677 systemd-resolved[2092]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Oct 13 05:35:15.238415 systemd-resolved[2092]: Using system hostname 'ci-4487.0.0-a-dfb3332019'.
Oct 13 05:35:15.239387 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Oct 13 05:35:15.241024 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Oct 13 05:35:15.425393 kernel: loop4: detected capacity change from 0 to 110984
Oct 13 05:35:15.914405 kernel: loop5: detected capacity change from 0 to 219144
Oct 13 05:35:15.957600 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 13 05:35:15.963175 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Oct 13 05:35:15.972901 kernel: loop6: detected capacity change from 0 to 27752
Oct 13 05:35:15.983395 kernel: loop7: detected capacity change from 0 to 128048
Oct 13 05:35:16.015492 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Oct 13 05:35:16.078389 kernel: loop1: detected capacity change from 0 to 110984
Oct 13 05:35:16.088530 (sd-merge)[2107]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw', 'oem-azure.raw'.
Oct 13 05:35:16.092479 kernel: hv_vmbus: registering driver hyperv_fb Oct 13 05:35:16.092539 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Oct 13 05:35:16.097270 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Oct 13 05:35:16.096824 (sd-merge)[2107]: Merged extensions into '/usr'. Oct 13 05:35:16.099300 kernel: Console: switching to colour dummy device 80x25 Oct 13 05:35:16.103921 kernel: Console: switching to colour frame buffer device 128x48 Oct 13 05:35:16.102844 systemd[1]: Reload requested from client PID 2070 ('systemd-sysext') (unit systemd-sysext.service)... Oct 13 05:35:16.102864 systemd[1]: Reloading... Oct 13 05:35:16.150404 kernel: hv_vmbus: registering driver hv_balloon Oct 13 05:35:16.171390 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Oct 13 05:35:16.199463 zram_generator::config[2187]: No configuration found. Oct 13 05:35:16.232408 kernel: mousedev: PS/2 mouse device common for all mice Oct 13 05:35:16.257392 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#200 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Oct 13 05:35:16.384726 systemd-networkd[2117]: lo: Link UP Oct 13 05:35:16.384741 systemd-networkd[2117]: lo: Gained carrier Oct 13 05:35:16.396911 kernel: mana 7870:00:00.0 enP30832s1: Configured vPort 0 PD 18 DB 16 Oct 13 05:35:16.390739 systemd-networkd[2117]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Oct 13 05:35:16.390746 systemd-networkd[2117]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Oct 13 05:35:16.433795 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64 Oct 13 05:35:16.434055 kernel: hv_netvsc f8615163-0000-1000-2000-7c1e52886fc2 eth0: Data path switched to VF: enP30832s1 Oct 13 05:35:16.422723 systemd-networkd[2117]: enP30832s1: Link UP Oct 13 05:35:16.422817 systemd-networkd[2117]: eth0: Link UP Oct 13 05:35:16.422821 systemd-networkd[2117]: eth0: Gained carrier Oct 13 05:35:16.422836 systemd-networkd[2117]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Oct 13 05:35:16.440406 systemd-networkd[2117]: enP30832s1: Gained carrier Oct 13 05:35:16.451413 systemd-networkd[2117]: eth0: DHCPv4 address 10.200.8.45/24, gateway 10.200.8.1 acquired from 168.63.129.16 Oct 13 05:35:16.543386 kernel: kvm_intel: Using Hyper-V Enlightened VMCS Oct 13 05:35:16.587872 systemd[1]: Reloading finished in 484 ms. Oct 13 05:35:16.607084 systemd[1]: Started systemd-networkd.service - Network Configuration. Oct 13 05:35:16.610652 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Oct 13 05:35:16.625779 systemd[1]: Reached target network.target - Network. Oct 13 05:35:16.632145 systemd[1]: Starting ensure-sysext.service... Oct 13 05:35:16.636646 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Oct 13 05:35:16.640658 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Oct 13 05:35:16.646136 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Oct 13 05:35:16.651219 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 13 05:35:16.669290 systemd[1]: Reload requested from client PID 2255 ('systemctl') (unit ensure-sysext.service)... Oct 13 05:35:16.669402 systemd[1]: Reloading... 
Oct 13 05:35:16.736464 zram_generator::config[2294]: No configuration found. Oct 13 05:35:16.862323 systemd-tmpfiles[2258]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Oct 13 05:35:16.862352 systemd-tmpfiles[2258]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Oct 13 05:35:16.862632 systemd-tmpfiles[2258]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Oct 13 05:35:16.862849 systemd-tmpfiles[2258]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Oct 13 05:35:16.863573 systemd-tmpfiles[2258]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Oct 13 05:35:16.863778 systemd-tmpfiles[2258]: ACLs are not supported, ignoring. Oct 13 05:35:16.863856 systemd-tmpfiles[2258]: ACLs are not supported, ignoring. Oct 13 05:35:16.918379 systemd[1]: Reloading finished in 248 ms. Oct 13 05:35:16.954537 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Oct 13 05:35:16.960749 systemd-tmpfiles[2258]: Detected autofs mount point /boot during canonicalization of boot. Oct 13 05:35:16.960844 systemd-tmpfiles[2258]: Skipping /boot Oct 13 05:35:16.967927 systemd-tmpfiles[2258]: Detected autofs mount point /boot during canonicalization of boot. Oct 13 05:35:16.968185 systemd-tmpfiles[2258]: Skipping /boot Oct 13 05:35:16.977326 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 13 05:35:16.977595 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 13 05:35:16.979577 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 13 05:35:16.983820 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
Oct 13 05:35:16.991838 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 13 05:35:16.994296 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 13 05:35:16.994457 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 13 05:35:16.994786 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 13 05:35:17.002161 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 13 05:35:17.007941 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 13 05:35:17.008115 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 13 05:35:17.013977 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 13 05:35:17.014232 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 13 05:35:17.017863 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 13 05:35:17.019610 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 13 05:35:17.024756 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 13 05:35:17.024928 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 13 05:35:17.052668 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - MSFT NVMe Accelerator v1.0 OEM. Oct 13 05:35:17.056265 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 13 05:35:17.057282 systemd[1]: Starting audit-rules.service - Load Audit Rules... 
Oct 13 05:35:17.061868 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Oct 13 05:35:17.064116 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 13 05:35:17.066575 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Oct 13 05:35:17.069734 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 13 05:35:17.075578 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Oct 13 05:35:17.078330 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 13 05:35:17.084703 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 13 05:35:17.086659 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 13 05:35:17.091981 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Oct 13 05:35:17.094491 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 13 05:35:17.096410 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Oct 13 05:35:17.099562 systemd[1]: Reached target time-set.target - System Time Set. Oct 13 05:35:17.105588 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Oct 13 05:35:17.116547 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 13 05:35:17.118650 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 13 05:35:17.121699 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Oct 13 05:35:17.121889 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 13 05:35:17.126767 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 13 05:35:17.126945 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Oct 13 05:35:17.130035 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 13 05:35:17.130224 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 13 05:35:17.130678 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 13 05:35:17.131004 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 13 05:35:17.136664 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 13 05:35:17.138622 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Oct 13 05:35:17.146504 systemd[1]: Finished ensure-sysext.service. Oct 13 05:35:17.149987 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Oct 13 05:35:17.204863 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Oct 13 05:35:17.526655 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Oct 13 05:35:17.854514 augenrules[2415]: No rules Oct 13 05:35:17.855523 systemd[1]: audit-rules.service: Deactivated successfully. Oct 13 05:35:17.855823 systemd[1]: Finished audit-rules.service - Load Audit Rules. Oct 13 05:35:18.283511 systemd-networkd[2117]: eth0: Gained IPv6LL Oct 13 05:35:18.285546 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Oct 13 05:35:18.285880 systemd[1]: Reached target network-online.target - Network is Online. Oct 13 05:35:18.914186 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Oct 13 05:35:20.647316 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Oct 13 05:35:20.649625 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Oct 13 05:35:28.429347 ldconfig[2376]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Oct 13 05:35:28.443696 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Oct 13 05:35:28.448886 systemd[1]: Starting systemd-update-done.service - Update is Completed... Oct 13 05:35:28.476698 systemd[1]: Finished systemd-update-done.service - Update is Completed. Oct 13 05:35:28.478435 systemd[1]: Reached target sysinit.target - System Initialization. Oct 13 05:35:28.481524 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Oct 13 05:35:28.483193 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Oct 13 05:35:28.486429 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Oct 13 05:35:28.489535 systemd[1]: Started logrotate.timer - Daily rotation of log files. Oct 13 05:35:28.492503 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Oct 13 05:35:28.495421 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Oct 13 05:35:28.497257 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Oct 13 05:35:28.497287 systemd[1]: Reached target paths.target - Path Units. Oct 13 05:35:28.498771 systemd[1]: Reached target timers.target - Timer Units. Oct 13 05:35:28.530423 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. 
Oct 13 05:35:28.535356 systemd[1]: Starting docker.socket - Docker Socket for the API... Oct 13 05:35:28.539961 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Oct 13 05:35:28.541859 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Oct 13 05:35:28.545446 systemd[1]: Reached target ssh-access.target - SSH Access Available. Oct 13 05:35:28.549787 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Oct 13 05:35:28.551824 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Oct 13 05:35:28.554176 systemd[1]: Listening on docker.socket - Docker Socket for the API. Oct 13 05:35:28.556182 systemd[1]: Reached target sockets.target - Socket Units. Oct 13 05:35:28.557584 systemd[1]: Reached target basic.target - Basic System. Oct 13 05:35:28.558893 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Oct 13 05:35:28.558917 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Oct 13 05:35:28.576798 systemd[1]: Starting chronyd.service - NTP client/server... Oct 13 05:35:28.594527 systemd[1]: Starting containerd.service - containerd container runtime... Oct 13 05:35:28.598505 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Oct 13 05:35:28.603525 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Oct 13 05:35:28.606528 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Oct 13 05:35:28.611546 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Oct 13 05:35:28.619745 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... 
Oct 13 05:35:28.622181 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Oct 13 05:35:28.623575 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Oct 13 05:35:28.625896 systemd[1]: hv_fcopy_uio_daemon.service - Hyper-V FCOPY UIO daemon was skipped because of an unmet condition check (ConditionPathExists=/sys/bus/vmbus/devices/eb765408-105f-49b6-b4aa-c123b64d17d4/uio). Oct 13 05:35:28.628947 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Oct 13 05:35:28.632508 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Oct 13 05:35:28.635395 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 13 05:35:28.640485 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Oct 13 05:35:28.648547 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Oct 13 05:35:28.653162 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Oct 13 05:35:28.658569 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Oct 13 05:35:28.666391 jq[2436]: false Oct 13 05:35:28.665390 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Oct 13 05:35:28.680475 systemd[1]: Starting systemd-logind.service - User Login Management... Oct 13 05:35:28.682875 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Oct 13 05:35:28.683316 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Oct 13 05:35:28.685484 systemd[1]: Starting update-engine.service - Update Engine... 
Oct 13 05:35:28.691560 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Oct 13 05:35:28.697793 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Oct 13 05:35:28.701220 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Oct 13 05:35:28.705584 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Oct 13 05:35:28.718225 jq[2453]: true Oct 13 05:35:28.724457 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Oct 13 05:35:28.726846 oslogin_cache_refresh[2438]: Refreshing passwd entry cache Oct 13 05:35:28.728945 google_oslogin_nss_cache[2438]: oslogin_cache_refresh[2438]: Refreshing passwd entry cache Oct 13 05:35:28.724643 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Oct 13 05:35:28.740162 google_oslogin_nss_cache[2438]: oslogin_cache_refresh[2438]: Failure getting users, quitting Oct 13 05:35:28.743596 oslogin_cache_refresh[2438]: Failure getting users, quitting Oct 13 05:35:28.745657 extend-filesystems[2437]: Found /dev/nvme0n1p6 Oct 13 05:35:28.748483 google_oslogin_nss_cache[2438]: oslogin_cache_refresh[2438]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Oct 13 05:35:28.748483 google_oslogin_nss_cache[2438]: oslogin_cache_refresh[2438]: Refreshing group entry cache Oct 13 05:35:28.743625 oslogin_cache_refresh[2438]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Oct 13 05:35:28.743664 oslogin_cache_refresh[2438]: Refreshing group entry cache Oct 13 05:35:28.750554 jq[2474]: true Oct 13 05:35:28.764916 google_oslogin_nss_cache[2438]: oslogin_cache_refresh[2438]: Failure getting groups, quitting Oct 13 05:35:28.764916 google_oslogin_nss_cache[2438]: oslogin_cache_refresh[2438]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. 
Oct 13 05:35:28.764913 oslogin_cache_refresh[2438]: Failure getting groups, quitting Oct 13 05:35:28.764923 oslogin_cache_refresh[2438]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Oct 13 05:35:28.768451 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Oct 13 05:35:28.771912 chronyd[2431]: chronyd version 4.7 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +NTS +SECHASH +IPV6 -DEBUG) Oct 13 05:35:28.773190 KVP[2439]: KVP starting; pid is:2439 Oct 13 05:35:28.775499 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Oct 13 05:35:28.779854 extend-filesystems[2437]: Found /dev/nvme0n1p9 Oct 13 05:35:28.781989 kernel: hv_utils: KVP IC version 4.0 Oct 13 05:35:28.781817 KVP[2439]: KVP LIC Version: 3.1 Oct 13 05:35:28.784270 (ntainerd)[2478]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Oct 13 05:35:28.787311 systemd[1]: motdgen.service: Deactivated successfully. Oct 13 05:35:28.787555 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Oct 13 05:35:28.793123 update_engine[2450]: I20251013 05:35:28.793051 2450 main.cc:92] Flatcar Update Engine starting Oct 13 05:35:28.794193 extend-filesystems[2437]: Checking size of /dev/nvme0n1p9 Oct 13 05:35:28.809540 tar[2465]: linux-amd64/LICENSE Oct 13 05:35:28.811257 tar[2465]: linux-amd64/helm Oct 13 05:35:28.821948 chronyd[2431]: Timezone right/UTC failed leap second check, ignoring Oct 13 05:35:28.822198 systemd[1]: Started chronyd.service - NTP client/server. Oct 13 05:35:28.822092 chronyd[2431]: Loaded seccomp filter (level 2) Oct 13 05:35:28.834060 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Oct 13 05:35:28.849362 extend-filesystems[2437]: Resized partition /dev/nvme0n1p9 Oct 13 05:35:28.878120 systemd-logind[2449]: New seat seat0. 
Oct 13 05:35:28.882675 systemd-logind[2449]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard) Oct 13 05:35:28.882829 systemd[1]: Started systemd-logind.service - User Login Management. Oct 13 05:35:28.887039 extend-filesystems[2517]: resize2fs 1.47.3 (8-Jul-2025) Oct 13 05:35:28.988015 bash[2511]: Updated "/home/core/.ssh/authorized_keys" Oct 13 05:35:28.987861 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Oct 13 05:35:28.994795 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 7359488 to 7376891 blocks Oct 13 05:35:28.992962 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Oct 13 05:35:29.014433 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 7376891 Oct 13 05:35:29.069125 extend-filesystems[2517]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Oct 13 05:35:29.069125 extend-filesystems[2517]: old_desc_blocks = 4, new_desc_blocks = 4 Oct 13 05:35:29.069125 extend-filesystems[2517]: The filesystem on /dev/nvme0n1p9 is now 7376891 (4k) blocks long. Oct 13 05:35:29.082985 extend-filesystems[2437]: Resized filesystem in /dev/nvme0n1p9 Oct 13 05:35:29.070228 systemd[1]: extend-filesystems.service: Deactivated successfully. Oct 13 05:35:29.070469 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Oct 13 05:35:29.218835 sshd_keygen[2467]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Oct 13 05:35:29.272930 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Oct 13 05:35:29.277668 systemd[1]: Starting issuegen.service - Generate /run/issue... Oct 13 05:35:29.281469 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Oct 13 05:35:29.296544 systemd[1]: issuegen.service: Deactivated successfully. Oct 13 05:35:29.296735 systemd[1]: Finished issuegen.service - Generate /run/issue. 
Oct 13 05:35:29.317946 dbus-daemon[2434]: [system] SELinux support is enabled Oct 13 05:35:29.322184 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Oct 13 05:35:29.325409 update_engine[2450]: I20251013 05:35:29.325338 2450 update_check_scheduler.cc:74] Next update check in 4m58s Oct 13 05:35:29.327569 systemd[1]: Started dbus.service - D-Bus System Message Bus. Oct 13 05:35:29.339666 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Oct 13 05:35:29.339707 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Oct 13 05:35:29.342862 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Oct 13 05:35:29.342891 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Oct 13 05:35:29.344483 dbus-daemon[2434]: [system] Successfully activated service 'org.freedesktop.systemd1' Oct 13 05:35:29.344979 systemd[1]: Started update-engine.service - Update Engine. Oct 13 05:35:29.348943 systemd[1]: Started locksmithd.service - Cluster reboot manager. Oct 13 05:35:29.366272 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Oct 13 05:35:29.373793 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Oct 13 05:35:29.377931 systemd[1]: Started getty@tty1.service - Getty on tty1. Oct 13 05:35:29.381118 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Oct 13 05:35:29.383486 systemd[1]: Reached target getty.target - Login Prompts. Oct 13 05:35:29.435412 tar[2465]: linux-amd64/README.md Oct 13 05:35:29.446359 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
Oct 13 05:35:29.539033 coreos-metadata[2433]: Oct 13 05:35:29.538 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Oct 13 05:35:29.541876 coreos-metadata[2433]: Oct 13 05:35:29.541 INFO Fetch successful Oct 13 05:35:29.541956 coreos-metadata[2433]: Oct 13 05:35:29.541 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Oct 13 05:35:29.545847 coreos-metadata[2433]: Oct 13 05:35:29.545 INFO Fetch successful Oct 13 05:35:29.545847 coreos-metadata[2433]: Oct 13 05:35:29.545 INFO Fetching http://168.63.129.16/machine/9fbe98b5-4918-4cc9-bbb7-46d4deb46442/4f3fd602%2Dc244%2D4420%2D92c8%2D376e0419ee94.%5Fci%2D4487.0.0%2Da%2Ddfb3332019?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Oct 13 05:35:29.547920 coreos-metadata[2433]: Oct 13 05:35:29.547 INFO Fetch successful Oct 13 05:35:29.547920 coreos-metadata[2433]: Oct 13 05:35:29.547 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Oct 13 05:35:29.560126 coreos-metadata[2433]: Oct 13 05:35:29.560 INFO Fetch successful Oct 13 05:35:29.580750 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Oct 13 05:35:29.583861 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
Oct 13 05:35:29.727282 locksmithd[2564]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Oct 13 05:35:30.085160 containerd[2478]: time="2025-10-13T05:35:30Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Oct 13 05:35:30.085457 containerd[2478]: time="2025-10-13T05:35:30.085288225Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Oct 13 05:35:30.093715 containerd[2478]: time="2025-10-13T05:35:30.093672906Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="7.049µs" Oct 13 05:35:30.093715 containerd[2478]: time="2025-10-13T05:35:30.093701244Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Oct 13 05:35:30.093715 containerd[2478]: time="2025-10-13T05:35:30.093719334Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Oct 13 05:35:30.093860 containerd[2478]: time="2025-10-13T05:35:30.093845561Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Oct 13 05:35:30.093885 containerd[2478]: time="2025-10-13T05:35:30.093859141Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Oct 13 05:35:30.093885 containerd[2478]: time="2025-10-13T05:35:30.093880629Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Oct 13 05:35:30.093982 containerd[2478]: time="2025-10-13T05:35:30.093929919Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Oct 13 05:35:30.093982 containerd[2478]: time="2025-10-13T05:35:30.093945273Z" 
level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Oct 13 05:35:30.094144 containerd[2478]: time="2025-10-13T05:35:30.094119508Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Oct 13 05:35:30.094144 containerd[2478]: time="2025-10-13T05:35:30.094134901Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Oct 13 05:35:30.094194 containerd[2478]: time="2025-10-13T05:35:30.094145403Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Oct 13 05:35:30.094194 containerd[2478]: time="2025-10-13T05:35:30.094154188Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Oct 13 05:35:30.094244 containerd[2478]: time="2025-10-13T05:35:30.094210772Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Oct 13 05:35:30.094378 containerd[2478]: time="2025-10-13T05:35:30.094356692Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Oct 13 05:35:30.094418 containerd[2478]: time="2025-10-13T05:35:30.094403579Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Oct 13 05:35:30.094444 containerd[2478]: time="2025-10-13T05:35:30.094416045Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Oct 13 05:35:30.094467 containerd[2478]: time="2025-10-13T05:35:30.094441658Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Oct 13 05:35:30.094673 containerd[2478]: time="2025-10-13T05:35:30.094650217Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Oct 13 05:35:30.094779 containerd[2478]: time="2025-10-13T05:35:30.094715837Z" level=info msg="metadata content store policy set" policy=shared
Oct 13 05:35:30.120547 containerd[2478]: time="2025-10-13T05:35:30.120132718Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Oct 13 05:35:30.120547 containerd[2478]: time="2025-10-13T05:35:30.120183527Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Oct 13 05:35:30.120547 containerd[2478]: time="2025-10-13T05:35:30.120199773Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Oct 13 05:35:30.120547 containerd[2478]: time="2025-10-13T05:35:30.120211519Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Oct 13 05:35:30.120547 containerd[2478]: time="2025-10-13T05:35:30.120223767Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Oct 13 05:35:30.120547 containerd[2478]: time="2025-10-13T05:35:30.120234498Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Oct 13 05:35:30.120547 containerd[2478]: time="2025-10-13T05:35:30.120248515Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Oct 13 05:35:30.120547 containerd[2478]: time="2025-10-13T05:35:30.120260234Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Oct 13 05:35:30.120547 containerd[2478]: time="2025-10-13T05:35:30.120270772Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Oct 13 05:35:30.120547 containerd[2478]: time="2025-10-13T05:35:30.120280956Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Oct 13 05:35:30.120547 containerd[2478]: time="2025-10-13T05:35:30.120289644Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Oct 13 05:35:30.120547 containerd[2478]: time="2025-10-13T05:35:30.120302004Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Oct 13 05:35:30.120547 containerd[2478]: time="2025-10-13T05:35:30.120424127Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Oct 13 05:35:30.120547 containerd[2478]: time="2025-10-13T05:35:30.120442738Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Oct 13 05:35:30.120867 containerd[2478]: time="2025-10-13T05:35:30.120457951Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Oct 13 05:35:30.120867 containerd[2478]: time="2025-10-13T05:35:30.120474270Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Oct 13 05:35:30.120867 containerd[2478]: time="2025-10-13T05:35:30.120485473Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Oct 13 05:35:30.120867 containerd[2478]: time="2025-10-13T05:35:30.120500029Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Oct 13 05:35:30.120867 containerd[2478]: time="2025-10-13T05:35:30.120511925Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Oct 13 05:35:30.120867 containerd[2478]: time="2025-10-13T05:35:30.120521908Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Oct 13 05:35:30.120867 containerd[2478]: time="2025-10-13T05:35:30.120533043Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Oct 13 05:35:30.120867 containerd[2478]: time="2025-10-13T05:35:30.120543228Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Oct 13 05:35:30.120867 containerd[2478]: time="2025-10-13T05:35:30.120553026Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Oct 13 05:35:30.120867 containerd[2478]: time="2025-10-13T05:35:30.120617141Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Oct 13 05:35:30.120867 containerd[2478]: time="2025-10-13T05:35:30.120629870Z" level=info msg="Start snapshots syncer"
Oct 13 05:35:30.120867 containerd[2478]: time="2025-10-13T05:35:30.120656058Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Oct 13 05:35:30.121105 containerd[2478]: time="2025-10-13T05:35:30.120876522Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Oct 13 05:35:30.121105 containerd[2478]: time="2025-10-13T05:35:30.120923901Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Oct 13 05:35:30.121240 containerd[2478]: time="2025-10-13T05:35:30.120980815Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Oct 13 05:35:30.121240 containerd[2478]: time="2025-10-13T05:35:30.121058414Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Oct 13 05:35:30.121240 containerd[2478]: time="2025-10-13T05:35:30.121075109Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Oct 13 05:35:30.121240 containerd[2478]: time="2025-10-13T05:35:30.121084659Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Oct 13 05:35:30.121240 containerd[2478]: time="2025-10-13T05:35:30.121096048Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Oct 13 05:35:30.121240 containerd[2478]: time="2025-10-13T05:35:30.121116597Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Oct 13 05:35:30.121240 containerd[2478]: time="2025-10-13T05:35:30.121127501Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Oct 13 05:35:30.121240 containerd[2478]: time="2025-10-13T05:35:30.121137239Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Oct 13 05:35:30.121240 containerd[2478]: time="2025-10-13T05:35:30.121162881Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Oct 13 05:35:30.121240 containerd[2478]: time="2025-10-13T05:35:30.121174010Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Oct 13 05:35:30.121240 containerd[2478]: time="2025-10-13T05:35:30.121183499Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Oct 13 05:35:30.121240 containerd[2478]: time="2025-10-13T05:35:30.121206557Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Oct 13 05:35:30.121240 containerd[2478]: time="2025-10-13T05:35:30.121219807Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Oct 13 05:35:30.121240 containerd[2478]: time="2025-10-13T05:35:30.121228851Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Oct 13 05:35:30.121536 containerd[2478]: time="2025-10-13T05:35:30.121239359Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Oct 13 05:35:30.121536 containerd[2478]: time="2025-10-13T05:35:30.121247361Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Oct 13 05:35:30.121536 containerd[2478]: time="2025-10-13T05:35:30.121260825Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Oct 13 05:35:30.121536 containerd[2478]: time="2025-10-13T05:35:30.121272185Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Oct 13 05:35:30.121536 containerd[2478]: time="2025-10-13T05:35:30.121286869Z" level=info msg="runtime interface created"
Oct 13 05:35:30.121536 containerd[2478]: time="2025-10-13T05:35:30.121292079Z" level=info msg="created NRI interface"
Oct 13 05:35:30.121536 containerd[2478]: time="2025-10-13T05:35:30.121300035Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Oct 13 05:35:30.121536 containerd[2478]: time="2025-10-13T05:35:30.121310041Z" level=info msg="Connect containerd service"
Oct 13 05:35:30.121536 containerd[2478]: time="2025-10-13T05:35:30.121333997Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Oct 13 05:35:30.122395 containerd[2478]: time="2025-10-13T05:35:30.122087100Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Oct 13 05:35:30.572281 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 13 05:35:30.580680 (kubelet)[2603]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Oct 13 05:35:31.008095 containerd[2478]: time="2025-10-13T05:35:31.007486163Z" level=info msg="Start subscribing containerd event"
Oct 13 05:35:31.008095 containerd[2478]: time="2025-10-13T05:35:31.007547875Z" level=info msg="Start recovering state"
Oct 13 05:35:31.008095 containerd[2478]: time="2025-10-13T05:35:31.007665341Z" level=info msg="Start event monitor"
Oct 13 05:35:31.008095 containerd[2478]: time="2025-10-13T05:35:31.007678527Z" level=info msg="Start cni network conf syncer for default"
Oct 13 05:35:31.008095 containerd[2478]: time="2025-10-13T05:35:31.007693127Z" level=info msg="Start streaming server"
Oct 13 05:35:31.008095 containerd[2478]: time="2025-10-13T05:35:31.007702346Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Oct 13 05:35:31.008095 containerd[2478]: time="2025-10-13T05:35:31.007710946Z" level=info msg="runtime interface starting up..."
Oct 13 05:35:31.008095 containerd[2478]: time="2025-10-13T05:35:31.007720648Z" level=info msg="starting plugins..."
Oct 13 05:35:31.008095 containerd[2478]: time="2025-10-13T05:35:31.007733436Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Oct 13 05:35:31.008427 containerd[2478]: time="2025-10-13T05:35:31.008199212Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Oct 13 05:35:31.014262 containerd[2478]: time="2025-10-13T05:35:31.008784885Z" level=info msg=serving... address=/run/containerd/containerd.sock
Oct 13 05:35:31.014262 containerd[2478]: time="2025-10-13T05:35:31.009150469Z" level=info msg="containerd successfully booted in 0.924836s"
Oct 13 05:35:31.009553 systemd[1]: Started containerd.service - containerd container runtime.
Oct 13 05:35:31.012971 systemd[1]: Reached target multi-user.target - Multi-User System.
Oct 13 05:35:31.015519 systemd[1]: Startup finished in 5.023s (kernel) + 15.797s (initrd) + 25.902s (userspace) = 46.723s.
Oct 13 05:35:31.644330 kubelet[2603]: E1013 05:35:31.644290 2603 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 13 05:35:31.646217 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 05:35:31.646355 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 13 05:35:31.646671 systemd[1]: kubelet.service: Consumed 918ms CPU time, 257.6M memory peak.
Oct 13 05:35:32.153941 login[2570]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying
Oct 13 05:35:32.154130 login[2569]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Oct 13 05:35:32.160097 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Oct 13 05:35:32.164610 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Oct 13 05:35:32.170489 systemd-logind[2449]: New session 2 of user core.
Oct 13 05:35:32.241763 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Oct 13 05:35:32.244260 systemd[1]: Starting user@500.service - User Manager for UID 500...
Oct 13 05:35:32.281418 (systemd)[2620]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Oct 13 05:35:32.283212 systemd-logind[2449]: New session c1 of user core.
Oct 13 05:35:32.889298 systemd[2620]: Queued start job for default target default.target.
Oct 13 05:35:32.889935 waagent[2567]: 2025-10-13T05:35:32.889871Z INFO Daemon Daemon Azure Linux Agent Version: 2.12.0.4
Oct 13 05:35:32.898385 waagent[2567]: 2025-10-13T05:35:32.890205Z INFO Daemon Daemon OS: flatcar 4487.0.0
Oct 13 05:35:32.898385 waagent[2567]: 2025-10-13T05:35:32.890461Z INFO Daemon Daemon Python: 3.11.13
Oct 13 05:35:32.898385 waagent[2567]: 2025-10-13T05:35:32.890669Z INFO Daemon Daemon Run daemon
Oct 13 05:35:32.898385 waagent[2567]: 2025-10-13T05:35:32.891065Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4487.0.0'
Oct 13 05:35:32.898385 waagent[2567]: 2025-10-13T05:35:32.891341Z INFO Daemon Daemon Using waagent for provisioning
Oct 13 05:35:32.898385 waagent[2567]: 2025-10-13T05:35:32.891522Z INFO Daemon Daemon Activate resource disk
Oct 13 05:35:32.898385 waagent[2567]: 2025-10-13T05:35:32.891723Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb
Oct 13 05:35:32.898385 waagent[2567]: 2025-10-13T05:35:32.893314Z INFO Daemon Daemon Found device: None
Oct 13 05:35:32.898385 waagent[2567]: 2025-10-13T05:35:32.893488Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology
Oct 13 05:35:32.898385 waagent[2567]: 2025-10-13T05:35:32.893833Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0
Oct 13 05:35:32.898385 waagent[2567]: 2025-10-13T05:35:32.894318Z INFO Daemon Daemon Clean protocol and wireserver endpoint
Oct 13 05:35:32.898385 waagent[2567]: 2025-10-13T05:35:32.894551Z INFO Daemon Daemon Running default provisioning handler
Oct 13 05:35:32.902054 systemd[2620]: Created slice app.slice - User Application Slice.
Oct 13 05:35:32.902536 systemd[2620]: Reached target paths.target - Paths.
Oct 13 05:35:32.902573 systemd[2620]: Reached target timers.target - Timers.
Oct 13 05:35:32.904524 systemd[2620]: Starting dbus.socket - D-Bus User Message Bus Socket...
Oct 13 05:35:32.913395 waagent[2567]: 2025-10-13T05:35:32.913177Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4.
Oct 13 05:35:32.914710 waagent[2567]: 2025-10-13T05:35:32.914672Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service'
Oct 13 05:35:32.914890 waagent[2567]: 2025-10-13T05:35:32.914869Z INFO Daemon Daemon cloud-init is enabled: False
Oct 13 05:35:32.915209 waagent[2567]: 2025-10-13T05:35:32.915187Z INFO Daemon Daemon Copying ovf-env.xml
Oct 13 05:35:32.926082 systemd[2620]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Oct 13 05:35:32.926255 systemd[2620]: Reached target sockets.target - Sockets.
Oct 13 05:35:32.926343 systemd[2620]: Reached target basic.target - Basic System.
Oct 13 05:35:32.926822 systemd[2620]: Reached target default.target - Main User Target.
Oct 13 05:35:32.926847 systemd[2620]: Startup finished in 638ms.
Oct 13 05:35:32.927011 systemd[1]: Started user@500.service - User Manager for UID 500.
Oct 13 05:35:32.932562 systemd[1]: Started session-2.scope - Session 2 of User core.
Oct 13 05:35:33.155408 login[2570]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Oct 13 05:35:33.159734 systemd-logind[2449]: New session 1 of user core.
Oct 13 05:35:33.165496 systemd[1]: Started session-1.scope - Session 1 of User core.
Oct 13 05:35:33.216655 waagent[2567]: 2025-10-13T05:35:33.214711Z INFO Daemon Daemon Successfully mounted dvd
Oct 13 05:35:33.335274 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully.
Oct 13 05:35:33.337978 waagent[2567]: 2025-10-13T05:35:33.337927Z INFO Daemon Daemon Detect protocol endpoint
Oct 13 05:35:33.338244 waagent[2567]: 2025-10-13T05:35:33.338116Z INFO Daemon Daemon Clean protocol and wireserver endpoint
Oct 13 05:35:33.338244 waagent[2567]: 2025-10-13T05:35:33.338276Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler
Oct 13 05:35:33.338244 waagent[2567]: 2025-10-13T05:35:33.338330Z INFO Daemon Daemon Test for route to 168.63.129.16
Oct 13 05:35:33.338244 waagent[2567]: 2025-10-13T05:35:33.338731Z INFO Daemon Daemon Route to 168.63.129.16 exists
Oct 13 05:35:33.338244 waagent[2567]: 2025-10-13T05:35:33.339016Z INFO Daemon Daemon Wire server endpoint:168.63.129.16
Oct 13 05:35:33.370547 waagent[2567]: 2025-10-13T05:35:33.370510Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05
Oct 13 05:35:33.370967 waagent[2567]: 2025-10-13T05:35:33.370807Z INFO Daemon Daemon Wire protocol version:2012-11-30
Oct 13 05:35:33.370967 waagent[2567]: 2025-10-13T05:35:33.370968Z INFO Daemon Daemon Server preferred version:2015-04-05
Oct 13 05:35:33.467108 waagent[2567]: 2025-10-13T05:35:33.466994Z INFO Daemon Daemon Initializing goal state during protocol detection
Oct 13 05:35:33.467344 waagent[2567]: 2025-10-13T05:35:33.467195Z INFO Daemon Daemon Forcing an update of the goal state.
Oct 13 05:35:33.470695 waagent[2567]: 2025-10-13T05:35:33.470654Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1]
Oct 13 05:35:33.483740 waagent[2567]: 2025-10-13T05:35:33.483708Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.177
Oct 13 05:35:33.488648 waagent[2567]: 2025-10-13T05:35:33.484205Z INFO Daemon
Oct 13 05:35:33.488648 waagent[2567]: 2025-10-13T05:35:33.484661Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 05c6ba40-cc0e-4b59-b301-ea1ecfcc448d eTag: 399602943099069712 source: Fabric]
Oct 13 05:35:33.488648 waagent[2567]: 2025-10-13T05:35:33.484935Z INFO Daemon The vmSettings originated via Fabric; will ignore them.
Oct 13 05:35:33.488648 waagent[2567]: 2025-10-13T05:35:33.485200Z INFO Daemon
Oct 13 05:35:33.488648 waagent[2567]: 2025-10-13T05:35:33.485406Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1]
Oct 13 05:35:33.488648 waagent[2567]: 2025-10-13T05:35:33.490138Z INFO Daemon Daemon Downloading artifacts profile blob
Oct 13 05:35:33.582080 waagent[2567]: 2025-10-13T05:35:33.582031Z INFO Daemon Downloaded certificate {'thumbprint': '0F3596C231C10AC51F4ED6907FC0966516A441CE', 'hasPrivateKey': True}
Oct 13 05:35:33.584620 waagent[2567]: 2025-10-13T05:35:33.584584Z INFO Daemon Fetch goal state completed
Oct 13 05:35:33.591297 waagent[2567]: 2025-10-13T05:35:33.591251Z INFO Daemon Daemon Starting provisioning
Oct 13 05:35:33.592372 waagent[2567]: 2025-10-13T05:35:33.592292Z INFO Daemon Daemon Handle ovf-env.xml.
Oct 13 05:35:33.593052 waagent[2567]: 2025-10-13T05:35:33.592978Z INFO Daemon Daemon Set hostname [ci-4487.0.0-a-dfb3332019]
Oct 13 05:35:33.674870 waagent[2567]: 2025-10-13T05:35:33.674826Z INFO Daemon Daemon Publish hostname [ci-4487.0.0-a-dfb3332019]
Oct 13 05:35:33.681828 waagent[2567]: 2025-10-13T05:35:33.675105Z INFO Daemon Daemon Examine /proc/net/route for primary interface
Oct 13 05:35:33.681828 waagent[2567]: 2025-10-13T05:35:33.675346Z INFO Daemon Daemon Primary interface is [eth0]
Oct 13 05:35:33.683008 systemd-networkd[2117]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Oct 13 05:35:33.683016 systemd-networkd[2117]: eth0: Reconfiguring with /usr/lib/systemd/network/zz-default.network.
Oct 13 05:35:33.683074 systemd-networkd[2117]: eth0: DHCP lease lost
Oct 13 05:35:33.698628 waagent[2567]: 2025-10-13T05:35:33.698582Z INFO Daemon Daemon Create user account if not exists
Oct 13 05:35:33.700034 waagent[2567]: 2025-10-13T05:35:33.699469Z INFO Daemon Daemon User core already exists, skip useradd
Oct 13 05:35:33.700034 waagent[2567]: 2025-10-13T05:35:33.699711Z INFO Daemon Daemon Configure sudoer
Oct 13 05:35:33.705466 systemd-networkd[2117]: eth0: DHCPv4 address 10.200.8.45/24, gateway 10.200.8.1 acquired from 168.63.129.16
Oct 13 05:35:33.714736 waagent[2567]: 2025-10-13T05:35:33.714687Z INFO Daemon Daemon Configure sshd
Oct 13 05:35:33.773829 waagent[2567]: 2025-10-13T05:35:33.773741Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive.
Oct 13 05:35:33.774098 waagent[2567]: 2025-10-13T05:35:33.773911Z INFO Daemon Daemon Deploy ssh public key.
Oct 13 05:35:34.880808 waagent[2567]: 2025-10-13T05:35:34.880754Z INFO Daemon Daemon Provisioning complete
Oct 13 05:35:34.890074 waagent[2567]: 2025-10-13T05:35:34.890036Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping
Oct 13 05:35:34.891211 waagent[2567]: 2025-10-13T05:35:34.890247Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions.
Oct 13 05:35:34.891211 waagent[2567]: 2025-10-13T05:35:34.890530Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.12.0.4 is the most current agent
Oct 13 05:35:34.994265 waagent[2671]: 2025-10-13T05:35:34.994192Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.12.0.4)
Oct 13 05:35:34.994601 waagent[2671]: 2025-10-13T05:35:34.994294Z INFO ExtHandler ExtHandler OS: flatcar 4487.0.0
Oct 13 05:35:34.994601 waagent[2671]: 2025-10-13T05:35:34.994336Z INFO ExtHandler ExtHandler Python: 3.11.13
Oct 13 05:35:34.994601 waagent[2671]: 2025-10-13T05:35:34.994391Z INFO ExtHandler ExtHandler CPU Arch: x86_64
Oct 13 05:35:35.050382 waagent[2671]: 2025-10-13T05:35:35.050318Z INFO ExtHandler ExtHandler Distro: flatcar-4487.0.0; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.13; Arch: x86_64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.22.0;
Oct 13 05:35:35.050517 waagent[2671]: 2025-10-13T05:35:35.050491Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Oct 13 05:35:35.050570 waagent[2671]: 2025-10-13T05:35:35.050546Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16
Oct 13 05:35:35.058716 waagent[2671]: 2025-10-13T05:35:35.058660Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1]
Oct 13 05:35:35.072458 waagent[2671]: 2025-10-13T05:35:35.072430Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.177
Oct 13 05:35:35.072786 waagent[2671]: 2025-10-13T05:35:35.072757Z INFO ExtHandler
Oct 13 05:35:35.072832 waagent[2671]: 2025-10-13T05:35:35.072809Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 5d0f0359-ba4f-466d-b04e-362631be0abe eTag: 399602943099069712 source: Fabric]
Oct 13 05:35:35.073030 waagent[2671]: 2025-10-13T05:35:35.073006Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them.
Oct 13 05:35:35.073351 waagent[2671]: 2025-10-13T05:35:35.073323Z INFO ExtHandler
Oct 13 05:35:35.073408 waagent[2671]: 2025-10-13T05:35:35.073380Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1]
Oct 13 05:35:35.077137 waagent[2671]: 2025-10-13T05:35:35.077111Z INFO ExtHandler ExtHandler Downloading artifacts profile blob
Oct 13 05:35:35.141568 waagent[2671]: 2025-10-13T05:35:35.141492Z INFO ExtHandler Downloaded certificate {'thumbprint': '0F3596C231C10AC51F4ED6907FC0966516A441CE', 'hasPrivateKey': True}
Oct 13 05:35:35.141862 waagent[2671]: 2025-10-13T05:35:35.141833Z INFO ExtHandler Fetch goal state completed
Oct 13 05:35:35.162612 waagent[2671]: 2025-10-13T05:35:35.162567Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.4.2 1 Jul 2025 (Library: OpenSSL 3.4.2 1 Jul 2025)
Oct 13 05:35:35.166531 waagent[2671]: 2025-10-13T05:35:35.166486Z INFO ExtHandler ExtHandler WALinuxAgent-2.12.0.4 running as process 2671
Oct 13 05:35:35.166638 waagent[2671]: 2025-10-13T05:35:35.166613Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ********
Oct 13 05:35:35.166875 waagent[2671]: 2025-10-13T05:35:35.166853Z INFO ExtHandler ExtHandler ******** AutoUpdate.UpdateToLatestVersion is set to False, not processing the operation ********
Oct 13 05:35:35.167919 waagent[2671]: 2025-10-13T05:35:35.167887Z INFO ExtHandler ExtHandler [CGI] Cgroup monitoring is not supported on ['flatcar', '4487.0.0', '', 'Flatcar Container Linux by Kinvolk']
Oct 13 05:35:35.168202 waagent[2671]: 2025-10-13T05:35:35.168177Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case distro: ['flatcar', '4487.0.0', '', 'Flatcar Container Linux by Kinvolk'] went from supported to unsupported
Oct 13 05:35:35.168314 waagent[2671]: 2025-10-13T05:35:35.168293Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False
Oct 13 05:35:35.168747 waagent[2671]: 2025-10-13T05:35:35.168718Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules
Oct 13 05:35:35.274101 waagent[2671]: 2025-10-13T05:35:35.274070Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service
Oct 13 05:35:35.274251 waagent[2671]: 2025-10-13T05:35:35.274227Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup
Oct 13 05:35:35.279673 waagent[2671]: 2025-10-13T05:35:35.279524Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now
Oct 13 05:35:35.284816 systemd[1]: Reload requested from client PID 2686 ('systemctl') (unit waagent.service)...
Oct 13 05:35:35.284830 systemd[1]: Reloading...
Oct 13 05:35:35.349479 zram_generator::config[2724]: No configuration found.
Oct 13 05:35:35.373416 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#205 cmd 0x4a status: scsi 0x0 srb 0x20 hv 0xc0000001
Oct 13 05:35:35.542667 systemd[1]: Reloading finished in 257 ms.
Oct 13 05:35:35.557017 waagent[2671]: 2025-10-13T05:35:35.556956Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service
Oct 13 05:35:35.558786 waagent[2671]: 2025-10-13T05:35:35.557078Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully
Oct 13 05:35:36.150644 waagent[2671]: 2025-10-13T05:35:36.150573Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up.
Oct 13 05:35:36.150962 waagent[2671]: 2025-10-13T05:35:36.150916Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True]
Oct 13 05:35:36.151594 waagent[2671]: 2025-10-13T05:35:36.151538Z INFO ExtHandler ExtHandler Starting env monitor service.
Oct 13 05:35:36.151889 waagent[2671]: 2025-10-13T05:35:36.151853Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service.
Oct 13 05:35:36.152081 waagent[2671]: 2025-10-13T05:35:36.152049Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread
Oct 13 05:35:36.152166 waagent[2671]: 2025-10-13T05:35:36.152138Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Oct 13 05:35:36.152238 waagent[2671]: 2025-10-13T05:35:36.152178Z INFO ExtHandler ExtHandler Start Extension Telemetry service.
Oct 13 05:35:36.152418 waagent[2671]: 2025-10-13T05:35:36.152320Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Oct 13 05:35:36.152480 waagent[2671]: 2025-10-13T05:35:36.152450Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16
Oct 13 05:35:36.152611 waagent[2671]: 2025-10-13T05:35:36.152591Z INFO EnvHandler ExtHandler Configure routes
Oct 13 05:35:36.152710 waagent[2671]: 2025-10-13T05:35:36.152638Z INFO EnvHandler ExtHandler Gateway:None
Oct 13 05:35:36.152710 waagent[2671]: 2025-10-13T05:35:36.152689Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16
Oct 13 05:35:36.153030 waagent[2671]: 2025-10-13T05:35:36.152993Z INFO EnvHandler ExtHandler Routes:None
Oct 13 05:35:36.153239 waagent[2671]: 2025-10-13T05:35:36.153194Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True
Oct 13 05:35:36.153416 waagent[2671]: 2025-10-13T05:35:36.153381Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled.
Oct 13 05:35:36.153496 waagent[2671]: 2025-10-13T05:35:36.153465Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status.
Oct 13 05:35:36.153620 waagent[2671]: 2025-10-13T05:35:36.153599Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route:
Oct 13 05:35:36.153620 waagent[2671]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT
Oct 13 05:35:36.153620 waagent[2671]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0
Oct 13 05:35:36.153620 waagent[2671]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0
Oct 13 05:35:36.153620 waagent[2671]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0
Oct 13 05:35:36.153620 waagent[2671]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Oct 13 05:35:36.153620 waagent[2671]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Oct 13 05:35:36.154403 waagent[2671]: 2025-10-13T05:35:36.154313Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread
Oct 13 05:35:36.169836 waagent[2671]: 2025-10-13T05:35:36.169799Z INFO ExtHandler ExtHandler
Oct 13 05:35:36.169911 waagent[2671]: 2025-10-13T05:35:36.169858Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 9462ff24-6ea8-4578-915b-e614991e4b73 correlation e2e45e01-16f9-4eab-980b-ee8648897803 created: 2025-10-13T05:34:10.869324Z]
Oct 13 05:35:36.170157 waagent[2671]: 2025-10-13T05:35:36.170127Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything.
Oct 13 05:35:36.170596 waagent[2671]: 2025-10-13T05:35:36.170567Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 0 ms]
Oct 13 05:35:36.204849 waagent[2671]: 2025-10-13T05:35:36.204806Z WARNING ExtHandler ExtHandler Failed to get firewall packets: 'iptables -w -t security -L OUTPUT --zero OUTPUT -nxv' failed: 2 (iptables v1.8.11 (nf_tables): Illegal option `--numeric' with this command
Oct 13 05:35:36.204849 waagent[2671]: Try `iptables -h' or 'iptables --help' for more information.)
Oct 13 05:35:36.205139 waagent[2671]: 2025-10-13T05:35:36.205114Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.12.0.4 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 4AB724AC-8209-47C9-972F-16802C1EFD06;DroppedPackets: -1;UpdateGSErrors: 0;AutoUpdate: 0;UpdateMode: SelfUpdate;]
Oct 13 05:35:36.280093 waagent[2671]: 2025-10-13T05:35:36.280046Z INFO MonitorHandler ExtHandler Network interfaces:
Oct 13 05:35:36.280093 waagent[2671]: Executing ['ip', '-a', '-o', 'link']:
Oct 13 05:35:36.280093 waagent[2671]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
Oct 13 05:35:36.280093 waagent[2671]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 7c:1e:52:88:6f:c2 brd ff:ff:ff:ff:ff:ff\ alias Network Device\ altname enx7c1e52886fc2
Oct 13 05:35:36.280093 waagent[2671]: 3: enP30832s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 7c:1e:52:88:6f:c2 brd ff:ff:ff:ff:ff:ff\ altname enP30832p0s0
Oct 13 05:35:36.280093 waagent[2671]: Executing ['ip', '-4', '-a', '-o', 'address']:
Oct 13 05:35:36.280093 waagent[2671]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever
Oct 13 05:35:36.280093 waagent[2671]: 2: eth0 inet 10.200.8.45/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever
Oct 13 05:35:36.280093 waagent[2671]: Executing ['ip', '-6', '-a', '-o', 'address']:
Oct 13 05:35:36.280093 waagent[2671]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever
Oct 13 05:35:36.280093 waagent[2671]: 2: eth0 inet6 fe80::7e1e:52ff:fe88:6fc2/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever
Oct 13 05:35:36.324271 waagent[2671]: 2025-10-13T05:35:36.324227Z INFO EnvHandler ExtHandler Created firewall rules for the Azure Fabric:
Oct 13 05:35:36.324271 waagent[2671]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Oct 13 05:35:36.324271 waagent[2671]: pkts bytes target prot opt in out source destination
Oct 13 05:35:36.324271 waagent[2671]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Oct 13 05:35:36.324271 waagent[2671]: pkts bytes target prot opt in out source destination
Oct 13 05:35:36.324271 waagent[2671]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
Oct 13 05:35:36.324271 waagent[2671]: pkts bytes target prot opt in out source destination
Oct 13 05:35:36.324271 waagent[2671]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
Oct 13 05:35:36.324271 waagent[2671]: 7 762 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
Oct 13 05:35:36.324271 waagent[2671]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
Oct 13 05:35:36.326910 waagent[2671]: 2025-10-13T05:35:36.326864Z INFO EnvHandler ExtHandler Current Firewall rules:
Oct 13 05:35:36.326910 waagent[2671]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Oct 13 05:35:36.326910 waagent[2671]: pkts bytes target prot opt in out source destination
Oct 13 05:35:36.326910 waagent[2671]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Oct 13 05:35:36.326910 waagent[2671]: pkts bytes target prot opt in out source destination
Oct 13 05:35:36.326910 waagent[2671]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
Oct 13 05:35:36.326910 waagent[2671]: pkts bytes target prot opt in out source destination
Oct 13 05:35:36.326910 waagent[2671]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
Oct 13 05:35:36.326910 waagent[2671]: 12 1408 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
Oct 13 05:35:36.326910 waagent[2671]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
Oct 13 05:35:41.869813 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Oct 13 05:35:41.871232 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 13 05:35:43.104408 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 13 05:35:43.107588 (kubelet)[2826]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Oct 13 05:35:43.229124 kubelet[2826]: E1013 05:35:43.229075 2826 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 13 05:35:43.231951 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 05:35:43.232058 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 13 05:35:43.232323 systemd[1]: kubelet.service: Consumed 137ms CPU time, 110.6M memory peak.
Oct 13 05:35:52.609964 chronyd[2431]: Selected source PHC0
Oct 13 05:35:53.369783 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Oct 13 05:35:53.371163 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 13 05:35:54.991306 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 13 05:35:54.997570 (kubelet)[2841]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Oct 13 05:35:55.062426 kubelet[2841]: E1013 05:35:55.062363 2841 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 13 05:35:55.063994 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 05:35:55.064119 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 13 05:35:55.064461 systemd[1]: kubelet.service: Consumed 126ms CPU time, 110.6M memory peak.
Oct 13 05:35:56.925021 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Oct 13 05:35:56.926325 systemd[1]: Started sshd@0-10.200.8.45:22-10.200.16.10:33482.service - OpenSSH per-connection server daemon (10.200.16.10:33482).
Oct 13 05:35:57.864150 sshd[2849]: Accepted publickey for core from 10.200.16.10 port 33482 ssh2: RSA SHA256:AbXmOVBj6dCWUgMVoY4vqYS1kqBAdKFjDdmWPRs43lM
Oct 13 05:35:57.865235 sshd-session[2849]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 13 05:35:57.869474 systemd-logind[2449]: New session 3 of user core.
Oct 13 05:35:57.881520 systemd[1]: Started session-3.scope - Session 3 of User core.
Oct 13 05:35:58.429224 systemd[1]: Started sshd@1-10.200.8.45:22-10.200.16.10:33494.service - OpenSSH per-connection server daemon (10.200.16.10:33494).
Oct 13 05:35:59.071168 sshd[2855]: Accepted publickey for core from 10.200.16.10 port 33494 ssh2: RSA SHA256:AbXmOVBj6dCWUgMVoY4vqYS1kqBAdKFjDdmWPRs43lM
Oct 13 05:35:59.072278 sshd-session[2855]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 13 05:35:59.075851 systemd-logind[2449]: New session 4 of user core.
Oct 13 05:35:59.086541 systemd[1]: Started session-4.scope - Session 4 of User core.
Oct 13 05:35:59.524717 sshd[2858]: Connection closed by 10.200.16.10 port 33494
Oct 13 05:35:59.525273 sshd-session[2855]: pam_unix(sshd:session): session closed for user core
Oct 13 05:35:59.528410 systemd[1]: sshd@1-10.200.8.45:22-10.200.16.10:33494.service: Deactivated successfully.
Oct 13 05:35:59.529846 systemd[1]: session-4.scope: Deactivated successfully.
Oct 13 05:35:59.530490 systemd-logind[2449]: Session 4 logged out. Waiting for processes to exit.
Oct 13 05:35:59.531626 systemd-logind[2449]: Removed session 4.
Oct 13 05:35:59.635825 systemd[1]: Started sshd@2-10.200.8.45:22-10.200.16.10:33510.service - OpenSSH per-connection server daemon (10.200.16.10:33510).
Oct 13 05:36:00.283573 sshd[2864]: Accepted publickey for core from 10.200.16.10 port 33510 ssh2: RSA SHA256:AbXmOVBj6dCWUgMVoY4vqYS1kqBAdKFjDdmWPRs43lM
Oct 13 05:36:00.284629 sshd-session[2864]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 13 05:36:00.288807 systemd-logind[2449]: New session 5 of user core.
Oct 13 05:36:00.295506 systemd[1]: Started session-5.scope - Session 5 of User core.
Oct 13 05:36:00.730065 sshd[2867]: Connection closed by 10.200.16.10 port 33510
Oct 13 05:36:00.730601 sshd-session[2864]: pam_unix(sshd:session): session closed for user core
Oct 13 05:36:00.733867 systemd[1]: sshd@2-10.200.8.45:22-10.200.16.10:33510.service: Deactivated successfully.
Oct 13 05:36:00.735328 systemd[1]: session-5.scope: Deactivated successfully.
Oct 13 05:36:00.735975 systemd-logind[2449]: Session 5 logged out. Waiting for processes to exit.
Oct 13 05:36:00.736963 systemd-logind[2449]: Removed session 5.
Oct 13 05:36:00.845651 systemd[1]: Started sshd@3-10.200.8.45:22-10.200.16.10:59692.service - OpenSSH per-connection server daemon (10.200.16.10:59692).
Oct 13 05:36:01.483591 sshd[2873]: Accepted publickey for core from 10.200.16.10 port 59692 ssh2: RSA SHA256:AbXmOVBj6dCWUgMVoY4vqYS1kqBAdKFjDdmWPRs43lM
Oct 13 05:36:01.484647 sshd-session[2873]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 13 05:36:01.489084 systemd-logind[2449]: New session 6 of user core.
Oct 13 05:36:01.494537 systemd[1]: Started session-6.scope - Session 6 of User core.
Oct 13 05:36:01.935006 sshd[2876]: Connection closed by 10.200.16.10 port 59692
Oct 13 05:36:01.935520 sshd-session[2873]: pam_unix(sshd:session): session closed for user core
Oct 13 05:36:01.938576 systemd[1]: sshd@3-10.200.8.45:22-10.200.16.10:59692.service: Deactivated successfully.
Oct 13 05:36:01.939988 systemd[1]: session-6.scope: Deactivated successfully.
Oct 13 05:36:01.940741 systemd-logind[2449]: Session 6 logged out. Waiting for processes to exit.
Oct 13 05:36:01.941713 systemd-logind[2449]: Removed session 6.
Oct 13 05:36:02.051839 systemd[1]: Started sshd@4-10.200.8.45:22-10.200.16.10:59706.service - OpenSSH per-connection server daemon (10.200.16.10:59706).
Oct 13 05:36:02.699673 sshd[2882]: Accepted publickey for core from 10.200.16.10 port 59706 ssh2: RSA SHA256:AbXmOVBj6dCWUgMVoY4vqYS1kqBAdKFjDdmWPRs43lM
Oct 13 05:36:02.700730 sshd-session[2882]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 13 05:36:02.704450 systemd-logind[2449]: New session 7 of user core.
Oct 13 05:36:02.714545 systemd[1]: Started session-7.scope - Session 7 of User core.
Oct 13 05:36:03.317837 sudo[2886]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Oct 13 05:36:03.318058 sudo[2886]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Oct 13 05:36:03.347119 sudo[2886]: pam_unix(sudo:session): session closed for user root
Oct 13 05:36:03.449816 sshd[2885]: Connection closed by 10.200.16.10 port 59706
Oct 13 05:36:03.450457 sshd-session[2882]: pam_unix(sshd:session): session closed for user core
Oct 13 05:36:03.453467 systemd[1]: sshd@4-10.200.8.45:22-10.200.16.10:59706.service: Deactivated successfully.
Oct 13 05:36:03.454871 systemd[1]: session-7.scope: Deactivated successfully.
Oct 13 05:36:03.456021 systemd-logind[2449]: Session 7 logged out. Waiting for processes to exit.
Oct 13 05:36:03.457087 systemd-logind[2449]: Removed session 7.
Oct 13 05:36:03.562940 systemd[1]: Started sshd@5-10.200.8.45:22-10.200.16.10:59716.service - OpenSSH per-connection server daemon (10.200.16.10:59716).
Oct 13 05:36:04.208762 sshd[2892]: Accepted publickey for core from 10.200.16.10 port 59716 ssh2: RSA SHA256:AbXmOVBj6dCWUgMVoY4vqYS1kqBAdKFjDdmWPRs43lM
Oct 13 05:36:04.209944 sshd-session[2892]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 13 05:36:04.214600 systemd-logind[2449]: New session 8 of user core.
Oct 13 05:36:04.224511 systemd[1]: Started session-8.scope - Session 8 of User core.
Oct 13 05:36:04.284340 kernel: hv_balloon: Max. dynamic memory size: 8192 MB
Oct 13 05:36:04.559527 sudo[2897]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Oct 13 05:36:04.559759 sudo[2897]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Oct 13 05:36:04.569828 sudo[2897]: pam_unix(sudo:session): session closed for user root
Oct 13 05:36:04.574340 sudo[2896]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Oct 13 05:36:04.574579 sudo[2896]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Oct 13 05:36:04.582420 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Oct 13 05:36:04.608358 augenrules[2919]: No rules
Oct 13 05:36:04.609253 systemd[1]: audit-rules.service: Deactivated successfully.
Oct 13 05:36:04.609443 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Oct 13 05:36:04.610169 sudo[2896]: pam_unix(sudo:session): session closed for user root
Oct 13 05:36:04.712679 sshd[2895]: Connection closed by 10.200.16.10 port 59716
Oct 13 05:36:04.713090 sshd-session[2892]: pam_unix(sshd:session): session closed for user core
Oct 13 05:36:04.716184 systemd[1]: sshd@5-10.200.8.45:22-10.200.16.10:59716.service: Deactivated successfully.
Oct 13 05:36:04.717498 systemd[1]: session-8.scope: Deactivated successfully.
Oct 13 05:36:04.718080 systemd-logind[2449]: Session 8 logged out. Waiting for processes to exit.
Oct 13 05:36:04.719121 systemd-logind[2449]: Removed session 8.
Oct 13 05:36:04.829988 systemd[1]: Started sshd@6-10.200.8.45:22-10.200.16.10:59722.service - OpenSSH per-connection server daemon (10.200.16.10:59722).
Oct 13 05:36:05.119821 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Oct 13 05:36:05.121242 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 13 05:36:05.472218 sshd[2928]: Accepted publickey for core from 10.200.16.10 port 59722 ssh2: RSA SHA256:AbXmOVBj6dCWUgMVoY4vqYS1kqBAdKFjDdmWPRs43lM
Oct 13 05:36:05.473082 sshd-session[2928]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 13 05:36:05.478351 systemd-logind[2449]: New session 9 of user core.
Oct 13 05:36:05.489811 systemd[1]: Started session-9.scope - Session 9 of User core.
Oct 13 05:36:05.632857 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 13 05:36:05.638603 (kubelet)[2940]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Oct 13 05:36:05.674008 kubelet[2940]: E1013 05:36:05.673957 2940 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 13 05:36:05.675473 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 05:36:05.675604 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 13 05:36:05.675950 systemd[1]: kubelet.service: Consumed 129ms CPU time, 110.2M memory peak.
Oct 13 05:36:05.820339 sudo[2947]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Oct 13 05:36:05.820587 sudo[2947]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Oct 13 05:36:08.332045 systemd[1]: Starting docker.service - Docker Application Container Engine...
Oct 13 05:36:08.341680 (dockerd)[2966]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Oct 13 05:36:09.224638 dockerd[2966]: time="2025-10-13T05:36:09.224421051Z" level=info msg="Starting up"
Oct 13 05:36:09.227603 dockerd[2966]: time="2025-10-13T05:36:09.227522133Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Oct 13 05:36:09.236756 dockerd[2966]: time="2025-10-13T05:36:09.236721485Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Oct 13 05:36:09.331725 dockerd[2966]: time="2025-10-13T05:36:09.331696159Z" level=info msg="Loading containers: start."
Oct 13 05:36:09.403393 kernel: Initializing XFRM netlink socket
Oct 13 05:36:10.051779 systemd-networkd[2117]: docker0: Link UP
Oct 13 05:36:10.083964 dockerd[2966]: time="2025-10-13T05:36:10.083926919Z" level=info msg="Loading containers: done."
Oct 13 05:36:10.164023 dockerd[2966]: time="2025-10-13T05:36:10.163991887Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Oct 13 05:36:10.164160 dockerd[2966]: time="2025-10-13T05:36:10.164066424Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Oct 13 05:36:10.164160 dockerd[2966]: time="2025-10-13T05:36:10.164135786Z" level=info msg="Initializing buildkit"
Oct 13 05:36:10.212168 dockerd[2966]: time="2025-10-13T05:36:10.212135567Z" level=info msg="Completed buildkit initialization"
Oct 13 05:36:10.218976 dockerd[2966]: time="2025-10-13T05:36:10.218944907Z" level=info msg="Daemon has completed initialization"
Oct 13 05:36:10.219126 dockerd[2966]: time="2025-10-13T05:36:10.219045032Z" level=info msg="API listen on /run/docker.sock"
Oct 13 05:36:10.219251 systemd[1]: Started docker.service - Docker Application Container Engine.
Oct 13 05:36:11.089671 containerd[2478]: time="2025-10-13T05:36:11.089634616Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.1\""
Oct 13 05:36:11.850061 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount286733694.mount: Deactivated successfully.
Oct 13 05:36:13.081161 containerd[2478]: time="2025-10-13T05:36:13.081112144Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 13 05:36:13.083806 containerd[2478]: time="2025-10-13T05:36:13.083780560Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.1: active requests=0, bytes read=27065400"
Oct 13 05:36:13.087881 containerd[2478]: time="2025-10-13T05:36:13.087840387Z" level=info msg="ImageCreate event name:\"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 13 05:36:13.091668 containerd[2478]: time="2025-10-13T05:36:13.091625430Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 13 05:36:13.092473 containerd[2478]: time="2025-10-13T05:36:13.092275350Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.1\" with image id \"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.1\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902\", size \"27061991\" in 2.002604977s"
Oct 13 05:36:13.092473 containerd[2478]: time="2025-10-13T05:36:13.092313299Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.1\" returns image reference \"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\""
Oct 13 05:36:13.092947 containerd[2478]: time="2025-10-13T05:36:13.092929240Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.1\""
Oct 13 05:36:14.296465 update_engine[2450]: I20251013 05:36:14.296407 2450 update_attempter.cc:509] Updating boot flags...
Oct 13 05:36:14.312326 containerd[2478]: time="2025-10-13T05:36:14.312288402Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 13 05:36:14.314633 containerd[2478]: time="2025-10-13T05:36:14.314520083Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.1: active requests=0, bytes read=21159765"
Oct 13 05:36:14.319127 containerd[2478]: time="2025-10-13T05:36:14.319094956Z" level=info msg="ImageCreate event name:\"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 13 05:36:14.330103 containerd[2478]: time="2025-10-13T05:36:14.330072415Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 13 05:36:14.332401 containerd[2478]: time="2025-10-13T05:36:14.330974469Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.1\" with image id \"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.1\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89\", size \"22820214\" in 1.237969915s"
Oct 13 05:36:14.332401 containerd[2478]: time="2025-10-13T05:36:14.331006422Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.1\" returns image reference \"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\""
Oct 13 05:36:14.332401 containerd[2478]: time="2025-10-13T05:36:14.331889726Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.1\""
Oct 13 05:36:15.407842 containerd[2478]: time="2025-10-13T05:36:15.407793226Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 13 05:36:15.410590 containerd[2478]: time="2025-10-13T05:36:15.410556469Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.1: active requests=0, bytes read=15725101"
Oct 13 05:36:15.413861 containerd[2478]: time="2025-10-13T05:36:15.413822214Z" level=info msg="ImageCreate event name:\"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 13 05:36:15.418444 containerd[2478]: time="2025-10-13T05:36:15.417631033Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 13 05:36:15.418444 containerd[2478]: time="2025-10-13T05:36:15.418310135Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.1\" with image id \"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.1\", repo digest \"registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500\", size \"17385568\" in 1.086394556s"
Oct 13 05:36:15.418444 containerd[2478]: time="2025-10-13T05:36:15.418337987Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.1\" returns image reference \"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\""
Oct 13 05:36:15.418978 containerd[2478]: time="2025-10-13T05:36:15.418958652Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.1\""
Oct 13 05:36:15.869811 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Oct 13 05:36:15.871312 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 13 05:36:16.422409 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 13 05:36:16.429562 (kubelet)[3271]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Oct 13 05:36:16.467752 kubelet[3271]: E1013 05:36:16.467319 3271 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 13 05:36:16.469826 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 05:36:16.469955 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 13 05:36:16.470538 systemd[1]: kubelet.service: Consumed 135ms CPU time, 110.3M memory peak.
Oct 13 05:36:16.836970 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1018785452.mount: Deactivated successfully.
Oct 13 05:36:17.115210 containerd[2478]: time="2025-10-13T05:36:17.115102287Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 13 05:36:17.117280 containerd[2478]: time="2025-10-13T05:36:17.117241974Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.1: active requests=0, bytes read=25964707"
Oct 13 05:36:17.120104 containerd[2478]: time="2025-10-13T05:36:17.120050885Z" level=info msg="ImageCreate event name:\"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 13 05:36:17.127751 containerd[2478]: time="2025-10-13T05:36:17.127709142Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 13 05:36:17.128138 containerd[2478]: time="2025-10-13T05:36:17.128115774Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.1\" with image id \"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\", repo tag \"registry.k8s.io/kube-proxy:v1.34.1\", repo digest \"registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a\", size \"25963718\" in 1.709129643s"
Oct 13 05:36:17.128205 containerd[2478]: time="2025-10-13T05:36:17.128195093Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.1\" returns image reference \"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\""
Oct 13 05:36:17.128698 containerd[2478]: time="2025-10-13T05:36:17.128675359Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\""
Oct 13 05:36:17.717513 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4270673082.mount: Deactivated successfully.
Oct 13 05:36:19.154924 containerd[2478]: time="2025-10-13T05:36:19.154875911Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 13 05:36:19.162778 containerd[2478]: time="2025-10-13T05:36:19.162747287Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22388015"
Oct 13 05:36:19.166492 containerd[2478]: time="2025-10-13T05:36:19.166451784Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 13 05:36:19.170634 containerd[2478]: time="2025-10-13T05:36:19.170590457Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 13 05:36:19.171383 containerd[2478]: time="2025-10-13T05:36:19.171239506Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 2.042535924s"
Oct 13 05:36:19.171383 containerd[2478]: time="2025-10-13T05:36:19.171270224Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\""
Oct 13 05:36:19.171975 containerd[2478]: time="2025-10-13T05:36:19.171850211Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\""
Oct 13 05:36:19.669247 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2356468645.mount: Deactivated successfully.
Oct 13 05:36:19.693604 containerd[2478]: time="2025-10-13T05:36:19.693562889Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 13 05:36:19.697089 containerd[2478]: time="2025-10-13T05:36:19.697056501Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321226"
Oct 13 05:36:19.700100 containerd[2478]: time="2025-10-13T05:36:19.700063700Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 13 05:36:19.704071 containerd[2478]: time="2025-10-13T05:36:19.704033447Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 13 05:36:19.704659 containerd[2478]: time="2025-10-13T05:36:19.704531205Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 532.652515ms"
Oct 13 05:36:19.704659 containerd[2478]: time="2025-10-13T05:36:19.704563542Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\""
Oct 13 05:36:19.705053 containerd[2478]: time="2025-10-13T05:36:19.705024194Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\""
Oct 13 05:36:22.519977 containerd[2478]: time="2025-10-13T05:36:22.519912358Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 13 05:36:22.527732 containerd[2478]: time="2025-10-13T05:36:22.527699409Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=73514601"
Oct 13 05:36:22.570287 containerd[2478]: time="2025-10-13T05:36:22.570041355Z" level=info msg="ImageCreate event name:\"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 13 05:36:22.623269 containerd[2478]: time="2025-10-13T05:36:22.623225472Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 13 05:36:22.624419 containerd[2478]: time="2025-10-13T05:36:22.624180773Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"74311308\" in 2.919127567s"
Oct 13 05:36:22.624419 containerd[2478]: time="2025-10-13T05:36:22.624210102Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\""
Oct 13 05:36:26.579519 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Oct 13 05:36:26.580937 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 13 05:36:26.590568 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Oct 13 05:36:26.590643 systemd[1]: kubelet.service: Failed with result 'signal'.
Oct 13 05:36:26.590887 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 13 05:36:26.593687 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 13 05:36:26.620939 systemd[1]: Reload requested from client PID 3414 ('systemctl') (unit session-9.scope)...
Oct 13 05:36:26.621049 systemd[1]: Reloading...
Oct 13 05:36:26.698415 zram_generator::config[3458]: No configuration found.
Oct 13 05:36:26.926034 systemd[1]: Reloading finished in 304 ms.
Oct 13 05:36:26.960842 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Oct 13 05:36:26.960919 systemd[1]: kubelet.service: Failed with result 'signal'.
Oct 13 05:36:26.961161 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 13 05:36:26.961204 systemd[1]: kubelet.service: Consumed 68ms CPU time, 65.4M memory peak.
Oct 13 05:36:26.963017 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 13 05:36:28.073751 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 13 05:36:28.084616 (kubelet)[3529]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Oct 13 05:36:28.126866 kubelet[3529]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Oct 13 05:36:28.126866 kubelet[3529]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 13 05:36:28.126866 kubelet[3529]: I1013 05:36:28.125691 3529 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 13 05:36:28.351932 kubelet[3529]: I1013 05:36:28.351834 3529 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Oct 13 05:36:28.351932 kubelet[3529]: I1013 05:36:28.351857 3529 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 13 05:36:28.351932 kubelet[3529]: I1013 05:36:28.351880 3529 watchdog_linux.go:95] "Systemd watchdog is not enabled" Oct 13 05:36:28.351932 kubelet[3529]: I1013 05:36:28.351886 3529 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Oct 13 05:36:28.352391 kubelet[3529]: I1013 05:36:28.352349 3529 server.go:956] "Client rotation is on, will bootstrap in background" Oct 13 05:36:28.526518 kubelet[3529]: E1013 05:36:28.526429 3529 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.8.45:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.8.45:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Oct 13 05:36:28.527021 kubelet[3529]: I1013 05:36:28.526975 3529 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 13 05:36:28.534064 kubelet[3529]: I1013 05:36:28.534041 3529 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Oct 13 05:36:28.537104 kubelet[3529]: I1013 05:36:28.536545 3529 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Oct 13 05:36:28.537104 kubelet[3529]: I1013 05:36:28.536710 3529 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 13 05:36:28.537104 kubelet[3529]: I1013 05:36:28.536739 3529 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4487.0.0-a-dfb3332019","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Oct 13 05:36:28.537104 kubelet[3529]: I1013 05:36:28.536915 3529 topology_manager.go:138] "Creating topology manager with none policy" Oct 13 
05:36:28.537286 kubelet[3529]: I1013 05:36:28.536923 3529 container_manager_linux.go:306] "Creating device plugin manager" Oct 13 05:36:28.537286 kubelet[3529]: I1013 05:36:28.536994 3529 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Oct 13 05:36:28.571000 kubelet[3529]: I1013 05:36:28.570981 3529 state_mem.go:36] "Initialized new in-memory state store" Oct 13 05:36:28.571177 kubelet[3529]: I1013 05:36:28.571161 3529 kubelet.go:475] "Attempting to sync node with API server" Oct 13 05:36:28.571177 kubelet[3529]: I1013 05:36:28.571176 3529 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 13 05:36:28.571235 kubelet[3529]: I1013 05:36:28.571196 3529 kubelet.go:387] "Adding apiserver pod source" Oct 13 05:36:28.571235 kubelet[3529]: I1013 05:36:28.571219 3529 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 13 05:36:28.576433 kubelet[3529]: E1013 05:36:28.576409 3529 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.8.45:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.8.45:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Oct 13 05:36:28.576636 kubelet[3529]: E1013 05:36:28.576620 3529 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.8.45:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4487.0.0-a-dfb3332019&limit=500&resourceVersion=0\": dial tcp 10.200.8.45:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Oct 13 05:36:28.576761 kubelet[3529]: I1013 05:36:28.576751 3529 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Oct 13 05:36:28.577295 kubelet[3529]: I1013 05:36:28.577273 3529 kubelet.go:940] "Not 
starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Oct 13 05:36:28.577358 kubelet[3529]: I1013 05:36:28.577307 3529 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Oct 13 05:36:28.577358 kubelet[3529]: W1013 05:36:28.577350 3529 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Oct 13 05:36:28.581269 kubelet[3529]: I1013 05:36:28.581015 3529 server.go:1262] "Started kubelet" Oct 13 05:36:28.582296 kubelet[3529]: I1013 05:36:28.581696 3529 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 13 05:36:28.586306 kubelet[3529]: E1013 05:36:28.584716 3529 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.8.45:6443/api/v1/namespaces/default/events\": dial tcp 10.200.8.45:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4487.0.0-a-dfb3332019.186df6410a7cbec5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4487.0.0-a-dfb3332019,UID:ci-4487.0.0-a-dfb3332019,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4487.0.0-a-dfb3332019,},FirstTimestamp:2025-10-13 05:36:28.580986565 +0000 UTC m=+0.492902660,LastTimestamp:2025-10-13 05:36:28.580986565 +0000 UTC m=+0.492902660,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4487.0.0-a-dfb3332019,}" Oct 13 05:36:28.588148 kubelet[3529]: E1013 05:36:28.588126 3529 kubelet.go:1615] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 13 05:36:28.588335 kubelet[3529]: I1013 05:36:28.588319 3529 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Oct 13 05:36:28.590399 kubelet[3529]: I1013 05:36:28.589157 3529 volume_manager.go:313] "Starting Kubelet Volume Manager" Oct 13 05:36:28.590399 kubelet[3529]: E1013 05:36:28.589339 3529 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4487.0.0-a-dfb3332019\" not found" Oct 13 05:36:28.590399 kubelet[3529]: I1013 05:36:28.589865 3529 server.go:310] "Adding debug handlers to kubelet server" Oct 13 05:36:28.591694 kubelet[3529]: I1013 05:36:28.591672 3529 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Oct 13 05:36:28.591763 kubelet[3529]: I1013 05:36:28.591721 3529 reconciler.go:29] "Reconciler: start to sync state" Oct 13 05:36:28.593396 kubelet[3529]: I1013 05:36:28.593070 3529 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 13 05:36:28.593396 kubelet[3529]: I1013 05:36:28.593113 3529 server_v1.go:49] "podresources" method="list" useActivePods=true Oct 13 05:36:28.593396 kubelet[3529]: I1013 05:36:28.593251 3529 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 13 05:36:28.593623 kubelet[3529]: I1013 05:36:28.593594 3529 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 13 05:36:28.595054 kubelet[3529]: E1013 05:36:28.594995 3529 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.45:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4487.0.0-a-dfb3332019?timeout=10s\": dial tcp 10.200.8.45:6443: connect: connection refused" interval="200ms" Oct 13 05:36:28.596123 kubelet[3529]: E1013 
05:36:28.596094 3529 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.8.45:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.8.45:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Oct 13 05:36:28.596204 kubelet[3529]: I1013 05:36:28.596195 3529 factory.go:223] Registration of the containerd container factory successfully Oct 13 05:36:28.596230 kubelet[3529]: I1013 05:36:28.596206 3529 factory.go:223] Registration of the systemd container factory successfully Oct 13 05:36:28.596294 kubelet[3529]: I1013 05:36:28.596276 3529 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 13 05:36:28.614544 kubelet[3529]: I1013 05:36:28.614296 3529 cpu_manager.go:221] "Starting CPU manager" policy="none" Oct 13 05:36:28.614544 kubelet[3529]: I1013 05:36:28.614309 3529 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Oct 13 05:36:28.614544 kubelet[3529]: I1013 05:36:28.614328 3529 state_mem.go:36] "Initialized new in-memory state store" Oct 13 05:36:28.619552 kubelet[3529]: I1013 05:36:28.619535 3529 policy_none.go:49] "None policy: Start" Oct 13 05:36:28.619552 kubelet[3529]: I1013 05:36:28.619554 3529 memory_manager.go:187] "Starting memorymanager" policy="None" Oct 13 05:36:28.619651 kubelet[3529]: I1013 05:36:28.619564 3529 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Oct 13 05:36:28.627623 kubelet[3529]: I1013 05:36:28.627605 3529 policy_none.go:47] "Start" Oct 13 05:36:28.631343 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Oct 13 05:36:28.639535 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Oct 13 05:36:28.642528 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Oct 13 05:36:28.654875 kubelet[3529]: E1013 05:36:28.654858 3529 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Oct 13 05:36:28.654875 kubelet[3529]: I1013 05:36:28.654998 3529 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 13 05:36:28.654875 kubelet[3529]: I1013 05:36:28.655007 3529 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 13 05:36:28.654875 kubelet[3529]: I1013 05:36:28.655164 3529 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 13 05:36:28.656795 kubelet[3529]: E1013 05:36:28.656766 3529 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Oct 13 05:36:28.656859 kubelet[3529]: E1013 05:36:28.656839 3529 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4487.0.0-a-dfb3332019\" not found" Oct 13 05:36:28.679311 kubelet[3529]: I1013 05:36:28.679207 3529 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Oct 13 05:36:28.680676 kubelet[3529]: I1013 05:36:28.680653 3529 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Oct 13 05:36:28.680676 kubelet[3529]: I1013 05:36:28.680678 3529 status_manager.go:244] "Starting to sync pod status with apiserver" Oct 13 05:36:28.680759 kubelet[3529]: I1013 05:36:28.680701 3529 kubelet.go:2427] "Starting kubelet main sync loop" Oct 13 05:36:28.680759 kubelet[3529]: E1013 05:36:28.680733 3529 kubelet.go:2451] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Oct 13 05:36:28.681383 kubelet[3529]: E1013 05:36:28.681269 3529 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.8.45:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.8.45:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Oct 13 05:36:28.757304 kubelet[3529]: I1013 05:36:28.757286 3529 kubelet_node_status.go:75] "Attempting to register node" node="ci-4487.0.0-a-dfb3332019" Oct 13 05:36:28.757716 kubelet[3529]: E1013 05:36:28.757680 3529 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.45:6443/api/v1/nodes\": dial tcp 10.200.8.45:6443: connect: connection refused" node="ci-4487.0.0-a-dfb3332019" Oct 13 05:36:28.791424 systemd[1]: Created slice kubepods-burstable-podf4a24021e82513a1a8a7c5134be7baf6.slice - libcontainer container kubepods-burstable-podf4a24021e82513a1a8a7c5134be7baf6.slice. 
Oct 13 05:36:28.793467 kubelet[3529]: I1013 05:36:28.793431 3529 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6b3fc71b58f3e72963d0482e406185be-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4487.0.0-a-dfb3332019\" (UID: \"6b3fc71b58f3e72963d0482e406185be\") " pod="kube-system/kube-controller-manager-ci-4487.0.0-a-dfb3332019" Oct 13 05:36:28.793533 kubelet[3529]: I1013 05:36:28.793468 3529 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f4a24021e82513a1a8a7c5134be7baf6-k8s-certs\") pod \"kube-apiserver-ci-4487.0.0-a-dfb3332019\" (UID: \"f4a24021e82513a1a8a7c5134be7baf6\") " pod="kube-system/kube-apiserver-ci-4487.0.0-a-dfb3332019" Oct 13 05:36:28.793533 kubelet[3529]: I1013 05:36:28.793487 3529 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f4a24021e82513a1a8a7c5134be7baf6-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4487.0.0-a-dfb3332019\" (UID: \"f4a24021e82513a1a8a7c5134be7baf6\") " pod="kube-system/kube-apiserver-ci-4487.0.0-a-dfb3332019" Oct 13 05:36:28.793533 kubelet[3529]: I1013 05:36:28.793505 3529 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6b3fc71b58f3e72963d0482e406185be-ca-certs\") pod \"kube-controller-manager-ci-4487.0.0-a-dfb3332019\" (UID: \"6b3fc71b58f3e72963d0482e406185be\") " pod="kube-system/kube-controller-manager-ci-4487.0.0-a-dfb3332019" Oct 13 05:36:28.793533 kubelet[3529]: I1013 05:36:28.793527 3529 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/6b3fc71b58f3e72963d0482e406185be-flexvolume-dir\") pod \"kube-controller-manager-ci-4487.0.0-a-dfb3332019\" (UID: \"6b3fc71b58f3e72963d0482e406185be\") " pod="kube-system/kube-controller-manager-ci-4487.0.0-a-dfb3332019" Oct 13 05:36:28.793632 kubelet[3529]: I1013 05:36:28.793545 3529 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6b3fc71b58f3e72963d0482e406185be-kubeconfig\") pod \"kube-controller-manager-ci-4487.0.0-a-dfb3332019\" (UID: \"6b3fc71b58f3e72963d0482e406185be\") " pod="kube-system/kube-controller-manager-ci-4487.0.0-a-dfb3332019" Oct 13 05:36:28.793632 kubelet[3529]: I1013 05:36:28.793563 3529 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7d64ec225782d540069959b577e16e97-kubeconfig\") pod \"kube-scheduler-ci-4487.0.0-a-dfb3332019\" (UID: \"7d64ec225782d540069959b577e16e97\") " pod="kube-system/kube-scheduler-ci-4487.0.0-a-dfb3332019" Oct 13 05:36:28.793632 kubelet[3529]: I1013 05:36:28.793579 3529 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f4a24021e82513a1a8a7c5134be7baf6-ca-certs\") pod \"kube-apiserver-ci-4487.0.0-a-dfb3332019\" (UID: \"f4a24021e82513a1a8a7c5134be7baf6\") " pod="kube-system/kube-apiserver-ci-4487.0.0-a-dfb3332019" Oct 13 05:36:28.793632 kubelet[3529]: I1013 05:36:28.793600 3529 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6b3fc71b58f3e72963d0482e406185be-k8s-certs\") pod \"kube-controller-manager-ci-4487.0.0-a-dfb3332019\" (UID: \"6b3fc71b58f3e72963d0482e406185be\") " pod="kube-system/kube-controller-manager-ci-4487.0.0-a-dfb3332019" Oct 13 05:36:28.795533 kubelet[3529]: E1013 05:36:28.795508 3529 
controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.45:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4487.0.0-a-dfb3332019?timeout=10s\": dial tcp 10.200.8.45:6443: connect: connection refused" interval="400ms" Oct 13 05:36:28.798128 kubelet[3529]: E1013 05:36:28.798107 3529 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4487.0.0-a-dfb3332019\" not found" node="ci-4487.0.0-a-dfb3332019" Oct 13 05:36:28.802262 systemd[1]: Created slice kubepods-burstable-pod6b3fc71b58f3e72963d0482e406185be.slice - libcontainer container kubepods-burstable-pod6b3fc71b58f3e72963d0482e406185be.slice. Oct 13 05:36:28.811218 kubelet[3529]: E1013 05:36:28.811199 3529 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4487.0.0-a-dfb3332019\" not found" node="ci-4487.0.0-a-dfb3332019" Oct 13 05:36:28.814041 systemd[1]: Created slice kubepods-burstable-pod7d64ec225782d540069959b577e16e97.slice - libcontainer container kubepods-burstable-pod7d64ec225782d540069959b577e16e97.slice. 
Oct 13 05:36:28.815346 kubelet[3529]: E1013 05:36:28.815329 3529 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4487.0.0-a-dfb3332019\" not found" node="ci-4487.0.0-a-dfb3332019" Oct 13 05:36:28.959147 kubelet[3529]: I1013 05:36:28.959126 3529 kubelet_node_status.go:75] "Attempting to register node" node="ci-4487.0.0-a-dfb3332019" Oct 13 05:36:28.959503 kubelet[3529]: E1013 05:36:28.959470 3529 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.45:6443/api/v1/nodes\": dial tcp 10.200.8.45:6443: connect: connection refused" node="ci-4487.0.0-a-dfb3332019" Oct 13 05:36:29.105132 containerd[2478]: time="2025-10-13T05:36:29.105094364Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4487.0.0-a-dfb3332019,Uid:f4a24021e82513a1a8a7c5134be7baf6,Namespace:kube-system,Attempt:0,}" Oct 13 05:36:29.122634 containerd[2478]: time="2025-10-13T05:36:29.122601412Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4487.0.0-a-dfb3332019,Uid:6b3fc71b58f3e72963d0482e406185be,Namespace:kube-system,Attempt:0,}" Oct 13 05:36:29.127566 containerd[2478]: time="2025-10-13T05:36:29.127531131Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4487.0.0-a-dfb3332019,Uid:7d64ec225782d540069959b577e16e97,Namespace:kube-system,Attempt:0,}" Oct 13 05:36:29.196470 kubelet[3529]: E1013 05:36:29.196438 3529 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.45:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4487.0.0-a-dfb3332019?timeout=10s\": dial tcp 10.200.8.45:6443: connect: connection refused" interval="800ms" Oct 13 05:36:29.361436 kubelet[3529]: I1013 05:36:29.361332 3529 kubelet_node_status.go:75] "Attempting to register node" node="ci-4487.0.0-a-dfb3332019" Oct 13 05:36:29.361678 kubelet[3529]: E1013 05:36:29.361654 3529 
kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.45:6443/api/v1/nodes\": dial tcp 10.200.8.45:6443: connect: connection refused" node="ci-4487.0.0-a-dfb3332019" Oct 13 05:36:29.449838 kubelet[3529]: E1013 05:36:29.449801 3529 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.8.45:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.8.45:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Oct 13 05:36:29.792193 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1398036193.mount: Deactivated successfully. Oct 13 05:36:29.815735 containerd[2478]: time="2025-10-13T05:36:29.815694538Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 13 05:36:29.844080 containerd[2478]: time="2025-10-13T05:36:29.844054365Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146" Oct 13 05:36:29.846721 containerd[2478]: time="2025-10-13T05:36:29.846689711Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 13 05:36:29.849430 containerd[2478]: time="2025-10-13T05:36:29.849391776Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 13 05:36:29.854282 containerd[2478]: time="2025-10-13T05:36:29.854259834Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Oct 13 05:36:29.863186 containerd[2478]: 
time="2025-10-13T05:36:29.863150182Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 13 05:36:29.871949 containerd[2478]: time="2025-10-13T05:36:29.871924719Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 13 05:36:29.872667 containerd[2478]: time="2025-10-13T05:36:29.872636872Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 759.36757ms" Oct 13 05:36:29.880525 containerd[2478]: time="2025-10-13T05:36:29.880313429Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Oct 13 05:36:29.883799 containerd[2478]: time="2025-10-13T05:36:29.883772487Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 745.30386ms" Oct 13 05:36:29.892970 kubelet[3529]: E1013 05:36:29.892942 3529 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.8.45:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.8.45:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Oct 13 05:36:29.895395 containerd[2478]: 
time="2025-10-13T05:36:29.895348536Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 765.382888ms" Oct 13 05:36:29.923342 containerd[2478]: time="2025-10-13T05:36:29.923307652Z" level=info msg="connecting to shim 3a81a16a3fbfbe9b15bd0c2faabb862b60753dec995e3ae6ddcb8b59b2f5f6e1" address="unix:///run/containerd/s/3dccc174fe17dc01e09a6ff8e34fbe5616400a15832a8ecb803ef15624bfa548" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:36:29.947564 systemd[1]: Started cri-containerd-3a81a16a3fbfbe9b15bd0c2faabb862b60753dec995e3ae6ddcb8b59b2f5f6e1.scope - libcontainer container 3a81a16a3fbfbe9b15bd0c2faabb862b60753dec995e3ae6ddcb8b59b2f5f6e1. Oct 13 05:36:29.966640 kubelet[3529]: E1013 05:36:29.966611 3529 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.8.45:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.8.45:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Oct 13 05:36:29.986049 containerd[2478]: time="2025-10-13T05:36:29.986014753Z" level=info msg="connecting to shim 316f522dfc2283e935fac24a3aa699a84f39992573ea44b28d8f0ca54262f1be" address="unix:///run/containerd/s/af726f9e4efaccb6d7775d6d1bd0a27ec98eba53bab172d443d8dc0ba09995b2" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:36:29.997105 kubelet[3529]: E1013 05:36:29.997075 3529 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.45:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4487.0.0-a-dfb3332019?timeout=10s\": dial tcp 10.200.8.45:6443: connect: connection refused" interval="1.6s" Oct 13 05:36:30.011502 systemd[1]: 
Started cri-containerd-316f522dfc2283e935fac24a3aa699a84f39992573ea44b28d8f0ca54262f1be.scope - libcontainer container 316f522dfc2283e935fac24a3aa699a84f39992573ea44b28d8f0ca54262f1be. Oct 13 05:36:30.029323 containerd[2478]: time="2025-10-13T05:36:30.029299514Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4487.0.0-a-dfb3332019,Uid:f4a24021e82513a1a8a7c5134be7baf6,Namespace:kube-system,Attempt:0,} returns sandbox id \"3a81a16a3fbfbe9b15bd0c2faabb862b60753dec995e3ae6ddcb8b59b2f5f6e1\"" Oct 13 05:36:30.036947 containerd[2478]: time="2025-10-13T05:36:30.036626906Z" level=info msg="CreateContainer within sandbox \"3a81a16a3fbfbe9b15bd0c2faabb862b60753dec995e3ae6ddcb8b59b2f5f6e1\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Oct 13 05:36:30.065761 containerd[2478]: time="2025-10-13T05:36:30.065108880Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4487.0.0-a-dfb3332019,Uid:7d64ec225782d540069959b577e16e97,Namespace:kube-system,Attempt:0,} returns sandbox id \"316f522dfc2283e935fac24a3aa699a84f39992573ea44b28d8f0ca54262f1be\"" Oct 13 05:36:30.072674 containerd[2478]: time="2025-10-13T05:36:30.072650877Z" level=info msg="CreateContainer within sandbox \"316f522dfc2283e935fac24a3aa699a84f39992573ea44b28d8f0ca54262f1be\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Oct 13 05:36:30.072825 containerd[2478]: time="2025-10-13T05:36:30.072806044Z" level=info msg="connecting to shim b605a1273b24cba13f65d5073d01b0e99404021fd9c337b664142a25f960c3de" address="unix:///run/containerd/s/284c134c9939b0e366d22ab3533276b49b22cb69e3afd43192ad89d7099f84ce" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:36:30.077765 containerd[2478]: time="2025-10-13T05:36:30.077738865Z" level=info msg="Container 9ea9db6aa960c27c118d4dd5a854f3d535af39cc3d5afefbdbca7999abac4e9c: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:36:30.084049 kubelet[3529]: E1013 05:36:30.084024 3529 
reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.8.45:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4487.0.0-a-dfb3332019&limit=500&resourceVersion=0\": dial tcp 10.200.8.45:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Oct 13 05:36:30.096504 systemd[1]: Started cri-containerd-b605a1273b24cba13f65d5073d01b0e99404021fd9c337b664142a25f960c3de.scope - libcontainer container b605a1273b24cba13f65d5073d01b0e99404021fd9c337b664142a25f960c3de.
Oct 13 05:36:30.101179 containerd[2478]: time="2025-10-13T05:36:30.101076700Z" level=info msg="CreateContainer within sandbox \"3a81a16a3fbfbe9b15bd0c2faabb862b60753dec995e3ae6ddcb8b59b2f5f6e1\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"9ea9db6aa960c27c118d4dd5a854f3d535af39cc3d5afefbdbca7999abac4e9c\""
Oct 13 05:36:30.102008 containerd[2478]: time="2025-10-13T05:36:30.101945633Z" level=info msg="StartContainer for \"9ea9db6aa960c27c118d4dd5a854f3d535af39cc3d5afefbdbca7999abac4e9c\""
Oct 13 05:36:30.103250 containerd[2478]: time="2025-10-13T05:36:30.103225958Z" level=info msg="connecting to shim 9ea9db6aa960c27c118d4dd5a854f3d535af39cc3d5afefbdbca7999abac4e9c" address="unix:///run/containerd/s/3dccc174fe17dc01e09a6ff8e34fbe5616400a15832a8ecb803ef15624bfa548" protocol=ttrpc version=3
Oct 13 05:36:30.122641 systemd[1]: Started cri-containerd-9ea9db6aa960c27c118d4dd5a854f3d535af39cc3d5afefbdbca7999abac4e9c.scope - libcontainer container 9ea9db6aa960c27c118d4dd5a854f3d535af39cc3d5afefbdbca7999abac4e9c.
Oct 13 05:36:30.139313 containerd[2478]: time="2025-10-13T05:36:30.138655895Z" level=info msg="Container 4fa02613544fe499eb994102d20d8a57bb2f49ff53793d6a833db9bf7bbf1aff: CDI devices from CRI Config.CDIDevices: []"
Oct 13 05:36:30.159382 containerd[2478]: time="2025-10-13T05:36:30.158702239Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4487.0.0-a-dfb3332019,Uid:6b3fc71b58f3e72963d0482e406185be,Namespace:kube-system,Attempt:0,} returns sandbox id \"b605a1273b24cba13f65d5073d01b0e99404021fd9c337b664142a25f960c3de\""
Oct 13 05:36:30.159456 containerd[2478]: time="2025-10-13T05:36:30.159068972Z" level=info msg="CreateContainer within sandbox \"316f522dfc2283e935fac24a3aa699a84f39992573ea44b28d8f0ca54262f1be\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"4fa02613544fe499eb994102d20d8a57bb2f49ff53793d6a833db9bf7bbf1aff\""
Oct 13 05:36:30.159997 containerd[2478]: time="2025-10-13T05:36:30.159975583Z" level=info msg="StartContainer for \"4fa02613544fe499eb994102d20d8a57bb2f49ff53793d6a833db9bf7bbf1aff\""
Oct 13 05:36:30.162623 containerd[2478]: time="2025-10-13T05:36:30.162593554Z" level=info msg="connecting to shim 4fa02613544fe499eb994102d20d8a57bb2f49ff53793d6a833db9bf7bbf1aff" address="unix:///run/containerd/s/af726f9e4efaccb6d7775d6d1bd0a27ec98eba53bab172d443d8dc0ba09995b2" protocol=ttrpc version=3
Oct 13 05:36:30.164970 kubelet[3529]: I1013 05:36:30.164864 3529 kubelet_node_status.go:75] "Attempting to register node" node="ci-4487.0.0-a-dfb3332019"
Oct 13 05:36:30.165355 kubelet[3529]: E1013 05:36:30.165278 3529 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.45:6443/api/v1/nodes\": dial tcp 10.200.8.45:6443: connect: connection refused" node="ci-4487.0.0-a-dfb3332019"
Oct 13 05:36:30.169314 containerd[2478]: time="2025-10-13T05:36:30.169250299Z" level=info msg="CreateContainer within sandbox \"b605a1273b24cba13f65d5073d01b0e99404021fd9c337b664142a25f960c3de\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Oct 13 05:36:30.184649 systemd[1]: Started cri-containerd-4fa02613544fe499eb994102d20d8a57bb2f49ff53793d6a833db9bf7bbf1aff.scope - libcontainer container 4fa02613544fe499eb994102d20d8a57bb2f49ff53793d6a833db9bf7bbf1aff.
Oct 13 05:36:30.193722 containerd[2478]: time="2025-10-13T05:36:30.193695826Z" level=info msg="Container 225d54a1be9e57a79d7959cd03d47b86a9098fd71325c4ff17ccf58dec57d904: CDI devices from CRI Config.CDIDevices: []"
Oct 13 05:36:30.196509 containerd[2478]: time="2025-10-13T05:36:30.196484420Z" level=info msg="StartContainer for \"9ea9db6aa960c27c118d4dd5a854f3d535af39cc3d5afefbdbca7999abac4e9c\" returns successfully"
Oct 13 05:36:30.214584 containerd[2478]: time="2025-10-13T05:36:30.214555739Z" level=info msg="CreateContainer within sandbox \"b605a1273b24cba13f65d5073d01b0e99404021fd9c337b664142a25f960c3de\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"225d54a1be9e57a79d7959cd03d47b86a9098fd71325c4ff17ccf58dec57d904\""
Oct 13 05:36:30.215886 containerd[2478]: time="2025-10-13T05:36:30.215862183Z" level=info msg="StartContainer for \"225d54a1be9e57a79d7959cd03d47b86a9098fd71325c4ff17ccf58dec57d904\""
Oct 13 05:36:30.217613 containerd[2478]: time="2025-10-13T05:36:30.217539484Z" level=info msg="connecting to shim 225d54a1be9e57a79d7959cd03d47b86a9098fd71325c4ff17ccf58dec57d904" address="unix:///run/containerd/s/284c134c9939b0e366d22ab3533276b49b22cb69e3afd43192ad89d7099f84ce" protocol=ttrpc version=3
Oct 13 05:36:30.243583 systemd[1]: Started cri-containerd-225d54a1be9e57a79d7959cd03d47b86a9098fd71325c4ff17ccf58dec57d904.scope - libcontainer container 225d54a1be9e57a79d7959cd03d47b86a9098fd71325c4ff17ccf58dec57d904.
Oct 13 05:36:30.272516 containerd[2478]: time="2025-10-13T05:36:30.272443385Z" level=info msg="StartContainer for \"4fa02613544fe499eb994102d20d8a57bb2f49ff53793d6a833db9bf7bbf1aff\" returns successfully"
Oct 13 05:36:30.328143 containerd[2478]: time="2025-10-13T05:36:30.327849759Z" level=info msg="StartContainer for \"225d54a1be9e57a79d7959cd03d47b86a9098fd71325c4ff17ccf58dec57d904\" returns successfully"
Oct 13 05:36:30.704589 kubelet[3529]: E1013 05:36:30.704562 3529 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4487.0.0-a-dfb3332019\" not found" node="ci-4487.0.0-a-dfb3332019"
Oct 13 05:36:30.709040 kubelet[3529]: E1013 05:36:30.709020 3529 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4487.0.0-a-dfb3332019\" not found" node="ci-4487.0.0-a-dfb3332019"
Oct 13 05:36:30.711745 kubelet[3529]: E1013 05:36:30.711725 3529 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4487.0.0-a-dfb3332019\" not found" node="ci-4487.0.0-a-dfb3332019"
Oct 13 05:36:31.715229 kubelet[3529]: E1013 05:36:31.715197 3529 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4487.0.0-a-dfb3332019\" not found" node="ci-4487.0.0-a-dfb3332019"
Oct 13 05:36:31.716496 kubelet[3529]: E1013 05:36:31.716464 3529 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4487.0.0-a-dfb3332019\" not found" node="ci-4487.0.0-a-dfb3332019"
Oct 13 05:36:31.766871 kubelet[3529]: I1013 05:36:31.766820 3529 kubelet_node_status.go:75] "Attempting to register node" node="ci-4487.0.0-a-dfb3332019"
Oct 13 05:36:32.631360 kubelet[3529]: E1013 05:36:32.631311 3529 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4487.0.0-a-dfb3332019\" not found" node="ci-4487.0.0-a-dfb3332019"
Oct 13 05:36:32.707900 kubelet[3529]: I1013 05:36:32.707869 3529 kubelet_node_status.go:78] "Successfully registered node" node="ci-4487.0.0-a-dfb3332019"
Oct 13 05:36:32.789788 kubelet[3529]: I1013 05:36:32.789758 3529 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4487.0.0-a-dfb3332019"
Oct 13 05:36:32.823306 kubelet[3529]: E1013 05:36:32.823266 3529 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4487.0.0-a-dfb3332019\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4487.0.0-a-dfb3332019"
Oct 13 05:36:32.823306 kubelet[3529]: I1013 05:36:32.823300 3529 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4487.0.0-a-dfb3332019"
Oct 13 05:36:32.824730 kubelet[3529]: E1013 05:36:32.824702 3529 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4487.0.0-a-dfb3332019\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4487.0.0-a-dfb3332019"
Oct 13 05:36:32.824730 kubelet[3529]: I1013 05:36:32.824726 3529 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4487.0.0-a-dfb3332019"
Oct 13 05:36:32.826057 kubelet[3529]: E1013 05:36:32.826035 3529 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4487.0.0-a-dfb3332019\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4487.0.0-a-dfb3332019"
Oct 13 05:36:33.032593 kubelet[3529]: I1013 05:36:33.032503 3529 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4487.0.0-a-dfb3332019"
Oct 13 05:36:33.034305 kubelet[3529]: E1013 05:36:33.034271 3529 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4487.0.0-a-dfb3332019\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4487.0.0-a-dfb3332019"
Oct 13 05:36:33.577847 kubelet[3529]: I1013 05:36:33.577797 3529 apiserver.go:52] "Watching apiserver"
Oct 13 05:36:33.592070 kubelet[3529]: I1013 05:36:33.592039 3529 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Oct 13 05:36:35.127264 systemd[1]: Reload requested from client PID 3810 ('systemctl') (unit session-9.scope)...
Oct 13 05:36:35.127280 systemd[1]: Reloading...
Oct 13 05:36:35.213399 zram_generator::config[3854]: No configuration found.
Oct 13 05:36:35.413176 systemd[1]: Reloading finished in 285 ms.
Oct 13 05:36:35.437068 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 13 05:36:35.461174 systemd[1]: kubelet.service: Deactivated successfully.
Oct 13 05:36:35.461423 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 13 05:36:35.461474 systemd[1]: kubelet.service: Consumed 600ms CPU time, 124M memory peak.
Oct 13 05:36:35.462856 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 13 05:36:35.950501 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 13 05:36:35.957644 (kubelet)[3925]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Oct 13 05:36:36.000497 kubelet[3925]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Oct 13 05:36:36.000497 kubelet[3925]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Oct 13 05:36:36.000754 kubelet[3925]: I1013 05:36:36.000538 3925 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Oct 13 05:36:36.005233 kubelet[3925]: I1013 05:36:36.005207 3925 server.go:529] "Kubelet version" kubeletVersion="v1.34.1"
Oct 13 05:36:36.005233 kubelet[3925]: I1013 05:36:36.005227 3925 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Oct 13 05:36:36.005339 kubelet[3925]: I1013 05:36:36.005249 3925 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Oct 13 05:36:36.005339 kubelet[3925]: I1013 05:36:36.005255 3925 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Oct 13 05:36:36.006189 kubelet[3925]: I1013 05:36:36.006162 3925 server.go:956] "Client rotation is on, will bootstrap in background"
Oct 13 05:36:36.007513 kubelet[3925]: I1013 05:36:36.007483 3925 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Oct 13 05:36:36.012275 kubelet[3925]: I1013 05:36:36.012186 3925 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Oct 13 05:36:36.017436 kubelet[3925]: I1013 05:36:36.017419 3925 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Oct 13 05:36:36.019585 kubelet[3925]: I1013 05:36:36.019569 3925 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Oct 13 05:36:36.019758 kubelet[3925]: I1013 05:36:36.019710 3925 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Oct 13 05:36:36.019887 kubelet[3925]: I1013 05:36:36.019746 3925 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4487.0.0-a-dfb3332019","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Oct 13 05:36:36.020010 kubelet[3925]: I1013 05:36:36.019891 3925 topology_manager.go:138] "Creating topology manager with none policy"
Oct 13 05:36:36.020010 kubelet[3925]: I1013 05:36:36.019900 3925 container_manager_linux.go:306] "Creating device plugin manager"
Oct 13 05:36:36.020010 kubelet[3925]: I1013 05:36:36.019921 3925 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager"
Oct 13 05:36:36.020935 kubelet[3925]: I1013 05:36:36.020922 3925 state_mem.go:36] "Initialized new in-memory state store"
Oct 13 05:36:36.021073 kubelet[3925]: I1013 05:36:36.021062 3925 kubelet.go:475] "Attempting to sync node with API server"
Oct 13 05:36:36.021117 kubelet[3925]: I1013 05:36:36.021081 3925 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests"
Oct 13 05:36:36.021117 kubelet[3925]: I1013 05:36:36.021102 3925 kubelet.go:387] "Adding apiserver pod source"
Oct 13 05:36:36.022692 kubelet[3925]: I1013 05:36:36.021123 3925 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Oct 13 05:36:36.023906 kubelet[3925]: I1013 05:36:36.023874 3925 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1"
Oct 13 05:36:36.024263 kubelet[3925]: I1013 05:36:36.024249 3925 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Oct 13 05:36:36.024311 kubelet[3925]: I1013 05:36:36.024277 3925 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Oct 13 05:36:36.028222 kubelet[3925]: I1013 05:36:36.028166 3925 server.go:1262] "Started kubelet"
Oct 13 05:36:36.028936 kubelet[3925]: I1013 05:36:36.028911 3925 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Oct 13 05:36:36.029244 kubelet[3925]: I1013 05:36:36.029121 3925 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Oct 13 05:36:36.029467 kubelet[3925]: I1013 05:36:36.029387 3925 server_v1.go:49] "podresources" method="list" useActivePods=true
Oct 13 05:36:36.030114 kubelet[3925]: I1013 05:36:36.030094 3925 server.go:310] "Adding debug handlers to kubelet server"
Oct 13 05:36:36.030615 kubelet[3925]: I1013 05:36:36.030504 3925 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Oct 13 05:36:36.038426 kubelet[3925]: I1013 05:36:36.033647 3925 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Oct 13 05:36:36.050490 kubelet[3925]: I1013 05:36:36.033732 3925 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Oct 13 05:36:36.050871 kubelet[3925]: I1013 05:36:36.050681 3925 volume_manager.go:313] "Starting Kubelet Volume Manager"
Oct 13 05:36:36.051387 kubelet[3925]: E1013 05:36:36.051342 3925 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4487.0.0-a-dfb3332019\" not found"
Oct 13 05:36:36.054490 kubelet[3925]: I1013 05:36:36.054474 3925 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Oct 13 05:36:36.054604 kubelet[3925]: I1013 05:36:36.054594 3925 reconciler.go:29] "Reconciler: start to sync state"
Oct 13 05:36:36.061057 kubelet[3925]: I1013 05:36:36.060995 3925 factory.go:223] Registration of the systemd container factory successfully
Oct 13 05:36:36.061182 kubelet[3925]: I1013 05:36:36.061166 3925 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Oct 13 05:36:36.063966 kubelet[3925]: I1013 05:36:36.063890 3925 factory.go:223] Registration of the containerd container factory successfully
Oct 13 05:36:36.067604 kubelet[3925]: E1013 05:36:36.067586 3925 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Oct 13 05:36:36.075039 kubelet[3925]: I1013 05:36:36.074995 3925 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Oct 13 05:36:36.076394 kubelet[3925]: I1013 05:36:36.076323 3925 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Oct 13 05:36:36.076394 kubelet[3925]: I1013 05:36:36.076342 3925 status_manager.go:244] "Starting to sync pod status with apiserver"
Oct 13 05:36:36.076577 kubelet[3925]: I1013 05:36:36.076486 3925 kubelet.go:2427] "Starting kubelet main sync loop"
Oct 13 05:36:36.076577 kubelet[3925]: E1013 05:36:36.076535 3925 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Oct 13 05:36:36.104945 kubelet[3925]: I1013 05:36:36.104622 3925 cpu_manager.go:221] "Starting CPU manager" policy="none"
Oct 13 05:36:36.104945 kubelet[3925]: I1013 05:36:36.104633 3925 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Oct 13 05:36:36.104945 kubelet[3925]: I1013 05:36:36.104648 3925 state_mem.go:36] "Initialized new in-memory state store"
Oct 13 05:36:36.104945 kubelet[3925]: I1013 05:36:36.104744 3925 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Oct 13 05:36:36.104945 kubelet[3925]: I1013 05:36:36.104752 3925 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Oct 13 05:36:36.104945 kubelet[3925]: I1013 05:36:36.104765 3925 policy_none.go:49] "None policy: Start"
Oct 13 05:36:36.104945 kubelet[3925]: I1013 05:36:36.104773 3925 memory_manager.go:187] "Starting memorymanager" policy="None"
Oct 13 05:36:36.104945 kubelet[3925]: I1013 05:36:36.104782 3925 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Oct 13 05:36:36.106312 kubelet[3925]: I1013 05:36:36.105358 3925 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint"
Oct 13 05:36:36.106312 kubelet[3925]: I1013 05:36:36.105408 3925 policy_none.go:47] "Start"
Oct 13 05:36:36.119988 kubelet[3925]: E1013 05:36:36.119971 3925 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Oct 13 05:36:36.121289 kubelet[3925]: I1013 05:36:36.120096 3925 eviction_manager.go:189] "Eviction manager: starting control loop"
Oct 13 05:36:36.121289 kubelet[3925]: I1013 05:36:36.120135 3925 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Oct 13 05:36:36.121289 kubelet[3925]: I1013 05:36:36.121187 3925 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Oct 13 05:36:36.123006 kubelet[3925]: E1013 05:36:36.122987 3925 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Oct 13 05:36:36.177243 kubelet[3925]: I1013 05:36:36.177227 3925 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4487.0.0-a-dfb3332019"
Oct 13 05:36:36.178690 kubelet[3925]: I1013 05:36:36.178656 3925 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4487.0.0-a-dfb3332019"
Oct 13 05:36:36.178945 kubelet[3925]: I1013 05:36:36.178916 3925 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4487.0.0-a-dfb3332019"
Oct 13 05:36:36.184009 kubelet[3925]: I1013 05:36:36.183994 3925 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Oct 13 05:36:36.188135 kubelet[3925]: I1013 05:36:36.188112 3925 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Oct 13 05:36:36.188475 kubelet[3925]: I1013 05:36:36.188437 3925 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Oct 13 05:36:36.225038 kubelet[3925]: I1013 05:36:36.224962 3925 kubelet_node_status.go:75] "Attempting to register node" node="ci-4487.0.0-a-dfb3332019"
Oct 13 05:36:36.237573 kubelet[3925]: I1013 05:36:36.237515 3925 kubelet_node_status.go:124] "Node was previously registered" node="ci-4487.0.0-a-dfb3332019"
Oct 13 05:36:36.237690 kubelet[3925]: I1013 05:36:36.237661 3925 kubelet_node_status.go:78] "Successfully registered node" node="ci-4487.0.0-a-dfb3332019"
Oct 13 05:36:36.256593 kubelet[3925]: I1013 05:36:36.256574 3925 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/6b3fc71b58f3e72963d0482e406185be-flexvolume-dir\") pod \"kube-controller-manager-ci-4487.0.0-a-dfb3332019\" (UID: \"6b3fc71b58f3e72963d0482e406185be\") " pod="kube-system/kube-controller-manager-ci-4487.0.0-a-dfb3332019"
Oct 13 05:36:36.256734 kubelet[3925]: I1013 05:36:36.256702 3925 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6b3fc71b58f3e72963d0482e406185be-k8s-certs\") pod \"kube-controller-manager-ci-4487.0.0-a-dfb3332019\" (UID: \"6b3fc71b58f3e72963d0482e406185be\") " pod="kube-system/kube-controller-manager-ci-4487.0.0-a-dfb3332019"
Oct 13 05:36:36.256826 kubelet[3925]: I1013 05:36:36.256815 3925 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6b3fc71b58f3e72963d0482e406185be-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4487.0.0-a-dfb3332019\" (UID: \"6b3fc71b58f3e72963d0482e406185be\") " pod="kube-system/kube-controller-manager-ci-4487.0.0-a-dfb3332019"
Oct 13 05:36:36.256914 kubelet[3925]: I1013 05:36:36.256898 3925 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7d64ec225782d540069959b577e16e97-kubeconfig\") pod \"kube-scheduler-ci-4487.0.0-a-dfb3332019\" (UID: \"7d64ec225782d540069959b577e16e97\") " pod="kube-system/kube-scheduler-ci-4487.0.0-a-dfb3332019"
Oct 13 05:36:36.256978 kubelet[3925]: I1013 05:36:36.256954 3925 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f4a24021e82513a1a8a7c5134be7baf6-ca-certs\") pod \"kube-apiserver-ci-4487.0.0-a-dfb3332019\" (UID: \"f4a24021e82513a1a8a7c5134be7baf6\") " pod="kube-system/kube-apiserver-ci-4487.0.0-a-dfb3332019"
Oct 13 05:36:36.257051 kubelet[3925]: I1013 05:36:36.257027 3925 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f4a24021e82513a1a8a7c5134be7baf6-k8s-certs\") pod \"kube-apiserver-ci-4487.0.0-a-dfb3332019\" (UID: \"f4a24021e82513a1a8a7c5134be7baf6\") " pod="kube-system/kube-apiserver-ci-4487.0.0-a-dfb3332019"
Oct 13 05:36:36.257112 kubelet[3925]: I1013 05:36:36.257105 3925 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f4a24021e82513a1a8a7c5134be7baf6-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4487.0.0-a-dfb3332019\" (UID: \"f4a24021e82513a1a8a7c5134be7baf6\") " pod="kube-system/kube-apiserver-ci-4487.0.0-a-dfb3332019"
Oct 13 05:36:36.257158 kubelet[3925]: I1013 05:36:36.257144 3925 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6b3fc71b58f3e72963d0482e406185be-ca-certs\") pod \"kube-controller-manager-ci-4487.0.0-a-dfb3332019\" (UID: \"6b3fc71b58f3e72963d0482e406185be\") " pod="kube-system/kube-controller-manager-ci-4487.0.0-a-dfb3332019"
Oct 13 05:36:36.257198 kubelet[3925]: I1013 05:36:36.257192 3925 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6b3fc71b58f3e72963d0482e406185be-kubeconfig\") pod \"kube-controller-manager-ci-4487.0.0-a-dfb3332019\" (UID: \"6b3fc71b58f3e72963d0482e406185be\") " pod="kube-system/kube-controller-manager-ci-4487.0.0-a-dfb3332019"
Oct 13 05:36:37.022678 kubelet[3925]: I1013 05:36:37.022647 3925 apiserver.go:52] "Watching apiserver"
Oct 13 05:36:37.054894 kubelet[3925]: I1013 05:36:37.054857 3925 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Oct 13 05:36:37.092557 kubelet[3925]: I1013 05:36:37.092503 3925 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4487.0.0-a-dfb3332019"
Oct 13 05:36:37.101447 kubelet[3925]: I1013 05:36:37.101426 3925 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Oct 13 05:36:37.101530 kubelet[3925]: E1013 05:36:37.101488 3925 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4487.0.0-a-dfb3332019\" already exists" pod="kube-system/kube-apiserver-ci-4487.0.0-a-dfb3332019"
Oct 13 05:36:37.108995 kubelet[3925]: I1013 05:36:37.108753 3925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4487.0.0-a-dfb3332019" podStartSLOduration=1.108739233 podStartE2EDuration="1.108739233s" podCreationTimestamp="2025-10-13 05:36:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 05:36:37.108151671 +0000 UTC m=+1.147026689" watchObservedRunningTime="2025-10-13 05:36:37.108739233 +0000 UTC m=+1.147614257"
Oct 13 05:36:37.117858 kubelet[3925]: I1013 05:36:37.117820 3925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4487.0.0-a-dfb3332019" podStartSLOduration=1.117810057 podStartE2EDuration="1.117810057s" podCreationTimestamp="2025-10-13 05:36:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 05:36:37.117133793 +0000 UTC m=+1.156008808" watchObservedRunningTime="2025-10-13 05:36:37.117810057 +0000 UTC m=+1.156685077"
Oct 13 05:36:37.136122 kubelet[3925]: I1013 05:36:37.135975 3925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4487.0.0-a-dfb3332019" podStartSLOduration=1.135961634 podStartE2EDuration="1.135961634s" podCreationTimestamp="2025-10-13 05:36:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 05:36:37.125869358 +0000 UTC m=+1.164744380" watchObservedRunningTime="2025-10-13 05:36:37.135961634 +0000 UTC m=+1.174836650"
Oct 13 05:36:41.967345 kubelet[3925]: I1013 05:36:41.967311 3925 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Oct 13 05:36:41.968045 containerd[2478]: time="2025-10-13T05:36:41.968011825Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Oct 13 05:36:41.968294 kubelet[3925]: I1013 05:36:41.968211 3925 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Oct 13 05:36:43.005148 systemd[1]: Created slice kubepods-besteffort-podd1cd38d9_5eb7_4beb_9e38_c4b8545bac8a.slice - libcontainer container kubepods-besteffort-podd1cd38d9_5eb7_4beb_9e38_c4b8545bac8a.slice.
Oct 13 05:36:43.103726 kubelet[3925]: I1013 05:36:43.103701 3925 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d1cd38d9-5eb7-4beb-9e38-c4b8545bac8a-kube-proxy\") pod \"kube-proxy-94wz5\" (UID: \"d1cd38d9-5eb7-4beb-9e38-c4b8545bac8a\") " pod="kube-system/kube-proxy-94wz5"
Oct 13 05:36:43.104225 kubelet[3925]: I1013 05:36:43.104100 3925 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d1cd38d9-5eb7-4beb-9e38-c4b8545bac8a-lib-modules\") pod \"kube-proxy-94wz5\" (UID: \"d1cd38d9-5eb7-4beb-9e38-c4b8545bac8a\") " pod="kube-system/kube-proxy-94wz5"
Oct 13 05:36:43.104225 kubelet[3925]: I1013 05:36:43.104126 3925 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rs8jh\" (UniqueName: \"kubernetes.io/projected/d1cd38d9-5eb7-4beb-9e38-c4b8545bac8a-kube-api-access-rs8jh\") pod \"kube-proxy-94wz5\" (UID: \"d1cd38d9-5eb7-4beb-9e38-c4b8545bac8a\") " pod="kube-system/kube-proxy-94wz5"
Oct 13 05:36:43.104225 kubelet[3925]: I1013 05:36:43.104158 3925 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d1cd38d9-5eb7-4beb-9e38-c4b8545bac8a-xtables-lock\") pod \"kube-proxy-94wz5\" (UID: \"d1cd38d9-5eb7-4beb-9e38-c4b8545bac8a\") " pod="kube-system/kube-proxy-94wz5"
Oct 13 05:36:43.153154 systemd[1]: Created slice kubepods-besteffort-pod7e701721_41ed_4d45_b56e_9e7672d371bd.slice - libcontainer container kubepods-besteffort-pod7e701721_41ed_4d45_b56e_9e7672d371bd.slice.
Oct 13 05:36:43.205111 kubelet[3925]: I1013 05:36:43.205073 3925 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qzqnb\" (UniqueName: \"kubernetes.io/projected/7e701721-41ed-4d45-b56e-9e7672d371bd-kube-api-access-qzqnb\") pod \"tigera-operator-db78d5bd4-k7j5b\" (UID: \"7e701721-41ed-4d45-b56e-9e7672d371bd\") " pod="tigera-operator/tigera-operator-db78d5bd4-k7j5b"
Oct 13 05:36:43.205226 kubelet[3925]: I1013 05:36:43.205127 3925 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/7e701721-41ed-4d45-b56e-9e7672d371bd-var-lib-calico\") pod \"tigera-operator-db78d5bd4-k7j5b\" (UID: \"7e701721-41ed-4d45-b56e-9e7672d371bd\") " pod="tigera-operator/tigera-operator-db78d5bd4-k7j5b"
Oct 13 05:36:43.320182 containerd[2478]: time="2025-10-13T05:36:43.319866173Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-94wz5,Uid:d1cd38d9-5eb7-4beb-9e38-c4b8545bac8a,Namespace:kube-system,Attempt:0,}"
Oct 13 05:36:43.379556 containerd[2478]: time="2025-10-13T05:36:43.379436661Z" level=info msg="connecting to shim efb29c944247c3895699a6167261207c16e5c75de2d0e8b32e85e2204ea76f83" address="unix:///run/containerd/s/672d25b774d2a8921c5d3cb9d5e0805806d5b023fbf782d2f2f8ef86b529ac4b" namespace=k8s.io protocol=ttrpc version=3
Oct 13 05:36:43.407538 systemd[1]: Started cri-containerd-efb29c944247c3895699a6167261207c16e5c75de2d0e8b32e85e2204ea76f83.scope - libcontainer container efb29c944247c3895699a6167261207c16e5c75de2d0e8b32e85e2204ea76f83.
Oct 13 05:36:43.428609 containerd[2478]: time="2025-10-13T05:36:43.428575614Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-94wz5,Uid:d1cd38d9-5eb7-4beb-9e38-c4b8545bac8a,Namespace:kube-system,Attempt:0,} returns sandbox id \"efb29c944247c3895699a6167261207c16e5c75de2d0e8b32e85e2204ea76f83\""
Oct 13 05:36:43.436817 containerd[2478]: time="2025-10-13T05:36:43.436792277Z" level=info msg="CreateContainer within sandbox \"efb29c944247c3895699a6167261207c16e5c75de2d0e8b32e85e2204ea76f83\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Oct 13 05:36:43.462830 containerd[2478]: time="2025-10-13T05:36:43.462796497Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-db78d5bd4-k7j5b,Uid:7e701721-41ed-4d45-b56e-9e7672d371bd,Namespace:tigera-operator,Attempt:0,}"
Oct 13 05:36:43.463271 containerd[2478]: time="2025-10-13T05:36:43.463251055Z" level=info msg="Container 63688c17dc31990e1d4465bbe4ee7e6687a50c4a95bf5f58293fcfdce459fa05: CDI devices from CRI Config.CDIDevices: []"
Oct 13 05:36:43.488000 containerd[2478]: time="2025-10-13T05:36:43.487971247Z" level=info msg="CreateContainer within sandbox \"efb29c944247c3895699a6167261207c16e5c75de2d0e8b32e85e2204ea76f83\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"63688c17dc31990e1d4465bbe4ee7e6687a50c4a95bf5f58293fcfdce459fa05\""
Oct 13 05:36:43.488440 containerd[2478]: time="2025-10-13T05:36:43.488419261Z" level=info msg="StartContainer for \"63688c17dc31990e1d4465bbe4ee7e6687a50c4a95bf5f58293fcfdce459fa05\""
Oct 13 05:36:43.489417 containerd[2478]: time="2025-10-13T05:36:43.489390181Z" level=info msg="connecting to shim 63688c17dc31990e1d4465bbe4ee7e6687a50c4a95bf5f58293fcfdce459fa05" address="unix:///run/containerd/s/672d25b774d2a8921c5d3cb9d5e0805806d5b023fbf782d2f2f8ef86b529ac4b" protocol=ttrpc version=3
Oct 13 05:36:43.505500 systemd[1]: Started cri-containerd-63688c17dc31990e1d4465bbe4ee7e6687a50c4a95bf5f58293fcfdce459fa05.scope - libcontainer container 63688c17dc31990e1d4465bbe4ee7e6687a50c4a95bf5f58293fcfdce459fa05.
Oct 13 05:36:43.519546 containerd[2478]: time="2025-10-13T05:36:43.519488434Z" level=info msg="connecting to shim 08f0c9423eef85c083245a906ea3cf966e978793b01f2635f9f050019bffbe9c" address="unix:///run/containerd/s/7bbdc161f9813dca3f6efc01a898f39e0e565a74c511981aaa73338278fde851" namespace=k8s.io protocol=ttrpc version=3
Oct 13 05:36:43.551719 systemd[1]: Started cri-containerd-08f0c9423eef85c083245a906ea3cf966e978793b01f2635f9f050019bffbe9c.scope - libcontainer container 08f0c9423eef85c083245a906ea3cf966e978793b01f2635f9f050019bffbe9c.
Oct 13 05:36:43.552010 containerd[2478]: time="2025-10-13T05:36:43.551957689Z" level=info msg="StartContainer for \"63688c17dc31990e1d4465bbe4ee7e6687a50c4a95bf5f58293fcfdce459fa05\" returns successfully"
Oct 13 05:36:43.607132 containerd[2478]: time="2025-10-13T05:36:43.607055502Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-db78d5bd4-k7j5b,Uid:7e701721-41ed-4d45-b56e-9e7672d371bd,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"08f0c9423eef85c083245a906ea3cf966e978793b01f2635f9f050019bffbe9c\""
Oct 13 05:36:43.608978 containerd[2478]: time="2025-10-13T05:36:43.608955034Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\""
Oct 13 05:36:45.164847 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1422178641.mount: Deactivated successfully.
Oct 13 05:36:45.375590 kubelet[3925]: I1013 05:36:45.375382 3925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-94wz5" podStartSLOduration=3.37535412 podStartE2EDuration="3.37535412s" podCreationTimestamp="2025-10-13 05:36:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 05:36:44.125475599 +0000 UTC m=+8.164350617" watchObservedRunningTime="2025-10-13 05:36:45.37535412 +0000 UTC m=+9.414229142"
Oct 13 05:36:45.627100 containerd[2478]: time="2025-10-13T05:36:45.627060457Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 13 05:36:45.629411 containerd[2478]: time="2025-10-13T05:36:45.629386706Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=25062609"
Oct 13 05:36:45.633397 containerd[2478]: time="2025-10-13T05:36:45.632858393Z" level=info msg="ImageCreate event name:\"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 13 05:36:45.636263 containerd[2478]: time="2025-10-13T05:36:45.636220526Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 13 05:36:45.636931 containerd[2478]: time="2025-10-13T05:36:45.636645413Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"25058604\" in 2.027659032s"
Oct 13 05:36:45.636931 containerd[2478]: time="2025-10-13T05:36:45.636676135Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\""
Oct 13 05:36:45.645675 containerd[2478]: time="2025-10-13T05:36:45.645640948Z" level=info msg="CreateContainer within sandbox \"08f0c9423eef85c083245a906ea3cf966e978793b01f2635f9f050019bffbe9c\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Oct 13 05:36:45.663392 containerd[2478]: time="2025-10-13T05:36:45.663262255Z" level=info msg="Container d224c6e082ad9e33db8d9f75b2ea88dc128fdea9baf7e534b362a063b14868f3: CDI devices from CRI Config.CDIDevices: []"
Oct 13 05:36:45.678501 containerd[2478]: time="2025-10-13T05:36:45.678475925Z" level=info msg="CreateContainer within sandbox \"08f0c9423eef85c083245a906ea3cf966e978793b01f2635f9f050019bffbe9c\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"d224c6e082ad9e33db8d9f75b2ea88dc128fdea9baf7e534b362a063b14868f3\""
Oct 13 05:36:45.679393 containerd[2478]: time="2025-10-13T05:36:45.678811750Z" level=info msg="StartContainer for \"d224c6e082ad9e33db8d9f75b2ea88dc128fdea9baf7e534b362a063b14868f3\""
Oct 13 05:36:45.679814 containerd[2478]: time="2025-10-13T05:36:45.679757972Z" level=info msg="connecting to shim d224c6e082ad9e33db8d9f75b2ea88dc128fdea9baf7e534b362a063b14868f3" address="unix:///run/containerd/s/7bbdc161f9813dca3f6efc01a898f39e0e565a74c511981aaa73338278fde851" protocol=ttrpc version=3
Oct 13 05:36:45.700512 systemd[1]: Started cri-containerd-d224c6e082ad9e33db8d9f75b2ea88dc128fdea9baf7e534b362a063b14868f3.scope - libcontainer container d224c6e082ad9e33db8d9f75b2ea88dc128fdea9baf7e534b362a063b14868f3.
Oct 13 05:36:45.727204 containerd[2478]: time="2025-10-13T05:36:45.727180694Z" level=info msg="StartContainer for \"d224c6e082ad9e33db8d9f75b2ea88dc128fdea9baf7e534b362a063b14868f3\" returns successfully"
Oct 13 05:36:46.367061 kubelet[3925]: I1013 05:36:46.366571 3925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-db78d5bd4-k7j5b" podStartSLOduration=1.337498636 podStartE2EDuration="3.366556245s" podCreationTimestamp="2025-10-13 05:36:43 +0000 UTC" firstStartedPulling="2025-10-13 05:36:43.608452284 +0000 UTC m=+7.647327285" lastFinishedPulling="2025-10-13 05:36:45.637509892 +0000 UTC m=+9.676384894" observedRunningTime="2025-10-13 05:36:46.136884147 +0000 UTC m=+10.175759166" watchObservedRunningTime="2025-10-13 05:36:46.366556245 +0000 UTC m=+10.405431291"
Oct 13 05:36:51.459979 sudo[2947]: pam_unix(sudo:session): session closed for user root
Oct 13 05:36:51.565907 sshd[2934]: Connection closed by 10.200.16.10 port 59722
Oct 13 05:36:51.566610 sshd-session[2928]: pam_unix(sshd:session): session closed for user core
Oct 13 05:36:51.574067 systemd-logind[2449]: Session 9 logged out. Waiting for processes to exit.
Oct 13 05:36:51.575231 systemd[1]: sshd@6-10.200.8.45:22-10.200.16.10:59722.service: Deactivated successfully.
Oct 13 05:36:51.579147 systemd[1]: session-9.scope: Deactivated successfully.
Oct 13 05:36:51.579685 systemd[1]: session-9.scope: Consumed 5.360s CPU time, 230.6M memory peak.
Oct 13 05:36:51.583413 systemd-logind[2449]: Removed session 9.
Oct 13 05:36:55.199197 systemd[1]: Created slice kubepods-besteffort-pod7e1b6ca6_e8aa_4f31_9c15_fa543b2709dd.slice - libcontainer container kubepods-besteffort-pod7e1b6ca6_e8aa_4f31_9c15_fa543b2709dd.slice.
Oct 13 05:36:55.286383 kubelet[3925]: I1013 05:36:55.286242 3925 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7e1b6ca6-e8aa-4f31-9c15-fa543b2709dd-tigera-ca-bundle\") pod \"calico-typha-6c8c656cb5-knggj\" (UID: \"7e1b6ca6-e8aa-4f31-9c15-fa543b2709dd\") " pod="calico-system/calico-typha-6c8c656cb5-knggj"
Oct 13 05:36:55.286383 kubelet[3925]: I1013 05:36:55.286297 3925 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/7e1b6ca6-e8aa-4f31-9c15-fa543b2709dd-typha-certs\") pod \"calico-typha-6c8c656cb5-knggj\" (UID: \"7e1b6ca6-e8aa-4f31-9c15-fa543b2709dd\") " pod="calico-system/calico-typha-6c8c656cb5-knggj"
Oct 13 05:36:55.286383 kubelet[3925]: I1013 05:36:55.286339 3925 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-khqmc\" (UniqueName: \"kubernetes.io/projected/7e1b6ca6-e8aa-4f31-9c15-fa543b2709dd-kube-api-access-khqmc\") pod \"calico-typha-6c8c656cb5-knggj\" (UID: \"7e1b6ca6-e8aa-4f31-9c15-fa543b2709dd\") " pod="calico-system/calico-typha-6c8c656cb5-knggj"
Oct 13 05:36:55.519218 containerd[2478]: time="2025-10-13T05:36:55.518773442Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6c8c656cb5-knggj,Uid:7e1b6ca6-e8aa-4f31-9c15-fa543b2709dd,Namespace:calico-system,Attempt:0,}"
Oct 13 05:36:55.576280 containerd[2478]: time="2025-10-13T05:36:55.576246002Z" level=info msg="connecting to shim b86737ca3e598c4d1750167fb22cff2532c0a8041b6411d4f88393b5429588f2" address="unix:///run/containerd/s/e079d450ca89686c6df1b35367d9e6fb01c4fbd9ebf3df9c1e293a95150764a6" namespace=k8s.io protocol=ttrpc version=3
Oct 13 05:36:55.576902 systemd[1]: Created slice kubepods-besteffort-pod45aef7a2_e73d_45f7_b34b_3d5ddbde1851.slice - libcontainer container kubepods-besteffort-pod45aef7a2_e73d_45f7_b34b_3d5ddbde1851.slice.
Oct 13 05:36:55.592171 kubelet[3925]: I1013 05:36:55.591689 3925 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/45aef7a2-e73d-45f7-b34b-3d5ddbde1851-xtables-lock\") pod \"calico-node-q7594\" (UID: \"45aef7a2-e73d-45f7-b34b-3d5ddbde1851\") " pod="calico-system/calico-node-q7594"
Oct 13 05:36:55.592171 kubelet[3925]: I1013 05:36:55.591769 3925 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/45aef7a2-e73d-45f7-b34b-3d5ddbde1851-var-lib-calico\") pod \"calico-node-q7594\" (UID: \"45aef7a2-e73d-45f7-b34b-3d5ddbde1851\") " pod="calico-system/calico-node-q7594"
Oct 13 05:36:55.592171 kubelet[3925]: I1013 05:36:55.591789 3925 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4bvb\" (UniqueName: \"kubernetes.io/projected/45aef7a2-e73d-45f7-b34b-3d5ddbde1851-kube-api-access-z4bvb\") pod \"calico-node-q7594\" (UID: \"45aef7a2-e73d-45f7-b34b-3d5ddbde1851\") " pod="calico-system/calico-node-q7594"
Oct 13 05:36:55.592171 kubelet[3925]: I1013 05:36:55.591826 3925 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/45aef7a2-e73d-45f7-b34b-3d5ddbde1851-cni-bin-dir\") pod \"calico-node-q7594\" (UID: \"45aef7a2-e73d-45f7-b34b-3d5ddbde1851\") " pod="calico-system/calico-node-q7594"
Oct 13 05:36:55.592171 kubelet[3925]: I1013 05:36:55.591847 3925 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/45aef7a2-e73d-45f7-b34b-3d5ddbde1851-lib-modules\") pod \"calico-node-q7594\" (UID: \"45aef7a2-e73d-45f7-b34b-3d5ddbde1851\") " pod="calico-system/calico-node-q7594"
Oct 13 05:36:55.592364 kubelet[3925]: I1013 05:36:55.591867 3925 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/45aef7a2-e73d-45f7-b34b-3d5ddbde1851-cni-log-dir\") pod \"calico-node-q7594\" (UID: \"45aef7a2-e73d-45f7-b34b-3d5ddbde1851\") " pod="calico-system/calico-node-q7594"
Oct 13 05:36:55.592364 kubelet[3925]: I1013 05:36:55.591931 3925 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/45aef7a2-e73d-45f7-b34b-3d5ddbde1851-cni-net-dir\") pod \"calico-node-q7594\" (UID: \"45aef7a2-e73d-45f7-b34b-3d5ddbde1851\") " pod="calico-system/calico-node-q7594"
Oct 13 05:36:55.592364 kubelet[3925]: I1013 05:36:55.591954 3925 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/45aef7a2-e73d-45f7-b34b-3d5ddbde1851-node-certs\") pod \"calico-node-q7594\" (UID: \"45aef7a2-e73d-45f7-b34b-3d5ddbde1851\") " pod="calico-system/calico-node-q7594"
Oct 13 05:36:55.592364 kubelet[3925]: I1013 05:36:55.591969 3925 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/45aef7a2-e73d-45f7-b34b-3d5ddbde1851-policysync\") pod \"calico-node-q7594\" (UID: \"45aef7a2-e73d-45f7-b34b-3d5ddbde1851\") " pod="calico-system/calico-node-q7594"
Oct 13 05:36:55.592364 kubelet[3925]: I1013 05:36:55.591984 3925 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/45aef7a2-e73d-45f7-b34b-3d5ddbde1851-tigera-ca-bundle\") pod \"calico-node-q7594\" (UID: \"45aef7a2-e73d-45f7-b34b-3d5ddbde1851\") " pod="calico-system/calico-node-q7594"
Oct 13 05:36:55.592502 kubelet[3925]: I1013 05:36:55.592002 3925 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/45aef7a2-e73d-45f7-b34b-3d5ddbde1851-flexvol-driver-host\") pod \"calico-node-q7594\" (UID: \"45aef7a2-e73d-45f7-b34b-3d5ddbde1851\") " pod="calico-system/calico-node-q7594"
Oct 13 05:36:55.592502 kubelet[3925]: I1013 05:36:55.592034 3925 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/45aef7a2-e73d-45f7-b34b-3d5ddbde1851-var-run-calico\") pod \"calico-node-q7594\" (UID: \"45aef7a2-e73d-45f7-b34b-3d5ddbde1851\") " pod="calico-system/calico-node-q7594"
Oct 13 05:36:55.607604 systemd[1]: Started cri-containerd-b86737ca3e598c4d1750167fb22cff2532c0a8041b6411d4f88393b5429588f2.scope - libcontainer container b86737ca3e598c4d1750167fb22cff2532c0a8041b6411d4f88393b5429588f2.
Oct 13 05:36:55.655816 containerd[2478]: time="2025-10-13T05:36:55.655761166Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6c8c656cb5-knggj,Uid:7e1b6ca6-e8aa-4f31-9c15-fa543b2709dd,Namespace:calico-system,Attempt:0,} returns sandbox id \"b86737ca3e598c4d1750167fb22cff2532c0a8041b6411d4f88393b5429588f2\""
Oct 13 05:36:55.657640 containerd[2478]: time="2025-10-13T05:36:55.657470091Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\""
Oct 13 05:36:55.697497 kubelet[3925]: E1013 05:36:55.697476 3925 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 13 05:36:55.697598 kubelet[3925]: W1013 05:36:55.697588 3925 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 13 05:36:55.697653 kubelet[3925]: E1013 05:36:55.697645 3925 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 13 05:36:55.698210 kubelet[3925]: E1013 05:36:55.697847 3925 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 13 05:36:55.698210 kubelet[3925]: W1013 05:36:55.697857 3925 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 13 05:36:55.698210 kubelet[3925]: E1013 05:36:55.697867 3925 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 13 05:36:55.698210 kubelet[3925]: E1013 05:36:55.697992 3925 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 13 05:36:55.698210 kubelet[3925]: W1013 05:36:55.697998 3925 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 13 05:36:55.698210 kubelet[3925]: E1013 05:36:55.698016 3925 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 13 05:36:55.703714 kubelet[3925]: E1013 05:36:55.703693 3925 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 13 05:36:55.703714 kubelet[3925]: W1013 05:36:55.703713 3925 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 13 05:36:55.703857 kubelet[3925]: E1013 05:36:55.703725 3925 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 13 05:36:55.703924 kubelet[3925]: E1013 05:36:55.703916 3925 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 13 05:36:55.703947 kubelet[3925]: W1013 05:36:55.703925 3925 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 13 05:36:55.703947 kubelet[3925]: E1013 05:36:55.703933 3925 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 13 05:36:55.715962 kubelet[3925]: E1013 05:36:55.715915 3925 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 13 05:36:55.715962 kubelet[3925]: W1013 05:36:55.715927 3925 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 13 05:36:55.715962 kubelet[3925]: E1013 05:36:55.715938 3925 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 13 05:36:55.820402 kubelet[3925]: E1013 05:36:55.819360 3925 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kqvx9" podUID="dd86b0ea-d7f3-411b-ad14-d1033bcd0923"
Oct 13 05:36:55.877178 kubelet[3925]: E1013 05:36:55.877144 3925 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 13 05:36:55.877178 kubelet[3925]: W1013 05:36:55.877173 3925 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 13 05:36:55.877351 kubelet[3925]: E1013 05:36:55.877187 3925 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 13 05:36:55.877351 kubelet[3925]: E1013 05:36:55.877307 3925 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 13 05:36:55.877351 kubelet[3925]: W1013 05:36:55.877314 3925 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 13 05:36:55.877351 kubelet[3925]: E1013 05:36:55.877322 3925 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 13 05:36:55.877500 kubelet[3925]: E1013 05:36:55.877443 3925 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 13 05:36:55.877500 kubelet[3925]: W1013 05:36:55.877448 3925 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 13 05:36:55.877500 kubelet[3925]: E1013 05:36:55.877456 3925 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 13 05:36:55.877607 kubelet[3925]: E1013 05:36:55.877591 3925 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 13 05:36:55.877607 kubelet[3925]: W1013 05:36:55.877601 3925 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 13 05:36:55.877607 kubelet[3925]: E1013 05:36:55.877608 3925 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 13 05:36:55.877718 kubelet[3925]: E1013 05:36:55.877709 3925 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 13 05:36:55.877718 kubelet[3925]: W1013 05:36:55.877716 3925 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 13 05:36:55.877793 kubelet[3925]: E1013 05:36:55.877723 3925 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 13 05:36:55.877826 kubelet[3925]: E1013 05:36:55.877803 3925 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 13 05:36:55.877826 kubelet[3925]: W1013 05:36:55.877808 3925 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 13 05:36:55.877826 kubelet[3925]: E1013 05:36:55.877813 3925 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 13 05:36:55.877918 kubelet[3925]: E1013 05:36:55.877894 3925 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 13 05:36:55.877918 kubelet[3925]: W1013 05:36:55.877899 3925 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 13 05:36:55.877918 kubelet[3925]: E1013 05:36:55.877905 3925 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 13 05:36:55.878018 kubelet[3925]: E1013 05:36:55.877988 3925 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 13 05:36:55.878018 kubelet[3925]: W1013 05:36:55.877993 3925 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 13 05:36:55.878018 kubelet[3925]: E1013 05:36:55.877999 3925 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 13 05:36:55.878104 kubelet[3925]: E1013 05:36:55.878082 3925 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 13 05:36:55.878104 kubelet[3925]: W1013 05:36:55.878087 3925 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 13 05:36:55.878104 kubelet[3925]: E1013 05:36:55.878092 3925 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 13 05:36:55.878198 kubelet[3925]: E1013 05:36:55.878173 3925 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 13 05:36:55.878198 kubelet[3925]: W1013 05:36:55.878177 3925 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 13 05:36:55.878198 kubelet[3925]: E1013 05:36:55.878183 3925 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 13 05:36:55.878285 kubelet[3925]: E1013 05:36:55.878264 3925 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 13 05:36:55.878285 kubelet[3925]: W1013 05:36:55.878269 3925 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 13 05:36:55.878285 kubelet[3925]: E1013 05:36:55.878274 3925 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 13 05:36:55.878398 kubelet[3925]: E1013 05:36:55.878355 3925 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 13 05:36:55.878398 kubelet[3925]: W1013 05:36:55.878360 3925 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 13 05:36:55.878398 kubelet[3925]: E1013 05:36:55.878376 3925 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 13 05:36:55.878491 kubelet[3925]: E1013 05:36:55.878466 3925 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 13 05:36:55.878491 kubelet[3925]: W1013 05:36:55.878470 3925 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 13 05:36:55.878491 kubelet[3925]: E1013 05:36:55.878476 3925 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 13 05:36:55.878586 kubelet[3925]: E1013 05:36:55.878555 3925 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 13 05:36:55.878586 kubelet[3925]: W1013 05:36:55.878560 3925 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 13 05:36:55.878586 kubelet[3925]: E1013 05:36:55.878565 3925 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 13 05:36:55.878682 kubelet[3925]: E1013 05:36:55.878647 3925 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 13 05:36:55.878682 kubelet[3925]: W1013 05:36:55.878652 3925 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 13 05:36:55.878682 kubelet[3925]: E1013 05:36:55.878658 3925 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 13 05:36:55.878786 kubelet[3925]: E1013 05:36:55.878752 3925 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 13 05:36:55.878786 kubelet[3925]: W1013 05:36:55.878757 3925 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 13 05:36:55.878786 kubelet[3925]: E1013 05:36:55.878763 3925 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 13 05:36:55.878878 kubelet[3925]: E1013 05:36:55.878855 3925 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 13 05:36:55.878878 kubelet[3925]: W1013 05:36:55.878859 3925 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 13 05:36:55.878878 kubelet[3925]: E1013 05:36:55.878866 3925 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 13 05:36:55.878969 kubelet[3925]: E1013 05:36:55.878948 3925 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 13 05:36:55.878969 kubelet[3925]: W1013 05:36:55.878952 3925 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 13 05:36:55.878969 kubelet[3925]: E1013 05:36:55.878958 3925 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 13 05:36:55.879058 kubelet[3925]: E1013 05:36:55.879039 3925 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 13 05:36:55.879058 kubelet[3925]: W1013 05:36:55.879043 3925 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 13 05:36:55.879058 kubelet[3925]: E1013 05:36:55.879048 3925 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 13 05:36:55.879155 kubelet[3925]: E1013 05:36:55.879147 3925 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 13 05:36:55.879155 kubelet[3925]: W1013 05:36:55.879152 3925 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 13 05:36:55.879155 kubelet[3925]: E1013 05:36:55.879158 3925 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 13 05:36:55.890249 containerd[2478]: time="2025-10-13T05:36:55.890221226Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-q7594,Uid:45aef7a2-e73d-45f7-b34b-3d5ddbde1851,Namespace:calico-system,Attempt:0,}"
Oct 13 05:36:55.893732 kubelet[3925]: E1013 05:36:55.893716 3925 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 13 05:36:55.893814 kubelet[3925]: W1013 05:36:55.893729 3925 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 13 05:36:55.893814 kubelet[3925]: E1013 05:36:55.893746 3925 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 13 05:36:55.893814 kubelet[3925]: I1013 05:36:55.893768 3925 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6r6xf\" (UniqueName: \"kubernetes.io/projected/dd86b0ea-d7f3-411b-ad14-d1033bcd0923-kube-api-access-6r6xf\") pod \"csi-node-driver-kqvx9\" (UID: \"dd86b0ea-d7f3-411b-ad14-d1033bcd0923\") " pod="calico-system/csi-node-driver-kqvx9"
Oct 13 05:36:55.893928 kubelet[3925]: E1013 05:36:55.893896 3925 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 13 05:36:55.893928 kubelet[3925]: W1013 05:36:55.893903 3925 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 13 05:36:55.894070 kubelet[3925]: E1013 05:36:55.893934 3925 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 13 05:36:55.894070 kubelet[3925]: I1013 05:36:55.893953 3925 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/dd86b0ea-d7f3-411b-ad14-d1033bcd0923-registration-dir\") pod \"csi-node-driver-kqvx9\" (UID: \"dd86b0ea-d7f3-411b-ad14-d1033bcd0923\") " pod="calico-system/csi-node-driver-kqvx9"
Oct 13 05:36:55.894197 kubelet[3925]: E1013 05:36:55.894187 3925 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 13 05:36:55.894197 kubelet[3925]: W1013 05:36:55.894196 3925 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 13 05:36:55.894197 kubelet[3925]: E1013 05:36:55.894204 3925 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 13 05:36:55.894327 kubelet[3925]: E1013 05:36:55.894306 3925 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 13 05:36:55.894327 kubelet[3925]: W1013 05:36:55.894312 3925 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 13 05:36:55.894327 kubelet[3925]: E1013 05:36:55.894319 3925 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Oct 13 05:36:55.894476 kubelet[3925]: E1013 05:36:55.894445 3925 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:36:55.894476 kubelet[3925]: W1013 05:36:55.894452 3925 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:36:55.894476 kubelet[3925]: E1013 05:36:55.894459 3925 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:36:55.894571 kubelet[3925]: I1013 05:36:55.894480 3925 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/dd86b0ea-d7f3-411b-ad14-d1033bcd0923-socket-dir\") pod \"csi-node-driver-kqvx9\" (UID: \"dd86b0ea-d7f3-411b-ad14-d1033bcd0923\") " pod="calico-system/csi-node-driver-kqvx9" Oct 13 05:36:55.894641 kubelet[3925]: E1013 05:36:55.894622 3925 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:36:55.894641 kubelet[3925]: W1013 05:36:55.894634 3925 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:36:55.894694 kubelet[3925]: E1013 05:36:55.894644 3925 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:36:55.894694 kubelet[3925]: I1013 05:36:55.894663 3925 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/dd86b0ea-d7f3-411b-ad14-d1033bcd0923-kubelet-dir\") pod \"csi-node-driver-kqvx9\" (UID: \"dd86b0ea-d7f3-411b-ad14-d1033bcd0923\") " pod="calico-system/csi-node-driver-kqvx9" Oct 13 05:36:55.894771 kubelet[3925]: E1013 05:36:55.894760 3925 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:36:55.894771 kubelet[3925]: W1013 05:36:55.894767 3925 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:36:55.894824 kubelet[3925]: E1013 05:36:55.894774 3925 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:36:55.894824 kubelet[3925]: I1013 05:36:55.894787 3925 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/dd86b0ea-d7f3-411b-ad14-d1033bcd0923-varrun\") pod \"csi-node-driver-kqvx9\" (UID: \"dd86b0ea-d7f3-411b-ad14-d1033bcd0923\") " pod="calico-system/csi-node-driver-kqvx9" Oct 13 05:36:55.894936 kubelet[3925]: E1013 05:36:55.894925 3925 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:36:55.894936 kubelet[3925]: W1013 05:36:55.894934 3925 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:36:55.894986 kubelet[3925]: E1013 05:36:55.894940 3925 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:36:55.895069 kubelet[3925]: E1013 05:36:55.895051 3925 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:36:55.895069 kubelet[3925]: W1013 05:36:55.895067 3925 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:36:55.895119 kubelet[3925]: E1013 05:36:55.895073 3925 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:36:55.895211 kubelet[3925]: E1013 05:36:55.895201 3925 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:36:55.895211 kubelet[3925]: W1013 05:36:55.895209 3925 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:36:55.895263 kubelet[3925]: E1013 05:36:55.895215 3925 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:36:55.895363 kubelet[3925]: E1013 05:36:55.895342 3925 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:36:55.895363 kubelet[3925]: W1013 05:36:55.895361 3925 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:36:55.895363 kubelet[3925]: E1013 05:36:55.895378 3925 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:36:55.895508 kubelet[3925]: E1013 05:36:55.895498 3925 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:36:55.895508 kubelet[3925]: W1013 05:36:55.895505 3925 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:36:55.895564 kubelet[3925]: E1013 05:36:55.895511 3925 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:36:55.895628 kubelet[3925]: E1013 05:36:55.895618 3925 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:36:55.895628 kubelet[3925]: W1013 05:36:55.895625 3925 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:36:55.895679 kubelet[3925]: E1013 05:36:55.895631 3925 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:36:55.895756 kubelet[3925]: E1013 05:36:55.895746 3925 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:36:55.895756 kubelet[3925]: W1013 05:36:55.895753 3925 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:36:55.895859 kubelet[3925]: E1013 05:36:55.895759 3925 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:36:55.895859 kubelet[3925]: E1013 05:36:55.895846 3925 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:36:55.895859 kubelet[3925]: W1013 05:36:55.895851 3925 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:36:55.895859 kubelet[3925]: E1013 05:36:55.895857 3925 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:36:55.967113 containerd[2478]: time="2025-10-13T05:36:55.966274414Z" level=info msg="connecting to shim a907f184a69054f5b81b06c3e400dbfa72a6e25f49924f226ab0372c8bbb5b19" address="unix:///run/containerd/s/690b893dc91fbb23bfc660c446a5746a44a2795436320e7d9d2ecfeb1ed267b1" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:36:55.996324 kubelet[3925]: E1013 05:36:55.996254 3925 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:36:55.996324 kubelet[3925]: W1013 05:36:55.996293 3925 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:36:55.996324 kubelet[3925]: E1013 05:36:55.996307 3925 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:36:55.996811 kubelet[3925]: E1013 05:36:55.996801 3925 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:36:55.996894 kubelet[3925]: W1013 05:36:55.996854 3925 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:36:55.996894 kubelet[3925]: E1013 05:36:55.996867 3925 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:36:55.997136 kubelet[3925]: E1013 05:36:55.997129 3925 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:36:55.997182 kubelet[3925]: W1013 05:36:55.997166 3925 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:36:55.997182 kubelet[3925]: E1013 05:36:55.997173 3925 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:36:55.997460 kubelet[3925]: E1013 05:36:55.997417 3925 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:36:55.997460 kubelet[3925]: W1013 05:36:55.997424 3925 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:36:55.997460 kubelet[3925]: E1013 05:36:55.997430 3925 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:36:55.997646 kubelet[3925]: E1013 05:36:55.997630 3925 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:36:55.997646 kubelet[3925]: W1013 05:36:55.997635 3925 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:36:55.997646 kubelet[3925]: E1013 05:36:55.997641 3925 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:36:55.997840 kubelet[3925]: E1013 05:36:55.997824 3925 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:36:55.997840 kubelet[3925]: W1013 05:36:55.997829 3925 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:36:55.997840 kubelet[3925]: E1013 05:36:55.997834 3925 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:36:55.998025 kubelet[3925]: E1013 05:36:55.998020 3925 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:36:55.998065 kubelet[3925]: W1013 05:36:55.998054 3925 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:36:55.998065 kubelet[3925]: E1013 05:36:55.998060 3925 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:36:55.998232 kubelet[3925]: E1013 05:36:55.998217 3925 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:36:55.998232 kubelet[3925]: W1013 05:36:55.998221 3925 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:36:55.998232 kubelet[3925]: E1013 05:36:55.998226 3925 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:36:55.998494 kubelet[3925]: E1013 05:36:55.998349 3925 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:36:55.998494 kubelet[3925]: W1013 05:36:55.998352 3925 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:36:55.998494 kubelet[3925]: E1013 05:36:55.998357 3925 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:36:55.998551 systemd[1]: Started cri-containerd-a907f184a69054f5b81b06c3e400dbfa72a6e25f49924f226ab0372c8bbb5b19.scope - libcontainer container a907f184a69054f5b81b06c3e400dbfa72a6e25f49924f226ab0372c8bbb5b19. Oct 13 05:36:55.998730 kubelet[3925]: E1013 05:36:55.998722 3925 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:36:55.998821 kubelet[3925]: W1013 05:36:55.998778 3925 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:36:55.998821 kubelet[3925]: E1013 05:36:55.998787 3925 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:36:55.999027 kubelet[3925]: E1013 05:36:55.999011 3925 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:36:55.999088 kubelet[3925]: W1013 05:36:55.999019 3925 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:36:55.999088 kubelet[3925]: E1013 05:36:55.999076 3925 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:36:55.999308 kubelet[3925]: E1013 05:36:55.999281 3925 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:36:55.999308 kubelet[3925]: W1013 05:36:55.999289 3925 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:36:55.999308 kubelet[3925]: E1013 05:36:55.999298 3925 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:36:55.999715 kubelet[3925]: E1013 05:36:55.999688 3925 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:36:55.999715 kubelet[3925]: W1013 05:36:55.999697 3925 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:36:55.999715 kubelet[3925]: E1013 05:36:55.999706 3925 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:36:56.000438 kubelet[3925]: E1013 05:36:56.000156 3925 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:36:56.000438 kubelet[3925]: W1013 05:36:56.000229 3925 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:36:56.000438 kubelet[3925]: E1013 05:36:56.000242 3925 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:36:56.000771 kubelet[3925]: E1013 05:36:56.000750 3925 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:36:56.000957 kubelet[3925]: W1013 05:36:56.000848 3925 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:36:56.000957 kubelet[3925]: E1013 05:36:56.000864 3925 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:36:56.001399 kubelet[3925]: E1013 05:36:56.001265 3925 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:36:56.001399 kubelet[3925]: W1013 05:36:56.001359 3925 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:36:56.001399 kubelet[3925]: E1013 05:36:56.001385 3925 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:36:56.001906 kubelet[3925]: E1013 05:36:56.001812 3925 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:36:56.001906 kubelet[3925]: W1013 05:36:56.001874 3925 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:36:56.001906 kubelet[3925]: E1013 05:36:56.001886 3925 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:36:56.002326 kubelet[3925]: E1013 05:36:56.002279 3925 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:36:56.002326 kubelet[3925]: W1013 05:36:56.002289 3925 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:36:56.002326 kubelet[3925]: E1013 05:36:56.002301 3925 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:36:56.002754 kubelet[3925]: E1013 05:36:56.002725 3925 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:36:56.002754 kubelet[3925]: W1013 05:36:56.002734 3925 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:36:56.002754 kubelet[3925]: E1013 05:36:56.002744 3925 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:36:56.003151 kubelet[3925]: E1013 05:36:56.003124 3925 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:36:56.003151 kubelet[3925]: W1013 05:36:56.003133 3925 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:36:56.003151 kubelet[3925]: E1013 05:36:56.003140 3925 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:36:56.003467 kubelet[3925]: E1013 05:36:56.003451 3925 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:36:56.003557 kubelet[3925]: W1013 05:36:56.003539 3925 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:36:56.003557 kubelet[3925]: E1013 05:36:56.003548 3925 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:36:56.004187 kubelet[3925]: E1013 05:36:56.004142 3925 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:36:56.004187 kubelet[3925]: W1013 05:36:56.004153 3925 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:36:56.004187 kubelet[3925]: E1013 05:36:56.004164 3925 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:36:56.006404 kubelet[3925]: E1013 05:36:56.006065 3925 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:36:56.006404 kubelet[3925]: W1013 05:36:56.006079 3925 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:36:56.006404 kubelet[3925]: E1013 05:36:56.006088 3925 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:36:56.006404 kubelet[3925]: E1013 05:36:56.006343 3925 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:36:56.006404 kubelet[3925]: W1013 05:36:56.006349 3925 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:36:56.006404 kubelet[3925]: E1013 05:36:56.006357 3925 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:36:56.006739 kubelet[3925]: E1013 05:36:56.006730 3925 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:36:56.006786 kubelet[3925]: W1013 05:36:56.006779 3925 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:36:56.007107 kubelet[3925]: E1013 05:36:56.006830 3925 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:36:56.011489 kubelet[3925]: E1013 05:36:56.011476 3925 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:36:56.011564 kubelet[3925]: W1013 05:36:56.011556 3925 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:36:56.011620 kubelet[3925]: E1013 05:36:56.011611 3925 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:36:56.029903 containerd[2478]: time="2025-10-13T05:36:56.029871680Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-q7594,Uid:45aef7a2-e73d-45f7-b34b-3d5ddbde1851,Namespace:calico-system,Attempt:0,} returns sandbox id \"a907f184a69054f5b81b06c3e400dbfa72a6e25f49924f226ab0372c8bbb5b19\"" Oct 13 05:36:56.942324 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4235310356.mount: Deactivated successfully. 
Oct 13 05:36:58.029395 containerd[2478]: time="2025-10-13T05:36:58.029346635Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:36:58.033576 containerd[2478]: time="2025-10-13T05:36:58.033550740Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=35237389" Oct 13 05:36:58.038650 containerd[2478]: time="2025-10-13T05:36:58.038610132Z" level=info msg="ImageCreate event name:\"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:36:58.042553 containerd[2478]: time="2025-10-13T05:36:58.042509442Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:36:58.043152 containerd[2478]: time="2025-10-13T05:36:58.042864545Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"35237243\" in 2.385366178s" Oct 13 05:36:58.043152 containerd[2478]: time="2025-10-13T05:36:58.042892408Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\"" Oct 13 05:36:58.043768 containerd[2478]: time="2025-10-13T05:36:58.043747216Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\"" Oct 13 05:36:58.064462 containerd[2478]: time="2025-10-13T05:36:58.064425769Z" level=info msg="CreateContainer within sandbox \"b86737ca3e598c4d1750167fb22cff2532c0a8041b6411d4f88393b5429588f2\" for 
container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Oct 13 05:36:58.078752 kubelet[3925]: E1013 05:36:58.077748 3925 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kqvx9" podUID="dd86b0ea-d7f3-411b-ad14-d1033bcd0923" Oct 13 05:36:58.083487 containerd[2478]: time="2025-10-13T05:36:58.083460386Z" level=info msg="Container ae2afd78d01b73843ee15ce5d0abcc7322ae66829dd75e43ef5cd8e386f00712: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:36:58.088076 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3535298429.mount: Deactivated successfully. Oct 13 05:36:58.103620 containerd[2478]: time="2025-10-13T05:36:58.103572528Z" level=info msg="CreateContainer within sandbox \"b86737ca3e598c4d1750167fb22cff2532c0a8041b6411d4f88393b5429588f2\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"ae2afd78d01b73843ee15ce5d0abcc7322ae66829dd75e43ef5cd8e386f00712\"" Oct 13 05:36:58.108402 containerd[2478]: time="2025-10-13T05:36:58.107606040Z" level=info msg="StartContainer for \"ae2afd78d01b73843ee15ce5d0abcc7322ae66829dd75e43ef5cd8e386f00712\"" Oct 13 05:36:58.110393 containerd[2478]: time="2025-10-13T05:36:58.110351076Z" level=info msg="connecting to shim ae2afd78d01b73843ee15ce5d0abcc7322ae66829dd75e43ef5cd8e386f00712" address="unix:///run/containerd/s/e079d450ca89686c6df1b35367d9e6fb01c4fbd9ebf3df9c1e293a95150764a6" protocol=ttrpc version=3 Oct 13 05:36:58.134665 systemd[1]: Started cri-containerd-ae2afd78d01b73843ee15ce5d0abcc7322ae66829dd75e43ef5cd8e386f00712.scope - libcontainer container ae2afd78d01b73843ee15ce5d0abcc7322ae66829dd75e43ef5cd8e386f00712. 
Oct 13 05:36:58.181986 containerd[2478]: time="2025-10-13T05:36:58.181923726Z" level=info msg="StartContainer for \"ae2afd78d01b73843ee15ce5d0abcc7322ae66829dd75e43ef5cd8e386f00712\" returns successfully" Oct 13 05:36:59.197975 kubelet[3925]: E1013 05:36:59.197855 3925 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:36:59.197975 kubelet[3925]: W1013 05:36:59.197875 3925 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:36:59.197975 kubelet[3925]: E1013 05:36:59.197892 3925 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:36:59.198620 kubelet[3925]: E1013 05:36:59.198500 3925 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:36:59.198620 kubelet[3925]: W1013 05:36:59.198514 3925 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:36:59.198620 kubelet[3925]: E1013 05:36:59.198527 3925 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:36:59.199282 kubelet[3925]: E1013 05:36:59.199258 3925 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:36:59.199282 kubelet[3925]: W1013 05:36:59.199278 3925 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:36:59.199412 kubelet[3925]: E1013 05:36:59.199300 3925 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:36:59.199687 kubelet[3925]: E1013 05:36:59.199676 3925 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:36:59.199817 kubelet[3925]: W1013 05:36:59.199806 3925 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:36:59.199849 kubelet[3925]: E1013 05:36:59.199823 3925 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:36:59.200555 kubelet[3925]: E1013 05:36:59.200538 3925 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:36:59.200555 kubelet[3925]: W1013 05:36:59.200553 3925 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:36:59.200646 kubelet[3925]: E1013 05:36:59.200566 3925 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:36:59.201113 kubelet[3925]: E1013 05:36:59.200962 3925 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:36:59.201113 kubelet[3925]: W1013 05:36:59.201018 3925 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:36:59.201113 kubelet[3925]: E1013 05:36:59.201030 3925 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:36:59.202263 kubelet[3925]: E1013 05:36:59.202243 3925 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:36:59.202263 kubelet[3925]: W1013 05:36:59.202259 3925 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:36:59.202474 kubelet[3925]: E1013 05:36:59.202272 3925 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:36:59.203112 kubelet[3925]: E1013 05:36:59.202830 3925 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:36:59.203112 kubelet[3925]: W1013 05:36:59.202898 3925 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:36:59.203643 kubelet[3925]: E1013 05:36:59.202912 3925 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:36:59.203907 kubelet[3925]: E1013 05:36:59.203806 3925 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:36:59.203907 kubelet[3925]: W1013 05:36:59.203819 3925 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:36:59.203907 kubelet[3925]: E1013 05:36:59.203832 3925 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:36:59.204471 kubelet[3925]: E1013 05:36:59.204320 3925 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:36:59.204471 kubelet[3925]: W1013 05:36:59.204332 3925 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:36:59.204471 kubelet[3925]: E1013 05:36:59.204344 3925 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:36:59.205247 kubelet[3925]: E1013 05:36:59.205002 3925 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:36:59.205247 kubelet[3925]: W1013 05:36:59.205014 3925 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:36:59.205247 kubelet[3925]: E1013 05:36:59.205026 3925 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:36:59.205708 kubelet[3925]: E1013 05:36:59.205691 3925 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:36:59.205874 kubelet[3925]: W1013 05:36:59.205863 3925 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:36:59.206220 kubelet[3925]: E1013 05:36:59.205970 3925 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:36:59.206888 kubelet[3925]: E1013 05:36:59.206507 3925 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:36:59.206986 kubelet[3925]: W1013 05:36:59.206971 3925 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:36:59.207051 kubelet[3925]: E1013 05:36:59.207029 3925 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:36:59.207793 kubelet[3925]: E1013 05:36:59.207532 3925 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:36:59.207793 kubelet[3925]: W1013 05:36:59.207544 3925 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:36:59.207793 kubelet[3925]: E1013 05:36:59.207556 3925 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:36:59.208503 kubelet[3925]: E1013 05:36:59.208430 3925 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:36:59.208503 kubelet[3925]: W1013 05:36:59.208444 3925 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:36:59.208503 kubelet[3925]: E1013 05:36:59.208458 3925 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:36:59.221905 kubelet[3925]: E1013 05:36:59.221651 3925 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:36:59.221905 kubelet[3925]: W1013 05:36:59.221667 3925 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:36:59.221905 kubelet[3925]: E1013 05:36:59.221681 3925 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:36:59.222823 kubelet[3925]: E1013 05:36:59.222574 3925 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:36:59.222823 kubelet[3925]: W1013 05:36:59.222671 3925 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:36:59.222823 kubelet[3925]: E1013 05:36:59.222685 3925 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:36:59.224076 kubelet[3925]: E1013 05:36:59.223890 3925 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:36:59.224076 kubelet[3925]: W1013 05:36:59.223904 3925 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:36:59.224076 kubelet[3925]: E1013 05:36:59.223918 3925 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:36:59.225191 kubelet[3925]: E1013 05:36:59.224256 3925 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:36:59.225191 kubelet[3925]: W1013 05:36:59.224266 3925 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:36:59.225191 kubelet[3925]: E1013 05:36:59.224280 3925 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:36:59.225191 kubelet[3925]: E1013 05:36:59.224528 3925 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:36:59.225191 kubelet[3925]: W1013 05:36:59.224536 3925 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:36:59.225191 kubelet[3925]: E1013 05:36:59.224602 3925 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:36:59.225191 kubelet[3925]: E1013 05:36:59.224994 3925 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:36:59.225191 kubelet[3925]: W1013 05:36:59.225002 3925 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:36:59.225191 kubelet[3925]: E1013 05:36:59.225011 3925 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:36:59.225448 kubelet[3925]: E1013 05:36:59.225212 3925 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:36:59.225448 kubelet[3925]: W1013 05:36:59.225219 3925 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:36:59.225448 kubelet[3925]: E1013 05:36:59.225228 3925 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:36:59.226172 kubelet[3925]: E1013 05:36:59.225664 3925 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:36:59.226172 kubelet[3925]: W1013 05:36:59.225675 3925 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:36:59.226172 kubelet[3925]: E1013 05:36:59.225686 3925 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:36:59.226172 kubelet[3925]: E1013 05:36:59.226135 3925 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:36:59.226172 kubelet[3925]: W1013 05:36:59.226146 3925 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:36:59.226172 kubelet[3925]: E1013 05:36:59.226159 3925 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:36:59.227230 kubelet[3925]: E1013 05:36:59.227196 3925 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:36:59.227230 kubelet[3925]: W1013 05:36:59.227210 3925 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:36:59.227470 kubelet[3925]: E1013 05:36:59.227327 3925 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:36:59.228134 kubelet[3925]: E1013 05:36:59.227903 3925 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:36:59.228134 kubelet[3925]: W1013 05:36:59.227916 3925 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:36:59.228134 kubelet[3925]: E1013 05:36:59.227929 3925 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:36:59.228909 kubelet[3925]: E1013 05:36:59.228591 3925 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:36:59.228909 kubelet[3925]: W1013 05:36:59.228683 3925 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:36:59.228909 kubelet[3925]: E1013 05:36:59.228695 3925 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:36:59.229677 kubelet[3925]: E1013 05:36:59.229585 3925 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:36:59.230352 kubelet[3925]: W1013 05:36:59.229815 3925 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:36:59.230352 kubelet[3925]: E1013 05:36:59.229834 3925 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:36:59.230352 kubelet[3925]: E1013 05:36:59.230218 3925 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:36:59.230352 kubelet[3925]: W1013 05:36:59.230228 3925 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:36:59.230352 kubelet[3925]: E1013 05:36:59.230241 3925 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:36:59.230792 kubelet[3925]: E1013 05:36:59.230623 3925 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:36:59.230792 kubelet[3925]: W1013 05:36:59.230634 3925 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:36:59.230792 kubelet[3925]: E1013 05:36:59.230644 3925 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:36:59.231339 kubelet[3925]: E1013 05:36:59.231237 3925 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:36:59.231339 kubelet[3925]: W1013 05:36:59.231259 3925 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:36:59.231339 kubelet[3925]: E1013 05:36:59.231271 3925 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:36:59.232237 kubelet[3925]: E1013 05:36:59.232116 3925 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:36:59.232237 kubelet[3925]: W1013 05:36:59.232211 3925 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:36:59.232237 kubelet[3925]: E1013 05:36:59.232223 3925 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:36:59.232984 kubelet[3925]: E1013 05:36:59.232862 3925 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:36:59.232984 kubelet[3925]: W1013 05:36:59.232873 3925 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:36:59.232984 kubelet[3925]: E1013 05:36:59.232883 3925 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:36:59.255919 containerd[2478]: time="2025-10-13T05:36:59.255882205Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:36:59.259204 containerd[2478]: time="2025-10-13T05:36:59.259175521Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4446660" Oct 13 05:36:59.261855 containerd[2478]: time="2025-10-13T05:36:59.261813067Z" level=info msg="ImageCreate event name:\"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:36:59.265234 containerd[2478]: time="2025-10-13T05:36:59.265122320Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:36:59.265579 containerd[2478]: time="2025-10-13T05:36:59.265554338Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5939323\" in 1.221627321s" Oct 13 05:36:59.265624 containerd[2478]: time="2025-10-13T05:36:59.265586500Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\"" Oct 13 05:36:59.273316 containerd[2478]: time="2025-10-13T05:36:59.273285277Z" level=info msg="CreateContainer within sandbox \"a907f184a69054f5b81b06c3e400dbfa72a6e25f49924f226ab0372c8bbb5b19\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Oct 13 05:36:59.295386 containerd[2478]: time="2025-10-13T05:36:59.292429814Z" level=info msg="Container ae674f072757ad6af5aef4399794a6bfd757b0b20f6cae59692705319921eeaa: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:36:59.312704 containerd[2478]: time="2025-10-13T05:36:59.312673653Z" level=info msg="CreateContainer within sandbox \"a907f184a69054f5b81b06c3e400dbfa72a6e25f49924f226ab0372c8bbb5b19\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"ae674f072757ad6af5aef4399794a6bfd757b0b20f6cae59692705319921eeaa\"" Oct 13 05:36:59.314154 containerd[2478]: time="2025-10-13T05:36:59.313232704Z" level=info msg="StartContainer for \"ae674f072757ad6af5aef4399794a6bfd757b0b20f6cae59692705319921eeaa\"" Oct 13 05:36:59.315650 containerd[2478]: time="2025-10-13T05:36:59.315625991Z" level=info msg="connecting to shim ae674f072757ad6af5aef4399794a6bfd757b0b20f6cae59692705319921eeaa" address="unix:///run/containerd/s/690b893dc91fbb23bfc660c446a5746a44a2795436320e7d9d2ecfeb1ed267b1" protocol=ttrpc version=3 Oct 13 05:36:59.342913 systemd[1]: Started cri-containerd-ae674f072757ad6af5aef4399794a6bfd757b0b20f6cae59692705319921eeaa.scope - libcontainer container 
ae674f072757ad6af5aef4399794a6bfd757b0b20f6cae59692705319921eeaa. Oct 13 05:36:59.385242 containerd[2478]: time="2025-10-13T05:36:59.385218077Z" level=info msg="StartContainer for \"ae674f072757ad6af5aef4399794a6bfd757b0b20f6cae59692705319921eeaa\" returns successfully" Oct 13 05:36:59.391648 systemd[1]: cri-containerd-ae674f072757ad6af5aef4399794a6bfd757b0b20f6cae59692705319921eeaa.scope: Deactivated successfully. Oct 13 05:36:59.393828 containerd[2478]: time="2025-10-13T05:36:59.393808810Z" level=info msg="received exit event container_id:\"ae674f072757ad6af5aef4399794a6bfd757b0b20f6cae59692705319921eeaa\" id:\"ae674f072757ad6af5aef4399794a6bfd757b0b20f6cae59692705319921eeaa\" pid:4597 exited_at:{seconds:1760333819 nanos:393515932}" Oct 13 05:36:59.394163 containerd[2478]: time="2025-10-13T05:36:59.394141177Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ae674f072757ad6af5aef4399794a6bfd757b0b20f6cae59692705319921eeaa\" id:\"ae674f072757ad6af5aef4399794a6bfd757b0b20f6cae59692705319921eeaa\" pid:4597 exited_at:{seconds:1760333819 nanos:393515932}" Oct 13 05:36:59.414492 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ae674f072757ad6af5aef4399794a6bfd757b0b20f6cae59692705319921eeaa-rootfs.mount: Deactivated successfully. 
Oct 13 05:37:00.078691 kubelet[3925]: E1013 05:37:00.078604 3925 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kqvx9" podUID="dd86b0ea-d7f3-411b-ad14-d1033bcd0923" Oct 13 05:37:00.142528 kubelet[3925]: I1013 05:37:00.142504 3925 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 13 05:37:00.157195 kubelet[3925]: I1013 05:37:00.157087 3925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6c8c656cb5-knggj" podStartSLOduration=2.770367291 podStartE2EDuration="5.157070908s" podCreationTimestamp="2025-10-13 05:36:55 +0000 UTC" firstStartedPulling="2025-10-13 05:36:55.65686811 +0000 UTC m=+19.695743112" lastFinishedPulling="2025-10-13 05:36:58.043571724 +0000 UTC m=+22.082446729" observedRunningTime="2025-10-13 05:36:59.156656364 +0000 UTC m=+23.195531376" watchObservedRunningTime="2025-10-13 05:37:00.157070908 +0000 UTC m=+24.195945930" Oct 13 05:37:02.078163 kubelet[3925]: E1013 05:37:02.077516 3925 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kqvx9" podUID="dd86b0ea-d7f3-411b-ad14-d1033bcd0923" Oct 13 05:37:02.149045 containerd[2478]: time="2025-10-13T05:37:02.148979855Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Oct 13 05:37:04.077994 kubelet[3925]: E1013 05:37:04.077618 3925 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" 
pod="calico-system/csi-node-driver-kqvx9" podUID="dd86b0ea-d7f3-411b-ad14-d1033bcd0923" Oct 13 05:37:05.057086 containerd[2478]: time="2025-10-13T05:37:05.057052001Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:37:05.059560 containerd[2478]: time="2025-10-13T05:37:05.059536798Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=70440613" Oct 13 05:37:05.063117 containerd[2478]: time="2025-10-13T05:37:05.063091948Z" level=info msg="ImageCreate event name:\"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:37:05.066986 containerd[2478]: time="2025-10-13T05:37:05.066959936Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:37:05.067332 containerd[2478]: time="2025-10-13T05:37:05.067308921Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"71933316\" in 2.917838737s" Oct 13 05:37:05.067394 containerd[2478]: time="2025-10-13T05:37:05.067339447Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\"" Oct 13 05:37:05.074285 containerd[2478]: time="2025-10-13T05:37:05.074242876Z" level=info msg="CreateContainer within sandbox \"a907f184a69054f5b81b06c3e400dbfa72a6e25f49924f226ab0372c8bbb5b19\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Oct 13 05:37:05.095054 
containerd[2478]: time="2025-10-13T05:37:05.093876042Z" level=info msg="Container f90762b8489f8823381dfd18ac8d24eab33ea92810b88f260685fdf60f5ed0f8: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:37:05.112794 containerd[2478]: time="2025-10-13T05:37:05.112768736Z" level=info msg="CreateContainer within sandbox \"a907f184a69054f5b81b06c3e400dbfa72a6e25f49924f226ab0372c8bbb5b19\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"f90762b8489f8823381dfd18ac8d24eab33ea92810b88f260685fdf60f5ed0f8\"" Oct 13 05:37:05.113218 containerd[2478]: time="2025-10-13T05:37:05.113189376Z" level=info msg="StartContainer for \"f90762b8489f8823381dfd18ac8d24eab33ea92810b88f260685fdf60f5ed0f8\"" Oct 13 05:37:05.115141 containerd[2478]: time="2025-10-13T05:37:05.115104663Z" level=info msg="connecting to shim f90762b8489f8823381dfd18ac8d24eab33ea92810b88f260685fdf60f5ed0f8" address="unix:///run/containerd/s/690b893dc91fbb23bfc660c446a5746a44a2795436320e7d9d2ecfeb1ed267b1" protocol=ttrpc version=3 Oct 13 05:37:05.133716 systemd[1]: Started cri-containerd-f90762b8489f8823381dfd18ac8d24eab33ea92810b88f260685fdf60f5ed0f8.scope - libcontainer container f90762b8489f8823381dfd18ac8d24eab33ea92810b88f260685fdf60f5ed0f8. Oct 13 05:37:05.176937 containerd[2478]: time="2025-10-13T05:37:05.176867095Z" level=info msg="StartContainer for \"f90762b8489f8823381dfd18ac8d24eab33ea92810b88f260685fdf60f5ed0f8\" returns successfully" Oct 13 05:37:06.077876 kubelet[3925]: E1013 05:37:06.077193 3925 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kqvx9" podUID="dd86b0ea-d7f3-411b-ad14-d1033bcd0923" Oct 13 05:37:06.472131 systemd[1]: cri-containerd-f90762b8489f8823381dfd18ac8d24eab33ea92810b88f260685fdf60f5ed0f8.scope: Deactivated successfully. 
Oct 13 05:37:06.472449 systemd[1]: cri-containerd-f90762b8489f8823381dfd18ac8d24eab33ea92810b88f260685fdf60f5ed0f8.scope: Consumed 400ms CPU time, 190.8M memory peak, 171.3M written to disk. Oct 13 05:37:06.474627 containerd[2478]: time="2025-10-13T05:37:06.474571132Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f90762b8489f8823381dfd18ac8d24eab33ea92810b88f260685fdf60f5ed0f8\" id:\"f90762b8489f8823381dfd18ac8d24eab33ea92810b88f260685fdf60f5ed0f8\" pid:4658 exited_at:{seconds:1760333826 nanos:474245089}" Oct 13 05:37:06.475119 containerd[2478]: time="2025-10-13T05:37:06.474969107Z" level=info msg="received exit event container_id:\"f90762b8489f8823381dfd18ac8d24eab33ea92810b88f260685fdf60f5ed0f8\" id:\"f90762b8489f8823381dfd18ac8d24eab33ea92810b88f260685fdf60f5ed0f8\" pid:4658 exited_at:{seconds:1760333826 nanos:474245089}" Oct 13 05:37:06.495640 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f90762b8489f8823381dfd18ac8d24eab33ea92810b88f260685fdf60f5ed0f8-rootfs.mount: Deactivated successfully. Oct 13 05:37:06.497408 kubelet[3925]: I1013 05:37:06.497346 3925 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Oct 13 05:37:06.763765 systemd[1]: Created slice kubepods-besteffort-pode25fbd1e_2e73_4593_82da_729001cdfac5.slice - libcontainer container kubepods-besteffort-pode25fbd1e_2e73_4593_82da_729001cdfac5.slice. 
Oct 13 05:37:06.875320 kubelet[3925]: I1013 05:37:06.875284 3925 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-snng9\" (UniqueName: \"kubernetes.io/projected/e25fbd1e-2e73-4593-82da-729001cdfac5-kube-api-access-snng9\") pod \"whisker-6776779fdf-8klzf\" (UID: \"e25fbd1e-2e73-4593-82da-729001cdfac5\") " pod="calico-system/whisker-6776779fdf-8klzf" Oct 13 05:37:06.875320 kubelet[3925]: I1013 05:37:06.875319 3925 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e25fbd1e-2e73-4593-82da-729001cdfac5-whisker-backend-key-pair\") pod \"whisker-6776779fdf-8klzf\" (UID: \"e25fbd1e-2e73-4593-82da-729001cdfac5\") " pod="calico-system/whisker-6776779fdf-8klzf" Oct 13 05:37:06.915219 kubelet[3925]: I1013 05:37:06.875338 3925 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e25fbd1e-2e73-4593-82da-729001cdfac5-whisker-ca-bundle\") pod \"whisker-6776779fdf-8klzf\" (UID: \"e25fbd1e-2e73-4593-82da-729001cdfac5\") " pod="calico-system/whisker-6776779fdf-8klzf" Oct 13 05:37:06.925163 systemd[1]: Created slice kubepods-burstable-pod601cb2d1_beec_49e2_8f8f_894578da294a.slice - libcontainer container kubepods-burstable-pod601cb2d1_beec_49e2_8f8f_894578da294a.slice. Oct 13 05:37:06.969609 systemd[1]: Created slice kubepods-besteffort-pod744eaf3d_7c57_4fd9_9ca6_66f0ba9dfcec.slice - libcontainer container kubepods-besteffort-pod744eaf3d_7c57_4fd9_9ca6_66f0ba9dfcec.slice. 
Oct 13 05:37:06.975878 kubelet[3925]: I1013 05:37:06.975852 3925 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rggz\" (UniqueName: \"kubernetes.io/projected/601cb2d1-beec-49e2-8f8f-894578da294a-kube-api-access-7rggz\") pod \"coredns-66bc5c9577-wvqd2\" (UID: \"601cb2d1-beec-49e2-8f8f-894578da294a\") " pod="kube-system/coredns-66bc5c9577-wvqd2" Oct 13 05:37:06.976789 kubelet[3925]: I1013 05:37:06.976513 3925 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/601cb2d1-beec-49e2-8f8f-894578da294a-config-volume\") pod \"coredns-66bc5c9577-wvqd2\" (UID: \"601cb2d1-beec-49e2-8f8f-894578da294a\") " pod="kube-system/coredns-66bc5c9577-wvqd2" Oct 13 05:37:07.265875 kubelet[3925]: I1013 05:37:07.077482 3925 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9knpg\" (UniqueName: \"kubernetes.io/projected/744eaf3d-7c57-4fd9-9ca6-66f0ba9dfcec-kube-api-access-9knpg\") pod \"goldmane-854f97d977-kxpxd\" (UID: \"744eaf3d-7c57-4fd9-9ca6-66f0ba9dfcec\") " pod="calico-system/goldmane-854f97d977-kxpxd" Oct 13 05:37:07.265875 kubelet[3925]: I1013 05:37:07.077546 3925 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/744eaf3d-7c57-4fd9-9ca6-66f0ba9dfcec-goldmane-key-pair\") pod \"goldmane-854f97d977-kxpxd\" (UID: \"744eaf3d-7c57-4fd9-9ca6-66f0ba9dfcec\") " pod="calico-system/goldmane-854f97d977-kxpxd" Oct 13 05:37:07.265875 kubelet[3925]: I1013 05:37:07.077592 3925 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/744eaf3d-7c57-4fd9-9ca6-66f0ba9dfcec-config\") pod \"goldmane-854f97d977-kxpxd\" (UID: \"744eaf3d-7c57-4fd9-9ca6-66f0ba9dfcec\") " 
pod="calico-system/goldmane-854f97d977-kxpxd" Oct 13 05:37:07.265875 kubelet[3925]: I1013 05:37:07.077614 3925 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/744eaf3d-7c57-4fd9-9ca6-66f0ba9dfcec-goldmane-ca-bundle\") pod \"goldmane-854f97d977-kxpxd\" (UID: \"744eaf3d-7c57-4fd9-9ca6-66f0ba9dfcec\") " pod="calico-system/goldmane-854f97d977-kxpxd" Oct 13 05:37:07.426656 containerd[2478]: time="2025-10-13T05:37:07.426616453Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6776779fdf-8klzf,Uid:e25fbd1e-2e73-4593-82da-729001cdfac5,Namespace:calico-system,Attempt:0,}" Oct 13 05:37:07.434127 systemd[1]: Created slice kubepods-besteffort-pod1dae4c5f_99a8_408c_a491_33da46fb7ea0.slice - libcontainer container kubepods-besteffort-pod1dae4c5f_99a8_408c_a491_33da46fb7ea0.slice. Oct 13 05:37:07.444757 systemd[1]: Created slice kubepods-besteffort-poddd86b0ea_d7f3_411b_ad14_d1033bcd0923.slice - libcontainer container kubepods-besteffort-poddd86b0ea_d7f3_411b_ad14_d1033bcd0923.slice. Oct 13 05:37:07.452280 systemd[1]: Created slice kubepods-besteffort-pod0158f1dc_dce0_4a10_aa34_11a88f1f819b.slice - libcontainer container kubepods-besteffort-pod0158f1dc_dce0_4a10_aa34_11a88f1f819b.slice. Oct 13 05:37:07.462292 containerd[2478]: time="2025-10-13T05:37:07.462266274Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kqvx9,Uid:dd86b0ea-d7f3-411b-ad14-d1033bcd0923,Namespace:calico-system,Attempt:0,}" Oct 13 05:37:07.467716 systemd[1]: Created slice kubepods-besteffort-pode1e43d81_d557_49f8_8ca0_cf087b06bd61.slice - libcontainer container kubepods-besteffort-pode1e43d81_d557_49f8_8ca0_cf087b06bd61.slice. Oct 13 05:37:07.478168 systemd[1]: Created slice kubepods-burstable-pod810b5003_533c_4288_a549_6c361afff2ef.slice - libcontainer container kubepods-burstable-pod810b5003_533c_4288_a549_6c361afff2ef.slice. 
Oct 13 05:37:07.482186 kubelet[3925]: I1013 05:37:07.481356 3925 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tn4sn\" (UniqueName: \"kubernetes.io/projected/1dae4c5f-99a8-408c-a491-33da46fb7ea0-kube-api-access-tn4sn\") pod \"calico-apiserver-584697f844-tfwhf\" (UID: \"1dae4c5f-99a8-408c-a491-33da46fb7ea0\") " pod="calico-apiserver/calico-apiserver-584697f844-tfwhf" Oct 13 05:37:07.482186 kubelet[3925]: I1013 05:37:07.481413 3925 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9q9c\" (UniqueName: \"kubernetes.io/projected/810b5003-533c-4288-a549-6c361afff2ef-kube-api-access-f9q9c\") pod \"coredns-66bc5c9577-m85sq\" (UID: \"810b5003-533c-4288-a549-6c361afff2ef\") " pod="kube-system/coredns-66bc5c9577-m85sq" Oct 13 05:37:07.482186 kubelet[3925]: I1013 05:37:07.481459 3925 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5rwg\" (UniqueName: \"kubernetes.io/projected/e1e43d81-d557-49f8-8ca0-cf087b06bd61-kube-api-access-q5rwg\") pod \"calico-apiserver-584697f844-b5x8f\" (UID: \"e1e43d81-d557-49f8-8ca0-cf087b06bd61\") " pod="calico-apiserver/calico-apiserver-584697f844-b5x8f" Oct 13 05:37:07.482186 kubelet[3925]: I1013 05:37:07.481480 3925 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/1dae4c5f-99a8-408c-a491-33da46fb7ea0-calico-apiserver-certs\") pod \"calico-apiserver-584697f844-tfwhf\" (UID: \"1dae4c5f-99a8-408c-a491-33da46fb7ea0\") " pod="calico-apiserver/calico-apiserver-584697f844-tfwhf" Oct 13 05:37:07.482186 kubelet[3925]: I1013 05:37:07.481616 3925 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2b9q8\" (UniqueName: 
\"kubernetes.io/projected/0158f1dc-dce0-4a10-aa34-11a88f1f819b-kube-api-access-2b9q8\") pod \"calico-kube-controllers-7f765c785d-rjdbj\" (UID: \"0158f1dc-dce0-4a10-aa34-11a88f1f819b\") " pod="calico-system/calico-kube-controllers-7f765c785d-rjdbj" Oct 13 05:37:07.482396 kubelet[3925]: I1013 05:37:07.481763 3925 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/810b5003-533c-4288-a549-6c361afff2ef-config-volume\") pod \"coredns-66bc5c9577-m85sq\" (UID: \"810b5003-533c-4288-a549-6c361afff2ef\") " pod="kube-system/coredns-66bc5c9577-m85sq" Oct 13 05:37:07.482396 kubelet[3925]: I1013 05:37:07.481790 3925 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0158f1dc-dce0-4a10-aa34-11a88f1f819b-tigera-ca-bundle\") pod \"calico-kube-controllers-7f765c785d-rjdbj\" (UID: \"0158f1dc-dce0-4a10-aa34-11a88f1f819b\") " pod="calico-system/calico-kube-controllers-7f765c785d-rjdbj" Oct 13 05:37:07.482396 kubelet[3925]: I1013 05:37:07.481811 3925 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e1e43d81-d557-49f8-8ca0-cf087b06bd61-calico-apiserver-certs\") pod \"calico-apiserver-584697f844-b5x8f\" (UID: \"e1e43d81-d557-49f8-8ca0-cf087b06bd61\") " pod="calico-apiserver/calico-apiserver-584697f844-b5x8f" Oct 13 05:37:07.537512 containerd[2478]: time="2025-10-13T05:37:07.535111848Z" level=error msg="Failed to destroy network for sandbox \"26452772899b0eb5914977f5e763afb4f9965205e66bee74db2f15e9450c6f8b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:37:07.537267 systemd[1]: 
run-netns-cni\x2d92b92074\x2d0fae\x2d0f61\x2d010b\x2d8b19377eebdd.mount: Deactivated successfully. Oct 13 05:37:07.542237 containerd[2478]: time="2025-10-13T05:37:07.542151246Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6776779fdf-8klzf,Uid:e25fbd1e-2e73-4593-82da-729001cdfac5,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"26452772899b0eb5914977f5e763afb4f9965205e66bee74db2f15e9450c6f8b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:37:07.542592 kubelet[3925]: E1013 05:37:07.542558 3925 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"26452772899b0eb5914977f5e763afb4f9965205e66bee74db2f15e9450c6f8b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:37:07.542691 kubelet[3925]: E1013 05:37:07.542621 3925 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"26452772899b0eb5914977f5e763afb4f9965205e66bee74db2f15e9450c6f8b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6776779fdf-8klzf" Oct 13 05:37:07.542691 kubelet[3925]: E1013 05:37:07.542641 3925 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"26452772899b0eb5914977f5e763afb4f9965205e66bee74db2f15e9450c6f8b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6776779fdf-8klzf" Oct 13 05:37:07.542772 kubelet[3925]: E1013 05:37:07.542704 3925 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6776779fdf-8klzf_calico-system(e25fbd1e-2e73-4593-82da-729001cdfac5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6776779fdf-8klzf_calico-system(e25fbd1e-2e73-4593-82da-729001cdfac5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"26452772899b0eb5914977f5e763afb4f9965205e66bee74db2f15e9450c6f8b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6776779fdf-8klzf" podUID="e25fbd1e-2e73-4593-82da-729001cdfac5" Oct 13 05:37:07.551685 containerd[2478]: time="2025-10-13T05:37:07.551654111Z" level=error msg="Failed to destroy network for sandbox \"c4f9f184ce4eaed97b6ddbb4d755643eb3256d030819d79b25ec234c0439b41b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:37:07.553759 systemd[1]: run-netns-cni\x2d96e7ce29\x2df237\x2d5c6b\x2d60c8\x2ddf095cb5336e.mount: Deactivated successfully. 
Oct 13 05:37:07.556293 containerd[2478]: time="2025-10-13T05:37:07.556261989Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kqvx9,Uid:dd86b0ea-d7f3-411b-ad14-d1033bcd0923,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c4f9f184ce4eaed97b6ddbb4d755643eb3256d030819d79b25ec234c0439b41b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:37:07.556470 kubelet[3925]: E1013 05:37:07.556445 3925 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c4f9f184ce4eaed97b6ddbb4d755643eb3256d030819d79b25ec234c0439b41b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:37:07.556522 kubelet[3925]: E1013 05:37:07.556483 3925 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c4f9f184ce4eaed97b6ddbb4d755643eb3256d030819d79b25ec234c0439b41b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-kqvx9" Oct 13 05:37:07.556522 kubelet[3925]: E1013 05:37:07.556500 3925 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c4f9f184ce4eaed97b6ddbb4d755643eb3256d030819d79b25ec234c0439b41b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-kqvx9" 
Oct 13 05:37:07.556578 kubelet[3925]: E1013 05:37:07.556541 3925 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-kqvx9_calico-system(dd86b0ea-d7f3-411b-ad14-d1033bcd0923)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-kqvx9_calico-system(dd86b0ea-d7f3-411b-ad14-d1033bcd0923)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c4f9f184ce4eaed97b6ddbb4d755643eb3256d030819d79b25ec234c0439b41b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-kqvx9" podUID="dd86b0ea-d7f3-411b-ad14-d1033bcd0923" Oct 13 05:37:07.567573 containerd[2478]: time="2025-10-13T05:37:07.567538456Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-wvqd2,Uid:601cb2d1-beec-49e2-8f8f-894578da294a,Namespace:kube-system,Attempt:0,}" Oct 13 05:37:07.577922 containerd[2478]: time="2025-10-13T05:37:07.577897734Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-854f97d977-kxpxd,Uid:744eaf3d-7c57-4fd9-9ca6-66f0ba9dfcec,Namespace:calico-system,Attempt:0,}" Oct 13 05:37:07.644086 containerd[2478]: time="2025-10-13T05:37:07.644008169Z" level=error msg="Failed to destroy network for sandbox \"7310d8f075aaac21584aaec55c98d926197a29cc049cf4e6952c5646c41f7d5a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:37:07.649003 containerd[2478]: time="2025-10-13T05:37:07.648958145Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-wvqd2,Uid:601cb2d1-beec-49e2-8f8f-894578da294a,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"7310d8f075aaac21584aaec55c98d926197a29cc049cf4e6952c5646c41f7d5a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:37:07.650741 kubelet[3925]: E1013 05:37:07.650631 3925 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7310d8f075aaac21584aaec55c98d926197a29cc049cf4e6952c5646c41f7d5a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:37:07.650741 kubelet[3925]: E1013 05:37:07.650692 3925 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7310d8f075aaac21584aaec55c98d926197a29cc049cf4e6952c5646c41f7d5a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-wvqd2" Oct 13 05:37:07.650741 kubelet[3925]: E1013 05:37:07.650711 3925 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7310d8f075aaac21584aaec55c98d926197a29cc049cf4e6952c5646c41f7d5a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-wvqd2" Oct 13 05:37:07.650943 kubelet[3925]: E1013 05:37:07.650893 3925 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-wvqd2_kube-system(601cb2d1-beec-49e2-8f8f-894578da294a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"coredns-66bc5c9577-wvqd2_kube-system(601cb2d1-beec-49e2-8f8f-894578da294a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7310d8f075aaac21584aaec55c98d926197a29cc049cf4e6952c5646c41f7d5a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-wvqd2" podUID="601cb2d1-beec-49e2-8f8f-894578da294a" Oct 13 05:37:07.655646 containerd[2478]: time="2025-10-13T05:37:07.655613791Z" level=error msg="Failed to destroy network for sandbox \"9a5c1ca2e40537ad474e2a0529cd972bb6be89d986ae0db8fce600d3f5158a1d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:37:07.658536 containerd[2478]: time="2025-10-13T05:37:07.658501801Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-854f97d977-kxpxd,Uid:744eaf3d-7c57-4fd9-9ca6-66f0ba9dfcec,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a5c1ca2e40537ad474e2a0529cd972bb6be89d986ae0db8fce600d3f5158a1d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:37:07.658750 kubelet[3925]: E1013 05:37:07.658724 3925 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a5c1ca2e40537ad474e2a0529cd972bb6be89d986ae0db8fce600d3f5158a1d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:37:07.658820 kubelet[3925]: E1013 05:37:07.658765 3925 
kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a5c1ca2e40537ad474e2a0529cd972bb6be89d986ae0db8fce600d3f5158a1d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-854f97d977-kxpxd" Oct 13 05:37:07.658820 kubelet[3925]: E1013 05:37:07.658796 3925 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a5c1ca2e40537ad474e2a0529cd972bb6be89d986ae0db8fce600d3f5158a1d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-854f97d977-kxpxd" Oct 13 05:37:07.658911 kubelet[3925]: E1013 05:37:07.658842 3925 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-854f97d977-kxpxd_calico-system(744eaf3d-7c57-4fd9-9ca6-66f0ba9dfcec)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-854f97d977-kxpxd_calico-system(744eaf3d-7c57-4fd9-9ca6-66f0ba9dfcec)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9a5c1ca2e40537ad474e2a0529cd972bb6be89d986ae0db8fce600d3f5158a1d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-854f97d977-kxpxd" podUID="744eaf3d-7c57-4fd9-9ca6-66f0ba9dfcec" Oct 13 05:37:07.752905 containerd[2478]: time="2025-10-13T05:37:07.752881315Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-584697f844-tfwhf,Uid:1dae4c5f-99a8-408c-a491-33da46fb7ea0,Namespace:calico-apiserver,Attempt:0,}" Oct 13 
05:37:07.766722 containerd[2478]: time="2025-10-13T05:37:07.766682248Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7f765c785d-rjdbj,Uid:0158f1dc-dce0-4a10-aa34-11a88f1f819b,Namespace:calico-system,Attempt:0,}" Oct 13 05:37:07.783848 containerd[2478]: time="2025-10-13T05:37:07.783766671Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-584697f844-b5x8f,Uid:e1e43d81-d557-49f8-8ca0-cf087b06bd61,Namespace:calico-apiserver,Attempt:0,}" Oct 13 05:37:07.788350 containerd[2478]: time="2025-10-13T05:37:07.788271553Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-m85sq,Uid:810b5003-533c-4288-a549-6c361afff2ef,Namespace:kube-system,Attempt:0,}" Oct 13 05:37:07.809182 containerd[2478]: time="2025-10-13T05:37:07.809153514Z" level=error msg="Failed to destroy network for sandbox \"c5f4ea7ae0f25b3de84df20190b61bede5eb9d417d04e6c8f51c9577bd79c443\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:37:07.818389 containerd[2478]: time="2025-10-13T05:37:07.818330289Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-584697f844-tfwhf,Uid:1dae4c5f-99a8-408c-a491-33da46fb7ea0,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c5f4ea7ae0f25b3de84df20190b61bede5eb9d417d04e6c8f51c9577bd79c443\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:37:07.818603 kubelet[3925]: E1013 05:37:07.818516 3925 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c5f4ea7ae0f25b3de84df20190b61bede5eb9d417d04e6c8f51c9577bd79c443\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:37:07.818603 kubelet[3925]: E1013 05:37:07.818555 3925 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c5f4ea7ae0f25b3de84df20190b61bede5eb9d417d04e6c8f51c9577bd79c443\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-584697f844-tfwhf" Oct 13 05:37:07.818603 kubelet[3925]: E1013 05:37:07.818574 3925 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c5f4ea7ae0f25b3de84df20190b61bede5eb9d417d04e6c8f51c9577bd79c443\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-584697f844-tfwhf" Oct 13 05:37:07.818827 kubelet[3925]: E1013 05:37:07.818619 3925 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-584697f844-tfwhf_calico-apiserver(1dae4c5f-99a8-408c-a491-33da46fb7ea0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-584697f844-tfwhf_calico-apiserver(1dae4c5f-99a8-408c-a491-33da46fb7ea0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c5f4ea7ae0f25b3de84df20190b61bede5eb9d417d04e6c8f51c9577bd79c443\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-584697f844-tfwhf" 
podUID="1dae4c5f-99a8-408c-a491-33da46fb7ea0" Oct 13 05:37:07.863854 containerd[2478]: time="2025-10-13T05:37:07.863814401Z" level=error msg="Failed to destroy network for sandbox \"7ce637135a650ce9455d45d10b79c7233c660f3ce1149dbf210b95b22505e3b8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:37:07.867321 containerd[2478]: time="2025-10-13T05:37:07.867234556Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7f765c785d-rjdbj,Uid:0158f1dc-dce0-4a10-aa34-11a88f1f819b,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7ce637135a650ce9455d45d10b79c7233c660f3ce1149dbf210b95b22505e3b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:37:07.867859 kubelet[3925]: E1013 05:37:07.867758 3925 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7ce637135a650ce9455d45d10b79c7233c660f3ce1149dbf210b95b22505e3b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:37:07.867859 kubelet[3925]: E1013 05:37:07.867801 3925 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7ce637135a650ce9455d45d10b79c7233c660f3ce1149dbf210b95b22505e3b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7f765c785d-rjdbj" Oct 13 
05:37:07.867859 kubelet[3925]: E1013 05:37:07.867823 3925 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7ce637135a650ce9455d45d10b79c7233c660f3ce1149dbf210b95b22505e3b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7f765c785d-rjdbj" Oct 13 05:37:07.868772 kubelet[3925]: E1013 05:37:07.868743 3925 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7f765c785d-rjdbj_calico-system(0158f1dc-dce0-4a10-aa34-11a88f1f819b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7f765c785d-rjdbj_calico-system(0158f1dc-dce0-4a10-aa34-11a88f1f819b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7ce637135a650ce9455d45d10b79c7233c660f3ce1149dbf210b95b22505e3b8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7f765c785d-rjdbj" podUID="0158f1dc-dce0-4a10-aa34-11a88f1f819b" Oct 13 05:37:07.871218 containerd[2478]: time="2025-10-13T05:37:07.871189964Z" level=error msg="Failed to destroy network for sandbox \"e14980a315dd33049bd396984e9439fc624a585eab218996100460bd3f132bb2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:37:07.874515 containerd[2478]: time="2025-10-13T05:37:07.874340406Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-584697f844-b5x8f,Uid:e1e43d81-d557-49f8-8ca0-cf087b06bd61,Namespace:calico-apiserver,Attempt:0,} 
failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e14980a315dd33049bd396984e9439fc624a585eab218996100460bd3f132bb2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:37:07.875756 kubelet[3925]: E1013 05:37:07.875475 3925 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e14980a315dd33049bd396984e9439fc624a585eab218996100460bd3f132bb2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:37:07.875921 kubelet[3925]: E1013 05:37:07.875865 3925 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e14980a315dd33049bd396984e9439fc624a585eab218996100460bd3f132bb2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-584697f844-b5x8f" Oct 13 05:37:07.875921 kubelet[3925]: E1013 05:37:07.875889 3925 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e14980a315dd33049bd396984e9439fc624a585eab218996100460bd3f132bb2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-584697f844-b5x8f" Oct 13 05:37:07.876072 kubelet[3925]: E1013 05:37:07.876031 3925 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-apiserver-584697f844-b5x8f_calico-apiserver(e1e43d81-d557-49f8-8ca0-cf087b06bd61)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-584697f844-b5x8f_calico-apiserver(e1e43d81-d557-49f8-8ca0-cf087b06bd61)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e14980a315dd33049bd396984e9439fc624a585eab218996100460bd3f132bb2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-584697f844-b5x8f" podUID="e1e43d81-d557-49f8-8ca0-cf087b06bd61" Oct 13 05:37:07.881943 containerd[2478]: time="2025-10-13T05:37:07.881913898Z" level=error msg="Failed to destroy network for sandbox \"eeb4d92df4d2f17e06856dfd134b79b2b5b4df1e71044509cb25a044af1b252a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:37:07.886788 containerd[2478]: time="2025-10-13T05:37:07.886751498Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-m85sq,Uid:810b5003-533c-4288-a549-6c361afff2ef,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"eeb4d92df4d2f17e06856dfd134b79b2b5b4df1e71044509cb25a044af1b252a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:37:07.886914 kubelet[3925]: E1013 05:37:07.886888 3925 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eeb4d92df4d2f17e06856dfd134b79b2b5b4df1e71044509cb25a044af1b252a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:37:07.886969 kubelet[3925]: E1013 05:37:07.886936 3925 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eeb4d92df4d2f17e06856dfd134b79b2b5b4df1e71044509cb25a044af1b252a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-m85sq" Oct 13 05:37:07.886969 kubelet[3925]: E1013 05:37:07.886956 3925 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eeb4d92df4d2f17e06856dfd134b79b2b5b4df1e71044509cb25a044af1b252a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-m85sq" Oct 13 05:37:07.887029 kubelet[3925]: E1013 05:37:07.886996 3925 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-m85sq_kube-system(810b5003-533c-4288-a549-6c361afff2ef)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-m85sq_kube-system(810b5003-533c-4288-a549-6c361afff2ef)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"eeb4d92df4d2f17e06856dfd134b79b2b5b4df1e71044509cb25a044af1b252a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-m85sq" podUID="810b5003-533c-4288-a549-6c361afff2ef" Oct 13 05:37:08.171042 containerd[2478]: time="2025-10-13T05:37:08.170819684Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/node:v3.30.3\"" Oct 13 05:37:08.477562 kubelet[3925]: I1013 05:37:08.477167 3925 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 13 05:37:08.508846 systemd[1]: run-netns-cni\x2d4f860c36\x2d4976\x2d3ed1\x2d0f34\x2dee486c3fa8ef.mount: Deactivated successfully. Oct 13 05:37:08.508951 systemd[1]: run-netns-cni\x2d477c350a\x2dd5f3\x2d9824\x2d862b\x2db4d1ba48c885.mount: Deactivated successfully. Oct 13 05:37:13.110835 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount268447504.mount: Deactivated successfully. Oct 13 05:37:13.142383 containerd[2478]: time="2025-10-13T05:37:13.142320248Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:37:13.145164 containerd[2478]: time="2025-10-13T05:37:13.145094056Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=157078339" Oct 13 05:37:13.147849 containerd[2478]: time="2025-10-13T05:37:13.147824508Z" level=info msg="ImageCreate event name:\"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:37:13.153174 containerd[2478]: time="2025-10-13T05:37:13.152702803Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:37:13.153174 containerd[2478]: time="2025-10-13T05:37:13.153057036Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"157078201\" in 4.982198212s" Oct 13 05:37:13.153174 
containerd[2478]: time="2025-10-13T05:37:13.153085611Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\"" Oct 13 05:37:13.173892 containerd[2478]: time="2025-10-13T05:37:13.173863839Z" level=info msg="CreateContainer within sandbox \"a907f184a69054f5b81b06c3e400dbfa72a6e25f49924f226ab0372c8bbb5b19\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Oct 13 05:37:13.196398 containerd[2478]: time="2025-10-13T05:37:13.196276339Z" level=info msg="Container e4d1e97731be76b97308f95900dda1ffaf37fa5611f5fbe45918c3491b740b4a: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:37:13.215605 containerd[2478]: time="2025-10-13T05:37:13.215577756Z" level=info msg="CreateContainer within sandbox \"a907f184a69054f5b81b06c3e400dbfa72a6e25f49924f226ab0372c8bbb5b19\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"e4d1e97731be76b97308f95900dda1ffaf37fa5611f5fbe45918c3491b740b4a\"" Oct 13 05:37:13.216993 containerd[2478]: time="2025-10-13T05:37:13.215955639Z" level=info msg="StartContainer for \"e4d1e97731be76b97308f95900dda1ffaf37fa5611f5fbe45918c3491b740b4a\"" Oct 13 05:37:13.217604 containerd[2478]: time="2025-10-13T05:37:13.217574983Z" level=info msg="connecting to shim e4d1e97731be76b97308f95900dda1ffaf37fa5611f5fbe45918c3491b740b4a" address="unix:///run/containerd/s/690b893dc91fbb23bfc660c446a5746a44a2795436320e7d9d2ecfeb1ed267b1" protocol=ttrpc version=3 Oct 13 05:37:13.243503 systemd[1]: Started cri-containerd-e4d1e97731be76b97308f95900dda1ffaf37fa5611f5fbe45918c3491b740b4a.scope - libcontainer container e4d1e97731be76b97308f95900dda1ffaf37fa5611f5fbe45918c3491b740b4a. 
Oct 13 05:37:13.279448 containerd[2478]: time="2025-10-13T05:37:13.279423291Z" level=info msg="StartContainer for \"e4d1e97731be76b97308f95900dda1ffaf37fa5611f5fbe45918c3491b740b4a\" returns successfully" Oct 13 05:37:13.642536 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Oct 13 05:37:13.642645 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Oct 13 05:37:13.820304 kubelet[3925]: I1013 05:37:13.820273 3925 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e25fbd1e-2e73-4593-82da-729001cdfac5-whisker-ca-bundle\") pod \"e25fbd1e-2e73-4593-82da-729001cdfac5\" (UID: \"e25fbd1e-2e73-4593-82da-729001cdfac5\") " Oct 13 05:37:13.820891 kubelet[3925]: I1013 05:37:13.820660 3925 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-snng9\" (UniqueName: \"kubernetes.io/projected/e25fbd1e-2e73-4593-82da-729001cdfac5-kube-api-access-snng9\") pod \"e25fbd1e-2e73-4593-82da-729001cdfac5\" (UID: \"e25fbd1e-2e73-4593-82da-729001cdfac5\") " Oct 13 05:37:13.820891 kubelet[3925]: I1013 05:37:13.820628 3925 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e25fbd1e-2e73-4593-82da-729001cdfac5-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "e25fbd1e-2e73-4593-82da-729001cdfac5" (UID: "e25fbd1e-2e73-4593-82da-729001cdfac5"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Oct 13 05:37:13.820891 kubelet[3925]: I1013 05:37:13.820697 3925 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e25fbd1e-2e73-4593-82da-729001cdfac5-whisker-backend-key-pair\") pod \"e25fbd1e-2e73-4593-82da-729001cdfac5\" (UID: \"e25fbd1e-2e73-4593-82da-729001cdfac5\") " Oct 13 05:37:13.820891 kubelet[3925]: I1013 05:37:13.820777 3925 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e25fbd1e-2e73-4593-82da-729001cdfac5-whisker-ca-bundle\") on node \"ci-4487.0.0-a-dfb3332019\" DevicePath \"\"" Oct 13 05:37:13.824738 kubelet[3925]: I1013 05:37:13.824435 3925 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e25fbd1e-2e73-4593-82da-729001cdfac5-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "e25fbd1e-2e73-4593-82da-729001cdfac5" (UID: "e25fbd1e-2e73-4593-82da-729001cdfac5"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Oct 13 05:37:13.824921 kubelet[3925]: I1013 05:37:13.824878 3925 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e25fbd1e-2e73-4593-82da-729001cdfac5-kube-api-access-snng9" (OuterVolumeSpecName: "kube-api-access-snng9") pod "e25fbd1e-2e73-4593-82da-729001cdfac5" (UID: "e25fbd1e-2e73-4593-82da-729001cdfac5"). InnerVolumeSpecName "kube-api-access-snng9". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Oct 13 05:37:13.921792 kubelet[3925]: I1013 05:37:13.921714 3925 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-snng9\" (UniqueName: \"kubernetes.io/projected/e25fbd1e-2e73-4593-82da-729001cdfac5-kube-api-access-snng9\") on node \"ci-4487.0.0-a-dfb3332019\" DevicePath \"\"" Oct 13 05:37:13.921792 kubelet[3925]: I1013 05:37:13.921738 3925 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e25fbd1e-2e73-4593-82da-729001cdfac5-whisker-backend-key-pair\") on node \"ci-4487.0.0-a-dfb3332019\" DevicePath \"\"" Oct 13 05:37:14.082810 systemd[1]: Removed slice kubepods-besteffort-pode25fbd1e_2e73_4593_82da_729001cdfac5.slice - libcontainer container kubepods-besteffort-pode25fbd1e_2e73_4593_82da_729001cdfac5.slice. Oct 13 05:37:14.108328 systemd[1]: var-lib-kubelet-pods-e25fbd1e\x2d2e73\x2d4593\x2d82da\x2d729001cdfac5-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dsnng9.mount: Deactivated successfully. Oct 13 05:37:14.108451 systemd[1]: var-lib-kubelet-pods-e25fbd1e\x2d2e73\x2d4593\x2d82da\x2d729001cdfac5-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Oct 13 05:37:14.212426 kubelet[3925]: I1013 05:37:14.212192 3925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-q7594" podStartSLOduration=2.089169628 podStartE2EDuration="19.212175277s" podCreationTimestamp="2025-10-13 05:36:55 +0000 UTC" firstStartedPulling="2025-10-13 05:36:56.030812823 +0000 UTC m=+20.069687832" lastFinishedPulling="2025-10-13 05:37:13.153818471 +0000 UTC m=+37.192693481" observedRunningTime="2025-10-13 05:37:14.211497214 +0000 UTC m=+38.250372240" watchObservedRunningTime="2025-10-13 05:37:14.212175277 +0000 UTC m=+38.251050284" Oct 13 05:37:14.273656 containerd[2478]: time="2025-10-13T05:37:14.273617613Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e4d1e97731be76b97308f95900dda1ffaf37fa5611f5fbe45918c3491b740b4a\" id:\"bff5b18c0d67eefdc4c7c8d9e5a10659abd41b713281d3d14c9f2fb15d9c1132\" pid:5002 exit_status:1 exited_at:{seconds:1760333834 nanos:273069844}" Oct 13 05:37:14.315439 systemd[1]: Created slice kubepods-besteffort-pod00ef012a_d552_4667_96af_c4836882faad.slice - libcontainer container kubepods-besteffort-pod00ef012a_d552_4667_96af_c4836882faad.slice. 
Oct 13 05:37:14.425026 kubelet[3925]: I1013 05:37:14.424996 3925 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/00ef012a-d552-4667-96af-c4836882faad-whisker-backend-key-pair\") pod \"whisker-66f487676d-9mtcf\" (UID: \"00ef012a-d552-4667-96af-c4836882faad\") " pod="calico-system/whisker-66f487676d-9mtcf" Oct 13 05:37:14.425026 kubelet[3925]: I1013 05:37:14.425035 3925 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/00ef012a-d552-4667-96af-c4836882faad-whisker-ca-bundle\") pod \"whisker-66f487676d-9mtcf\" (UID: \"00ef012a-d552-4667-96af-c4836882faad\") " pod="calico-system/whisker-66f487676d-9mtcf" Oct 13 05:37:14.425178 kubelet[3925]: I1013 05:37:14.425056 3925 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdsl2\" (UniqueName: \"kubernetes.io/projected/00ef012a-d552-4667-96af-c4836882faad-kube-api-access-jdsl2\") pod \"whisker-66f487676d-9mtcf\" (UID: \"00ef012a-d552-4667-96af-c4836882faad\") " pod="calico-system/whisker-66f487676d-9mtcf" Oct 13 05:37:14.625393 containerd[2478]: time="2025-10-13T05:37:14.625340171Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-66f487676d-9mtcf,Uid:00ef012a-d552-4667-96af-c4836882faad,Namespace:calico-system,Attempt:0,}" Oct 13 05:37:14.743576 systemd-networkd[2117]: cali58a91db868f: Link UP Oct 13 05:37:14.744493 systemd-networkd[2117]: cali58a91db868f: Gained carrier Oct 13 05:37:14.761343 containerd[2478]: 2025-10-13 05:37:14.651 [INFO][5016] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 13 05:37:14.761343 containerd[2478]: 2025-10-13 05:37:14.659 [INFO][5016] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{ci--4487.0.0--a--dfb3332019-k8s-whisker--66f487676d--9mtcf-eth0 whisker-66f487676d- calico-system 00ef012a-d552-4667-96af-c4836882faad 878 0 2025-10-13 05:37:14 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:66f487676d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4487.0.0-a-dfb3332019 whisker-66f487676d-9mtcf eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali58a91db868f [] [] }} ContainerID="d731f0a7bbd0484c4d4c90685ce580115ee937bc88466a88bbbbc14024d621cf" Namespace="calico-system" Pod="whisker-66f487676d-9mtcf" WorkloadEndpoint="ci--4487.0.0--a--dfb3332019-k8s-whisker--66f487676d--9mtcf-" Oct 13 05:37:14.761343 containerd[2478]: 2025-10-13 05:37:14.659 [INFO][5016] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d731f0a7bbd0484c4d4c90685ce580115ee937bc88466a88bbbbc14024d621cf" Namespace="calico-system" Pod="whisker-66f487676d-9mtcf" WorkloadEndpoint="ci--4487.0.0--a--dfb3332019-k8s-whisker--66f487676d--9mtcf-eth0" Oct 13 05:37:14.761343 containerd[2478]: 2025-10-13 05:37:14.681 [INFO][5029] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d731f0a7bbd0484c4d4c90685ce580115ee937bc88466a88bbbbc14024d621cf" HandleID="k8s-pod-network.d731f0a7bbd0484c4d4c90685ce580115ee937bc88466a88bbbbc14024d621cf" Workload="ci--4487.0.0--a--dfb3332019-k8s-whisker--66f487676d--9mtcf-eth0" Oct 13 05:37:14.761583 containerd[2478]: 2025-10-13 05:37:14.681 [INFO][5029] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d731f0a7bbd0484c4d4c90685ce580115ee937bc88466a88bbbbc14024d621cf" HandleID="k8s-pod-network.d731f0a7bbd0484c4d4c90685ce580115ee937bc88466a88bbbbc14024d621cf" Workload="ci--4487.0.0--a--dfb3332019-k8s-whisker--66f487676d--9mtcf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5010), Attrs:map[string]string{"namespace":"calico-system", 
"node":"ci-4487.0.0-a-dfb3332019", "pod":"whisker-66f487676d-9mtcf", "timestamp":"2025-10-13 05:37:14.681639255 +0000 UTC"}, Hostname:"ci-4487.0.0-a-dfb3332019", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 05:37:14.761583 containerd[2478]: 2025-10-13 05:37:14.681 [INFO][5029] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 05:37:14.761583 containerd[2478]: 2025-10-13 05:37:14.681 [INFO][5029] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Oct 13 05:37:14.761583 containerd[2478]: 2025-10-13 05:37:14.681 [INFO][5029] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4487.0.0-a-dfb3332019' Oct 13 05:37:14.761583 containerd[2478]: 2025-10-13 05:37:14.686 [INFO][5029] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d731f0a7bbd0484c4d4c90685ce580115ee937bc88466a88bbbbc14024d621cf" host="ci-4487.0.0-a-dfb3332019" Oct 13 05:37:14.761583 containerd[2478]: 2025-10-13 05:37:14.689 [INFO][5029] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4487.0.0-a-dfb3332019" Oct 13 05:37:14.761583 containerd[2478]: 2025-10-13 05:37:14.695 [INFO][5029] ipam/ipam.go 511: Trying affinity for 192.168.66.64/26 host="ci-4487.0.0-a-dfb3332019" Oct 13 05:37:14.761583 containerd[2478]: 2025-10-13 05:37:14.697 [INFO][5029] ipam/ipam.go 158: Attempting to load block cidr=192.168.66.64/26 host="ci-4487.0.0-a-dfb3332019" Oct 13 05:37:14.761583 containerd[2478]: 2025-10-13 05:37:14.698 [INFO][5029] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.66.64/26 host="ci-4487.0.0-a-dfb3332019" Oct 13 05:37:14.761799 containerd[2478]: 2025-10-13 05:37:14.698 [INFO][5029] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.66.64/26 
handle="k8s-pod-network.d731f0a7bbd0484c4d4c90685ce580115ee937bc88466a88bbbbc14024d621cf" host="ci-4487.0.0-a-dfb3332019" Oct 13 05:37:14.761799 containerd[2478]: 2025-10-13 05:37:14.700 [INFO][5029] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.d731f0a7bbd0484c4d4c90685ce580115ee937bc88466a88bbbbc14024d621cf Oct 13 05:37:14.761799 containerd[2478]: 2025-10-13 05:37:14.714 [INFO][5029] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.66.64/26 handle="k8s-pod-network.d731f0a7bbd0484c4d4c90685ce580115ee937bc88466a88bbbbc14024d621cf" host="ci-4487.0.0-a-dfb3332019" Oct 13 05:37:14.761799 containerd[2478]: 2025-10-13 05:37:14.719 [INFO][5029] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.66.65/26] block=192.168.66.64/26 handle="k8s-pod-network.d731f0a7bbd0484c4d4c90685ce580115ee937bc88466a88bbbbc14024d621cf" host="ci-4487.0.0-a-dfb3332019" Oct 13 05:37:14.761799 containerd[2478]: 2025-10-13 05:37:14.719 [INFO][5029] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.66.65/26] handle="k8s-pod-network.d731f0a7bbd0484c4d4c90685ce580115ee937bc88466a88bbbbc14024d621cf" host="ci-4487.0.0-a-dfb3332019" Oct 13 05:37:14.761799 containerd[2478]: 2025-10-13 05:37:14.719 [INFO][5029] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Oct 13 05:37:14.761799 containerd[2478]: 2025-10-13 05:37:14.719 [INFO][5029] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.66.65/26] IPv6=[] ContainerID="d731f0a7bbd0484c4d4c90685ce580115ee937bc88466a88bbbbc14024d621cf" HandleID="k8s-pod-network.d731f0a7bbd0484c4d4c90685ce580115ee937bc88466a88bbbbc14024d621cf" Workload="ci--4487.0.0--a--dfb3332019-k8s-whisker--66f487676d--9mtcf-eth0" Oct 13 05:37:14.761957 containerd[2478]: 2025-10-13 05:37:14.723 [INFO][5016] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d731f0a7bbd0484c4d4c90685ce580115ee937bc88466a88bbbbc14024d621cf" Namespace="calico-system" Pod="whisker-66f487676d-9mtcf" WorkloadEndpoint="ci--4487.0.0--a--dfb3332019-k8s-whisker--66f487676d--9mtcf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487.0.0--a--dfb3332019-k8s-whisker--66f487676d--9mtcf-eth0", GenerateName:"whisker-66f487676d-", Namespace:"calico-system", SelfLink:"", UID:"00ef012a-d552-4667-96af-c4836882faad", ResourceVersion:"878", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 37, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"66f487676d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487.0.0-a-dfb3332019", ContainerID:"", Pod:"whisker-66f487676d-9mtcf", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.66.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.whisker"}, InterfaceName:"cali58a91db868f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:37:14.761957 containerd[2478]: 2025-10-13 05:37:14.723 [INFO][5016] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.66.65/32] ContainerID="d731f0a7bbd0484c4d4c90685ce580115ee937bc88466a88bbbbc14024d621cf" Namespace="calico-system" Pod="whisker-66f487676d-9mtcf" WorkloadEndpoint="ci--4487.0.0--a--dfb3332019-k8s-whisker--66f487676d--9mtcf-eth0" Oct 13 05:37:14.762048 containerd[2478]: 2025-10-13 05:37:14.723 [INFO][5016] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali58a91db868f ContainerID="d731f0a7bbd0484c4d4c90685ce580115ee937bc88466a88bbbbc14024d621cf" Namespace="calico-system" Pod="whisker-66f487676d-9mtcf" WorkloadEndpoint="ci--4487.0.0--a--dfb3332019-k8s-whisker--66f487676d--9mtcf-eth0" Oct 13 05:37:14.762048 containerd[2478]: 2025-10-13 05:37:14.744 [INFO][5016] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d731f0a7bbd0484c4d4c90685ce580115ee937bc88466a88bbbbc14024d621cf" Namespace="calico-system" Pod="whisker-66f487676d-9mtcf" WorkloadEndpoint="ci--4487.0.0--a--dfb3332019-k8s-whisker--66f487676d--9mtcf-eth0" Oct 13 05:37:14.762100 containerd[2478]: 2025-10-13 05:37:14.746 [INFO][5016] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d731f0a7bbd0484c4d4c90685ce580115ee937bc88466a88bbbbc14024d621cf" Namespace="calico-system" Pod="whisker-66f487676d-9mtcf" WorkloadEndpoint="ci--4487.0.0--a--dfb3332019-k8s-whisker--66f487676d--9mtcf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487.0.0--a--dfb3332019-k8s-whisker--66f487676d--9mtcf-eth0", GenerateName:"whisker-66f487676d-", Namespace:"calico-system", SelfLink:"", 
UID:"00ef012a-d552-4667-96af-c4836882faad", ResourceVersion:"878", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 37, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"66f487676d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487.0.0-a-dfb3332019", ContainerID:"d731f0a7bbd0484c4d4c90685ce580115ee937bc88466a88bbbbc14024d621cf", Pod:"whisker-66f487676d-9mtcf", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.66.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali58a91db868f", MAC:"6a:6b:5d:b3:be:21", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:37:14.762167 containerd[2478]: 2025-10-13 05:37:14.759 [INFO][5016] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d731f0a7bbd0484c4d4c90685ce580115ee937bc88466a88bbbbc14024d621cf" Namespace="calico-system" Pod="whisker-66f487676d-9mtcf" WorkloadEndpoint="ci--4487.0.0--a--dfb3332019-k8s-whisker--66f487676d--9mtcf-eth0" Oct 13 05:37:14.803341 containerd[2478]: time="2025-10-13T05:37:14.803250629Z" level=info msg="connecting to shim d731f0a7bbd0484c4d4c90685ce580115ee937bc88466a88bbbbc14024d621cf" address="unix:///run/containerd/s/830251c82b84bdebe1d19f85d136d2343c81e3e911a433812a62cffdd6986f6a" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:37:14.823582 systemd[1]: Started 
cri-containerd-d731f0a7bbd0484c4d4c90685ce580115ee937bc88466a88bbbbc14024d621cf.scope - libcontainer container d731f0a7bbd0484c4d4c90685ce580115ee937bc88466a88bbbbc14024d621cf. Oct 13 05:37:14.868222 containerd[2478]: time="2025-10-13T05:37:14.868196663Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-66f487676d-9mtcf,Uid:00ef012a-d552-4667-96af-c4836882faad,Namespace:calico-system,Attempt:0,} returns sandbox id \"d731f0a7bbd0484c4d4c90685ce580115ee937bc88466a88bbbbc14024d621cf\"" Oct 13 05:37:14.869504 containerd[2478]: time="2025-10-13T05:37:14.869480646Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Oct 13 05:37:15.288812 containerd[2478]: time="2025-10-13T05:37:15.288772532Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e4d1e97731be76b97308f95900dda1ffaf37fa5611f5fbe45918c3491b740b4a\" id:\"dafeb503261da8d173f0493ccf4216cffa40f9533736715756e20b0fede2fca3\" pid:5201 exit_status:1 exited_at:{seconds:1760333835 nanos:287842608}" Oct 13 05:37:15.608021 systemd-networkd[2117]: vxlan.calico: Link UP Oct 13 05:37:15.608030 systemd-networkd[2117]: vxlan.calico: Gained carrier Oct 13 05:37:15.787446 systemd-networkd[2117]: cali58a91db868f: Gained IPv6LL Oct 13 05:37:16.080575 kubelet[3925]: I1013 05:37:16.080535 3925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e25fbd1e-2e73-4593-82da-729001cdfac5" path="/var/lib/kubelet/pods/e25fbd1e-2e73-4593-82da-729001cdfac5/volumes" Oct 13 05:37:16.126058 containerd[2478]: time="2025-10-13T05:37:16.126016711Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:37:16.129329 containerd[2478]: time="2025-10-13T05:37:16.129195534Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4661291" Oct 13 05:37:16.166239 containerd[2478]: time="2025-10-13T05:37:16.166177900Z" level=info 
msg="ImageCreate event name:\"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:37:16.172295 containerd[2478]: time="2025-10-13T05:37:16.171705367Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:37:16.172295 containerd[2478]: time="2025-10-13T05:37:16.172203011Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"6153986\" in 1.302690175s" Oct 13 05:37:16.172295 containerd[2478]: time="2025-10-13T05:37:16.172229049Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\"" Oct 13 05:37:16.179217 containerd[2478]: time="2025-10-13T05:37:16.178968686Z" level=info msg="CreateContainer within sandbox \"d731f0a7bbd0484c4d4c90685ce580115ee937bc88466a88bbbbc14024d621cf\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Oct 13 05:37:16.205096 containerd[2478]: time="2025-10-13T05:37:16.205073280Z" level=info msg="Container 0f4e4ff34eef90b775a291f5c1f351198389922d0b10ef1f05fd4c8e269e75e2: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:37:16.220608 containerd[2478]: time="2025-10-13T05:37:16.220579313Z" level=info msg="CreateContainer within sandbox \"d731f0a7bbd0484c4d4c90685ce580115ee937bc88466a88bbbbc14024d621cf\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"0f4e4ff34eef90b775a291f5c1f351198389922d0b10ef1f05fd4c8e269e75e2\"" Oct 13 05:37:16.221993 containerd[2478]: 
time="2025-10-13T05:37:16.220963364Z" level=info msg="StartContainer for \"0f4e4ff34eef90b775a291f5c1f351198389922d0b10ef1f05fd4c8e269e75e2\"" Oct 13 05:37:16.222137 containerd[2478]: time="2025-10-13T05:37:16.222061265Z" level=info msg="connecting to shim 0f4e4ff34eef90b775a291f5c1f351198389922d0b10ef1f05fd4c8e269e75e2" address="unix:///run/containerd/s/830251c82b84bdebe1d19f85d136d2343c81e3e911a433812a62cffdd6986f6a" protocol=ttrpc version=3 Oct 13 05:37:16.243522 systemd[1]: Started cri-containerd-0f4e4ff34eef90b775a291f5c1f351198389922d0b10ef1f05fd4c8e269e75e2.scope - libcontainer container 0f4e4ff34eef90b775a291f5c1f351198389922d0b10ef1f05fd4c8e269e75e2. Oct 13 05:37:16.294828 containerd[2478]: time="2025-10-13T05:37:16.294807121Z" level=info msg="StartContainer for \"0f4e4ff34eef90b775a291f5c1f351198389922d0b10ef1f05fd4c8e269e75e2\" returns successfully" Oct 13 05:37:16.296896 containerd[2478]: time="2025-10-13T05:37:16.296870600Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Oct 13 05:37:16.683494 systemd-networkd[2117]: vxlan.calico: Gained IPv6LL Oct 13 05:37:18.087645 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount321445731.mount: Deactivated successfully. 
Oct 13 05:37:18.145787 containerd[2478]: time="2025-10-13T05:37:18.145748426Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:37:18.148870 containerd[2478]: time="2025-10-13T05:37:18.148775945Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=33085545" Oct 13 05:37:18.151437 containerd[2478]: time="2025-10-13T05:37:18.151411503Z" level=info msg="ImageCreate event name:\"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:37:18.159636 containerd[2478]: time="2025-10-13T05:37:18.159556813Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:37:18.160206 containerd[2478]: time="2025-10-13T05:37:18.160071594Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"33085375\" in 1.863165137s" Oct 13 05:37:18.160206 containerd[2478]: time="2025-10-13T05:37:18.160100995Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\"" Oct 13 05:37:18.170553 containerd[2478]: time="2025-10-13T05:37:18.170508900Z" level=info msg="CreateContainer within sandbox \"d731f0a7bbd0484c4d4c90685ce580115ee937bc88466a88bbbbc14024d621cf\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Oct 13 05:37:18.191075 
containerd[2478]: time="2025-10-13T05:37:18.190159849Z" level=info msg="Container f2e1423268dbaad0db09fcf001e7732783cf76e4207d3a520106540b199a2632: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:37:18.213066 containerd[2478]: time="2025-10-13T05:37:18.213039448Z" level=info msg="CreateContainer within sandbox \"d731f0a7bbd0484c4d4c90685ce580115ee937bc88466a88bbbbc14024d621cf\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"f2e1423268dbaad0db09fcf001e7732783cf76e4207d3a520106540b199a2632\"" Oct 13 05:37:18.215318 containerd[2478]: time="2025-10-13T05:37:18.213686043Z" level=info msg="StartContainer for \"f2e1423268dbaad0db09fcf001e7732783cf76e4207d3a520106540b199a2632\"" Oct 13 05:37:18.215686 containerd[2478]: time="2025-10-13T05:37:18.215657775Z" level=info msg="connecting to shim f2e1423268dbaad0db09fcf001e7732783cf76e4207d3a520106540b199a2632" address="unix:///run/containerd/s/830251c82b84bdebe1d19f85d136d2343c81e3e911a433812a62cffdd6986f6a" protocol=ttrpc version=3 Oct 13 05:37:18.235529 systemd[1]: Started cri-containerd-f2e1423268dbaad0db09fcf001e7732783cf76e4207d3a520106540b199a2632.scope - libcontainer container f2e1423268dbaad0db09fcf001e7732783cf76e4207d3a520106540b199a2632. 
Oct 13 05:37:18.282144 containerd[2478]: time="2025-10-13T05:37:18.282120189Z" level=info msg="StartContainer for \"f2e1423268dbaad0db09fcf001e7732783cf76e4207d3a520106540b199a2632\" returns successfully" Oct 13 05:37:20.095113 containerd[2478]: time="2025-10-13T05:37:20.095076757Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-wvqd2,Uid:601cb2d1-beec-49e2-8f8f-894578da294a,Namespace:kube-system,Attempt:0,}" Oct 13 05:37:20.102973 containerd[2478]: time="2025-10-13T05:37:20.102878222Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-584697f844-tfwhf,Uid:1dae4c5f-99a8-408c-a491-33da46fb7ea0,Namespace:calico-apiserver,Attempt:0,}" Oct 13 05:37:20.110009 containerd[2478]: time="2025-10-13T05:37:20.109983332Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-m85sq,Uid:810b5003-533c-4288-a549-6c361afff2ef,Namespace:kube-system,Attempt:0,}" Oct 13 05:37:20.267444 systemd-networkd[2117]: cali65627a9b6eb: Link UP Oct 13 05:37:20.267646 systemd-networkd[2117]: cali65627a9b6eb: Gained carrier Oct 13 05:37:20.282255 kubelet[3925]: I1013 05:37:20.282170 3925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-66f487676d-9mtcf" podStartSLOduration=2.990308509 podStartE2EDuration="6.282152106s" podCreationTimestamp="2025-10-13 05:37:14 +0000 UTC" firstStartedPulling="2025-10-13 05:37:14.869153571 +0000 UTC m=+38.908028574" lastFinishedPulling="2025-10-13 05:37:18.160997164 +0000 UTC m=+42.199872171" observedRunningTime="2025-10-13 05:37:19.217495452 +0000 UTC m=+43.256370506" watchObservedRunningTime="2025-10-13 05:37:20.282152106 +0000 UTC m=+44.321027125" Oct 13 05:37:20.284962 containerd[2478]: 2025-10-13 05:37:20.168 [INFO][5381] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4487.0.0--a--dfb3332019-k8s-coredns--66bc5c9577--wvqd2-eth0 coredns-66bc5c9577- kube-system 
601cb2d1-beec-49e2-8f8f-894578da294a 802 0 2025-10-13 05:36:43 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4487.0.0-a-dfb3332019 coredns-66bc5c9577-wvqd2 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali65627a9b6eb [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="eefb3a4ca686d6c6be63bbf448e55ced0c89f49ca934d18724033b7660046973" Namespace="kube-system" Pod="coredns-66bc5c9577-wvqd2" WorkloadEndpoint="ci--4487.0.0--a--dfb3332019-k8s-coredns--66bc5c9577--wvqd2-" Oct 13 05:37:20.284962 containerd[2478]: 2025-10-13 05:37:20.169 [INFO][5381] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="eefb3a4ca686d6c6be63bbf448e55ced0c89f49ca934d18724033b7660046973" Namespace="kube-system" Pod="coredns-66bc5c9577-wvqd2" WorkloadEndpoint="ci--4487.0.0--a--dfb3332019-k8s-coredns--66bc5c9577--wvqd2-eth0" Oct 13 05:37:20.284962 containerd[2478]: 2025-10-13 05:37:20.217 [INFO][5416] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="eefb3a4ca686d6c6be63bbf448e55ced0c89f49ca934d18724033b7660046973" HandleID="k8s-pod-network.eefb3a4ca686d6c6be63bbf448e55ced0c89f49ca934d18724033b7660046973" Workload="ci--4487.0.0--a--dfb3332019-k8s-coredns--66bc5c9577--wvqd2-eth0" Oct 13 05:37:20.285142 containerd[2478]: 2025-10-13 05:37:20.218 [INFO][5416] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="eefb3a4ca686d6c6be63bbf448e55ced0c89f49ca934d18724033b7660046973" HandleID="k8s-pod-network.eefb3a4ca686d6c6be63bbf448e55ced0c89f49ca934d18724033b7660046973" Workload="ci--4487.0.0--a--dfb3332019-k8s-coredns--66bc5c9577--wvqd2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d57a0), Attrs:map[string]string{"namespace":"kube-system", 
"node":"ci-4487.0.0-a-dfb3332019", "pod":"coredns-66bc5c9577-wvqd2", "timestamp":"2025-10-13 05:37:20.217954753 +0000 UTC"}, Hostname:"ci-4487.0.0-a-dfb3332019", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 05:37:20.285142 containerd[2478]: 2025-10-13 05:37:20.218 [INFO][5416] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 05:37:20.285142 containerd[2478]: 2025-10-13 05:37:20.218 [INFO][5416] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Oct 13 05:37:20.285142 containerd[2478]: 2025-10-13 05:37:20.218 [INFO][5416] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4487.0.0-a-dfb3332019' Oct 13 05:37:20.285142 containerd[2478]: 2025-10-13 05:37:20.229 [INFO][5416] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.eefb3a4ca686d6c6be63bbf448e55ced0c89f49ca934d18724033b7660046973" host="ci-4487.0.0-a-dfb3332019" Oct 13 05:37:20.285142 containerd[2478]: 2025-10-13 05:37:20.233 [INFO][5416] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4487.0.0-a-dfb3332019" Oct 13 05:37:20.285142 containerd[2478]: 2025-10-13 05:37:20.237 [INFO][5416] ipam/ipam.go 511: Trying affinity for 192.168.66.64/26 host="ci-4487.0.0-a-dfb3332019" Oct 13 05:37:20.285142 containerd[2478]: 2025-10-13 05:37:20.238 [INFO][5416] ipam/ipam.go 158: Attempting to load block cidr=192.168.66.64/26 host="ci-4487.0.0-a-dfb3332019" Oct 13 05:37:20.285142 containerd[2478]: 2025-10-13 05:37:20.241 [INFO][5416] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.66.64/26 host="ci-4487.0.0-a-dfb3332019" Oct 13 05:37:20.285406 containerd[2478]: 2025-10-13 05:37:20.241 [INFO][5416] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.66.64/26 
handle="k8s-pod-network.eefb3a4ca686d6c6be63bbf448e55ced0c89f49ca934d18724033b7660046973" host="ci-4487.0.0-a-dfb3332019" Oct 13 05:37:20.285406 containerd[2478]: 2025-10-13 05:37:20.244 [INFO][5416] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.eefb3a4ca686d6c6be63bbf448e55ced0c89f49ca934d18724033b7660046973 Oct 13 05:37:20.285406 containerd[2478]: 2025-10-13 05:37:20.253 [INFO][5416] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.66.64/26 handle="k8s-pod-network.eefb3a4ca686d6c6be63bbf448e55ced0c89f49ca934d18724033b7660046973" host="ci-4487.0.0-a-dfb3332019" Oct 13 05:37:20.285406 containerd[2478]: 2025-10-13 05:37:20.258 [INFO][5416] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.66.66/26] block=192.168.66.64/26 handle="k8s-pod-network.eefb3a4ca686d6c6be63bbf448e55ced0c89f49ca934d18724033b7660046973" host="ci-4487.0.0-a-dfb3332019" Oct 13 05:37:20.285406 containerd[2478]: 2025-10-13 05:37:20.258 [INFO][5416] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.66.66/26] handle="k8s-pod-network.eefb3a4ca686d6c6be63bbf448e55ced0c89f49ca934d18724033b7660046973" host="ci-4487.0.0-a-dfb3332019" Oct 13 05:37:20.285406 containerd[2478]: 2025-10-13 05:37:20.258 [INFO][5416] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Oct 13 05:37:20.285406 containerd[2478]: 2025-10-13 05:37:20.258 [INFO][5416] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.66.66/26] IPv6=[] ContainerID="eefb3a4ca686d6c6be63bbf448e55ced0c89f49ca934d18724033b7660046973" HandleID="k8s-pod-network.eefb3a4ca686d6c6be63bbf448e55ced0c89f49ca934d18724033b7660046973" Workload="ci--4487.0.0--a--dfb3332019-k8s-coredns--66bc5c9577--wvqd2-eth0" Oct 13 05:37:20.285559 containerd[2478]: 2025-10-13 05:37:20.261 [INFO][5381] cni-plugin/k8s.go 418: Populated endpoint ContainerID="eefb3a4ca686d6c6be63bbf448e55ced0c89f49ca934d18724033b7660046973" Namespace="kube-system" Pod="coredns-66bc5c9577-wvqd2" WorkloadEndpoint="ci--4487.0.0--a--dfb3332019-k8s-coredns--66bc5c9577--wvqd2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487.0.0--a--dfb3332019-k8s-coredns--66bc5c9577--wvqd2-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"601cb2d1-beec-49e2-8f8f-894578da294a", ResourceVersion:"802", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 36, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487.0.0-a-dfb3332019", ContainerID:"", Pod:"coredns-66bc5c9577-wvqd2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.66.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, 
InterfaceName:"cali65627a9b6eb", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:37:20.285559 containerd[2478]: 2025-10-13 05:37:20.261 [INFO][5381] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.66.66/32] ContainerID="eefb3a4ca686d6c6be63bbf448e55ced0c89f49ca934d18724033b7660046973" Namespace="kube-system" Pod="coredns-66bc5c9577-wvqd2" WorkloadEndpoint="ci--4487.0.0--a--dfb3332019-k8s-coredns--66bc5c9577--wvqd2-eth0" Oct 13 05:37:20.285559 containerd[2478]: 2025-10-13 05:37:20.261 [INFO][5381] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali65627a9b6eb ContainerID="eefb3a4ca686d6c6be63bbf448e55ced0c89f49ca934d18724033b7660046973" Namespace="kube-system" Pod="coredns-66bc5c9577-wvqd2" WorkloadEndpoint="ci--4487.0.0--a--dfb3332019-k8s-coredns--66bc5c9577--wvqd2-eth0" Oct 13 05:37:20.285559 containerd[2478]: 2025-10-13 05:37:20.267 [INFO][5381] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="eefb3a4ca686d6c6be63bbf448e55ced0c89f49ca934d18724033b7660046973" Namespace="kube-system" Pod="coredns-66bc5c9577-wvqd2" WorkloadEndpoint="ci--4487.0.0--a--dfb3332019-k8s-coredns--66bc5c9577--wvqd2-eth0" Oct 13 05:37:20.285559 
containerd[2478]: 2025-10-13 05:37:20.268 [INFO][5381] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="eefb3a4ca686d6c6be63bbf448e55ced0c89f49ca934d18724033b7660046973" Namespace="kube-system" Pod="coredns-66bc5c9577-wvqd2" WorkloadEndpoint="ci--4487.0.0--a--dfb3332019-k8s-coredns--66bc5c9577--wvqd2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487.0.0--a--dfb3332019-k8s-coredns--66bc5c9577--wvqd2-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"601cb2d1-beec-49e2-8f8f-894578da294a", ResourceVersion:"802", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 36, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487.0.0-a-dfb3332019", ContainerID:"eefb3a4ca686d6c6be63bbf448e55ced0c89f49ca934d18724033b7660046973", Pod:"coredns-66bc5c9577-wvqd2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.66.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali65627a9b6eb", MAC:"52:7f:36:07:e4:21", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, 
HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:37:20.285742 containerd[2478]: 2025-10-13 05:37:20.283 [INFO][5381] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="eefb3a4ca686d6c6be63bbf448e55ced0c89f49ca934d18724033b7660046973" Namespace="kube-system" Pod="coredns-66bc5c9577-wvqd2" WorkloadEndpoint="ci--4487.0.0--a--dfb3332019-k8s-coredns--66bc5c9577--wvqd2-eth0" Oct 13 05:37:20.340863 containerd[2478]: time="2025-10-13T05:37:20.340815378Z" level=info msg="connecting to shim eefb3a4ca686d6c6be63bbf448e55ced0c89f49ca934d18724033b7660046973" address="unix:///run/containerd/s/67c9e8433db5549ce13a0fd648d49dc7c607117a8dc4e36a381d96511b4c2994" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:37:20.369552 systemd[1]: Started cri-containerd-eefb3a4ca686d6c6be63bbf448e55ced0c89f49ca934d18724033b7660046973.scope - libcontainer container eefb3a4ca686d6c6be63bbf448e55ced0c89f49ca934d18724033b7660046973. 
Oct 13 05:37:20.386888 systemd-networkd[2117]: cali151801d432f: Link UP Oct 13 05:37:20.388073 systemd-networkd[2117]: cali151801d432f: Gained carrier Oct 13 05:37:20.412437 containerd[2478]: 2025-10-13 05:37:20.184 [INFO][5390] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4487.0.0--a--dfb3332019-k8s-calico--apiserver--584697f844--tfwhf-eth0 calico-apiserver-584697f844- calico-apiserver 1dae4c5f-99a8-408c-a491-33da46fb7ea0 805 0 2025-10-13 05:36:52 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:584697f844 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4487.0.0-a-dfb3332019 calico-apiserver-584697f844-tfwhf eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali151801d432f [] [] }} ContainerID="9bfc7674b5296789197584b08008dc20543a03372eefdeae68cc48688634907b" Namespace="calico-apiserver" Pod="calico-apiserver-584697f844-tfwhf" WorkloadEndpoint="ci--4487.0.0--a--dfb3332019-k8s-calico--apiserver--584697f844--tfwhf-" Oct 13 05:37:20.412437 containerd[2478]: 2025-10-13 05:37:20.184 [INFO][5390] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9bfc7674b5296789197584b08008dc20543a03372eefdeae68cc48688634907b" Namespace="calico-apiserver" Pod="calico-apiserver-584697f844-tfwhf" WorkloadEndpoint="ci--4487.0.0--a--dfb3332019-k8s-calico--apiserver--584697f844--tfwhf-eth0" Oct 13 05:37:20.412437 containerd[2478]: 2025-10-13 05:37:20.226 [INFO][5422] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9bfc7674b5296789197584b08008dc20543a03372eefdeae68cc48688634907b" HandleID="k8s-pod-network.9bfc7674b5296789197584b08008dc20543a03372eefdeae68cc48688634907b" Workload="ci--4487.0.0--a--dfb3332019-k8s-calico--apiserver--584697f844--tfwhf-eth0" Oct 13 
05:37:20.412437 containerd[2478]: 2025-10-13 05:37:20.227 [INFO][5422] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9bfc7674b5296789197584b08008dc20543a03372eefdeae68cc48688634907b" HandleID="k8s-pod-network.9bfc7674b5296789197584b08008dc20543a03372eefdeae68cc48688634907b" Workload="ci--4487.0.0--a--dfb3332019-k8s-calico--apiserver--584697f844--tfwhf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cd080), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4487.0.0-a-dfb3332019", "pod":"calico-apiserver-584697f844-tfwhf", "timestamp":"2025-10-13 05:37:20.226750892 +0000 UTC"}, Hostname:"ci-4487.0.0-a-dfb3332019", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 05:37:20.412437 containerd[2478]: 2025-10-13 05:37:20.227 [INFO][5422] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 05:37:20.412437 containerd[2478]: 2025-10-13 05:37:20.258 [INFO][5422] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Oct 13 05:37:20.412437 containerd[2478]: 2025-10-13 05:37:20.258 [INFO][5422] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4487.0.0-a-dfb3332019' Oct 13 05:37:20.412437 containerd[2478]: 2025-10-13 05:37:20.328 [INFO][5422] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9bfc7674b5296789197584b08008dc20543a03372eefdeae68cc48688634907b" host="ci-4487.0.0-a-dfb3332019" Oct 13 05:37:20.412437 containerd[2478]: 2025-10-13 05:37:20.339 [INFO][5422] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4487.0.0-a-dfb3332019" Oct 13 05:37:20.412437 containerd[2478]: 2025-10-13 05:37:20.344 [INFO][5422] ipam/ipam.go 511: Trying affinity for 192.168.66.64/26 host="ci-4487.0.0-a-dfb3332019" Oct 13 05:37:20.412437 containerd[2478]: 2025-10-13 05:37:20.346 [INFO][5422] ipam/ipam.go 158: Attempting to load block cidr=192.168.66.64/26 host="ci-4487.0.0-a-dfb3332019" Oct 13 05:37:20.412437 containerd[2478]: 2025-10-13 05:37:20.351 [INFO][5422] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.66.64/26 host="ci-4487.0.0-a-dfb3332019" Oct 13 05:37:20.412437 containerd[2478]: 2025-10-13 05:37:20.351 [INFO][5422] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.66.64/26 handle="k8s-pod-network.9bfc7674b5296789197584b08008dc20543a03372eefdeae68cc48688634907b" host="ci-4487.0.0-a-dfb3332019" Oct 13 05:37:20.412437 containerd[2478]: 2025-10-13 05:37:20.353 [INFO][5422] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.9bfc7674b5296789197584b08008dc20543a03372eefdeae68cc48688634907b Oct 13 05:37:20.412437 containerd[2478]: 2025-10-13 05:37:20.358 [INFO][5422] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.66.64/26 handle="k8s-pod-network.9bfc7674b5296789197584b08008dc20543a03372eefdeae68cc48688634907b" host="ci-4487.0.0-a-dfb3332019" Oct 13 05:37:20.412437 containerd[2478]: 2025-10-13 05:37:20.372 [INFO][5422] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.66.67/26] block=192.168.66.64/26 handle="k8s-pod-network.9bfc7674b5296789197584b08008dc20543a03372eefdeae68cc48688634907b" host="ci-4487.0.0-a-dfb3332019" Oct 13 05:37:20.412437 containerd[2478]: 2025-10-13 05:37:20.372 [INFO][5422] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.66.67/26] handle="k8s-pod-network.9bfc7674b5296789197584b08008dc20543a03372eefdeae68cc48688634907b" host="ci-4487.0.0-a-dfb3332019" Oct 13 05:37:20.412437 containerd[2478]: 2025-10-13 05:37:20.372 [INFO][5422] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Oct 13 05:37:20.412437 containerd[2478]: 2025-10-13 05:37:20.372 [INFO][5422] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.66.67/26] IPv6=[] ContainerID="9bfc7674b5296789197584b08008dc20543a03372eefdeae68cc48688634907b" HandleID="k8s-pod-network.9bfc7674b5296789197584b08008dc20543a03372eefdeae68cc48688634907b" Workload="ci--4487.0.0--a--dfb3332019-k8s-calico--apiserver--584697f844--tfwhf-eth0" Oct 13 05:37:20.413020 containerd[2478]: 2025-10-13 05:37:20.376 [INFO][5390] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9bfc7674b5296789197584b08008dc20543a03372eefdeae68cc48688634907b" Namespace="calico-apiserver" Pod="calico-apiserver-584697f844-tfwhf" WorkloadEndpoint="ci--4487.0.0--a--dfb3332019-k8s-calico--apiserver--584697f844--tfwhf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487.0.0--a--dfb3332019-k8s-calico--apiserver--584697f844--tfwhf-eth0", GenerateName:"calico-apiserver-584697f844-", Namespace:"calico-apiserver", SelfLink:"", UID:"1dae4c5f-99a8-408c-a491-33da46fb7ea0", ResourceVersion:"805", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 36, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"584697f844", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487.0.0-a-dfb3332019", ContainerID:"", Pod:"calico-apiserver-584697f844-tfwhf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.66.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali151801d432f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:37:20.413020 containerd[2478]: 2025-10-13 05:37:20.377 [INFO][5390] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.66.67/32] ContainerID="9bfc7674b5296789197584b08008dc20543a03372eefdeae68cc48688634907b" Namespace="calico-apiserver" Pod="calico-apiserver-584697f844-tfwhf" WorkloadEndpoint="ci--4487.0.0--a--dfb3332019-k8s-calico--apiserver--584697f844--tfwhf-eth0" Oct 13 05:37:20.413020 containerd[2478]: 2025-10-13 05:37:20.377 [INFO][5390] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali151801d432f ContainerID="9bfc7674b5296789197584b08008dc20543a03372eefdeae68cc48688634907b" Namespace="calico-apiserver" Pod="calico-apiserver-584697f844-tfwhf" WorkloadEndpoint="ci--4487.0.0--a--dfb3332019-k8s-calico--apiserver--584697f844--tfwhf-eth0" Oct 13 05:37:20.413020 containerd[2478]: 2025-10-13 05:37:20.388 [INFO][5390] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9bfc7674b5296789197584b08008dc20543a03372eefdeae68cc48688634907b" Namespace="calico-apiserver" Pod="calico-apiserver-584697f844-tfwhf" 
WorkloadEndpoint="ci--4487.0.0--a--dfb3332019-k8s-calico--apiserver--584697f844--tfwhf-eth0" Oct 13 05:37:20.413020 containerd[2478]: 2025-10-13 05:37:20.389 [INFO][5390] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9bfc7674b5296789197584b08008dc20543a03372eefdeae68cc48688634907b" Namespace="calico-apiserver" Pod="calico-apiserver-584697f844-tfwhf" WorkloadEndpoint="ci--4487.0.0--a--dfb3332019-k8s-calico--apiserver--584697f844--tfwhf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487.0.0--a--dfb3332019-k8s-calico--apiserver--584697f844--tfwhf-eth0", GenerateName:"calico-apiserver-584697f844-", Namespace:"calico-apiserver", SelfLink:"", UID:"1dae4c5f-99a8-408c-a491-33da46fb7ea0", ResourceVersion:"805", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 36, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"584697f844", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487.0.0-a-dfb3332019", ContainerID:"9bfc7674b5296789197584b08008dc20543a03372eefdeae68cc48688634907b", Pod:"calico-apiserver-584697f844-tfwhf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.66.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali151801d432f", MAC:"be:30:e7:da:4b:38", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:37:20.413020 containerd[2478]: 2025-10-13 05:37:20.408 [INFO][5390] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9bfc7674b5296789197584b08008dc20543a03372eefdeae68cc48688634907b" Namespace="calico-apiserver" Pod="calico-apiserver-584697f844-tfwhf" WorkloadEndpoint="ci--4487.0.0--a--dfb3332019-k8s-calico--apiserver--584697f844--tfwhf-eth0" Oct 13 05:37:20.442598 containerd[2478]: time="2025-10-13T05:37:20.442511652Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-wvqd2,Uid:601cb2d1-beec-49e2-8f8f-894578da294a,Namespace:kube-system,Attempt:0,} returns sandbox id \"eefb3a4ca686d6c6be63bbf448e55ced0c89f49ca934d18724033b7660046973\"" Oct 13 05:37:20.453032 containerd[2478]: time="2025-10-13T05:37:20.452926487Z" level=info msg="CreateContainer within sandbox \"eefb3a4ca686d6c6be63bbf448e55ced0c89f49ca934d18724033b7660046973\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 13 05:37:20.462924 containerd[2478]: time="2025-10-13T05:37:20.462866325Z" level=info msg="connecting to shim 9bfc7674b5296789197584b08008dc20543a03372eefdeae68cc48688634907b" address="unix:///run/containerd/s/f2326a6d7fcee1082e89be0a8793acf0b77aef88a65502f74ef0cc36787e4096" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:37:20.485158 systemd-networkd[2117]: cali5c4b1e3811a: Link UP Oct 13 05:37:20.486519 systemd[1]: Started cri-containerd-9bfc7674b5296789197584b08008dc20543a03372eefdeae68cc48688634907b.scope - libcontainer container 9bfc7674b5296789197584b08008dc20543a03372eefdeae68cc48688634907b. 
Oct 13 05:37:20.489002 systemd-networkd[2117]: cali5c4b1e3811a: Gained carrier Oct 13 05:37:20.496347 containerd[2478]: time="2025-10-13T05:37:20.496213279Z" level=info msg="Container b6c5ccfdf4111319c519e09dda1fbaea425ef133250fdde282ba6693fd2db976: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:37:20.510729 containerd[2478]: 2025-10-13 05:37:20.204 [INFO][5401] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4487.0.0--a--dfb3332019-k8s-coredns--66bc5c9577--m85sq-eth0 coredns-66bc5c9577- kube-system 810b5003-533c-4288-a549-6c361afff2ef 808 0 2025-10-13 05:36:43 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4487.0.0-a-dfb3332019 coredns-66bc5c9577-m85sq eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali5c4b1e3811a [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="3a8cd9e4faae007ba322f0a1bd5325e2d0fa306131ac45126b368ce8eb484bb0" Namespace="kube-system" Pod="coredns-66bc5c9577-m85sq" WorkloadEndpoint="ci--4487.0.0--a--dfb3332019-k8s-coredns--66bc5c9577--m85sq-" Oct 13 05:37:20.510729 containerd[2478]: 2025-10-13 05:37:20.206 [INFO][5401] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3a8cd9e4faae007ba322f0a1bd5325e2d0fa306131ac45126b368ce8eb484bb0" Namespace="kube-system" Pod="coredns-66bc5c9577-m85sq" WorkloadEndpoint="ci--4487.0.0--a--dfb3332019-k8s-coredns--66bc5c9577--m85sq-eth0" Oct 13 05:37:20.510729 containerd[2478]: 2025-10-13 05:37:20.244 [INFO][5430] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3a8cd9e4faae007ba322f0a1bd5325e2d0fa306131ac45126b368ce8eb484bb0" HandleID="k8s-pod-network.3a8cd9e4faae007ba322f0a1bd5325e2d0fa306131ac45126b368ce8eb484bb0" 
Workload="ci--4487.0.0--a--dfb3332019-k8s-coredns--66bc5c9577--m85sq-eth0" Oct 13 05:37:20.510729 containerd[2478]: 2025-10-13 05:37:20.244 [INFO][5430] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3a8cd9e4faae007ba322f0a1bd5325e2d0fa306131ac45126b368ce8eb484bb0" HandleID="k8s-pod-network.3a8cd9e4faae007ba322f0a1bd5325e2d0fa306131ac45126b368ce8eb484bb0" Workload="ci--4487.0.0--a--dfb3332019-k8s-coredns--66bc5c9577--m85sq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5600), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4487.0.0-a-dfb3332019", "pod":"coredns-66bc5c9577-m85sq", "timestamp":"2025-10-13 05:37:20.244687817 +0000 UTC"}, Hostname:"ci-4487.0.0-a-dfb3332019", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 05:37:20.510729 containerd[2478]: 2025-10-13 05:37:20.244 [INFO][5430] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 05:37:20.510729 containerd[2478]: 2025-10-13 05:37:20.372 [INFO][5430] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Oct 13 05:37:20.510729 containerd[2478]: 2025-10-13 05:37:20.372 [INFO][5430] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4487.0.0-a-dfb3332019' Oct 13 05:37:20.510729 containerd[2478]: 2025-10-13 05:37:20.429 [INFO][5430] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3a8cd9e4faae007ba322f0a1bd5325e2d0fa306131ac45126b368ce8eb484bb0" host="ci-4487.0.0-a-dfb3332019" Oct 13 05:37:20.510729 containerd[2478]: 2025-10-13 05:37:20.443 [INFO][5430] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4487.0.0-a-dfb3332019" Oct 13 05:37:20.510729 containerd[2478]: 2025-10-13 05:37:20.448 [INFO][5430] ipam/ipam.go 511: Trying affinity for 192.168.66.64/26 host="ci-4487.0.0-a-dfb3332019" Oct 13 05:37:20.510729 containerd[2478]: 2025-10-13 05:37:20.450 [INFO][5430] ipam/ipam.go 158: Attempting to load block cidr=192.168.66.64/26 host="ci-4487.0.0-a-dfb3332019" Oct 13 05:37:20.510729 containerd[2478]: 2025-10-13 05:37:20.453 [INFO][5430] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.66.64/26 host="ci-4487.0.0-a-dfb3332019" Oct 13 05:37:20.510729 containerd[2478]: 2025-10-13 05:37:20.453 [INFO][5430] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.66.64/26 handle="k8s-pod-network.3a8cd9e4faae007ba322f0a1bd5325e2d0fa306131ac45126b368ce8eb484bb0" host="ci-4487.0.0-a-dfb3332019" Oct 13 05:37:20.510729 containerd[2478]: 2025-10-13 05:37:20.454 [INFO][5430] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.3a8cd9e4faae007ba322f0a1bd5325e2d0fa306131ac45126b368ce8eb484bb0 Oct 13 05:37:20.510729 containerd[2478]: 2025-10-13 05:37:20.464 [INFO][5430] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.66.64/26 handle="k8s-pod-network.3a8cd9e4faae007ba322f0a1bd5325e2d0fa306131ac45126b368ce8eb484bb0" host="ci-4487.0.0-a-dfb3332019" Oct 13 05:37:20.510729 containerd[2478]: 2025-10-13 05:37:20.475 [INFO][5430] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.66.68/26] block=192.168.66.64/26 handle="k8s-pod-network.3a8cd9e4faae007ba322f0a1bd5325e2d0fa306131ac45126b368ce8eb484bb0" host="ci-4487.0.0-a-dfb3332019" Oct 13 05:37:20.510729 containerd[2478]: 2025-10-13 05:37:20.475 [INFO][5430] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.66.68/26] handle="k8s-pod-network.3a8cd9e4faae007ba322f0a1bd5325e2d0fa306131ac45126b368ce8eb484bb0" host="ci-4487.0.0-a-dfb3332019" Oct 13 05:37:20.510729 containerd[2478]: 2025-10-13 05:37:20.475 [INFO][5430] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Oct 13 05:37:20.510729 containerd[2478]: 2025-10-13 05:37:20.475 [INFO][5430] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.66.68/26] IPv6=[] ContainerID="3a8cd9e4faae007ba322f0a1bd5325e2d0fa306131ac45126b368ce8eb484bb0" HandleID="k8s-pod-network.3a8cd9e4faae007ba322f0a1bd5325e2d0fa306131ac45126b368ce8eb484bb0" Workload="ci--4487.0.0--a--dfb3332019-k8s-coredns--66bc5c9577--m85sq-eth0" Oct 13 05:37:20.514231 containerd[2478]: 2025-10-13 05:37:20.477 [INFO][5401] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3a8cd9e4faae007ba322f0a1bd5325e2d0fa306131ac45126b368ce8eb484bb0" Namespace="kube-system" Pod="coredns-66bc5c9577-m85sq" WorkloadEndpoint="ci--4487.0.0--a--dfb3332019-k8s-coredns--66bc5c9577--m85sq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487.0.0--a--dfb3332019-k8s-coredns--66bc5c9577--m85sq-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"810b5003-533c-4288-a549-6c361afff2ef", ResourceVersion:"808", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 36, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487.0.0-a-dfb3332019", ContainerID:"", Pod:"coredns-66bc5c9577-m85sq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.66.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5c4b1e3811a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:37:20.514231 containerd[2478]: 2025-10-13 05:37:20.477 [INFO][5401] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.66.68/32] ContainerID="3a8cd9e4faae007ba322f0a1bd5325e2d0fa306131ac45126b368ce8eb484bb0" Namespace="kube-system" Pod="coredns-66bc5c9577-m85sq" WorkloadEndpoint="ci--4487.0.0--a--dfb3332019-k8s-coredns--66bc5c9577--m85sq-eth0" Oct 13 05:37:20.514231 containerd[2478]: 2025-10-13 05:37:20.477 [INFO][5401] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5c4b1e3811a 
ContainerID="3a8cd9e4faae007ba322f0a1bd5325e2d0fa306131ac45126b368ce8eb484bb0" Namespace="kube-system" Pod="coredns-66bc5c9577-m85sq" WorkloadEndpoint="ci--4487.0.0--a--dfb3332019-k8s-coredns--66bc5c9577--m85sq-eth0" Oct 13 05:37:20.514231 containerd[2478]: 2025-10-13 05:37:20.492 [INFO][5401] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3a8cd9e4faae007ba322f0a1bd5325e2d0fa306131ac45126b368ce8eb484bb0" Namespace="kube-system" Pod="coredns-66bc5c9577-m85sq" WorkloadEndpoint="ci--4487.0.0--a--dfb3332019-k8s-coredns--66bc5c9577--m85sq-eth0" Oct 13 05:37:20.514231 containerd[2478]: 2025-10-13 05:37:20.492 [INFO][5401] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3a8cd9e4faae007ba322f0a1bd5325e2d0fa306131ac45126b368ce8eb484bb0" Namespace="kube-system" Pod="coredns-66bc5c9577-m85sq" WorkloadEndpoint="ci--4487.0.0--a--dfb3332019-k8s-coredns--66bc5c9577--m85sq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487.0.0--a--dfb3332019-k8s-coredns--66bc5c9577--m85sq-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"810b5003-533c-4288-a549-6c361afff2ef", ResourceVersion:"808", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 36, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487.0.0-a-dfb3332019", ContainerID:"3a8cd9e4faae007ba322f0a1bd5325e2d0fa306131ac45126b368ce8eb484bb0", 
Pod:"coredns-66bc5c9577-m85sq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.66.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5c4b1e3811a", MAC:"3e:99:65:35:04:e9", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:37:20.514524 containerd[2478]: 2025-10-13 05:37:20.507 [INFO][5401] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3a8cd9e4faae007ba322f0a1bd5325e2d0fa306131ac45126b368ce8eb484bb0" Namespace="kube-system" Pod="coredns-66bc5c9577-m85sq" WorkloadEndpoint="ci--4487.0.0--a--dfb3332019-k8s-coredns--66bc5c9577--m85sq-eth0" Oct 13 05:37:20.517599 containerd[2478]: time="2025-10-13T05:37:20.517170129Z" level=info msg="CreateContainer within sandbox \"eefb3a4ca686d6c6be63bbf448e55ced0c89f49ca934d18724033b7660046973\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b6c5ccfdf4111319c519e09dda1fbaea425ef133250fdde282ba6693fd2db976\"" Oct 13 05:37:20.519138 containerd[2478]: time="2025-10-13T05:37:20.518809589Z" level=info msg="StartContainer for 
\"b6c5ccfdf4111319c519e09dda1fbaea425ef133250fdde282ba6693fd2db976\"" Oct 13 05:37:20.520963 containerd[2478]: time="2025-10-13T05:37:20.520901649Z" level=info msg="connecting to shim b6c5ccfdf4111319c519e09dda1fbaea425ef133250fdde282ba6693fd2db976" address="unix:///run/containerd/s/67c9e8433db5549ce13a0fd648d49dc7c607117a8dc4e36a381d96511b4c2994" protocol=ttrpc version=3 Oct 13 05:37:20.540682 systemd[1]: Started cri-containerd-b6c5ccfdf4111319c519e09dda1fbaea425ef133250fdde282ba6693fd2db976.scope - libcontainer container b6c5ccfdf4111319c519e09dda1fbaea425ef133250fdde282ba6693fd2db976. Oct 13 05:37:20.565395 containerd[2478]: time="2025-10-13T05:37:20.565003214Z" level=info msg="connecting to shim 3a8cd9e4faae007ba322f0a1bd5325e2d0fa306131ac45126b368ce8eb484bb0" address="unix:///run/containerd/s/bb5dfebcc25d24e553cd06933f9b3c58fccee52434a98cfa09f7b20cbfc958ba" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:37:20.585340 containerd[2478]: time="2025-10-13T05:37:20.585315526Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-584697f844-tfwhf,Uid:1dae4c5f-99a8-408c-a491-33da46fb7ea0,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"9bfc7674b5296789197584b08008dc20543a03372eefdeae68cc48688634907b\"" Oct 13 05:37:20.596542 containerd[2478]: time="2025-10-13T05:37:20.596517531Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Oct 13 05:37:20.604700 containerd[2478]: time="2025-10-13T05:37:20.604583837Z" level=info msg="StartContainer for \"b6c5ccfdf4111319c519e09dda1fbaea425ef133250fdde282ba6693fd2db976\" returns successfully" Oct 13 05:37:20.604650 systemd[1]: Started cri-containerd-3a8cd9e4faae007ba322f0a1bd5325e2d0fa306131ac45126b368ce8eb484bb0.scope - libcontainer container 3a8cd9e4faae007ba322f0a1bd5325e2d0fa306131ac45126b368ce8eb484bb0. 
Oct 13 05:37:20.662468 containerd[2478]: time="2025-10-13T05:37:20.661869412Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-m85sq,Uid:810b5003-533c-4288-a549-6c361afff2ef,Namespace:kube-system,Attempt:0,} returns sandbox id \"3a8cd9e4faae007ba322f0a1bd5325e2d0fa306131ac45126b368ce8eb484bb0\"" Oct 13 05:37:20.670837 containerd[2478]: time="2025-10-13T05:37:20.670813797Z" level=info msg="CreateContainer within sandbox \"3a8cd9e4faae007ba322f0a1bd5325e2d0fa306131ac45126b368ce8eb484bb0\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 13 05:37:20.690393 containerd[2478]: time="2025-10-13T05:37:20.689823335Z" level=info msg="Container 7563b3829ea62e56a34d2e4d28a6e2a48691e8d0061903014529546593d5f14e: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:37:20.702216 containerd[2478]: time="2025-10-13T05:37:20.702197278Z" level=info msg="CreateContainer within sandbox \"3a8cd9e4faae007ba322f0a1bd5325e2d0fa306131ac45126b368ce8eb484bb0\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7563b3829ea62e56a34d2e4d28a6e2a48691e8d0061903014529546593d5f14e\"" Oct 13 05:37:20.702847 containerd[2478]: time="2025-10-13T05:37:20.702824793Z" level=info msg="StartContainer for \"7563b3829ea62e56a34d2e4d28a6e2a48691e8d0061903014529546593d5f14e\"" Oct 13 05:37:20.703939 containerd[2478]: time="2025-10-13T05:37:20.703902579Z" level=info msg="connecting to shim 7563b3829ea62e56a34d2e4d28a6e2a48691e8d0061903014529546593d5f14e" address="unix:///run/containerd/s/bb5dfebcc25d24e553cd06933f9b3c58fccee52434a98cfa09f7b20cbfc958ba" protocol=ttrpc version=3 Oct 13 05:37:20.721513 systemd[1]: Started cri-containerd-7563b3829ea62e56a34d2e4d28a6e2a48691e8d0061903014529546593d5f14e.scope - libcontainer container 7563b3829ea62e56a34d2e4d28a6e2a48691e8d0061903014529546593d5f14e. 
Oct 13 05:37:20.748179 containerd[2478]: time="2025-10-13T05:37:20.748155797Z" level=info msg="StartContainer for \"7563b3829ea62e56a34d2e4d28a6e2a48691e8d0061903014529546593d5f14e\" returns successfully" Oct 13 05:37:21.082168 containerd[2478]: time="2025-10-13T05:37:21.082064432Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7f765c785d-rjdbj,Uid:0158f1dc-dce0-4a10-aa34-11a88f1f819b,Namespace:calico-system,Attempt:0,}" Oct 13 05:37:21.087547 containerd[2478]: time="2025-10-13T05:37:21.087519928Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kqvx9,Uid:dd86b0ea-d7f3-411b-ad14-d1033bcd0923,Namespace:calico-system,Attempt:0,}" Oct 13 05:37:21.092054 containerd[2478]: time="2025-10-13T05:37:21.091826006Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-584697f844-b5x8f,Uid:e1e43d81-d557-49f8-8ca0-cf087b06bd61,Namespace:calico-apiserver,Attempt:0,}" Oct 13 05:37:21.242141 systemd-networkd[2117]: cali2c2616be23f: Link UP Oct 13 05:37:21.242329 systemd-networkd[2117]: cali2c2616be23f: Gained carrier Oct 13 05:37:21.285584 containerd[2478]: 2025-10-13 05:37:21.153 [INFO][5670] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4487.0.0--a--dfb3332019-k8s-csi--node--driver--kqvx9-eth0 csi-node-driver- calico-system dd86b0ea-d7f3-411b-ad14-d1033bcd0923 694 0 2025-10-13 05:36:55 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:f8549cf5c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4487.0.0-a-dfb3332019 csi-node-driver-kqvx9 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali2c2616be23f [] [] }} ContainerID="d495d11331c6f97e8d6107e54ad15e1b70a2259a33f8eccf83ae2be00097345c" 
Namespace="calico-system" Pod="csi-node-driver-kqvx9" WorkloadEndpoint="ci--4487.0.0--a--dfb3332019-k8s-csi--node--driver--kqvx9-" Oct 13 05:37:21.285584 containerd[2478]: 2025-10-13 05:37:21.153 [INFO][5670] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d495d11331c6f97e8d6107e54ad15e1b70a2259a33f8eccf83ae2be00097345c" Namespace="calico-system" Pod="csi-node-driver-kqvx9" WorkloadEndpoint="ci--4487.0.0--a--dfb3332019-k8s-csi--node--driver--kqvx9-eth0" Oct 13 05:37:21.285584 containerd[2478]: 2025-10-13 05:37:21.191 [INFO][5706] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d495d11331c6f97e8d6107e54ad15e1b70a2259a33f8eccf83ae2be00097345c" HandleID="k8s-pod-network.d495d11331c6f97e8d6107e54ad15e1b70a2259a33f8eccf83ae2be00097345c" Workload="ci--4487.0.0--a--dfb3332019-k8s-csi--node--driver--kqvx9-eth0" Oct 13 05:37:21.285584 containerd[2478]: 2025-10-13 05:37:21.191 [INFO][5706] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d495d11331c6f97e8d6107e54ad15e1b70a2259a33f8eccf83ae2be00097345c" HandleID="k8s-pod-network.d495d11331c6f97e8d6107e54ad15e1b70a2259a33f8eccf83ae2be00097345c" Workload="ci--4487.0.0--a--dfb3332019-k8s-csi--node--driver--kqvx9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f190), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4487.0.0-a-dfb3332019", "pod":"csi-node-driver-kqvx9", "timestamp":"2025-10-13 05:37:21.191304007 +0000 UTC"}, Hostname:"ci-4487.0.0-a-dfb3332019", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 05:37:21.285584 containerd[2478]: 2025-10-13 05:37:21.191 [INFO][5706] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Oct 13 05:37:21.285584 containerd[2478]: 2025-10-13 05:37:21.191 [INFO][5706] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Oct 13 05:37:21.285584 containerd[2478]: 2025-10-13 05:37:21.191 [INFO][5706] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4487.0.0-a-dfb3332019' Oct 13 05:37:21.285584 containerd[2478]: 2025-10-13 05:37:21.198 [INFO][5706] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d495d11331c6f97e8d6107e54ad15e1b70a2259a33f8eccf83ae2be00097345c" host="ci-4487.0.0-a-dfb3332019" Oct 13 05:37:21.285584 containerd[2478]: 2025-10-13 05:37:21.203 [INFO][5706] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4487.0.0-a-dfb3332019" Oct 13 05:37:21.285584 containerd[2478]: 2025-10-13 05:37:21.207 [INFO][5706] ipam/ipam.go 511: Trying affinity for 192.168.66.64/26 host="ci-4487.0.0-a-dfb3332019" Oct 13 05:37:21.285584 containerd[2478]: 2025-10-13 05:37:21.209 [INFO][5706] ipam/ipam.go 158: Attempting to load block cidr=192.168.66.64/26 host="ci-4487.0.0-a-dfb3332019" Oct 13 05:37:21.285584 containerd[2478]: 2025-10-13 05:37:21.212 [INFO][5706] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.66.64/26 host="ci-4487.0.0-a-dfb3332019" Oct 13 05:37:21.285584 containerd[2478]: 2025-10-13 05:37:21.212 [INFO][5706] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.66.64/26 handle="k8s-pod-network.d495d11331c6f97e8d6107e54ad15e1b70a2259a33f8eccf83ae2be00097345c" host="ci-4487.0.0-a-dfb3332019" Oct 13 05:37:21.285584 containerd[2478]: 2025-10-13 05:37:21.213 [INFO][5706] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.d495d11331c6f97e8d6107e54ad15e1b70a2259a33f8eccf83ae2be00097345c Oct 13 05:37:21.285584 containerd[2478]: 2025-10-13 05:37:21.221 [INFO][5706] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.66.64/26 handle="k8s-pod-network.d495d11331c6f97e8d6107e54ad15e1b70a2259a33f8eccf83ae2be00097345c" 
host="ci-4487.0.0-a-dfb3332019" Oct 13 05:37:21.285584 containerd[2478]: 2025-10-13 05:37:21.234 [INFO][5706] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.66.69/26] block=192.168.66.64/26 handle="k8s-pod-network.d495d11331c6f97e8d6107e54ad15e1b70a2259a33f8eccf83ae2be00097345c" host="ci-4487.0.0-a-dfb3332019" Oct 13 05:37:21.285584 containerd[2478]: 2025-10-13 05:37:21.234 [INFO][5706] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.66.69/26] handle="k8s-pod-network.d495d11331c6f97e8d6107e54ad15e1b70a2259a33f8eccf83ae2be00097345c" host="ci-4487.0.0-a-dfb3332019" Oct 13 05:37:21.285584 containerd[2478]: 2025-10-13 05:37:21.235 [INFO][5706] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Oct 13 05:37:21.285584 containerd[2478]: 2025-10-13 05:37:21.235 [INFO][5706] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.66.69/26] IPv6=[] ContainerID="d495d11331c6f97e8d6107e54ad15e1b70a2259a33f8eccf83ae2be00097345c" HandleID="k8s-pod-network.d495d11331c6f97e8d6107e54ad15e1b70a2259a33f8eccf83ae2be00097345c" Workload="ci--4487.0.0--a--dfb3332019-k8s-csi--node--driver--kqvx9-eth0" Oct 13 05:37:21.286337 containerd[2478]: 2025-10-13 05:37:21.237 [INFO][5670] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d495d11331c6f97e8d6107e54ad15e1b70a2259a33f8eccf83ae2be00097345c" Namespace="calico-system" Pod="csi-node-driver-kqvx9" WorkloadEndpoint="ci--4487.0.0--a--dfb3332019-k8s-csi--node--driver--kqvx9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487.0.0--a--dfb3332019-k8s-csi--node--driver--kqvx9-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"dd86b0ea-d7f3-411b-ad14-d1033bcd0923", ResourceVersion:"694", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 36, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"f8549cf5c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487.0.0-a-dfb3332019", ContainerID:"", Pod:"csi-node-driver-kqvx9", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.66.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali2c2616be23f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:37:21.286337 containerd[2478]: 2025-10-13 05:37:21.237 [INFO][5670] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.66.69/32] ContainerID="d495d11331c6f97e8d6107e54ad15e1b70a2259a33f8eccf83ae2be00097345c" Namespace="calico-system" Pod="csi-node-driver-kqvx9" WorkloadEndpoint="ci--4487.0.0--a--dfb3332019-k8s-csi--node--driver--kqvx9-eth0" Oct 13 05:37:21.286337 containerd[2478]: 2025-10-13 05:37:21.237 [INFO][5670] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2c2616be23f ContainerID="d495d11331c6f97e8d6107e54ad15e1b70a2259a33f8eccf83ae2be00097345c" Namespace="calico-system" Pod="csi-node-driver-kqvx9" WorkloadEndpoint="ci--4487.0.0--a--dfb3332019-k8s-csi--node--driver--kqvx9-eth0" Oct 13 05:37:21.286337 containerd[2478]: 2025-10-13 05:37:21.241 [INFO][5670] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d495d11331c6f97e8d6107e54ad15e1b70a2259a33f8eccf83ae2be00097345c" Namespace="calico-system" 
Pod="csi-node-driver-kqvx9" WorkloadEndpoint="ci--4487.0.0--a--dfb3332019-k8s-csi--node--driver--kqvx9-eth0" Oct 13 05:37:21.286337 containerd[2478]: 2025-10-13 05:37:21.245 [INFO][5670] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d495d11331c6f97e8d6107e54ad15e1b70a2259a33f8eccf83ae2be00097345c" Namespace="calico-system" Pod="csi-node-driver-kqvx9" WorkloadEndpoint="ci--4487.0.0--a--dfb3332019-k8s-csi--node--driver--kqvx9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487.0.0--a--dfb3332019-k8s-csi--node--driver--kqvx9-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"dd86b0ea-d7f3-411b-ad14-d1033bcd0923", ResourceVersion:"694", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 36, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"f8549cf5c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487.0.0-a-dfb3332019", ContainerID:"d495d11331c6f97e8d6107e54ad15e1b70a2259a33f8eccf83ae2be00097345c", Pod:"csi-node-driver-kqvx9", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.66.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali2c2616be23f", MAC:"2a:6e:f9:5b:b1:d2", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:37:21.286337 containerd[2478]: 2025-10-13 05:37:21.283 [INFO][5670] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d495d11331c6f97e8d6107e54ad15e1b70a2259a33f8eccf83ae2be00097345c" Namespace="calico-system" Pod="csi-node-driver-kqvx9" WorkloadEndpoint="ci--4487.0.0--a--dfb3332019-k8s-csi--node--driver--kqvx9-eth0" Oct 13 05:37:21.288062 kubelet[3925]: I1013 05:37:21.287616 3925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-m85sq" podStartSLOduration=38.28759869 podStartE2EDuration="38.28759869s" podCreationTimestamp="2025-10-13 05:36:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 05:37:21.250543281 +0000 UTC m=+45.289418318" watchObservedRunningTime="2025-10-13 05:37:21.28759869 +0000 UTC m=+45.326473694" Oct 13 05:37:21.357607 containerd[2478]: time="2025-10-13T05:37:21.357502265Z" level=info msg="connecting to shim d495d11331c6f97e8d6107e54ad15e1b70a2259a33f8eccf83ae2be00097345c" address="unix:///run/containerd/s/52329049e43a751f79fc13a107e54ee714f891e0323aea06f8c5409b17a13047" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:37:21.387817 systemd-networkd[2117]: calia39ef496793: Link UP Oct 13 05:37:21.387942 systemd-networkd[2117]: calia39ef496793: Gained carrier Oct 13 05:37:21.398592 systemd[1]: Started cri-containerd-d495d11331c6f97e8d6107e54ad15e1b70a2259a33f8eccf83ae2be00097345c.scope - libcontainer container d495d11331c6f97e8d6107e54ad15e1b70a2259a33f8eccf83ae2be00097345c. 
Oct 13 05:37:21.409960 kubelet[3925]: I1013 05:37:21.409851 3925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-wvqd2" podStartSLOduration=38.409836929 podStartE2EDuration="38.409836929s" podCreationTimestamp="2025-10-13 05:36:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 05:37:21.311199412 +0000 UTC m=+45.350074455" watchObservedRunningTime="2025-10-13 05:37:21.409836929 +0000 UTC m=+45.448711936" Oct 13 05:37:21.414382 containerd[2478]: 2025-10-13 05:37:21.171 [INFO][5674] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4487.0.0--a--dfb3332019-k8s-calico--kube--controllers--7f765c785d--rjdbj-eth0 calico-kube-controllers-7f765c785d- calico-system 0158f1dc-dce0-4a10-aa34-11a88f1f819b 807 0 2025-10-13 05:36:55 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7f765c785d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4487.0.0-a-dfb3332019 calico-kube-controllers-7f765c785d-rjdbj eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calia39ef496793 [] [] }} ContainerID="279cdc2a4775f041a69b11966c3f082936fff8d04259d0f9ef77066dc0618e84" Namespace="calico-system" Pod="calico-kube-controllers-7f765c785d-rjdbj" WorkloadEndpoint="ci--4487.0.0--a--dfb3332019-k8s-calico--kube--controllers--7f765c785d--rjdbj-" Oct 13 05:37:21.414382 containerd[2478]: 2025-10-13 05:37:21.171 [INFO][5674] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="279cdc2a4775f041a69b11966c3f082936fff8d04259d0f9ef77066dc0618e84" Namespace="calico-system" Pod="calico-kube-controllers-7f765c785d-rjdbj" 
WorkloadEndpoint="ci--4487.0.0--a--dfb3332019-k8s-calico--kube--controllers--7f765c785d--rjdbj-eth0" Oct 13 05:37:21.414382 containerd[2478]: 2025-10-13 05:37:21.271 [INFO][5715] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="279cdc2a4775f041a69b11966c3f082936fff8d04259d0f9ef77066dc0618e84" HandleID="k8s-pod-network.279cdc2a4775f041a69b11966c3f082936fff8d04259d0f9ef77066dc0618e84" Workload="ci--4487.0.0--a--dfb3332019-k8s-calico--kube--controllers--7f765c785d--rjdbj-eth0" Oct 13 05:37:21.414382 containerd[2478]: 2025-10-13 05:37:21.271 [INFO][5715] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="279cdc2a4775f041a69b11966c3f082936fff8d04259d0f9ef77066dc0618e84" HandleID="k8s-pod-network.279cdc2a4775f041a69b11966c3f082936fff8d04259d0f9ef77066dc0618e84" Workload="ci--4487.0.0--a--dfb3332019-k8s-calico--kube--controllers--7f765c785d--rjdbj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d4140), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4487.0.0-a-dfb3332019", "pod":"calico-kube-controllers-7f765c785d-rjdbj", "timestamp":"2025-10-13 05:37:21.271583213 +0000 UTC"}, Hostname:"ci-4487.0.0-a-dfb3332019", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 05:37:21.414382 containerd[2478]: 2025-10-13 05:37:21.272 [INFO][5715] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 05:37:21.414382 containerd[2478]: 2025-10-13 05:37:21.272 [INFO][5715] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Oct 13 05:37:21.414382 containerd[2478]: 2025-10-13 05:37:21.272 [INFO][5715] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4487.0.0-a-dfb3332019' Oct 13 05:37:21.414382 containerd[2478]: 2025-10-13 05:37:21.302 [INFO][5715] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.279cdc2a4775f041a69b11966c3f082936fff8d04259d0f9ef77066dc0618e84" host="ci-4487.0.0-a-dfb3332019" Oct 13 05:37:21.414382 containerd[2478]: 2025-10-13 05:37:21.322 [INFO][5715] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4487.0.0-a-dfb3332019" Oct 13 05:37:21.414382 containerd[2478]: 2025-10-13 05:37:21.329 [INFO][5715] ipam/ipam.go 511: Trying affinity for 192.168.66.64/26 host="ci-4487.0.0-a-dfb3332019" Oct 13 05:37:21.414382 containerd[2478]: 2025-10-13 05:37:21.337 [INFO][5715] ipam/ipam.go 158: Attempting to load block cidr=192.168.66.64/26 host="ci-4487.0.0-a-dfb3332019" Oct 13 05:37:21.414382 containerd[2478]: 2025-10-13 05:37:21.344 [INFO][5715] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.66.64/26 host="ci-4487.0.0-a-dfb3332019" Oct 13 05:37:21.414382 containerd[2478]: 2025-10-13 05:37:21.344 [INFO][5715] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.66.64/26 handle="k8s-pod-network.279cdc2a4775f041a69b11966c3f082936fff8d04259d0f9ef77066dc0618e84" host="ci-4487.0.0-a-dfb3332019" Oct 13 05:37:21.414382 containerd[2478]: 2025-10-13 05:37:21.346 [INFO][5715] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.279cdc2a4775f041a69b11966c3f082936fff8d04259d0f9ef77066dc0618e84 Oct 13 05:37:21.414382 containerd[2478]: 2025-10-13 05:37:21.354 [INFO][5715] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.66.64/26 handle="k8s-pod-network.279cdc2a4775f041a69b11966c3f082936fff8d04259d0f9ef77066dc0618e84" host="ci-4487.0.0-a-dfb3332019" Oct 13 05:37:21.414382 containerd[2478]: 2025-10-13 05:37:21.375 [INFO][5715] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.66.70/26] block=192.168.66.64/26 handle="k8s-pod-network.279cdc2a4775f041a69b11966c3f082936fff8d04259d0f9ef77066dc0618e84" host="ci-4487.0.0-a-dfb3332019" Oct 13 05:37:21.414382 containerd[2478]: 2025-10-13 05:37:21.375 [INFO][5715] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.66.70/26] handle="k8s-pod-network.279cdc2a4775f041a69b11966c3f082936fff8d04259d0f9ef77066dc0618e84" host="ci-4487.0.0-a-dfb3332019" Oct 13 05:37:21.414382 containerd[2478]: 2025-10-13 05:37:21.375 [INFO][5715] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Oct 13 05:37:21.414382 containerd[2478]: 2025-10-13 05:37:21.376 [INFO][5715] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.66.70/26] IPv6=[] ContainerID="279cdc2a4775f041a69b11966c3f082936fff8d04259d0f9ef77066dc0618e84" HandleID="k8s-pod-network.279cdc2a4775f041a69b11966c3f082936fff8d04259d0f9ef77066dc0618e84" Workload="ci--4487.0.0--a--dfb3332019-k8s-calico--kube--controllers--7f765c785d--rjdbj-eth0" Oct 13 05:37:21.414952 containerd[2478]: 2025-10-13 05:37:21.384 [INFO][5674] cni-plugin/k8s.go 418: Populated endpoint ContainerID="279cdc2a4775f041a69b11966c3f082936fff8d04259d0f9ef77066dc0618e84" Namespace="calico-system" Pod="calico-kube-controllers-7f765c785d-rjdbj" WorkloadEndpoint="ci--4487.0.0--a--dfb3332019-k8s-calico--kube--controllers--7f765c785d--rjdbj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487.0.0--a--dfb3332019-k8s-calico--kube--controllers--7f765c785d--rjdbj-eth0", GenerateName:"calico-kube-controllers-7f765c785d-", Namespace:"calico-system", SelfLink:"", UID:"0158f1dc-dce0-4a10-aa34-11a88f1f819b", ResourceVersion:"807", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 36, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7f765c785d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487.0.0-a-dfb3332019", ContainerID:"", Pod:"calico-kube-controllers-7f765c785d-rjdbj", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.66.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia39ef496793", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:37:21.414952 containerd[2478]: 2025-10-13 05:37:21.384 [INFO][5674] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.66.70/32] ContainerID="279cdc2a4775f041a69b11966c3f082936fff8d04259d0f9ef77066dc0618e84" Namespace="calico-system" Pod="calico-kube-controllers-7f765c785d-rjdbj" WorkloadEndpoint="ci--4487.0.0--a--dfb3332019-k8s-calico--kube--controllers--7f765c785d--rjdbj-eth0" Oct 13 05:37:21.414952 containerd[2478]: 2025-10-13 05:37:21.384 [INFO][5674] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia39ef496793 ContainerID="279cdc2a4775f041a69b11966c3f082936fff8d04259d0f9ef77066dc0618e84" Namespace="calico-system" Pod="calico-kube-controllers-7f765c785d-rjdbj" WorkloadEndpoint="ci--4487.0.0--a--dfb3332019-k8s-calico--kube--controllers--7f765c785d--rjdbj-eth0" Oct 13 05:37:21.414952 containerd[2478]: 2025-10-13 05:37:21.386 [INFO][5674] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="279cdc2a4775f041a69b11966c3f082936fff8d04259d0f9ef77066dc0618e84" Namespace="calico-system" Pod="calico-kube-controllers-7f765c785d-rjdbj" WorkloadEndpoint="ci--4487.0.0--a--dfb3332019-k8s-calico--kube--controllers--7f765c785d--rjdbj-eth0" Oct 13 05:37:21.414952 containerd[2478]: 2025-10-13 05:37:21.388 [INFO][5674] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="279cdc2a4775f041a69b11966c3f082936fff8d04259d0f9ef77066dc0618e84" Namespace="calico-system" Pod="calico-kube-controllers-7f765c785d-rjdbj" WorkloadEndpoint="ci--4487.0.0--a--dfb3332019-k8s-calico--kube--controllers--7f765c785d--rjdbj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487.0.0--a--dfb3332019-k8s-calico--kube--controllers--7f765c785d--rjdbj-eth0", GenerateName:"calico-kube-controllers-7f765c785d-", Namespace:"calico-system", SelfLink:"", UID:"0158f1dc-dce0-4a10-aa34-11a88f1f819b", ResourceVersion:"807", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 36, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7f765c785d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487.0.0-a-dfb3332019", ContainerID:"279cdc2a4775f041a69b11966c3f082936fff8d04259d0f9ef77066dc0618e84", Pod:"calico-kube-controllers-7f765c785d-rjdbj", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.66.70/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia39ef496793", MAC:"f6:b0:67:6d:c8:d8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:37:21.414952 containerd[2478]: 2025-10-13 05:37:21.409 [INFO][5674] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="279cdc2a4775f041a69b11966c3f082936fff8d04259d0f9ef77066dc0618e84" Namespace="calico-system" Pod="calico-kube-controllers-7f765c785d-rjdbj" WorkloadEndpoint="ci--4487.0.0--a--dfb3332019-k8s-calico--kube--controllers--7f765c785d--rjdbj-eth0" Oct 13 05:37:21.460466 containerd[2478]: time="2025-10-13T05:37:21.460439518Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kqvx9,Uid:dd86b0ea-d7f3-411b-ad14-d1033bcd0923,Namespace:calico-system,Attempt:0,} returns sandbox id \"d495d11331c6f97e8d6107e54ad15e1b70a2259a33f8eccf83ae2be00097345c\"" Oct 13 05:37:21.488933 systemd-networkd[2117]: cali252bbac0806: Link UP Oct 13 05:37:21.492218 systemd-networkd[2117]: cali252bbac0806: Gained carrier Oct 13 05:37:21.501903 containerd[2478]: time="2025-10-13T05:37:21.501868257Z" level=info msg="connecting to shim 279cdc2a4775f041a69b11966c3f082936fff8d04259d0f9ef77066dc0618e84" address="unix:///run/containerd/s/b499b1ce8b6f8d486db58a81812c888b7e7934355f99dc392abd61263863ac5a" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:37:21.528409 containerd[2478]: 2025-10-13 05:37:21.195 [INFO][5692] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4487.0.0--a--dfb3332019-k8s-calico--apiserver--584697f844--b5x8f-eth0 calico-apiserver-584697f844- calico-apiserver e1e43d81-d557-49f8-8ca0-cf087b06bd61 806 0 2025-10-13 05:36:52 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:584697f844 
projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4487.0.0-a-dfb3332019 calico-apiserver-584697f844-b5x8f eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali252bbac0806 [] [] }} ContainerID="19dad5c37580a41e5365bd3a3a3f9985db231e50fb2e115c87c3f79ae15eef7a" Namespace="calico-apiserver" Pod="calico-apiserver-584697f844-b5x8f" WorkloadEndpoint="ci--4487.0.0--a--dfb3332019-k8s-calico--apiserver--584697f844--b5x8f-" Oct 13 05:37:21.528409 containerd[2478]: 2025-10-13 05:37:21.195 [INFO][5692] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="19dad5c37580a41e5365bd3a3a3f9985db231e50fb2e115c87c3f79ae15eef7a" Namespace="calico-apiserver" Pod="calico-apiserver-584697f844-b5x8f" WorkloadEndpoint="ci--4487.0.0--a--dfb3332019-k8s-calico--apiserver--584697f844--b5x8f-eth0" Oct 13 05:37:21.528409 containerd[2478]: 2025-10-13 05:37:21.275 [INFO][5722] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="19dad5c37580a41e5365bd3a3a3f9985db231e50fb2e115c87c3f79ae15eef7a" HandleID="k8s-pod-network.19dad5c37580a41e5365bd3a3a3f9985db231e50fb2e115c87c3f79ae15eef7a" Workload="ci--4487.0.0--a--dfb3332019-k8s-calico--apiserver--584697f844--b5x8f-eth0" Oct 13 05:37:21.528409 containerd[2478]: 2025-10-13 05:37:21.275 [INFO][5722] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="19dad5c37580a41e5365bd3a3a3f9985db231e50fb2e115c87c3f79ae15eef7a" HandleID="k8s-pod-network.19dad5c37580a41e5365bd3a3a3f9985db231e50fb2e115c87c3f79ae15eef7a" Workload="ci--4487.0.0--a--dfb3332019-k8s-calico--apiserver--584697f844--b5x8f-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f840), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4487.0.0-a-dfb3332019", "pod":"calico-apiserver-584697f844-b5x8f", "timestamp":"2025-10-13 05:37:21.275068396 +0000 UTC"}, 
Hostname:"ci-4487.0.0-a-dfb3332019", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 05:37:21.528409 containerd[2478]: 2025-10-13 05:37:21.275 [INFO][5722] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 05:37:21.528409 containerd[2478]: 2025-10-13 05:37:21.376 [INFO][5722] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Oct 13 05:37:21.528409 containerd[2478]: 2025-10-13 05:37:21.376 [INFO][5722] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4487.0.0-a-dfb3332019' Oct 13 05:37:21.528409 containerd[2478]: 2025-10-13 05:37:21.402 [INFO][5722] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.19dad5c37580a41e5365bd3a3a3f9985db231e50fb2e115c87c3f79ae15eef7a" host="ci-4487.0.0-a-dfb3332019" Oct 13 05:37:21.528409 containerd[2478]: 2025-10-13 05:37:21.422 [INFO][5722] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4487.0.0-a-dfb3332019" Oct 13 05:37:21.528409 containerd[2478]: 2025-10-13 05:37:21.432 [INFO][5722] ipam/ipam.go 511: Trying affinity for 192.168.66.64/26 host="ci-4487.0.0-a-dfb3332019" Oct 13 05:37:21.528409 containerd[2478]: 2025-10-13 05:37:21.434 [INFO][5722] ipam/ipam.go 158: Attempting to load block cidr=192.168.66.64/26 host="ci-4487.0.0-a-dfb3332019" Oct 13 05:37:21.528409 containerd[2478]: 2025-10-13 05:37:21.436 [INFO][5722] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.66.64/26 host="ci-4487.0.0-a-dfb3332019" Oct 13 05:37:21.528409 containerd[2478]: 2025-10-13 05:37:21.436 [INFO][5722] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.66.64/26 handle="k8s-pod-network.19dad5c37580a41e5365bd3a3a3f9985db231e50fb2e115c87c3f79ae15eef7a" host="ci-4487.0.0-a-dfb3332019" Oct 13 05:37:21.528409 containerd[2478]: 2025-10-13 05:37:21.438 
[INFO][5722] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.19dad5c37580a41e5365bd3a3a3f9985db231e50fb2e115c87c3f79ae15eef7a Oct 13 05:37:21.528409 containerd[2478]: 2025-10-13 05:37:21.446 [INFO][5722] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.66.64/26 handle="k8s-pod-network.19dad5c37580a41e5365bd3a3a3f9985db231e50fb2e115c87c3f79ae15eef7a" host="ci-4487.0.0-a-dfb3332019" Oct 13 05:37:21.528409 containerd[2478]: 2025-10-13 05:37:21.469 [INFO][5722] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.66.71/26] block=192.168.66.64/26 handle="k8s-pod-network.19dad5c37580a41e5365bd3a3a3f9985db231e50fb2e115c87c3f79ae15eef7a" host="ci-4487.0.0-a-dfb3332019" Oct 13 05:37:21.528409 containerd[2478]: 2025-10-13 05:37:21.469 [INFO][5722] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.66.71/26] handle="k8s-pod-network.19dad5c37580a41e5365bd3a3a3f9985db231e50fb2e115c87c3f79ae15eef7a" host="ci-4487.0.0-a-dfb3332019" Oct 13 05:37:21.528409 containerd[2478]: 2025-10-13 05:37:21.469 [INFO][5722] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Oct 13 05:37:21.528409 containerd[2478]: 2025-10-13 05:37:21.469 [INFO][5722] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.66.71/26] IPv6=[] ContainerID="19dad5c37580a41e5365bd3a3a3f9985db231e50fb2e115c87c3f79ae15eef7a" HandleID="k8s-pod-network.19dad5c37580a41e5365bd3a3a3f9985db231e50fb2e115c87c3f79ae15eef7a" Workload="ci--4487.0.0--a--dfb3332019-k8s-calico--apiserver--584697f844--b5x8f-eth0" Oct 13 05:37:21.528930 containerd[2478]: 2025-10-13 05:37:21.484 [INFO][5692] cni-plugin/k8s.go 418: Populated endpoint ContainerID="19dad5c37580a41e5365bd3a3a3f9985db231e50fb2e115c87c3f79ae15eef7a" Namespace="calico-apiserver" Pod="calico-apiserver-584697f844-b5x8f" WorkloadEndpoint="ci--4487.0.0--a--dfb3332019-k8s-calico--apiserver--584697f844--b5x8f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487.0.0--a--dfb3332019-k8s-calico--apiserver--584697f844--b5x8f-eth0", GenerateName:"calico-apiserver-584697f844-", Namespace:"calico-apiserver", SelfLink:"", UID:"e1e43d81-d557-49f8-8ca0-cf087b06bd61", ResourceVersion:"806", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 36, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"584697f844", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487.0.0-a-dfb3332019", ContainerID:"", Pod:"calico-apiserver-584697f844-b5x8f", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", 
IPNetworks:[]string{"192.168.66.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali252bbac0806", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:37:21.528930 containerd[2478]: 2025-10-13 05:37:21.485 [INFO][5692] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.66.71/32] ContainerID="19dad5c37580a41e5365bd3a3a3f9985db231e50fb2e115c87c3f79ae15eef7a" Namespace="calico-apiserver" Pod="calico-apiserver-584697f844-b5x8f" WorkloadEndpoint="ci--4487.0.0--a--dfb3332019-k8s-calico--apiserver--584697f844--b5x8f-eth0" Oct 13 05:37:21.528930 containerd[2478]: 2025-10-13 05:37:21.485 [INFO][5692] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali252bbac0806 ContainerID="19dad5c37580a41e5365bd3a3a3f9985db231e50fb2e115c87c3f79ae15eef7a" Namespace="calico-apiserver" Pod="calico-apiserver-584697f844-b5x8f" WorkloadEndpoint="ci--4487.0.0--a--dfb3332019-k8s-calico--apiserver--584697f844--b5x8f-eth0" Oct 13 05:37:21.528930 containerd[2478]: 2025-10-13 05:37:21.490 [INFO][5692] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="19dad5c37580a41e5365bd3a3a3f9985db231e50fb2e115c87c3f79ae15eef7a" Namespace="calico-apiserver" Pod="calico-apiserver-584697f844-b5x8f" WorkloadEndpoint="ci--4487.0.0--a--dfb3332019-k8s-calico--apiserver--584697f844--b5x8f-eth0" Oct 13 05:37:21.528930 containerd[2478]: 2025-10-13 05:37:21.492 [INFO][5692] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="19dad5c37580a41e5365bd3a3a3f9985db231e50fb2e115c87c3f79ae15eef7a" Namespace="calico-apiserver" Pod="calico-apiserver-584697f844-b5x8f" WorkloadEndpoint="ci--4487.0.0--a--dfb3332019-k8s-calico--apiserver--584697f844--b5x8f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487.0.0--a--dfb3332019-k8s-calico--apiserver--584697f844--b5x8f-eth0", GenerateName:"calico-apiserver-584697f844-", Namespace:"calico-apiserver", SelfLink:"", UID:"e1e43d81-d557-49f8-8ca0-cf087b06bd61", ResourceVersion:"806", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 36, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"584697f844", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487.0.0-a-dfb3332019", ContainerID:"19dad5c37580a41e5365bd3a3a3f9985db231e50fb2e115c87c3f79ae15eef7a", Pod:"calico-apiserver-584697f844-b5x8f", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.66.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali252bbac0806", MAC:"36:1d:9f:9a:c7:7c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:37:21.528930 containerd[2478]: 2025-10-13 05:37:21.524 [INFO][5692] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="19dad5c37580a41e5365bd3a3a3f9985db231e50fb2e115c87c3f79ae15eef7a" Namespace="calico-apiserver" Pod="calico-apiserver-584697f844-b5x8f" WorkloadEndpoint="ci--4487.0.0--a--dfb3332019-k8s-calico--apiserver--584697f844--b5x8f-eth0" Oct 13 05:37:21.559521 systemd[1]: Started 
cri-containerd-279cdc2a4775f041a69b11966c3f082936fff8d04259d0f9ef77066dc0618e84.scope - libcontainer container 279cdc2a4775f041a69b11966c3f082936fff8d04259d0f9ef77066dc0618e84. Oct 13 05:37:21.592390 containerd[2478]: time="2025-10-13T05:37:21.590325390Z" level=info msg="connecting to shim 19dad5c37580a41e5365bd3a3a3f9985db231e50fb2e115c87c3f79ae15eef7a" address="unix:///run/containerd/s/fc76f2865b4fdf4b70eef2e07d3cb2b6ddf0724578fe3f781a787ead88883813" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:37:21.624813 systemd[1]: Started cri-containerd-19dad5c37580a41e5365bd3a3a3f9985db231e50fb2e115c87c3f79ae15eef7a.scope - libcontainer container 19dad5c37580a41e5365bd3a3a3f9985db231e50fb2e115c87c3f79ae15eef7a. Oct 13 05:37:21.675705 systemd-networkd[2117]: cali5c4b1e3811a: Gained IPv6LL Oct 13 05:37:21.687496 containerd[2478]: time="2025-10-13T05:37:21.687463857Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7f765c785d-rjdbj,Uid:0158f1dc-dce0-4a10-aa34-11a88f1f819b,Namespace:calico-system,Attempt:0,} returns sandbox id \"279cdc2a4775f041a69b11966c3f082936fff8d04259d0f9ef77066dc0618e84\"" Oct 13 05:37:21.693391 containerd[2478]: time="2025-10-13T05:37:21.693343647Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-584697f844-b5x8f,Uid:e1e43d81-d557-49f8-8ca0-cf087b06bd61,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"19dad5c37580a41e5365bd3a3a3f9985db231e50fb2e115c87c3f79ae15eef7a\"" Oct 13 05:37:21.995609 systemd-networkd[2117]: cali65627a9b6eb: Gained IPv6LL Oct 13 05:37:22.161018 containerd[2478]: time="2025-10-13T05:37:22.160980548Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-854f97d977-kxpxd,Uid:744eaf3d-7c57-4fd9-9ca6-66f0ba9dfcec,Namespace:calico-system,Attempt:0,}" Oct 13 05:37:22.284663 systemd-networkd[2117]: calia256d8274c8: Link UP Oct 13 05:37:22.287465 systemd-networkd[2117]: calia256d8274c8: Gained carrier Oct 13 05:37:22.309442 containerd[2478]: 
2025-10-13 05:37:22.204 [INFO][5893] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4487.0.0--a--dfb3332019-k8s-goldmane--854f97d977--kxpxd-eth0 goldmane-854f97d977- calico-system 744eaf3d-7c57-4fd9-9ca6-66f0ba9dfcec 804 0 2025-10-13 05:36:54 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:854f97d977 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4487.0.0-a-dfb3332019 goldmane-854f97d977-kxpxd eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calia256d8274c8 [] [] }} ContainerID="d91396b429a15c44aab7e36163d7c4ecacb8d168d9955bcfe65d7a3998981db6" Namespace="calico-system" Pod="goldmane-854f97d977-kxpxd" WorkloadEndpoint="ci--4487.0.0--a--dfb3332019-k8s-goldmane--854f97d977--kxpxd-" Oct 13 05:37:22.309442 containerd[2478]: 2025-10-13 05:37:22.204 [INFO][5893] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d91396b429a15c44aab7e36163d7c4ecacb8d168d9955bcfe65d7a3998981db6" Namespace="calico-system" Pod="goldmane-854f97d977-kxpxd" WorkloadEndpoint="ci--4487.0.0--a--dfb3332019-k8s-goldmane--854f97d977--kxpxd-eth0" Oct 13 05:37:22.309442 containerd[2478]: 2025-10-13 05:37:22.232 [INFO][5908] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d91396b429a15c44aab7e36163d7c4ecacb8d168d9955bcfe65d7a3998981db6" HandleID="k8s-pod-network.d91396b429a15c44aab7e36163d7c4ecacb8d168d9955bcfe65d7a3998981db6" Workload="ci--4487.0.0--a--dfb3332019-k8s-goldmane--854f97d977--kxpxd-eth0" Oct 13 05:37:22.309442 containerd[2478]: 2025-10-13 05:37:22.232 [INFO][5908] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d91396b429a15c44aab7e36163d7c4ecacb8d168d9955bcfe65d7a3998981db6" HandleID="k8s-pod-network.d91396b429a15c44aab7e36163d7c4ecacb8d168d9955bcfe65d7a3998981db6" 
Workload="ci--4487.0.0--a--dfb3332019-k8s-goldmane--854f97d977--kxpxd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f020), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4487.0.0-a-dfb3332019", "pod":"goldmane-854f97d977-kxpxd", "timestamp":"2025-10-13 05:37:22.232731507 +0000 UTC"}, Hostname:"ci-4487.0.0-a-dfb3332019", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 05:37:22.309442 containerd[2478]: 2025-10-13 05:37:22.233 [INFO][5908] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 05:37:22.309442 containerd[2478]: 2025-10-13 05:37:22.233 [INFO][5908] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Oct 13 05:37:22.309442 containerd[2478]: 2025-10-13 05:37:22.233 [INFO][5908] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4487.0.0-a-dfb3332019' Oct 13 05:37:22.309442 containerd[2478]: 2025-10-13 05:37:22.240 [INFO][5908] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d91396b429a15c44aab7e36163d7c4ecacb8d168d9955bcfe65d7a3998981db6" host="ci-4487.0.0-a-dfb3332019" Oct 13 05:37:22.309442 containerd[2478]: 2025-10-13 05:37:22.244 [INFO][5908] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4487.0.0-a-dfb3332019" Oct 13 05:37:22.309442 containerd[2478]: 2025-10-13 05:37:22.251 [INFO][5908] ipam/ipam.go 511: Trying affinity for 192.168.66.64/26 host="ci-4487.0.0-a-dfb3332019" Oct 13 05:37:22.309442 containerd[2478]: 2025-10-13 05:37:22.252 [INFO][5908] ipam/ipam.go 158: Attempting to load block cidr=192.168.66.64/26 host="ci-4487.0.0-a-dfb3332019" Oct 13 05:37:22.309442 containerd[2478]: 2025-10-13 05:37:22.255 [INFO][5908] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.66.64/26 host="ci-4487.0.0-a-dfb3332019" Oct 13 05:37:22.309442 
containerd[2478]: 2025-10-13 05:37:22.255 [INFO][5908] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.66.64/26 handle="k8s-pod-network.d91396b429a15c44aab7e36163d7c4ecacb8d168d9955bcfe65d7a3998981db6" host="ci-4487.0.0-a-dfb3332019" Oct 13 05:37:22.309442 containerd[2478]: 2025-10-13 05:37:22.256 [INFO][5908] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.d91396b429a15c44aab7e36163d7c4ecacb8d168d9955bcfe65d7a3998981db6 Oct 13 05:37:22.309442 containerd[2478]: 2025-10-13 05:37:22.264 [INFO][5908] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.66.64/26 handle="k8s-pod-network.d91396b429a15c44aab7e36163d7c4ecacb8d168d9955bcfe65d7a3998981db6" host="ci-4487.0.0-a-dfb3332019" Oct 13 05:37:22.309442 containerd[2478]: 2025-10-13 05:37:22.277 [INFO][5908] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.66.72/26] block=192.168.66.64/26 handle="k8s-pod-network.d91396b429a15c44aab7e36163d7c4ecacb8d168d9955bcfe65d7a3998981db6" host="ci-4487.0.0-a-dfb3332019" Oct 13 05:37:22.309442 containerd[2478]: 2025-10-13 05:37:22.277 [INFO][5908] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.66.72/26] handle="k8s-pod-network.d91396b429a15c44aab7e36163d7c4ecacb8d168d9955bcfe65d7a3998981db6" host="ci-4487.0.0-a-dfb3332019" Oct 13 05:37:22.309442 containerd[2478]: 2025-10-13 05:37:22.277 [INFO][5908] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Oct 13 05:37:22.309442 containerd[2478]: 2025-10-13 05:37:22.277 [INFO][5908] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.66.72/26] IPv6=[] ContainerID="d91396b429a15c44aab7e36163d7c4ecacb8d168d9955bcfe65d7a3998981db6" HandleID="k8s-pod-network.d91396b429a15c44aab7e36163d7c4ecacb8d168d9955bcfe65d7a3998981db6" Workload="ci--4487.0.0--a--dfb3332019-k8s-goldmane--854f97d977--kxpxd-eth0"
Oct 13 05:37:22.310195 containerd[2478]: 2025-10-13 05:37:22.280 [INFO][5893] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d91396b429a15c44aab7e36163d7c4ecacb8d168d9955bcfe65d7a3998981db6" Namespace="calico-system" Pod="goldmane-854f97d977-kxpxd" WorkloadEndpoint="ci--4487.0.0--a--dfb3332019-k8s-goldmane--854f97d977--kxpxd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487.0.0--a--dfb3332019-k8s-goldmane--854f97d977--kxpxd-eth0", GenerateName:"goldmane-854f97d977-", Namespace:"calico-system", SelfLink:"", UID:"744eaf3d-7c57-4fd9-9ca6-66f0ba9dfcec", ResourceVersion:"804", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 36, 54, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"854f97d977", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487.0.0-a-dfb3332019", ContainerID:"", Pod:"goldmane-854f97d977-kxpxd", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.66.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calia256d8274c8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Oct 13 05:37:22.310195 containerd[2478]: 2025-10-13 05:37:22.280 [INFO][5893] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.66.72/32] ContainerID="d91396b429a15c44aab7e36163d7c4ecacb8d168d9955bcfe65d7a3998981db6" Namespace="calico-system" Pod="goldmane-854f97d977-kxpxd" WorkloadEndpoint="ci--4487.0.0--a--dfb3332019-k8s-goldmane--854f97d977--kxpxd-eth0"
Oct 13 05:37:22.310195 containerd[2478]: 2025-10-13 05:37:22.280 [INFO][5893] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia256d8274c8 ContainerID="d91396b429a15c44aab7e36163d7c4ecacb8d168d9955bcfe65d7a3998981db6" Namespace="calico-system" Pod="goldmane-854f97d977-kxpxd" WorkloadEndpoint="ci--4487.0.0--a--dfb3332019-k8s-goldmane--854f97d977--kxpxd-eth0"
Oct 13 05:37:22.310195 containerd[2478]: 2025-10-13 05:37:22.286 [INFO][5893] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d91396b429a15c44aab7e36163d7c4ecacb8d168d9955bcfe65d7a3998981db6" Namespace="calico-system" Pod="goldmane-854f97d977-kxpxd" WorkloadEndpoint="ci--4487.0.0--a--dfb3332019-k8s-goldmane--854f97d977--kxpxd-eth0"
Oct 13 05:37:22.310195 containerd[2478]: 2025-10-13 05:37:22.287 [INFO][5893] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d91396b429a15c44aab7e36163d7c4ecacb8d168d9955bcfe65d7a3998981db6" Namespace="calico-system" Pod="goldmane-854f97d977-kxpxd" WorkloadEndpoint="ci--4487.0.0--a--dfb3332019-k8s-goldmane--854f97d977--kxpxd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487.0.0--a--dfb3332019-k8s-goldmane--854f97d977--kxpxd-eth0", GenerateName:"goldmane-854f97d977-", Namespace:"calico-system", SelfLink:"", UID:"744eaf3d-7c57-4fd9-9ca6-66f0ba9dfcec", ResourceVersion:"804", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 36, 54, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"854f97d977", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487.0.0-a-dfb3332019", ContainerID:"d91396b429a15c44aab7e36163d7c4ecacb8d168d9955bcfe65d7a3998981db6", Pod:"goldmane-854f97d977-kxpxd", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.66.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calia256d8274c8", MAC:"ba:0c:82:96:15:13", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Oct 13 05:37:22.310195 containerd[2478]: 2025-10-13 05:37:22.306 [INFO][5893] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d91396b429a15c44aab7e36163d7c4ecacb8d168d9955bcfe65d7a3998981db6" Namespace="calico-system" Pod="goldmane-854f97d977-kxpxd" WorkloadEndpoint="ci--4487.0.0--a--dfb3332019-k8s-goldmane--854f97d977--kxpxd-eth0"
Oct 13 05:37:22.315449 systemd-networkd[2117]: cali151801d432f: Gained IPv6LL
Oct 13 05:37:22.375784 containerd[2478]: time="2025-10-13T05:37:22.375714378Z" level=info msg="connecting to shim d91396b429a15c44aab7e36163d7c4ecacb8d168d9955bcfe65d7a3998981db6" address="unix:///run/containerd/s/b1ac60a179c8486e7c90fb8509f96fcc98b7ff0ab0727128759b80eb8225be64" namespace=k8s.io protocol=ttrpc version=3
Oct 13 05:37:22.411572 systemd[1]: Started cri-containerd-d91396b429a15c44aab7e36163d7c4ecacb8d168d9955bcfe65d7a3998981db6.scope - libcontainer container d91396b429a15c44aab7e36163d7c4ecacb8d168d9955bcfe65d7a3998981db6.
Oct 13 05:37:22.500453 containerd[2478]: time="2025-10-13T05:37:22.500425028Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-854f97d977-kxpxd,Uid:744eaf3d-7c57-4fd9-9ca6-66f0ba9dfcec,Namespace:calico-system,Attempt:0,} returns sandbox id \"d91396b429a15c44aab7e36163d7c4ecacb8d168d9955bcfe65d7a3998981db6\""
Oct 13 05:37:22.955487 systemd-networkd[2117]: cali252bbac0806: Gained IPv6LL
Oct 13 05:37:22.972529 containerd[2478]: time="2025-10-13T05:37:22.972490174Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 13 05:37:22.976877 containerd[2478]: time="2025-10-13T05:37:22.976783696Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=47333864"
Oct 13 05:37:22.989734 containerd[2478]: time="2025-10-13T05:37:22.989575797Z" level=info msg="ImageCreate event name:\"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 13 05:37:22.999133 containerd[2478]: time="2025-10-13T05:37:22.998768113Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 2.402144901s"
Oct 13 05:37:22.999133 containerd[2478]: time="2025-10-13T05:37:22.998806116Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\""
Oct 13 05:37:22.999133 containerd[2478]: time="2025-10-13T05:37:22.998949791Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 13 05:37:23.001707 containerd[2478]: time="2025-10-13T05:37:23.001503328Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\""
Oct 13 05:37:23.008746 containerd[2478]: time="2025-10-13T05:37:23.008720888Z" level=info msg="CreateContainer within sandbox \"9bfc7674b5296789197584b08008dc20543a03372eefdeae68cc48688634907b\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Oct 13 05:37:23.040476 containerd[2478]: time="2025-10-13T05:37:23.039997694Z" level=info msg="Container ef887611959aeecc04494b7e04df1ad8b4f052c50af5dedc57fa290488ef707a: CDI devices from CRI Config.CDIDevices: []"
Oct 13 05:37:23.061916 containerd[2478]: time="2025-10-13T05:37:23.061889677Z" level=info msg="CreateContainer within sandbox \"9bfc7674b5296789197584b08008dc20543a03372eefdeae68cc48688634907b\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"ef887611959aeecc04494b7e04df1ad8b4f052c50af5dedc57fa290488ef707a\""
Oct 13 05:37:23.062455 containerd[2478]: time="2025-10-13T05:37:23.062432060Z" level=info msg="StartContainer for \"ef887611959aeecc04494b7e04df1ad8b4f052c50af5dedc57fa290488ef707a\""
Oct 13 05:37:23.063678 containerd[2478]: time="2025-10-13T05:37:23.063633788Z" level=info msg="connecting to shim ef887611959aeecc04494b7e04df1ad8b4f052c50af5dedc57fa290488ef707a" address="unix:///run/containerd/s/f2326a6d7fcee1082e89be0a8793acf0b77aef88a65502f74ef0cc36787e4096" protocol=ttrpc version=3
Oct 13 05:37:23.082523 systemd[1]: Started cri-containerd-ef887611959aeecc04494b7e04df1ad8b4f052c50af5dedc57fa290488ef707a.scope - libcontainer container ef887611959aeecc04494b7e04df1ad8b4f052c50af5dedc57fa290488ef707a.
Oct 13 05:37:23.137178 containerd[2478]: time="2025-10-13T05:37:23.137109291Z" level=info msg="StartContainer for \"ef887611959aeecc04494b7e04df1ad8b4f052c50af5dedc57fa290488ef707a\" returns successfully"
Oct 13 05:37:23.275534 systemd-networkd[2117]: cali2c2616be23f: Gained IPv6LL
Oct 13 05:37:23.403501 systemd-networkd[2117]: calia39ef496793: Gained IPv6LL
Oct 13 05:37:23.659510 systemd-networkd[2117]: calia256d8274c8: Gained IPv6LL
Oct 13 05:37:24.245047 kubelet[3925]: I1013 05:37:24.245013 3925 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Oct 13 05:37:24.391752 containerd[2478]: time="2025-10-13T05:37:24.391712730Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 13 05:37:24.394766 containerd[2478]: time="2025-10-13T05:37:24.394663084Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8760527"
Oct 13 05:37:24.397471 containerd[2478]: time="2025-10-13T05:37:24.397445299Z" level=info msg="ImageCreate event name:\"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 13 05:37:24.401395 containerd[2478]: time="2025-10-13T05:37:24.401317394Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 13 05:37:24.401703 containerd[2478]: time="2025-10-13T05:37:24.401680446Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"10253230\" in 1.399879508s"
Oct 13 05:37:24.401742 containerd[2478]: time="2025-10-13T05:37:24.401710403Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\""
Oct 13 05:37:24.403541 containerd[2478]: time="2025-10-13T05:37:24.403499656Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\""
Oct 13 05:37:24.412653 containerd[2478]: time="2025-10-13T05:37:24.412629813Z" level=info msg="CreateContainer within sandbox \"d495d11331c6f97e8d6107e54ad15e1b70a2259a33f8eccf83ae2be00097345c\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}"
Oct 13 05:37:24.434672 containerd[2478]: time="2025-10-13T05:37:24.431922916Z" level=info msg="Container a496bae535cdb37fb4fbfb2603d7dc01c270f79b92772d9144d3108cf4b12c9a: CDI devices from CRI Config.CDIDevices: []"
Oct 13 05:37:24.448686 containerd[2478]: time="2025-10-13T05:37:24.448657809Z" level=info msg="CreateContainer within sandbox \"d495d11331c6f97e8d6107e54ad15e1b70a2259a33f8eccf83ae2be00097345c\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"a496bae535cdb37fb4fbfb2603d7dc01c270f79b92772d9144d3108cf4b12c9a\""
Oct 13 05:37:24.449628 containerd[2478]: time="2025-10-13T05:37:24.449606640Z" level=info msg="StartContainer for \"a496bae535cdb37fb4fbfb2603d7dc01c270f79b92772d9144d3108cf4b12c9a\""
Oct 13 05:37:24.450612 containerd[2478]: time="2025-10-13T05:37:24.450591183Z" level=info msg="connecting to shim a496bae535cdb37fb4fbfb2603d7dc01c270f79b92772d9144d3108cf4b12c9a" address="unix:///run/containerd/s/52329049e43a751f79fc13a107e54ee714f891e0323aea06f8c5409b17a13047" protocol=ttrpc version=3
Oct 13 05:37:24.472520 systemd[1]: Started cri-containerd-a496bae535cdb37fb4fbfb2603d7dc01c270f79b92772d9144d3108cf4b12c9a.scope - libcontainer container a496bae535cdb37fb4fbfb2603d7dc01c270f79b92772d9144d3108cf4b12c9a.
Oct 13 05:37:24.507678 containerd[2478]: time="2025-10-13T05:37:24.507611025Z" level=info msg="StartContainer for \"a496bae535cdb37fb4fbfb2603d7dc01c270f79b92772d9144d3108cf4b12c9a\" returns successfully"
Oct 13 05:37:27.019870 containerd[2478]: time="2025-10-13T05:37:27.019823030Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 13 05:37:27.022410 containerd[2478]: time="2025-10-13T05:37:27.022377294Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=51277746"
Oct 13 05:37:27.063681 containerd[2478]: time="2025-10-13T05:37:27.063630216Z" level=info msg="ImageCreate event name:\"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 13 05:37:27.069076 containerd[2478]: time="2025-10-13T05:37:27.069031063Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 13 05:37:27.069660 containerd[2478]: time="2025-10-13T05:37:27.069540930Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"52770417\" in 2.666012316s"
Oct 13 05:37:27.069660 containerd[2478]: time="2025-10-13T05:37:27.069573374Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\""
Oct 13 05:37:27.070688 containerd[2478]: time="2025-10-13T05:37:27.070665365Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\""
Oct 13 05:37:27.089141 containerd[2478]: time="2025-10-13T05:37:27.089110884Z" level=info msg="CreateContainer within sandbox \"279cdc2a4775f041a69b11966c3f082936fff8d04259d0f9ef77066dc0618e84\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}"
Oct 13 05:37:27.104897 containerd[2478]: time="2025-10-13T05:37:27.104816349Z" level=info msg="Container 6fd2a3b78fd2af19816f8117b7ecfd0c24fcca52c85ab23747a7b54804083253: CDI devices from CRI Config.CDIDevices: []"
Oct 13 05:37:27.122440 containerd[2478]: time="2025-10-13T05:37:27.122415273Z" level=info msg="CreateContainer within sandbox \"279cdc2a4775f041a69b11966c3f082936fff8d04259d0f9ef77066dc0618e84\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"6fd2a3b78fd2af19816f8117b7ecfd0c24fcca52c85ab23747a7b54804083253\""
Oct 13 05:37:27.122879 containerd[2478]: time="2025-10-13T05:37:27.122856594Z" level=info msg="StartContainer for \"6fd2a3b78fd2af19816f8117b7ecfd0c24fcca52c85ab23747a7b54804083253\""
Oct 13 05:37:27.123712 containerd[2478]: time="2025-10-13T05:37:27.123683589Z" level=info msg="connecting to shim 6fd2a3b78fd2af19816f8117b7ecfd0c24fcca52c85ab23747a7b54804083253" address="unix:///run/containerd/s/b499b1ce8b6f8d486db58a81812c888b7e7934355f99dc392abd61263863ac5a" protocol=ttrpc version=3
Oct 13 05:37:27.142686 systemd[1]: Started cri-containerd-6fd2a3b78fd2af19816f8117b7ecfd0c24fcca52c85ab23747a7b54804083253.scope - libcontainer container 6fd2a3b78fd2af19816f8117b7ecfd0c24fcca52c85ab23747a7b54804083253.
Oct 13 05:37:27.189507 containerd[2478]: time="2025-10-13T05:37:27.189479453Z" level=info msg="StartContainer for \"6fd2a3b78fd2af19816f8117b7ecfd0c24fcca52c85ab23747a7b54804083253\" returns successfully"
Oct 13 05:37:27.281881 kubelet[3925]: I1013 05:37:27.281670 3925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-584697f844-tfwhf" podStartSLOduration=32.875748246 podStartE2EDuration="35.281608727s" podCreationTimestamp="2025-10-13 05:36:52 +0000 UTC" firstStartedPulling="2025-10-13 05:37:20.595126653 +0000 UTC m=+44.634001666" lastFinishedPulling="2025-10-13 05:37:23.000987123 +0000 UTC m=+47.039862147" observedRunningTime="2025-10-13 05:37:23.25856505 +0000 UTC m=+47.297440072" watchObservedRunningTime="2025-10-13 05:37:27.281608727 +0000 UTC m=+51.320483763"
Oct 13 05:37:27.284044 kubelet[3925]: I1013 05:37:27.283551 3925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7f765c785d-rjdbj" podStartSLOduration=26.901620782 podStartE2EDuration="32.283538203s" podCreationTimestamp="2025-10-13 05:36:55 +0000 UTC" firstStartedPulling="2025-10-13 05:37:21.688619908 +0000 UTC m=+45.727494928" lastFinishedPulling="2025-10-13 05:37:27.070537342 +0000 UTC m=+51.109412349" observedRunningTime="2025-10-13 05:37:27.281014984 +0000 UTC m=+51.319890019" watchObservedRunningTime="2025-10-13 05:37:27.283538203 +0000 UTC m=+51.322413243"
Oct 13 05:37:27.306891 containerd[2478]: time="2025-10-13T05:37:27.306866861Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6fd2a3b78fd2af19816f8117b7ecfd0c24fcca52c85ab23747a7b54804083253\" id:\"2fbf26d737e15e8f1199a4e44869f530363c0873443624aa6a5040f99e79e0fc\" pid:6115 exited_at:{seconds:1760333847 nanos:306602709}"
Oct 13 05:37:27.483589 containerd[2478]: time="2025-10-13T05:37:27.483551710Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 13 05:37:27.486393 containerd[2478]: time="2025-10-13T05:37:27.486266214Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77"
Oct 13 05:37:27.488795 containerd[2478]: time="2025-10-13T05:37:27.488711212Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 418.013904ms"
Oct 13 05:37:27.488795 containerd[2478]: time="2025-10-13T05:37:27.488761307Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\""
Oct 13 05:37:27.490540 containerd[2478]: time="2025-10-13T05:37:27.490510323Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\""
Oct 13 05:37:27.496101 containerd[2478]: time="2025-10-13T05:37:27.496072939Z" level=info msg="CreateContainer within sandbox \"19dad5c37580a41e5365bd3a3a3f9985db231e50fb2e115c87c3f79ae15eef7a\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Oct 13 05:37:27.512775 containerd[2478]: time="2025-10-13T05:37:27.512751190Z" level=info msg="Container 09752b9974f5d7099a0e23e8f489a757331f55ff2ee5ae9e5f45f2d30bba401c: CDI devices from CRI Config.CDIDevices: []"
Oct 13 05:37:27.536794 containerd[2478]: time="2025-10-13T05:37:27.536691656Z" level=info msg="CreateContainer within sandbox \"19dad5c37580a41e5365bd3a3a3f9985db231e50fb2e115c87c3f79ae15eef7a\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"09752b9974f5d7099a0e23e8f489a757331f55ff2ee5ae9e5f45f2d30bba401c\""
Oct 13 05:37:27.537721 containerd[2478]: time="2025-10-13T05:37:27.537692316Z" level=info msg="StartContainer for \"09752b9974f5d7099a0e23e8f489a757331f55ff2ee5ae9e5f45f2d30bba401c\""
Oct 13 05:37:27.539269 containerd[2478]: time="2025-10-13T05:37:27.539241008Z" level=info msg="connecting to shim 09752b9974f5d7099a0e23e8f489a757331f55ff2ee5ae9e5f45f2d30bba401c" address="unix:///run/containerd/s/fc76f2865b4fdf4b70eef2e07d3cb2b6ddf0724578fe3f781a787ead88883813" protocol=ttrpc version=3
Oct 13 05:37:27.563533 systemd[1]: Started cri-containerd-09752b9974f5d7099a0e23e8f489a757331f55ff2ee5ae9e5f45f2d30bba401c.scope - libcontainer container 09752b9974f5d7099a0e23e8f489a757331f55ff2ee5ae9e5f45f2d30bba401c.
Oct 13 05:37:27.607687 containerd[2478]: time="2025-10-13T05:37:27.607662663Z" level=info msg="StartContainer for \"09752b9974f5d7099a0e23e8f489a757331f55ff2ee5ae9e5f45f2d30bba401c\" returns successfully"
Oct 13 05:37:28.283034 kubelet[3925]: I1013 05:37:28.282968 3925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-584697f844-b5x8f" podStartSLOduration=30.4877 podStartE2EDuration="36.282950964s" podCreationTimestamp="2025-10-13 05:36:52 +0000 UTC" firstStartedPulling="2025-10-13 05:37:21.69451511 +0000 UTC m=+45.733390126" lastFinishedPulling="2025-10-13 05:37:27.489766064 +0000 UTC m=+51.528641090" observedRunningTime="2025-10-13 05:37:28.281707054 +0000 UTC m=+52.320582071" watchObservedRunningTime="2025-10-13 05:37:28.282950964 +0000 UTC m=+52.321826074"
Oct 13 05:37:30.257639 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3632568089.mount: Deactivated successfully.
Oct 13 05:37:31.091603 containerd[2478]: time="2025-10-13T05:37:31.091557692Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 13 05:37:31.094615 containerd[2478]: time="2025-10-13T05:37:31.094481615Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=66357526"
Oct 13 05:37:31.098065 containerd[2478]: time="2025-10-13T05:37:31.098037441Z" level=info msg="ImageCreate event name:\"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 13 05:37:31.102692 containerd[2478]: time="2025-10-13T05:37:31.102638818Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 13 05:37:31.103581 containerd[2478]: time="2025-10-13T05:37:31.103516892Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"66357372\" in 3.612886597s"
Oct 13 05:37:31.103581 containerd[2478]: time="2025-10-13T05:37:31.103546933Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\""
Oct 13 05:37:31.105562 containerd[2478]: time="2025-10-13T05:37:31.105427928Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\""
Oct 13 05:37:31.113612 containerd[2478]: time="2025-10-13T05:37:31.113446232Z" level=info msg="CreateContainer within sandbox \"d91396b429a15c44aab7e36163d7c4ecacb8d168d9955bcfe65d7a3998981db6\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}"
Oct 13 05:37:31.136811 containerd[2478]: time="2025-10-13T05:37:31.136788183Z" level=info msg="Container 711aa1a47e32da1aaf8acab8b00712affb0376c1d0ae88c47727d72a99946fac: CDI devices from CRI Config.CDIDevices: []"
Oct 13 05:37:31.142985 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2698186162.mount: Deactivated successfully.
Oct 13 05:37:31.159954 containerd[2478]: time="2025-10-13T05:37:31.159929097Z" level=info msg="CreateContainer within sandbox \"d91396b429a15c44aab7e36163d7c4ecacb8d168d9955bcfe65d7a3998981db6\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"711aa1a47e32da1aaf8acab8b00712affb0376c1d0ae88c47727d72a99946fac\""
Oct 13 05:37:31.160874 containerd[2478]: time="2025-10-13T05:37:31.160708632Z" level=info msg="StartContainer for \"711aa1a47e32da1aaf8acab8b00712affb0376c1d0ae88c47727d72a99946fac\""
Oct 13 05:37:31.161988 containerd[2478]: time="2025-10-13T05:37:31.161948667Z" level=info msg="connecting to shim 711aa1a47e32da1aaf8acab8b00712affb0376c1d0ae88c47727d72a99946fac" address="unix:///run/containerd/s/b1ac60a179c8486e7c90fb8509f96fcc98b7ff0ab0727128759b80eb8225be64" protocol=ttrpc version=3
Oct 13 05:37:31.186512 systemd[1]: Started cri-containerd-711aa1a47e32da1aaf8acab8b00712affb0376c1d0ae88c47727d72a99946fac.scope - libcontainer container 711aa1a47e32da1aaf8acab8b00712affb0376c1d0ae88c47727d72a99946fac.
Oct 13 05:37:31.232516 containerd[2478]: time="2025-10-13T05:37:31.232490145Z" level=info msg="StartContainer for \"711aa1a47e32da1aaf8acab8b00712affb0376c1d0ae88c47727d72a99946fac\" returns successfully"
Oct 13 05:37:31.290469 kubelet[3925]: I1013 05:37:31.290232 3925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-854f97d977-kxpxd" podStartSLOduration=28.686475809 podStartE2EDuration="37.289363969s" podCreationTimestamp="2025-10-13 05:36:54 +0000 UTC" firstStartedPulling="2025-10-13 05:37:22.501889724 +0000 UTC m=+46.540764737" lastFinishedPulling="2025-10-13 05:37:31.104777895 +0000 UTC m=+55.143652897" observedRunningTime="2025-10-13 05:37:31.288969655 +0000 UTC m=+55.327844678" watchObservedRunningTime="2025-10-13 05:37:31.289363969 +0000 UTC m=+55.328238988"
Oct 13 05:37:31.355743 containerd[2478]: time="2025-10-13T05:37:31.355094231Z" level=info msg="TaskExit event in podsandbox handler container_id:\"711aa1a47e32da1aaf8acab8b00712affb0376c1d0ae88c47727d72a99946fac\" id:\"68fa69604f7df8c09ff98d857c280a5824b5b79477e879cd617d4b394f5c7d7d\" pid:6219 exit_status:1 exited_at:{seconds:1760333851 nanos:354802849}"
Oct 13 05:37:32.357638 containerd[2478]: time="2025-10-13T05:37:32.357444608Z" level=info msg="TaskExit event in podsandbox handler container_id:\"711aa1a47e32da1aaf8acab8b00712affb0376c1d0ae88c47727d72a99946fac\" id:\"80c12b5f283eaef6a845b62f8a1d0a44b0a9b94ed8c120eebd3064075de8b537\" pid:6245 exit_status:1 exited_at:{seconds:1760333852 nanos:356764996}"
Oct 13 05:37:32.581563 containerd[2478]: time="2025-10-13T05:37:32.581517526Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 13 05:37:32.584267 containerd[2478]: time="2025-10-13T05:37:32.584187902Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=14698542"
Oct 13 05:37:32.587393 containerd[2478]: time="2025-10-13T05:37:32.587281411Z" level=info msg="ImageCreate event name:\"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 13 05:37:32.592208 containerd[2478]: time="2025-10-13T05:37:32.591772832Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 13 05:37:32.592208 containerd[2478]: time="2025-10-13T05:37:32.592089467Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"16191197\" in 1.486423449s"
Oct 13 05:37:32.592208 containerd[2478]: time="2025-10-13T05:37:32.592116715Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\""
Oct 13 05:37:32.599135 containerd[2478]: time="2025-10-13T05:37:32.599098251Z" level=info msg="CreateContainer within sandbox \"d495d11331c6f97e8d6107e54ad15e1b70a2259a33f8eccf83ae2be00097345c\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Oct 13 05:37:32.621391 containerd[2478]: time="2025-10-13T05:37:32.620804162Z" level=info msg="Container 201f336aa659704557088f5dee9ad84d1944ac255566997edce3fb109f52d0b1: CDI devices from CRI Config.CDIDevices: []"
Oct 13 05:37:32.650735 containerd[2478]: time="2025-10-13T05:37:32.650706049Z" level=info msg="CreateContainer within sandbox \"d495d11331c6f97e8d6107e54ad15e1b70a2259a33f8eccf83ae2be00097345c\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"201f336aa659704557088f5dee9ad84d1944ac255566997edce3fb109f52d0b1\""
Oct 13 05:37:32.651437 containerd[2478]: time="2025-10-13T05:37:32.651134342Z" level=info msg="StartContainer for \"201f336aa659704557088f5dee9ad84d1944ac255566997edce3fb109f52d0b1\""
Oct 13 05:37:32.652688 containerd[2478]: time="2025-10-13T05:37:32.652641763Z" level=info msg="connecting to shim 201f336aa659704557088f5dee9ad84d1944ac255566997edce3fb109f52d0b1" address="unix:///run/containerd/s/52329049e43a751f79fc13a107e54ee714f891e0323aea06f8c5409b17a13047" protocol=ttrpc version=3
Oct 13 05:37:32.672575 systemd[1]: Started cri-containerd-201f336aa659704557088f5dee9ad84d1944ac255566997edce3fb109f52d0b1.scope - libcontainer container 201f336aa659704557088f5dee9ad84d1944ac255566997edce3fb109f52d0b1.
Oct 13 05:37:32.708973 containerd[2478]: time="2025-10-13T05:37:32.708897319Z" level=info msg="StartContainer for \"201f336aa659704557088f5dee9ad84d1944ac255566997edce3fb109f52d0b1\" returns successfully"
Oct 13 05:37:33.156292 kubelet[3925]: I1013 05:37:33.156260 3925 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Oct 13 05:37:33.156292 kubelet[3925]: I1013 05:37:33.156289 3925 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Oct 13 05:37:33.299073 kubelet[3925]: I1013 05:37:33.297406 3925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-kqvx9" podStartSLOduration=27.166113874 podStartE2EDuration="38.297329085s" podCreationTimestamp="2025-10-13 05:36:55 +0000 UTC" firstStartedPulling="2025-10-13 05:37:21.461710818 +0000 UTC m=+45.500585828" lastFinishedPulling="2025-10-13 05:37:32.592926035 +0000 UTC m=+56.631801039" observedRunningTime="2025-10-13 05:37:33.296595223 +0000 UTC m=+57.335470240" watchObservedRunningTime="2025-10-13 05:37:33.297329085 +0000 UTC m=+57.336204116"
Oct 13 05:37:33.351183 containerd[2478]: time="2025-10-13T05:37:33.351147352Z" level=info msg="TaskExit event in podsandbox handler container_id:\"711aa1a47e32da1aaf8acab8b00712affb0376c1d0ae88c47727d72a99946fac\" id:\"beb1625b4051e7f603011f04a4e01499600fed243c4ea5e58475d8ebd6fba3d0\" pid:6307 exit_status:1 exited_at:{seconds:1760333853 nanos:350745140}"
Oct 13 05:37:37.364027 containerd[2478]: time="2025-10-13T05:37:37.363983220Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6fd2a3b78fd2af19816f8117b7ecfd0c24fcca52c85ab23747a7b54804083253\" id:\"d798ad07544125d11bfa2e1341319556c049796598d4af1229277dde1ced6c26\" pid:6339 exited_at:{seconds:1760333857 nanos:363762455}"
Oct 13 05:37:38.366192 kubelet[3925]: I1013 05:37:38.365825 3925 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Oct 13 05:37:45.262191 containerd[2478]: time="2025-10-13T05:37:45.262150564Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e4d1e97731be76b97308f95900dda1ffaf37fa5611f5fbe45918c3491b740b4a\" id:\"a6eaf5a6d9bd47f4792c87e223bb1b920194ec40c6d1f35b6f2564a49f0fe0d0\" pid:6367 exited_at:{seconds:1760333865 nanos:261707620}"
Oct 13 05:37:57.318171 containerd[2478]: time="2025-10-13T05:37:57.318072991Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6fd2a3b78fd2af19816f8117b7ecfd0c24fcca52c85ab23747a7b54804083253\" id:\"1469eb94728eee5cfb78a856002df7759be7f4c9db3edb8caabaab6178a9d4f7\" pid:6400 exited_at:{seconds:1760333877 nanos:317613387}"
Oct 13 05:38:03.445122 containerd[2478]: time="2025-10-13T05:38:03.445078834Z" level=info msg="TaskExit event in podsandbox handler container_id:\"711aa1a47e32da1aaf8acab8b00712affb0376c1d0ae88c47727d72a99946fac\" id:\"8b0d7a0e0a3964c62579480833c5d1f7692cb99809185bfccfc8d6cbc6a729f3\" pid:6425 exited_at:{seconds:1760333883 nanos:444862550}"
Oct 13 05:38:05.513544 containerd[2478]: time="2025-10-13T05:38:05.513499283Z" level=info msg="TaskExit event in podsandbox handler container_id:\"711aa1a47e32da1aaf8acab8b00712affb0376c1d0ae88c47727d72a99946fac\" id:\"64d530820681d141b0c8f2f95e7e125cb62fc69accf2ed2c01be05f7ab2596cf\" pid:6450 exited_at:{seconds:1760333885 nanos:512426871}"
Oct 13 05:38:15.293013 containerd[2478]: time="2025-10-13T05:38:15.292754550Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e4d1e97731be76b97308f95900dda1ffaf37fa5611f5fbe45918c3491b740b4a\" id:\"fa1f75247396c157a40c0b54da0ab148eef50263f531969e20d12359ffb9aa68\" pid:6474 exited_at:{seconds:1760333895 nanos:290938788}"
Oct 13 05:38:23.308978 systemd[1]: Started sshd@7-10.200.8.45:22-10.200.16.10:58022.service - OpenSSH per-connection server daemon (10.200.16.10:58022).
Oct 13 05:38:23.964091 sshd[6491]: Accepted publickey for core from 10.200.16.10 port 58022 ssh2: RSA SHA256:AbXmOVBj6dCWUgMVoY4vqYS1kqBAdKFjDdmWPRs43lM
Oct 13 05:38:23.965484 sshd-session[6491]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 13 05:38:23.970449 systemd-logind[2449]: New session 10 of user core.
Oct 13 05:38:23.974546 systemd[1]: Started session-10.scope - Session 10 of User core.
Oct 13 05:38:24.519365 sshd[6494]: Connection closed by 10.200.16.10 port 58022
Oct 13 05:38:24.521642 sshd-session[6491]: pam_unix(sshd:session): session closed for user core
Oct 13 05:38:24.526102 systemd[1]: sshd@7-10.200.8.45:22-10.200.16.10:58022.service: Deactivated successfully.
Oct 13 05:38:24.526823 systemd-logind[2449]: Session 10 logged out. Waiting for processes to exit.
Oct 13 05:38:24.529679 systemd[1]: session-10.scope: Deactivated successfully.
Oct 13 05:38:24.533017 systemd-logind[2449]: Removed session 10.
Oct 13 05:38:27.302566 containerd[2478]: time="2025-10-13T05:38:27.302503229Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6fd2a3b78fd2af19816f8117b7ecfd0c24fcca52c85ab23747a7b54804083253\" id:\"cdf3dd31d1c17930d7915c421b75e97a4a816d7cda3dd1367457c490d49e6571\" pid:6519 exited_at:{seconds:1760333907 nanos:302118318}"
Oct 13 05:38:29.644499 systemd[1]: Started sshd@8-10.200.8.45:22-10.200.16.10:58036.service - OpenSSH per-connection server daemon (10.200.16.10:58036).
Oct 13 05:38:30.294829 sshd[6529]: Accepted publickey for core from 10.200.16.10 port 58036 ssh2: RSA SHA256:AbXmOVBj6dCWUgMVoY4vqYS1kqBAdKFjDdmWPRs43lM
Oct 13 05:38:30.295991 sshd-session[6529]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 13 05:38:30.302669 systemd-logind[2449]: New session 11 of user core.
Oct 13 05:38:30.306544 systemd[1]: Started session-11.scope - Session 11 of User core.
Oct 13 05:38:30.822875 sshd[6534]: Connection closed by 10.200.16.10 port 58036
Oct 13 05:38:30.822352 sshd-session[6529]: pam_unix(sshd:session): session closed for user core
Oct 13 05:38:30.827247 systemd-logind[2449]: Session 11 logged out. Waiting for processes to exit.
Oct 13 05:38:30.829201 systemd[1]: sshd@8-10.200.8.45:22-10.200.16.10:58036.service: Deactivated successfully.
Oct 13 05:38:30.832589 systemd[1]: session-11.scope: Deactivated successfully.
Oct 13 05:38:30.835517 systemd-logind[2449]: Removed session 11.
Oct 13 05:38:33.352754 containerd[2478]: time="2025-10-13T05:38:33.352710004Z" level=info msg="TaskExit event in podsandbox handler container_id:\"711aa1a47e32da1aaf8acab8b00712affb0376c1d0ae88c47727d72a99946fac\" id:\"2ec3065368b1a1af73527956c7ca63466afb743d9a6f2bd38f9500645e8a3aba\" pid:6559 exited_at:{seconds:1760333913 nanos:351725586}"
Oct 13 05:38:35.937702 systemd[1]: Started sshd@9-10.200.8.45:22-10.200.16.10:57302.service - OpenSSH per-connection server daemon (10.200.16.10:57302).
Oct 13 05:38:36.580595 sshd[6577]: Accepted publickey for core from 10.200.16.10 port 57302 ssh2: RSA SHA256:AbXmOVBj6dCWUgMVoY4vqYS1kqBAdKFjDdmWPRs43lM
Oct 13 05:38:36.581757 sshd-session[6577]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 13 05:38:36.585607 systemd-logind[2449]: New session 12 of user core.
Oct 13 05:38:36.591547 systemd[1]: Started session-12.scope - Session 12 of User core.
Oct 13 05:38:37.082080 sshd[6582]: Connection closed by 10.200.16.10 port 57302
Oct 13 05:38:37.082674 sshd-session[6577]: pam_unix(sshd:session): session closed for user core
Oct 13 05:38:37.086156 systemd[1]: sshd@9-10.200.8.45:22-10.200.16.10:57302.service: Deactivated successfully.
Oct 13 05:38:37.088116 systemd[1]: session-12.scope: Deactivated successfully.
Oct 13 05:38:37.088837 systemd-logind[2449]: Session 12 logged out. Waiting for processes to exit.
Oct 13 05:38:37.091037 systemd-logind[2449]: Removed session 12.
Oct 13 05:38:37.205710 systemd[1]: Started sshd@10-10.200.8.45:22-10.200.16.10:57306.service - OpenSSH per-connection server daemon (10.200.16.10:57306).
Oct 13 05:38:37.364538 containerd[2478]: time="2025-10-13T05:38:37.364435270Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6fd2a3b78fd2af19816f8117b7ecfd0c24fcca52c85ab23747a7b54804083253\" id:\"00757c20aeb3d3a6f04a248f031fd2b7ade19af588b033362d23e9fa1e0bd0aa\" pid:6609 exited_at:{seconds:1760333917 nanos:364121862}"
Oct 13 05:38:37.845308 sshd[6594]: Accepted publickey for core from 10.200.16.10 port 57306 ssh2: RSA SHA256:AbXmOVBj6dCWUgMVoY4vqYS1kqBAdKFjDdmWPRs43lM
Oct 13 05:38:37.846450 sshd-session[6594]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 13 05:38:37.850461 systemd-logind[2449]: New session 13 of user core.
Oct 13 05:38:37.856511 systemd[1]: Started session-13.scope - Session 13 of User core.
Oct 13 05:38:38.375919 sshd[6618]: Connection closed by 10.200.16.10 port 57306
Oct 13 05:38:38.376425 sshd-session[6594]: pam_unix(sshd:session): session closed for user core
Oct 13 05:38:38.381159 systemd[1]: sshd@10-10.200.8.45:22-10.200.16.10:57306.service: Deactivated successfully.
Oct 13 05:38:38.382027 systemd-logind[2449]: Session 13 logged out. Waiting for processes to exit.
Oct 13 05:38:38.383219 systemd[1]: session-13.scope: Deactivated successfully.
Oct 13 05:38:38.384851 systemd-logind[2449]: Removed session 13.
Oct 13 05:38:38.500515 systemd[1]: Started sshd@11-10.200.8.45:22-10.200.16.10:57308.service - OpenSSH per-connection server daemon (10.200.16.10:57308).
Oct 13 05:38:39.142277 sshd[6627]: Accepted publickey for core from 10.200.16.10 port 57308 ssh2: RSA SHA256:AbXmOVBj6dCWUgMVoY4vqYS1kqBAdKFjDdmWPRs43lM
Oct 13 05:38:50.278175 containerd[2478]: time="2025-10-13T05:38:45.257228817Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e4d1e97731be76b97308f95900dda1ffaf37fa5611f5fbe45918c3491b740b4a\" id:\"d0f450558cd9a547ceb05031beddba972025985fb1e02e2ba70c9a3fcb766b2a\" pid:6652 exited_at:{seconds:1760333925 nanos:256222936}"
Oct 13 05:38:50.279091 sshd-session[6627]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 13 05:38:50.289468 systemd-logind[2449]: New session 14 of user core.
Oct 13 05:38:50.297525 systemd[1]: Started session-14.scope - Session 14 of User core.
Oct 13 05:38:50.706470 sshd[6680]: Connection closed by 10.200.16.10 port 57308
Oct 13 05:38:50.707142 sshd-session[6627]: pam_unix(sshd:session): session closed for user core
Oct 13 05:38:50.710759 systemd[1]: sshd@11-10.200.8.45:22-10.200.16.10:57308.service: Deactivated successfully.
Oct 13 05:38:50.712611 systemd[1]: session-14.scope: Deactivated successfully.
Oct 13 05:38:50.713354 systemd-logind[2449]: Session 14 logged out. Waiting for processes to exit.
Oct 13 05:38:50.714475 systemd-logind[2449]: Removed session 14.
Oct 13 05:38:55.826496 systemd[1]: Started sshd@12-10.200.8.45:22-10.200.16.10:60554.service - OpenSSH per-connection server daemon (10.200.16.10:60554).
Oct 13 05:38:56.482419 sshd[6699]: Accepted publickey for core from 10.200.16.10 port 60554 ssh2: RSA SHA256:AbXmOVBj6dCWUgMVoY4vqYS1kqBAdKFjDdmWPRs43lM
Oct 13 05:38:56.483561 sshd-session[6699]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 13 05:38:56.488100 systemd-logind[2449]: New session 15 of user core.
Oct 13 05:38:56.496690 systemd[1]: Started session-15.scope - Session 15 of User core.
Oct 13 05:38:57.062870 sshd[6702]: Connection closed by 10.200.16.10 port 60554
Oct 13 05:38:57.063598 sshd-session[6699]: pam_unix(sshd:session): session closed for user core
Oct 13 05:38:57.069836 systemd-logind[2449]: Session 15 logged out. Waiting for processes to exit.
Oct 13 05:38:57.072519 systemd[1]: sshd@12-10.200.8.45:22-10.200.16.10:60554.service: Deactivated successfully.
Oct 13 05:38:57.075682 systemd[1]: session-15.scope: Deactivated successfully.
Oct 13 05:38:57.078121 systemd-logind[2449]: Removed session 15.
Oct 13 05:38:57.309314 containerd[2478]: time="2025-10-13T05:38:57.309268100Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6fd2a3b78fd2af19816f8117b7ecfd0c24fcca52c85ab23747a7b54804083253\" id:\"bf5b583d1ff445e425b2c6eb1caf7a1fe9e6ba0d3d19d228c24237d630d720ce\" pid:6725 exited_at:{seconds:1760333937 nanos:309051160}"
Oct 13 05:39:02.179490 systemd[1]: Started sshd@13-10.200.8.45:22-10.200.16.10:35370.service - OpenSSH per-connection server daemon (10.200.16.10:35370).
Oct 13 05:39:02.824109 sshd[6735]: Accepted publickey for core from 10.200.16.10 port 35370 ssh2: RSA SHA256:AbXmOVBj6dCWUgMVoY4vqYS1kqBAdKFjDdmWPRs43lM
Oct 13 05:39:02.825265 sshd-session[6735]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 13 05:39:02.829454 systemd-logind[2449]: New session 16 of user core.
Oct 13 05:39:02.833508 systemd[1]: Started session-16.scope - Session 16 of User core.
Oct 13 05:39:03.351511 containerd[2478]: time="2025-10-13T05:39:03.351466411Z" level=info msg="TaskExit event in podsandbox handler container_id:\"711aa1a47e32da1aaf8acab8b00712affb0376c1d0ae88c47727d72a99946fac\" id:\"fa20839472c0009fce96378826d202cbf6ffb58b7bff4959363822add4922432\" pid:6758 exited_at:{seconds:1760333943 nanos:351223128}"
Oct 13 05:39:03.355432 sshd[6738]: Connection closed by 10.200.16.10 port 35370
Oct 13 05:39:03.356897 sshd-session[6735]: pam_unix(sshd:session): session closed for user core
Oct 13 05:39:03.360256 systemd-logind[2449]: Session 16 logged out. Waiting for processes to exit.
Oct 13 05:39:03.360520 systemd[1]: sshd@13-10.200.8.45:22-10.200.16.10:35370.service: Deactivated successfully.
Oct 13 05:39:03.362661 systemd[1]: session-16.scope: Deactivated successfully.
Oct 13 05:39:03.364205 systemd-logind[2449]: Removed session 16.
Oct 13 05:39:03.468526 systemd[1]: Started sshd@14-10.200.8.45:22-10.200.16.10:35382.service - OpenSSH per-connection server daemon (10.200.16.10:35382).
Oct 13 05:39:04.111851 sshd[6772]: Accepted publickey for core from 10.200.16.10 port 35382 ssh2: RSA SHA256:AbXmOVBj6dCWUgMVoY4vqYS1kqBAdKFjDdmWPRs43lM
Oct 13 05:39:04.113034 sshd-session[6772]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 13 05:39:04.117514 systemd-logind[2449]: New session 17 of user core.
Oct 13 05:39:04.122527 systemd[1]: Started session-17.scope - Session 17 of User core.
Oct 13 05:39:04.674947 sshd[6775]: Connection closed by 10.200.16.10 port 35382
Oct 13 05:39:04.675478 sshd-session[6772]: pam_unix(sshd:session): session closed for user core
Oct 13 05:39:04.678882 systemd[1]: sshd@14-10.200.8.45:22-10.200.16.10:35382.service: Deactivated successfully.
Oct 13 05:39:04.680747 systemd[1]: session-17.scope: Deactivated successfully.
Oct 13 05:39:04.682076 systemd-logind[2449]: Session 17 logged out. Waiting for processes to exit.
Oct 13 05:39:04.683224 systemd-logind[2449]: Removed session 17.
Oct 13 05:39:04.788252 systemd[1]: Started sshd@15-10.200.8.45:22-10.200.16.10:35394.service - OpenSSH per-connection server daemon (10.200.16.10:35394).
Oct 13 05:39:05.333413 containerd[2478]: time="2025-10-13T05:39:05.333308801Z" level=info msg="TaskExit event in podsandbox handler container_id:\"711aa1a47e32da1aaf8acab8b00712affb0376c1d0ae88c47727d72a99946fac\" id:\"cfaadd8c5328f75a50e7e88ceea2bab0fa0e9dd313f9fe6b598fc53608c765d7\" pid:6799 exited_at:{seconds:1760333945 nanos:333048984}"
Oct 13 05:39:05.432673 sshd[6784]: Accepted publickey for core from 10.200.16.10 port 35394 ssh2: RSA SHA256:AbXmOVBj6dCWUgMVoY4vqYS1kqBAdKFjDdmWPRs43lM
Oct 13 05:39:05.433780 sshd-session[6784]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 13 05:39:05.438320 systemd-logind[2449]: New session 18 of user core.
Oct 13 05:39:05.443544 systemd[1]: Started session-18.scope - Session 18 of User core.
Oct 13 05:39:06.528353 sshd[6809]: Connection closed by 10.200.16.10 port 35394
Oct 13 05:39:06.528912 sshd-session[6784]: pam_unix(sshd:session): session closed for user core
Oct 13 05:39:06.532040 systemd[1]: sshd@15-10.200.8.45:22-10.200.16.10:35394.service: Deactivated successfully.
Oct 13 05:39:06.534738 systemd[1]: session-18.scope: Deactivated successfully.
Oct 13 05:39:06.538294 systemd-logind[2449]: Session 18 logged out. Waiting for processes to exit.
Oct 13 05:39:06.540098 systemd-logind[2449]: Removed session 18.
Oct 13 05:39:06.638455 systemd[1]: Started sshd@16-10.200.8.45:22-10.200.16.10:35410.service - OpenSSH per-connection server daemon (10.200.16.10:35410).
Oct 13 05:39:07.280542 sshd[6824]: Accepted publickey for core from 10.200.16.10 port 35410 ssh2: RSA SHA256:AbXmOVBj6dCWUgMVoY4vqYS1kqBAdKFjDdmWPRs43lM
Oct 13 05:39:07.283721 sshd-session[6824]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 13 05:39:07.288167 systemd-logind[2449]: New session 19 of user core.
Oct 13 05:39:07.291526 systemd[1]: Started session-19.scope - Session 19 of User core.
Oct 13 05:39:07.860303 sshd[6827]: Connection closed by 10.200.16.10 port 35410
Oct 13 05:39:07.861566 sshd-session[6824]: pam_unix(sshd:session): session closed for user core
Oct 13 05:39:07.865055 systemd-logind[2449]: Session 19 logged out. Waiting for processes to exit.
Oct 13 05:39:07.865320 systemd[1]: sshd@16-10.200.8.45:22-10.200.16.10:35410.service: Deactivated successfully.
Oct 13 05:39:07.867478 systemd[1]: session-19.scope: Deactivated successfully.
Oct 13 05:39:07.868993 systemd-logind[2449]: Removed session 19.
Oct 13 05:39:07.980436 systemd[1]: Started sshd@17-10.200.8.45:22-10.200.16.10:35418.service - OpenSSH per-connection server daemon (10.200.16.10:35418).
Oct 13 05:39:08.622515 sshd[6839]: Accepted publickey for core from 10.200.16.10 port 35418 ssh2: RSA SHA256:AbXmOVBj6dCWUgMVoY4vqYS1kqBAdKFjDdmWPRs43lM
Oct 13 05:39:08.623689 sshd-session[6839]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 13 05:39:08.627466 systemd-logind[2449]: New session 20 of user core.
Oct 13 05:39:08.634544 systemd[1]: Started session-20.scope - Session 20 of User core.
Oct 13 05:39:09.129936 sshd[6842]: Connection closed by 10.200.16.10 port 35418
Oct 13 05:39:09.130494 sshd-session[6839]: pam_unix(sshd:session): session closed for user core
Oct 13 05:39:09.134131 systemd[1]: sshd@17-10.200.8.45:22-10.200.16.10:35418.service: Deactivated successfully.
Oct 13 05:39:09.136490 systemd[1]: session-20.scope: Deactivated successfully.
Oct 13 05:39:09.137661 systemd-logind[2449]: Session 20 logged out. Waiting for processes to exit.
Oct 13 05:39:09.138548 systemd-logind[2449]: Removed session 20.
Oct 13 05:39:14.240449 systemd[1]: Started sshd@18-10.200.8.45:22-10.200.16.10:35374.service - OpenSSH per-connection server daemon (10.200.16.10:35374).
Oct 13 05:39:14.889882 sshd[6858]: Accepted publickey for core from 10.200.16.10 port 35374 ssh2: RSA SHA256:AbXmOVBj6dCWUgMVoY4vqYS1kqBAdKFjDdmWPRs43lM
Oct 13 05:39:14.891400 sshd-session[6858]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 13 05:39:14.895971 systemd-logind[2449]: New session 21 of user core.
Oct 13 05:39:14.903666 systemd[1]: Started session-21.scope - Session 21 of User core.
Oct 13 05:39:15.349106 containerd[2478]: time="2025-10-13T05:39:15.348983146Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e4d1e97731be76b97308f95900dda1ffaf37fa5611f5fbe45918c3491b740b4a\" id:\"1101aeafa9e23a0e8266e562b565df6051497b488e014d5f6f9b1efb0eb6d3b7\" pid:6874 exited_at:{seconds:1760333955 nanos:348486813}"
Oct 13 05:39:15.415997 sshd[6861]: Connection closed by 10.200.16.10 port 35374
Oct 13 05:39:15.416553 sshd-session[6858]: pam_unix(sshd:session): session closed for user core
Oct 13 05:39:15.422855 systemd-logind[2449]: Session 21 logged out. Waiting for processes to exit.
Oct 13 05:39:15.423729 systemd[1]: sshd@18-10.200.8.45:22-10.200.16.10:35374.service: Deactivated successfully.
Oct 13 05:39:15.426348 systemd[1]: session-21.scope: Deactivated successfully.
Oct 13 05:39:15.432801 systemd-logind[2449]: Removed session 21.
Oct 13 05:39:20.528745 systemd[1]: Started sshd@19-10.200.8.45:22-10.200.16.10:35876.service - OpenSSH per-connection server daemon (10.200.16.10:35876).
Oct 13 05:39:21.173037 sshd[6898]: Accepted publickey for core from 10.200.16.10 port 35876 ssh2: RSA SHA256:AbXmOVBj6dCWUgMVoY4vqYS1kqBAdKFjDdmWPRs43lM
Oct 13 05:39:21.174182 sshd-session[6898]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 13 05:39:21.178795 systemd-logind[2449]: New session 22 of user core.
Oct 13 05:39:21.182516 systemd[1]: Started session-22.scope - Session 22 of User core.
Oct 13 05:39:21.671201 sshd[6901]: Connection closed by 10.200.16.10 port 35876
Oct 13 05:39:21.671777 sshd-session[6898]: pam_unix(sshd:session): session closed for user core
Oct 13 05:39:21.675261 systemd[1]: sshd@19-10.200.8.45:22-10.200.16.10:35876.service: Deactivated successfully.
Oct 13 05:39:21.676988 systemd[1]: session-22.scope: Deactivated successfully.
Oct 13 05:39:21.677844 systemd-logind[2449]: Session 22 logged out. Waiting for processes to exit.
Oct 13 05:39:21.678901 systemd-logind[2449]: Removed session 22.
Oct 13 05:39:26.788625 systemd[1]: Started sshd@20-10.200.8.45:22-10.200.16.10:35886.service - OpenSSH per-connection server daemon (10.200.16.10:35886).
Oct 13 05:39:27.300670 containerd[2478]: time="2025-10-13T05:39:27.300628634Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6fd2a3b78fd2af19816f8117b7ecfd0c24fcca52c85ab23747a7b54804083253\" id:\"0b7f6ef0ee29d42c2258636bd00757dc592dc535cbffdd4712a65ee8f802527f\" pid:6930 exited_at:{seconds:1760333967 nanos:300310827}"
Oct 13 05:39:27.438681 sshd[6914]: Accepted publickey for core from 10.200.16.10 port 35886 ssh2: RSA SHA256:AbXmOVBj6dCWUgMVoY4vqYS1kqBAdKFjDdmWPRs43lM
Oct 13 05:39:27.439826 sshd-session[6914]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 13 05:39:27.444417 systemd-logind[2449]: New session 23 of user core.
Oct 13 05:39:27.448529 systemd[1]: Started session-23.scope - Session 23 of User core.
Oct 13 05:39:27.977939 sshd[6939]: Connection closed by 10.200.16.10 port 35886
Oct 13 05:39:27.981551 sshd-session[6914]: pam_unix(sshd:session): session closed for user core
Oct 13 05:39:27.985281 systemd-logind[2449]: Session 23 logged out. Waiting for processes to exit.
Oct 13 05:39:27.986263 systemd[1]: sshd@20-10.200.8.45:22-10.200.16.10:35886.service: Deactivated successfully.
Oct 13 05:39:27.990507 systemd[1]: session-23.scope: Deactivated successfully.
Oct 13 05:39:27.994419 systemd-logind[2449]: Removed session 23.
Oct 13 05:39:33.092827 systemd[1]: Started sshd@21-10.200.8.45:22-10.200.16.10:55976.service - OpenSSH per-connection server daemon (10.200.16.10:55976).
Oct 13 05:39:33.343871 containerd[2478]: time="2025-10-13T05:39:33.343732813Z" level=info msg="TaskExit event in podsandbox handler container_id:\"711aa1a47e32da1aaf8acab8b00712affb0376c1d0ae88c47727d72a99946fac\" id:\"f03dc22a17eda53ecb61c6bb2818ecacd8425eabca046a4da2afb874359698eb\" pid:6967 exited_at:{seconds:1760333973 nanos:343498322}"
Oct 13 05:39:33.741676 sshd[6951]: Accepted publickey for core from 10.200.16.10 port 55976 ssh2: RSA SHA256:AbXmOVBj6dCWUgMVoY4vqYS1kqBAdKFjDdmWPRs43lM
Oct 13 05:39:33.743159 sshd-session[6951]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 13 05:39:33.747854 systemd-logind[2449]: New session 24 of user core.
Oct 13 05:39:33.756504 systemd[1]: Started session-24.scope - Session 24 of User core.
Oct 13 05:39:34.246990 sshd[6977]: Connection closed by 10.200.16.10 port 55976
Oct 13 05:39:34.248568 sshd-session[6951]: pam_unix(sshd:session): session closed for user core
Oct 13 05:39:34.252428 systemd-logind[2449]: Session 24 logged out. Waiting for processes to exit.
Oct 13 05:39:34.252804 systemd[1]: sshd@21-10.200.8.45:22-10.200.16.10:55976.service: Deactivated successfully.
Oct 13 05:39:34.255060 systemd[1]: session-24.scope: Deactivated successfully.
Oct 13 05:39:34.256578 systemd-logind[2449]: Removed session 24.