Sep 16 04:56:51.036564 kernel: Linux version 6.12.47-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Tue Sep 16 03:05:42 -00 2025
Sep 16 04:56:51.036591 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=0b876f86a632750e9937176808a48c2452d5168964273bcfc3c72f2a26140c06
Sep 16 04:56:51.036602 kernel: BIOS-provided physical RAM map:
Sep 16 04:56:51.036609 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Sep 16 04:56:51.036614 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved
Sep 16 04:56:51.036621 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000044fdfff] usable
Sep 16 04:56:51.036629 kernel: BIOS-e820: [mem 0x00000000044fe000-0x00000000048fdfff] reserved
Sep 16 04:56:51.036637 kernel: BIOS-e820: [mem 0x00000000048fe000-0x000000003ff1efff] usable
Sep 16 04:56:51.036643 kernel: BIOS-e820: [mem 0x000000003ff1f000-0x000000003ffc8fff] reserved
Sep 16 04:56:51.036650 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data
Sep 16 04:56:51.036656 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS
Sep 16 04:56:51.036663 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable
Sep 16 04:56:51.036669 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable
Sep 16 04:56:51.036676 kernel: printk: legacy bootconsole [earlyser0] enabled
Sep 16 04:56:51.036686 kernel: NX (Execute Disable) protection: active
Sep 16 04:56:51.036693 kernel: APIC: Static calls initialized
Sep 16 04:56:51.036700 kernel: efi: EFI v2.7 by Microsoft
Sep 16 04:56:51.036707 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff88000 SMBIOS 3.0=0x3ff86000 MEMATTR=0x3e9da718 RNG=0x3ffd2018
Sep 16 04:56:51.036714 kernel: random: crng init done
Sep 16 04:56:51.036721 kernel: secureboot: Secure boot disabled
Sep 16 04:56:51.036728 kernel: SMBIOS 3.1.0 present.
Sep 16 04:56:51.036735 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 01/28/2025
Sep 16 04:56:51.036742 kernel: DMI: Memory slots populated: 2/2
Sep 16 04:56:51.036751 kernel: Hypervisor detected: Microsoft Hyper-V
Sep 16 04:56:51.036758 kernel: Hyper-V: privilege flags low 0xae7f, high 0x3b8030, hints 0x9e4e24, misc 0xe0bed7b2
Sep 16 04:56:51.036765 kernel: Hyper-V: Nested features: 0x3e0101
Sep 16 04:56:51.036772 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40
Sep 16 04:56:51.036779 kernel: Hyper-V: Using hypercall for remote TLB flush
Sep 16 04:56:51.036800 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Sep 16 04:56:51.036809 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Sep 16 04:56:51.036816 kernel: tsc: Detected 2299.999 MHz processor
Sep 16 04:56:51.036823 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 16 04:56:51.036832 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 16 04:56:51.036839 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x10000000000
Sep 16 04:56:51.036849 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Sep 16 04:56:51.036857 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 16 04:56:51.036865 kernel: e820: update [mem 0x48000000-0xffffffff] usable ==> reserved
Sep 16 04:56:51.036872 kernel: last_pfn = 0x40000 max_arch_pfn = 0x10000000000
Sep 16 04:56:51.036880 kernel: Using GB pages for direct mapping
Sep 16 04:56:51.036889 kernel: ACPI: Early table checksum verification disabled
Sep 16 04:56:51.036901 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL)
Sep 16 04:56:51.036912 kernel: ACPI: XSDT 0x000000003FFF90E8 00005C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 16 04:56:51.036921 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 16 04:56:51.036930 kernel: ACPI: DSDT 0x000000003FFD6000 01E27A (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Sep 16 04:56:51.036939 kernel: ACPI: FACS 0x000000003FFFE000 000040
Sep 16 04:56:51.036949 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 16 04:56:51.036958 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 16 04:56:51.036967 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 16 04:56:51.036975 kernel: ACPI: APIC 0x000000003FFD5000 000052 (v05 HVLITE HVLITETB 00000000 MSHV 00000000)
Sep 16 04:56:51.036984 kernel: ACPI: SRAT 0x000000003FFD4000 0000A0 (v03 HVLITE HVLITETB 00000000 MSHV 00000000)
Sep 16 04:56:51.036993 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 16 04:56:51.037002 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113]
Sep 16 04:56:51.037011 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4279]
Sep 16 04:56:51.037020 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f]
Sep 16 04:56:51.037029 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063]
Sep 16 04:56:51.037038 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f]
Sep 16 04:56:51.037049 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027]
Sep 16 04:56:51.037058 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5051]
Sep 16 04:56:51.037067 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd409f]
Sep 16 04:56:51.037076 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037]
Sep 16 04:56:51.037085 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff]
Sep 16 04:56:51.037094 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff]
Sep 16 04:56:51.037103 kernel: NUMA: Node 0 [mem 0x00001000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00001000-0x2bfffffff]
Sep 16 04:56:51.037112 kernel: NODE_DATA(0) allocated [mem 0x2bfff8dc0-0x2bfffffff]
Sep 16 04:56:51.037121 kernel: Zone ranges:
Sep 16 04:56:51.037132 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 16 04:56:51.037141 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Sep 16 04:56:51.037149 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff]
Sep 16 04:56:51.037158 kernel: Device empty
Sep 16 04:56:51.037165 kernel: Movable zone start for each node
Sep 16 04:56:51.037173 kernel: Early memory node ranges
Sep 16 04:56:51.037181 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Sep 16 04:56:51.037188 kernel: node 0: [mem 0x0000000000100000-0x00000000044fdfff]
Sep 16 04:56:51.037196 kernel: node 0: [mem 0x00000000048fe000-0x000000003ff1efff]
Sep 16 04:56:51.037205 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff]
Sep 16 04:56:51.037212 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff]
Sep 16 04:56:51.037220 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff]
Sep 16 04:56:51.037227 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 16 04:56:51.037235 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Sep 16 04:56:51.037242 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges
Sep 16 04:56:51.037250 kernel: On node 0, zone DMA32: 224 pages in unavailable ranges
Sep 16 04:56:51.037257 kernel: ACPI: PM-Timer IO Port: 0x408
Sep 16 04:56:51.037265 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Sep 16 04:56:51.037274 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 16 04:56:51.037281 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 16 04:56:51.037289 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200
Sep 16 04:56:51.037297 kernel: TSC deadline timer available
Sep 16 04:56:51.037305 kernel: CPU topo: Max. logical packages: 1
Sep 16 04:56:51.037313 kernel: CPU topo: Max. logical dies: 1
Sep 16 04:56:51.037321 kernel: CPU topo: Max. dies per package: 1
Sep 16 04:56:51.037329 kernel: CPU topo: Max. threads per core: 2
Sep 16 04:56:51.037337 kernel: CPU topo: Num. cores per package: 1
Sep 16 04:56:51.037347 kernel: CPU topo: Num. threads per package: 2
Sep 16 04:56:51.037354 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
Sep 16 04:56:51.037361 kernel: [mem 0x40000000-0xffffffff] available for PCI devices
Sep 16 04:56:51.037368 kernel: Booting paravirtualized kernel on Hyper-V
Sep 16 04:56:51.037376 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 16 04:56:51.037383 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Sep 16 04:56:51.037390 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
Sep 16 04:56:51.037398 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
Sep 16 04:56:51.037405 kernel: pcpu-alloc: [0] 0 1
Sep 16 04:56:51.037413 kernel: Hyper-V: PV spinlocks enabled
Sep 16 04:56:51.037421 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Sep 16 04:56:51.037429 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=0b876f86a632750e9937176808a48c2452d5168964273bcfc3c72f2a26140c06
Sep 16 04:56:51.037438 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 16 04:56:51.037446 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Sep 16 04:56:51.037453 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 16 04:56:51.037461 kernel: Fallback order for Node 0: 0
Sep 16 04:56:51.037469 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2095807
Sep 16 04:56:51.037478 kernel: Policy zone: Normal
Sep 16 04:56:51.037485 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 16 04:56:51.037492 kernel: software IO TLB: area num 2.
Sep 16 04:56:51.037500 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Sep 16 04:56:51.037508 kernel: ftrace: allocating 40125 entries in 157 pages
Sep 16 04:56:51.037515 kernel: ftrace: allocated 157 pages with 5 groups
Sep 16 04:56:51.037523 kernel: Dynamic Preempt: voluntary
Sep 16 04:56:51.037530 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 16 04:56:51.037539 kernel: rcu: RCU event tracing is enabled.
Sep 16 04:56:51.037554 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Sep 16 04:56:51.037562 kernel: Trampoline variant of Tasks RCU enabled.
Sep 16 04:56:51.037570 kernel: Rude variant of Tasks RCU enabled.
Sep 16 04:56:51.037580 kernel: Tracing variant of Tasks RCU enabled.
Sep 16 04:56:51.037588 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 16 04:56:51.037596 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Sep 16 04:56:51.037605 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 16 04:56:51.037613 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 16 04:56:51.037621 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 16 04:56:51.037629 kernel: Using NULL legacy PIC
Sep 16 04:56:51.037639 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0
Sep 16 04:56:51.037647 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 16 04:56:51.037655 kernel: Console: colour dummy device 80x25
Sep 16 04:56:51.037663 kernel: printk: legacy console [tty1] enabled
Sep 16 04:56:51.037671 kernel: printk: legacy console [ttyS0] enabled
Sep 16 04:56:51.037679 kernel: printk: legacy bootconsole [earlyser0] disabled
Sep 16 04:56:51.037687 kernel: ACPI: Core revision 20240827
Sep 16 04:56:51.037697 kernel: Failed to register legacy timer interrupt
Sep 16 04:56:51.037705 kernel: APIC: Switch to symmetric I/O mode setup
Sep 16 04:56:51.037713 kernel: x2apic enabled
Sep 16 04:56:51.037721 kernel: APIC: Switched APIC routing to: physical x2apic
Sep 16 04:56:51.037729 kernel: Hyper-V: Host Build 10.0.26100.1293-1-0
Sep 16 04:56:51.037737 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Sep 16 04:56:51.037745 kernel: Hyper-V: Disabling IBT because of Hyper-V bug
Sep 16 04:56:51.037753 kernel: Hyper-V: Using IPI hypercalls
Sep 16 04:56:51.037762 kernel: APIC: send_IPI() replaced with hv_send_ipi()
Sep 16 04:56:51.037771 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask()
Sep 16 04:56:51.037780 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself()
Sep 16 04:56:51.040087 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself()
Sep 16 04:56:51.040102 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all()
Sep 16 04:56:51.040110 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self()
Sep 16 04:56:51.040119 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2127345424d, max_idle_ns: 440795318347 ns
Sep 16 04:56:51.040127 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 4599.99 BogoMIPS (lpj=2299999)
Sep 16 04:56:51.040135 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Sep 16 04:56:51.040143 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Sep 16 04:56:51.040155 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Sep 16 04:56:51.040163 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 16 04:56:51.040171 kernel: Spectre V2 : Mitigation: Retpolines
Sep 16 04:56:51.040179 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Sep 16 04:56:51.040187 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Sep 16 04:56:51.040195 kernel: RETBleed: Vulnerable
Sep 16 04:56:51.040202 kernel: Speculative Store Bypass: Vulnerable
Sep 16 04:56:51.040210 kernel: active return thunk: its_return_thunk
Sep 16 04:56:51.040218 kernel: ITS: Mitigation: Aligned branch/return thunks
Sep 16 04:56:51.040225 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 16 04:56:51.040233 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 16 04:56:51.040242 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 16 04:56:51.040250 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Sep 16 04:56:51.040258 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Sep 16 04:56:51.040266 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Sep 16 04:56:51.040273 kernel: x86/fpu: Supporting XSAVE feature 0x800: 'Control-flow User registers'
Sep 16 04:56:51.040281 kernel: x86/fpu: Supporting XSAVE feature 0x20000: 'AMX Tile config'
Sep 16 04:56:51.040289 kernel: x86/fpu: Supporting XSAVE feature 0x40000: 'AMX Tile data'
Sep 16 04:56:51.040297 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 16 04:56:51.040305 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Sep 16 04:56:51.040313 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Sep 16 04:56:51.040322 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Sep 16 04:56:51.040330 kernel: x86/fpu: xstate_offset[11]: 2432, xstate_sizes[11]: 16
Sep 16 04:56:51.040338 kernel: x86/fpu: xstate_offset[17]: 2496, xstate_sizes[17]: 64
Sep 16 04:56:51.040346 kernel: x86/fpu: xstate_offset[18]: 2560, xstate_sizes[18]: 8192
Sep 16 04:56:51.040354 kernel: x86/fpu: Enabled xstate features 0x608e7, context size is 10752 bytes, using 'compacted' format.
Sep 16 04:56:51.040362 kernel: Freeing SMP alternatives memory: 32K
Sep 16 04:56:51.040370 kernel: pid_max: default: 32768 minimum: 301
Sep 16 04:56:51.040378 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Sep 16 04:56:51.040386 kernel: landlock: Up and running.
Sep 16 04:56:51.040394 kernel: SELinux: Initializing.
Sep 16 04:56:51.040402 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Sep 16 04:56:51.040410 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Sep 16 04:56:51.040420 kernel: smpboot: CPU0: Intel INTEL(R) XEON(R) PLATINUM 8573C (family: 0x6, model: 0xcf, stepping: 0x2)
Sep 16 04:56:51.040429 kernel: Performance Events: unsupported p6 CPU model 207 no PMU driver, software events only.
Sep 16 04:56:51.040438 kernel: signal: max sigframe size: 11952
Sep 16 04:56:51.040446 kernel: rcu: Hierarchical SRCU implementation.
Sep 16 04:56:51.040455 kernel: rcu: Max phase no-delay instances is 400.
Sep 16 04:56:51.040464 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Sep 16 04:56:51.040472 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Sep 16 04:56:51.040481 kernel: smp: Bringing up secondary CPUs ...
Sep 16 04:56:51.040489 kernel: smpboot: x86: Booting SMP configuration:
Sep 16 04:56:51.040499 kernel: .... node #0, CPUs: #1
Sep 16 04:56:51.040508 kernel: smp: Brought up 1 node, 2 CPUs
Sep 16 04:56:51.040517 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS)
Sep 16 04:56:51.040527 kernel: Memory: 8077032K/8383228K available (14336K kernel code, 2432K rwdata, 9992K rodata, 54096K init, 2868K bss, 299988K reserved, 0K cma-reserved)
Sep 16 04:56:51.040536 kernel: devtmpfs: initialized
Sep 16 04:56:51.040544 kernel: x86/mm: Memory block size: 128MB
Sep 16 04:56:51.040553 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes)
Sep 16 04:56:51.040561 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 16 04:56:51.040569 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Sep 16 04:56:51.040579 kernel: pinctrl core: initialized pinctrl subsystem
Sep 16 04:56:51.040587 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 16 04:56:51.040595 kernel: audit: initializing netlink subsys (disabled)
Sep 16 04:56:51.040603 kernel: audit: type=2000 audit(1757998607.031:1): state=initialized audit_enabled=0 res=1
Sep 16 04:56:51.040611 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 16 04:56:51.040620 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 16 04:56:51.040628 kernel: cpuidle: using governor menu
Sep 16 04:56:51.040637 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 16 04:56:51.040645 kernel: dca service started, version 1.12.1
Sep 16 04:56:51.040656 kernel: e820: reserve RAM buffer [mem 0x044fe000-0x07ffffff]
Sep 16 04:56:51.040665 kernel: e820: reserve RAM buffer [mem 0x3ff1f000-0x3fffffff]
Sep 16 04:56:51.040674 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 16 04:56:51.040684 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 16 04:56:51.040692 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Sep 16 04:56:51.040701 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 16 04:56:51.040711 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Sep 16 04:56:51.040720 kernel: ACPI: Added _OSI(Module Device)
Sep 16 04:56:51.040729 kernel: ACPI: Added _OSI(Processor Device)
Sep 16 04:56:51.040739 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 16 04:56:51.040749 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 16 04:56:51.040758 kernel: ACPI: Interpreter enabled
Sep 16 04:56:51.040766 kernel: ACPI: PM: (supports S0 S5)
Sep 16 04:56:51.040775 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 16 04:56:51.040784 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 16 04:56:51.040807 kernel: PCI: Ignoring E820 reservations for host bridge windows
Sep 16 04:56:51.040817 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F
Sep 16 04:56:51.040825 kernel: iommu: Default domain type: Translated
Sep 16 04:56:51.040836 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 16 04:56:51.040845 kernel: efivars: Registered efivars operations
Sep 16 04:56:51.040853 kernel: PCI: Using ACPI for IRQ routing
Sep 16 04:56:51.040863 kernel: PCI: System does not support PCI
Sep 16 04:56:51.040872 kernel: vgaarb: loaded
Sep 16 04:56:51.040880 kernel: clocksource: Switched to clocksource tsc-early
Sep 16 04:56:51.040890 kernel: VFS: Disk quotas dquot_6.6.0
Sep 16 04:56:51.040899 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 16 04:56:51.040908 kernel: pnp: PnP ACPI init
Sep 16 04:56:51.040918 kernel: pnp: PnP ACPI: found 3 devices
Sep 16 04:56:51.040927 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 16 04:56:51.040937 kernel: NET: Registered PF_INET protocol family
Sep 16 04:56:51.040946 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Sep 16 04:56:51.040955 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Sep 16 04:56:51.040964 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 16 04:56:51.040973 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 16 04:56:51.040982 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Sep 16 04:56:51.040992 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Sep 16 04:56:51.041002 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Sep 16 04:56:51.041011 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Sep 16 04:56:51.041021 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 16 04:56:51.041030 kernel: NET: Registered PF_XDP protocol family
Sep 16 04:56:51.041039 kernel: PCI: CLS 0 bytes, default 64
Sep 16 04:56:51.041048 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Sep 16 04:56:51.041057 kernel: software IO TLB: mapped [mem 0x000000003a9da000-0x000000003e9da000] (64MB)
Sep 16 04:56:51.041099 kernel: RAPL PMU: API unit is 2^-32 Joules, 1 fixed counters, 10737418240 ms ovfl timer
Sep 16 04:56:51.041110 kernel: RAPL PMU: hw unit of domain psys 2^-0 Joules
Sep 16 04:56:51.041120 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2127345424d, max_idle_ns: 440795318347 ns
Sep 16 04:56:51.041130 kernel: clocksource: Switched to clocksource tsc
Sep 16 04:56:51.041140 kernel: Initialise system trusted keyrings
Sep 16 04:56:51.041148 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Sep 16 04:56:51.041157 kernel: Key type asymmetric registered
Sep 16 04:56:51.041166 kernel: Asymmetric key parser 'x509' registered
Sep 16 04:56:51.041175 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Sep 16 04:56:51.041184 kernel: io scheduler mq-deadline registered
Sep 16 04:56:51.041194 kernel: io scheduler kyber registered
Sep 16 04:56:51.041203 kernel: io scheduler bfq registered
Sep 16 04:56:51.041212 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 16 04:56:51.041221 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 16 04:56:51.041230 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 16 04:56:51.041238 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Sep 16 04:56:51.041247 kernel: serial8250: ttyS2 at I/O 0x3e8 (irq = 4, base_baud = 115200) is a 16550A
Sep 16 04:56:51.041256 kernel: i8042: PNP: No PS/2 controller found.
Sep 16 04:56:51.041395 kernel: rtc_cmos 00:02: registered as rtc0
Sep 16 04:56:51.041474 kernel: rtc_cmos 00:02: setting system clock to 2025-09-16T04:56:50 UTC (1757998610)
Sep 16 04:56:51.041544 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram
Sep 16 04:56:51.041554 kernel: intel_pstate: Intel P-state driver initializing
Sep 16 04:56:51.041563 kernel: efifb: probing for efifb
Sep 16 04:56:51.041572 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Sep 16 04:56:51.041580 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Sep 16 04:56:51.041589 kernel: efifb: scrolling: redraw
Sep 16 04:56:51.041597 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Sep 16 04:56:51.041608 kernel: Console: switching to colour frame buffer device 128x48
Sep 16 04:56:51.041616 kernel: fb0: EFI VGA frame buffer device
Sep 16 04:56:51.041624 kernel: pstore: Using crash dump compression: deflate
Sep 16 04:56:51.041633 kernel: pstore: Registered efi_pstore as persistent store backend
Sep 16 04:56:51.041641 kernel: NET: Registered PF_INET6 protocol family
Sep 16 04:56:51.041650 kernel: Segment Routing with IPv6
Sep 16 04:56:51.041658 kernel: In-situ OAM (IOAM) with IPv6
Sep 16 04:56:51.041667 kernel: NET: Registered PF_PACKET protocol family
Sep 16 04:56:51.041675 kernel: Key type dns_resolver registered
Sep 16 04:56:51.041683 kernel: IPI shorthand broadcast: enabled
Sep 16 04:56:51.041694 kernel: sched_clock: Marking stable (3228004460, 103588835)->(3703482717, -371889422)
Sep 16 04:56:51.041704 kernel: registered taskstats version 1
Sep 16 04:56:51.041712 kernel: Loading compiled-in X.509 certificates
Sep 16 04:56:51.041721 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.47-flatcar: d1d5b0d56b9b23dabf19e645632ff93bf659b3bf'
Sep 16 04:56:51.041729 kernel: Demotion targets for Node 0: null
Sep 16 04:56:51.041738 kernel: Key type .fscrypt registered
Sep 16 04:56:51.041746 kernel: Key type fscrypt-provisioning registered
Sep 16 04:56:51.041755 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 16 04:56:51.041766 kernel: ima: Allocated hash algorithm: sha1
Sep 16 04:56:51.041774 kernel: ima: No architecture policies found
Sep 16 04:56:51.041783 kernel: clk: Disabling unused clocks
Sep 16 04:56:51.041814 kernel: Warning: unable to open an initial console.
Sep 16 04:56:51.041823 kernel: Freeing unused kernel image (initmem) memory: 54096K
Sep 16 04:56:51.041832 kernel: Write protecting the kernel read-only data: 24576k
Sep 16 04:56:51.041841 kernel: Freeing unused kernel image (rodata/data gap) memory: 248K
Sep 16 04:56:51.041850 kernel: Run /init as init process
Sep 16 04:56:51.041859 kernel: with arguments:
Sep 16 04:56:51.041870 kernel: /init
Sep 16 04:56:51.041879 kernel: with environment:
Sep 16 04:56:51.041888 kernel: HOME=/
Sep 16 04:56:51.041896 kernel: TERM=linux
Sep 16 04:56:51.041904 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 16 04:56:51.041914 systemd[1]: Successfully made /usr/ read-only.
Sep 16 04:56:51.041927 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 16 04:56:51.041938 systemd[1]: Detected virtualization microsoft.
Sep 16 04:56:51.041949 systemd[1]: Detected architecture x86-64.
Sep 16 04:56:51.041958 systemd[1]: Running in initrd.
Sep 16 04:56:51.041968 systemd[1]: No hostname configured, using default hostname.
Sep 16 04:56:51.041977 systemd[1]: Hostname set to .
Sep 16 04:56:51.041987 systemd[1]: Initializing machine ID from random generator.
Sep 16 04:56:51.041997 systemd[1]: Queued start job for default target initrd.target.
Sep 16 04:56:51.042006 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 16 04:56:51.042016 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 16 04:56:51.042028 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 16 04:56:51.042038 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 16 04:56:51.042048 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 16 04:56:51.042059 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 16 04:56:51.042070 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 16 04:56:51.042079 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 16 04:56:51.042089 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 16 04:56:51.042101 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 16 04:56:51.042111 systemd[1]: Reached target paths.target - Path Units.
Sep 16 04:56:51.042120 systemd[1]: Reached target slices.target - Slice Units.
Sep 16 04:56:51.042130 systemd[1]: Reached target swap.target - Swaps.
Sep 16 04:56:51.042139 systemd[1]: Reached target timers.target - Timer Units.
Sep 16 04:56:51.042149 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 16 04:56:51.042158 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 16 04:56:51.042168 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 16 04:56:51.042179 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Sep 16 04:56:51.042189 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 16 04:56:51.042199 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 16 04:56:51.042209 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 16 04:56:51.042219 systemd[1]: Reached target sockets.target - Socket Units.
Sep 16 04:56:51.042229 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 16 04:56:51.042239 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 16 04:56:51.042249 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 16 04:56:51.042260 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Sep 16 04:56:51.042272 systemd[1]: Starting systemd-fsck-usr.service...
Sep 16 04:56:51.042282 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 16 04:56:51.042302 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 16 04:56:51.042314 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 16 04:56:51.042324 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 16 04:56:51.042337 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 16 04:56:51.042368 systemd-journald[205]: Collecting audit messages is disabled.
Sep 16 04:56:51.042396 systemd[1]: Finished systemd-fsck-usr.service.
Sep 16 04:56:51.042408 systemd-journald[205]: Journal started
Sep 16 04:56:51.042433 systemd-journald[205]: Runtime Journal (/run/log/journal/04afe735896f412b8fe040042a3f81b6) is 8M, max 158.9M, 150.9M free.
Sep 16 04:56:51.017117 systemd-modules-load[206]: Inserted module 'overlay'
Sep 16 04:56:51.049122 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 16 04:56:51.056922 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 16 04:56:51.057093 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 16 04:56:51.063149 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 16 04:56:51.071143 kernel: Bridge firewalling registered
Sep 16 04:56:51.071412 systemd-modules-load[206]: Inserted module 'br_netfilter'
Sep 16 04:56:51.071944 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 16 04:56:51.074168 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 16 04:56:51.083889 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 16 04:56:51.088881 systemd-tmpfiles[220]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Sep 16 04:56:51.092937 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 16 04:56:51.103175 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 16 04:56:51.108144 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 16 04:56:51.108642 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 16 04:56:51.122943 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 16 04:56:51.127926 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 16 04:56:51.130910 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 16 04:56:51.139412 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 16 04:56:51.144699 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 16 04:56:51.166731 dracut-cmdline[246]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=0b876f86a632750e9937176808a48c2452d5168964273bcfc3c72f2a26140c06
Sep 16 04:56:51.182032 systemd-resolved[241]: Positive Trust Anchors:
Sep 16 04:56:51.184000 systemd-resolved[241]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 16 04:56:51.184043 systemd-resolved[241]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 16 04:56:51.193655 systemd-resolved[241]: Defaulting to hostname 'linux'.
Sep 16 04:56:51.194518 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 16 04:56:51.203161 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 16 04:56:51.247815 kernel: SCSI subsystem initialized
Sep 16 04:56:51.255805 kernel: Loading iSCSI transport class v2.0-870.
Sep 16 04:56:51.265806 kernel: iscsi: registered transport (tcp)
Sep 16 04:56:51.283814 kernel: iscsi: registered transport (qla4xxx)
Sep 16 04:56:51.283856 kernel: QLogic iSCSI HBA Driver
Sep 16 04:56:51.297331 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 16 04:56:51.316376 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 16 04:56:51.323028 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 16 04:56:51.354199 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 16 04:56:51.358931 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 16 04:56:51.406806 kernel: raid6: avx512x4 gen() 42553 MB/s
Sep 16 04:56:51.424803 kernel: raid6: avx512x2 gen() 42402 MB/s
Sep 16 04:56:51.441800 kernel: raid6: avx512x1 gen() 25363 MB/s
Sep 16 04:56:51.458799 kernel: raid6: avx2x4 gen() 35957 MB/s
Sep 16 04:56:51.477799 kernel: raid6: avx2x2 gen() 37529 MB/s
Sep 16 04:56:51.496001 kernel: raid6: avx2x1 gen() 29844 MB/s
Sep 16 04:56:51.496023 kernel: raid6: using algorithm avx512x4 gen() 42553 MB/s
Sep 16 04:56:51.514373 kernel: raid6: .... xor() 7427 MB/s, rmw enabled
Sep 16 04:56:51.514397 kernel: raid6: using avx512x2 recovery algorithm
Sep 16 04:56:51.533813 kernel: xor: automatically using best checksumming function avx
Sep 16 04:56:51.657812 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 16 04:56:51.663317 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 16 04:56:51.665928 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 16 04:56:51.687636 systemd-udevd[454]: Using default interface naming scheme 'v255'.
Sep 16 04:56:51.691753 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 16 04:56:51.700694 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 16 04:56:51.722177 dracut-pre-trigger[466]: rd.md=0: removing MD RAID activation
Sep 16 04:56:51.741188 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 16 04:56:51.743904 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 16 04:56:51.775477 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 16 04:56:51.781675 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 16 04:56:51.837113 kernel: cryptd: max_cpu_qlen set to 1000
Sep 16 04:56:51.843555 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 16 04:56:51.843666 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 16 04:56:51.851189 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 16 04:56:51.856937 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 16 04:56:51.870529 kernel: hv_vmbus: Vmbus version:5.3
Sep 16 04:56:51.870566 kernel: AES CTR mode by8 optimization enabled
Sep 16 04:56:51.876201 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 16 04:56:51.878646 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 16 04:56:51.884627 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 16 04:56:51.898403 kernel: pps_core: LinuxPPS API ver. 1 registered
Sep 16 04:56:51.898437 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Sep 16 04:56:51.912808 kernel: hv_vmbus: registering driver hyperv_keyboard
Sep 16 04:56:51.917806 kernel: hv_vmbus: registering driver hv_pci
Sep 16 04:56:51.922911 kernel: PTP clock support registered
Sep 16 04:56:51.928805 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
Sep 16 04:56:51.937912 kernel: hv_pci 7ad35d50-c05b-47ab-b3a0-56a9a845852b: PCI VMBus probing: Using version 0x10004
Sep 16 04:56:51.940323 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 16 04:56:51.951805 kernel: hv_utils: Registering HyperV Utility Driver
Sep 16 04:56:51.951845 kernel: hv_vmbus: registering driver hv_utils
Sep 16 04:56:51.951858 kernel: hv_pci 7ad35d50-c05b-47ab-b3a0-56a9a845852b: PCI host bridge to bus c05b:00
Sep 16 04:56:51.956050 kernel: hv_utils: Shutdown IC version 3.2
Sep 16 04:56:51.956081 kernel: pci_bus c05b:00: root bus resource [mem 0xfc0000000-0xfc007ffff window]
Sep 16 04:56:51.956223 kernel: hv_utils: Heartbeat IC version 3.0
Sep 16 04:56:51.960489 kernel: pci_bus c05b:00: No busn resource found for root bus, will use [bus 00-ff]
Sep 16 04:56:51.960692 kernel: hv_utils: TimeSync IC version 4.0
Sep 16 04:56:51.509278 systemd-resolved[241]: Clock change detected. Flushing caches.
Sep 16 04:56:51.516892 systemd-journald[205]: Time jumped backwards, rotating.
Sep 16 04:56:51.516937 kernel: hid: raw HID events driver (C) Jiri Kosina
Sep 16 04:56:51.522096 kernel: hv_vmbus: registering driver hid_hyperv
Sep 16 04:56:51.525050 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
Sep 16 04:56:51.525060 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Sep 16 04:56:51.525192 kernel: pci c05b:00:00.0: [1414:00a9] type 00 class 0x010802 PCIe Endpoint
Sep 16 04:56:51.532331 kernel: pci c05b:00:00.0: BAR 0 [mem 0xfc0000000-0xfc007ffff 64bit]
Sep 16 04:56:51.541076 kernel: hv_vmbus: registering driver hv_storvsc
Sep 16 04:56:51.544685 kernel: scsi host0: storvsc_host_t
Sep 16 04:56:51.544827 kernel: hv_vmbus: registering driver hv_netvsc
Sep 16 04:56:51.546262 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5
Sep 16 04:56:51.556004 kernel: pci_bus c05b:00: busn_res: [bus 00-ff] end is updated to 00
Sep 16 04:56:51.565106 kernel: pci c05b:00:00.0: BAR 0 [mem 0xfc0000000-0xfc007ffff 64bit]: assigned
Sep 16 04:56:51.565570 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Sep 16 04:56:51.565716 kernel: hv_netvsc f8615163-0000-1000-2000-7ced8d41aacc (unnamed net_device) (uninitialized): VF slot 1 added
Sep 16 04:56:51.565848 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Sep 16 04:56:51.568002 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Sep 16 04:56:51.591823 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#9 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Sep 16 04:56:51.592011 kernel: nvme nvme0: pci function c05b:00:00.0
Sep 16 04:56:51.595004 kernel: nvme c05b:00:00.0: enabling device (0000 -> 0002)
Sep 16 04:56:51.614008 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#28 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Sep 16 04:56:51.756028 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Sep 16 04:56:51.760000 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 16 04:56:52.061014 kernel: nvme nvme0: using unchecked data buffer
Sep 16 04:56:52.289192 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - MSFT NVMe Accelerator v1.0 EFI-SYSTEM.
Sep 16 04:56:52.307150 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - MSFT NVMe Accelerator v1.0 USR-A.
Sep 16 04:56:52.307540 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - MSFT NVMe Accelerator v1.0 USR-A.
Sep 16 04:56:52.325915 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - MSFT NVMe Accelerator v1.0 OEM.
Sep 16 04:56:52.326219 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 16 04:56:52.338072 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - MSFT NVMe Accelerator v1.0 ROOT.
Sep 16 04:56:52.345816 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 16 04:56:52.346053 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 16 04:56:52.353270 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 16 04:56:52.358602 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 16 04:56:52.366099 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 16 04:56:52.376006 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 16 04:56:52.396193 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 16 04:56:52.587873 kernel: hv_pci 00000001-7870-47b5-b203-907d12ca697e: PCI VMBus probing: Using version 0x10004
Sep 16 04:56:52.588092 kernel: hv_pci 00000001-7870-47b5-b203-907d12ca697e: PCI host bridge to bus 7870:00
Sep 16 04:56:52.590916 kernel: pci_bus 7870:00: root bus resource [mem 0xfc2000000-0xfc4007fff window]
Sep 16 04:56:52.592457 kernel: pci_bus 7870:00: No busn resource found for root bus, will use [bus 00-ff]
Sep 16 04:56:52.598142 kernel: pci 7870:00:00.0: [1414:00ba] type 00 class 0x020000 PCIe Endpoint
Sep 16 04:56:52.602054 kernel: pci 7870:00:00.0: BAR 0 [mem 0xfc2000000-0xfc3ffffff 64bit pref]
Sep 16 04:56:52.608045 kernel: pci 7870:00:00.0: BAR 4 [mem 0xfc4000000-0xfc4007fff 64bit pref]
Sep 16 04:56:52.608074 kernel: pci 7870:00:00.0: enabling Extended Tags
Sep 16 04:56:52.628021 kernel: pci_bus 7870:00: busn_res: [bus 00-ff] end is updated to 00
Sep 16 04:56:52.628213 kernel: pci 7870:00:00.0: BAR 0 [mem 0xfc2000000-0xfc3ffffff 64bit pref]: assigned
Sep 16 04:56:52.632179 kernel: pci 7870:00:00.0: BAR 4 [mem 0xfc4000000-0xfc4007fff 64bit pref]: assigned
Sep 16 04:56:52.640975 kernel: mana 7870:00:00.0: enabling device (0000 -> 0002)
Sep 16 04:56:52.653014 kernel: mana 7870:00:00.0: Microsoft Azure Network Adapter protocol version: 0.1.1
Sep 16 04:56:52.656147 kernel: hv_netvsc f8615163-0000-1000-2000-7ced8d41aacc eth0: VF registering: eth1
Sep 16 04:56:52.656321 kernel: mana 7870:00:00.0 eth1: joined to eth0
Sep 16 04:56:52.661009 kernel: mana 7870:00:00.0 enP30832s1: renamed from eth1
Sep 16 04:56:53.388016 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 16 04:56:53.389280 disk-uuid[672]: The operation has completed successfully.
Sep 16 04:56:53.458701 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 16 04:56:53.458793 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 16 04:56:53.491284 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 16 04:56:53.501081 sh[715]: Success
Sep 16 04:56:53.532492 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 16 04:56:53.532535 kernel: device-mapper: uevent: version 1.0.3
Sep 16 04:56:53.533396 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Sep 16 04:56:53.542010 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Sep 16 04:56:53.775600 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 16 04:56:53.782080 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 16 04:56:53.789177 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 16 04:56:53.803981 kernel: BTRFS: device fsid f1b91845-3914-4d21-a370-6d760ee45b2e devid 1 transid 36 /dev/mapper/usr (254:0) scanned by mount (728)
Sep 16 04:56:53.804025 kernel: BTRFS info (device dm-0): first mount of filesystem f1b91845-3914-4d21-a370-6d760ee45b2e
Sep 16 04:56:53.805023 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Sep 16 04:56:54.113581 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Sep 16 04:56:54.113668 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 16 04:56:54.113679 kernel: BTRFS info (device dm-0): enabling free space tree
Sep 16 04:56:54.147534 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 16 04:56:54.150604 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Sep 16 04:56:54.154109 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 16 04:56:54.154862 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 16 04:56:54.166517 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 16 04:56:54.189003 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:12) scanned by mount (761)
Sep 16 04:56:54.194678 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 8b047ef5-4757-404a-b211-2a505a425364
Sep 16 04:56:54.194716 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Sep 16 04:56:54.241435 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 16 04:56:54.247066 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Sep 16 04:56:54.247087 kernel: BTRFS info (device nvme0n1p6): turning on async discard
Sep 16 04:56:54.247097 kernel: BTRFS info (device nvme0n1p6): enabling free space tree
Sep 16 04:56:54.253109 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 8b047ef5-4757-404a-b211-2a505a425364
Sep 16 04:56:54.253039 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 16 04:56:54.258801 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 16 04:56:54.269111 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 16 04:56:54.290065 systemd-networkd[895]: lo: Link UP
Sep 16 04:56:54.290074 systemd-networkd[895]: lo: Gained carrier
Sep 16 04:56:54.299301 kernel: mana 7870:00:00.0 enP30832s1: Configured vPort 0 PD 18 DB 16
Sep 16 04:56:54.299499 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64
Sep 16 04:56:54.299757 kernel: hv_netvsc f8615163-0000-1000-2000-7ced8d41aacc eth0: Data path switched to VF: enP30832s1
Sep 16 04:56:54.291401 systemd-networkd[895]: Enumeration completed
Sep 16 04:56:54.291774 systemd-networkd[895]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 16 04:56:54.291777 systemd-networkd[895]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 16 04:56:54.292211 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 16 04:56:54.292302 systemd[1]: Reached target network.target - Network.
Sep 16 04:56:54.301677 systemd-networkd[895]: enP30832s1: Link UP
Sep 16 04:56:54.301744 systemd-networkd[895]: eth0: Link UP
Sep 16 04:56:54.301828 systemd-networkd[895]: eth0: Gained carrier
Sep 16 04:56:54.301840 systemd-networkd[895]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 16 04:56:54.306267 systemd-networkd[895]: enP30832s1: Gained carrier
Sep 16 04:56:54.323041 systemd-networkd[895]: eth0: DHCPv4 address 10.200.8.40/24, gateway 10.200.8.1 acquired from 168.63.129.16
Sep 16 04:56:55.529147 ignition[898]: Ignition 2.22.0
Sep 16 04:56:55.529162 ignition[898]: Stage: fetch-offline
Sep 16 04:56:55.531598 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 16 04:56:55.529263 ignition[898]: no configs at "/usr/lib/ignition/base.d"
Sep 16 04:56:55.534872 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Sep 16 04:56:55.529270 ignition[898]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 16 04:56:55.529355 ignition[898]: parsed url from cmdline: ""
Sep 16 04:56:55.529359 ignition[898]: no config URL provided
Sep 16 04:56:55.529363 ignition[898]: reading system config file "/usr/lib/ignition/user.ign"
Sep 16 04:56:55.529369 ignition[898]: no config at "/usr/lib/ignition/user.ign"
Sep 16 04:56:55.529374 ignition[898]: failed to fetch config: resource requires networking
Sep 16 04:56:55.529622 ignition[898]: Ignition finished successfully
Sep 16 04:56:55.573276 ignition[906]: Ignition 2.22.0
Sep 16 04:56:55.573287 ignition[906]: Stage: fetch
Sep 16 04:56:55.573491 ignition[906]: no configs at "/usr/lib/ignition/base.d"
Sep 16 04:56:55.573499 ignition[906]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 16 04:56:55.573575 ignition[906]: parsed url from cmdline: ""
Sep 16 04:56:55.573578 ignition[906]: no config URL provided
Sep 16 04:56:55.573583 ignition[906]: reading system config file "/usr/lib/ignition/user.ign"
Sep 16 04:56:55.573589 ignition[906]: no config at "/usr/lib/ignition/user.ign"
Sep 16 04:56:55.573614 ignition[906]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Sep 16 04:56:55.636444 ignition[906]: GET result: OK
Sep 16 04:56:55.636515 ignition[906]: config has been read from IMDS userdata
Sep 16 04:56:55.636543 ignition[906]: parsing config with SHA512: 38076c890c6cc86209968275b0094d94f18772c4e8c536884f2b26d1a8812cb5a7356abe958edfaf4f1c4cec405888f0d6978d511321c405ebe3cf601ac04465
Sep 16 04:56:55.640246 unknown[906]: fetched base config from "system"
Sep 16 04:56:55.640255 unknown[906]: fetched base config from "system"
Sep 16 04:56:55.640574 ignition[906]: fetch: fetch complete
Sep 16 04:56:55.640260 unknown[906]: fetched user config from "azure"
Sep 16 04:56:55.640579 ignition[906]: fetch: fetch passed
Sep 16 04:56:55.643132 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Sep 16 04:56:55.640614 ignition[906]: Ignition finished successfully
Sep 16 04:56:55.644412 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 16 04:56:55.668736 ignition[912]: Ignition 2.22.0
Sep 16 04:56:55.668746 ignition[912]: Stage: kargs
Sep 16 04:56:55.668969 ignition[912]: no configs at "/usr/lib/ignition/base.d"
Sep 16 04:56:55.671329 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 16 04:56:55.668978 ignition[912]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 16 04:56:55.670080 ignition[912]: kargs: kargs passed
Sep 16 04:56:55.676099 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 16 04:56:55.670122 ignition[912]: Ignition finished successfully
Sep 16 04:56:55.699982 ignition[918]: Ignition 2.22.0
Sep 16 04:56:55.700221 ignition[918]: Stage: disks
Sep 16 04:56:55.700460 ignition[918]: no configs at "/usr/lib/ignition/base.d"
Sep 16 04:56:55.700469 ignition[918]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 16 04:56:55.703580 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 16 04:56:55.701614 ignition[918]: disks: disks passed
Sep 16 04:56:55.707578 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 16 04:56:55.701657 ignition[918]: Ignition finished successfully
Sep 16 04:56:55.710263 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 16 04:56:55.714765 systemd-networkd[895]: eth0: Gained IPv6LL
Sep 16 04:56:55.715269 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 16 04:56:55.719059 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 16 04:56:55.723043 systemd[1]: Reached target basic.target - Basic System.
Sep 16 04:56:55.728716 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 16 04:56:55.797967 systemd-fsck[926]: ROOT: clean, 15/7326000 files, 477845/7359488 blocks
Sep 16 04:56:55.802360 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 16 04:56:55.805602 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 16 04:56:57.902007 kernel: EXT4-fs (nvme0n1p9): mounted filesystem fb1cb44f-955b-4cd0-8849-33ce3640d547 r/w with ordered data mode. Quota mode: none.
Sep 16 04:56:57.902926 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 16 04:56:57.906550 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 16 04:56:57.940059 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 16 04:56:57.958554 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 16 04:56:57.962205 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Sep 16 04:56:57.970118 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 16 04:56:57.976082 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:12) scanned by mount (935)
Sep 16 04:56:57.970183 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 16 04:56:57.978977 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 8b047ef5-4757-404a-b211-2a505a425364
Sep 16 04:56:57.979008 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Sep 16 04:56:57.987307 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 16 04:56:57.989665 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 16 04:56:57.997438 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Sep 16 04:56:57.997478 kernel: BTRFS info (device nvme0n1p6): turning on async discard
Sep 16 04:56:57.998793 kernel: BTRFS info (device nvme0n1p6): enabling free space tree
Sep 16 04:56:58.000420 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 16 04:56:58.510855 coreos-metadata[937]: Sep 16 04:56:58.510 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Sep 16 04:56:58.520081 coreos-metadata[937]: Sep 16 04:56:58.515 INFO Fetch successful
Sep 16 04:56:58.520081 coreos-metadata[937]: Sep 16 04:56:58.515 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Sep 16 04:56:58.523965 coreos-metadata[937]: Sep 16 04:56:58.523 INFO Fetch successful
Sep 16 04:56:58.538497 coreos-metadata[937]: Sep 16 04:56:58.538 INFO wrote hostname ci-4459.0.0-n-f9a9538521 to /sysroot/etc/hostname
Sep 16 04:56:58.541796 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Sep 16 04:56:58.776464 initrd-setup-root[965]: cut: /sysroot/etc/passwd: No such file or directory
Sep 16 04:56:58.823182 initrd-setup-root[972]: cut: /sysroot/etc/group: No such file or directory
Sep 16 04:56:58.842321 initrd-setup-root[979]: cut: /sysroot/etc/shadow: No such file or directory
Sep 16 04:56:58.859745 initrd-setup-root[986]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 16 04:56:59.835715 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 16 04:56:59.839961 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 16 04:56:59.847495 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 16 04:56:59.856588 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 16 04:56:59.858952 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 8b047ef5-4757-404a-b211-2a505a425364
Sep 16 04:56:59.892515 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 16 04:56:59.895561 ignition[1054]: INFO : Ignition 2.22.0
Sep 16 04:56:59.895561 ignition[1054]: INFO : Stage: mount
Sep 16 04:56:59.895561 ignition[1054]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 16 04:56:59.895561 ignition[1054]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 16 04:56:59.895561 ignition[1054]: INFO : mount: mount passed
Sep 16 04:56:59.895561 ignition[1054]: INFO : Ignition finished successfully
Sep 16 04:56:59.896156 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 16 04:56:59.901069 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 16 04:56:59.920190 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 16 04:56:59.940006 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:12) scanned by mount (1067)
Sep 16 04:56:59.940046 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 8b047ef5-4757-404a-b211-2a505a425364
Sep 16 04:56:59.942045 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Sep 16 04:56:59.947401 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Sep 16 04:56:59.947432 kernel: BTRFS info (device nvme0n1p6): turning on async discard
Sep 16 04:56:59.948733 kernel: BTRFS info (device nvme0n1p6): enabling free space tree
Sep 16 04:56:59.950677 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 16 04:56:59.982051 ignition[1084]: INFO : Ignition 2.22.0
Sep 16 04:56:59.982051 ignition[1084]: INFO : Stage: files
Sep 16 04:56:59.985833 ignition[1084]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 16 04:56:59.985833 ignition[1084]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 16 04:56:59.985833 ignition[1084]: DEBUG : files: compiled without relabeling support, skipping
Sep 16 04:56:59.998746 ignition[1084]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 16 04:56:59.998746 ignition[1084]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 16 04:57:00.073654 ignition[1084]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 16 04:57:00.077061 ignition[1084]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 16 04:57:00.077061 ignition[1084]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 16 04:57:00.074062 unknown[1084]: wrote ssh authorized keys file for user: core
Sep 16 04:57:00.147824 ignition[1084]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Sep 16 04:57:00.150559 ignition[1084]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
Sep 16 04:57:00.442267 ignition[1084]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 16 04:57:00.504920 ignition[1084]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Sep 16 04:57:00.508212 ignition[1084]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Sep 16 04:57:00.508212 ignition[1084]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Sep 16 04:57:00.508212 ignition[1084]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 16 04:57:00.508212 ignition[1084]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 16 04:57:00.508212 ignition[1084]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 16 04:57:00.508212 ignition[1084]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 16 04:57:00.508212 ignition[1084]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 16 04:57:00.508212 ignition[1084]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 16 04:57:00.534034 ignition[1084]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 16 04:57:00.534034 ignition[1084]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 16 04:57:00.534034 ignition[1084]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Sep 16 04:57:00.534034 ignition[1084]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Sep 16 04:57:00.534034 ignition[1084]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Sep 16 04:57:00.534034 ignition[1084]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1
Sep 16 04:57:01.009464 ignition[1084]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Sep 16 04:57:01.633999 ignition[1084]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Sep 16 04:57:01.633999 ignition[1084]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Sep 16 04:57:01.665695 ignition[1084]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 16 04:57:01.680627 ignition[1084]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 16 04:57:01.680627 ignition[1084]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Sep 16 04:57:01.695127 ignition[1084]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Sep 16 04:57:01.695127 ignition[1084]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Sep 16 04:57:01.695127 ignition[1084]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 16 04:57:01.695127 ignition[1084]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 16 04:57:01.695127 ignition[1084]: INFO : files: files passed
Sep 16 04:57:01.695127 ignition[1084]: INFO : Ignition finished successfully
Sep 16 04:57:01.682535 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 16 04:57:01.685108 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 16 04:57:01.687101 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 16 04:57:01.708403 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 16 04:57:01.708472 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 16 04:57:01.743975 initrd-setup-root-after-ignition[1113]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 16 04:57:01.743975 initrd-setup-root-after-ignition[1113]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 16 04:57:01.756899 initrd-setup-root-after-ignition[1117]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 16 04:57:01.747844 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 16 04:57:01.750330 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 16 04:57:01.752680 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 16 04:57:01.794301 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 16 04:57:01.794389 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 16 04:57:01.797082 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 16 04:57:01.802344 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 16 04:57:01.804125 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 16 04:57:01.804772 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 16 04:57:01.822094 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 16 04:57:01.825297 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 16 04:57:01.841099 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 16 04:57:01.841262 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 16 04:57:01.841533 systemd[1]: Stopped target timers.target - Timer Units.
Sep 16 04:57:01.841912 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 16 04:57:01.842123 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 16 04:57:01.842955 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 16 04:57:01.843527 systemd[1]: Stopped target basic.target - Basic System.
Sep 16 04:57:01.844172 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 16 04:57:01.844818 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 16 04:57:01.845155 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 16 04:57:01.845472 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Sep 16 04:57:01.846166 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 16 04:57:01.917366 ignition[1137]: INFO : Ignition 2.22.0
Sep 16 04:57:01.917366 ignition[1137]: INFO : Stage: umount
Sep 16 04:57:01.846548 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 16 04:57:01.926439 ignition[1137]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 16 04:57:01.926439 ignition[1137]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 16 04:57:01.926439 ignition[1137]: INFO : umount: umount passed
Sep 16 04:57:01.926439 ignition[1137]: INFO : Ignition finished successfully
Sep 16 04:57:01.846946 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 16 04:57:01.847638 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 16 04:57:01.848199 systemd[1]: Stopped target swap.target - Swaps.
Sep 16 04:57:01.869008 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 16 04:57:01.869148 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 16 04:57:01.871450 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 16 04:57:01.871789 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 16 04:57:01.872057 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 16 04:57:01.872634 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 16 04:57:01.872993 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 16 04:57:01.873088 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 16 04:57:01.873689 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 16 04:57:01.873781 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 16 04:57:01.874083 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 16 04:57:01.874166 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 16 04:57:01.874460 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Sep 16 04:57:01.874539 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Sep 16 04:57:01.878149 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 16 04:57:01.888555 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 16 04:57:01.889917 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 16 04:57:01.921255 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 16 04:57:01.923475 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 16 04:57:01.926143 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 16 04:57:01.928523 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 16 04:57:01.928644 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 16 04:57:01.930943 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 16 04:57:01.931034 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 16 04:57:01.940722 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 16 04:57:01.940792 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 16 04:57:01.946788 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 16 04:57:01.946855 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 16 04:57:01.952345 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 16 04:57:01.952388 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 16 04:57:01.958670 systemd[1]: ignition-fetch.service: Deactivated successfully.
Sep 16 04:57:01.958706 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Sep 16 04:57:01.962709 systemd[1]: Stopped target network.target - Network.
Sep 16 04:57:01.967730 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 16 04:57:01.967772 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 16 04:57:01.973378 systemd[1]: Stopped target paths.target - Path Units.
Sep 16 04:57:02.108946 kernel: hv_netvsc f8615163-0000-1000-2000-7ced8d41aacc eth0: Data path switched from VF: enP30832s1
Sep 16 04:57:02.109103 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64
Sep 16 04:57:01.975977 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 16 04:57:01.977632 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 16 04:57:01.984640 systemd[1]: Stopped target slices.target - Slice Units.
Sep 16 04:57:01.989055 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 16 04:57:01.993065 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 16 04:57:01.993103 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 16 04:57:01.997053 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 16 04:57:01.997086 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 16 04:57:02.001048 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 16 04:57:02.001095 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 16 04:57:02.005053 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 16 04:57:02.005088 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 16 04:57:02.019158 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 16 04:57:02.024094 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 16 04:57:02.029207 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 16 04:57:02.029740 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 16 04:57:02.029827 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 16 04:57:02.032320 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Sep 16 04:57:02.032500 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 16 04:57:02.032580 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 16 04:57:02.037968 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Sep 16 04:57:02.038166 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 16 04:57:02.038230 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 16 04:57:02.042337 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Sep 16 04:57:02.042586 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 16 04:57:02.042625 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 16 04:57:02.043210 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 16 04:57:02.043262 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 16 04:57:02.051659 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 16 04:57:02.055038 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 16 04:57:02.055102 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 16 04:57:02.057188 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 16 04:57:02.057243 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 16 04:57:02.060275 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 16 04:57:02.060319 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 16 04:57:02.068113 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 16 04:57:02.068158 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 16 04:57:02.074162 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 16 04:57:02.085377 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Sep 16 04:57:02.085424 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Sep 16 04:57:02.097797 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 16 04:57:02.099103 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 16 04:57:02.106344 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 16 04:57:02.106423 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 16 04:57:02.110313 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 16 04:57:02.110352 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 16 04:57:02.111345 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 16 04:57:02.111370 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 16 04:57:02.111575 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 16 04:57:02.111610 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 16 04:57:02.140035 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 16 04:57:02.140374 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 16 04:57:02.152623 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 16 04:57:02.152674 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 16 04:57:02.196634 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 16 04:57:02.198595 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Sep 16 04:57:02.198646 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Sep 16 04:57:02.204226 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 16 04:57:02.204275 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 16 04:57:02.213603 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 16 04:57:02.213657 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 16 04:57:02.221405 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Sep 16 04:57:02.221446 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Sep 16 04:57:02.221472 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Sep 16 04:57:02.221719 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 16 04:57:02.221789 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 16 04:57:02.224969 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 16 04:57:02.229670 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 16 04:57:02.262890 systemd[1]: Switching root.
Sep 16 04:57:02.356198 systemd-journald[205]: Journal stopped
Sep 16 04:57:09.492595 systemd-journald[205]: Received SIGTERM from PID 1 (systemd).
Sep 16 04:57:09.492634 kernel: SELinux: policy capability network_peer_controls=1
Sep 16 04:57:09.492648 kernel: SELinux: policy capability open_perms=1
Sep 16 04:57:09.492659 kernel: SELinux: policy capability extended_socket_class=1
Sep 16 04:57:09.492669 kernel: SELinux: policy capability always_check_network=0
Sep 16 04:57:09.492678 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 16 04:57:09.492690 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 16 04:57:09.492703 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 16 04:57:09.492714 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 16 04:57:09.492725 kernel: SELinux: policy capability userspace_initial_context=0
Sep 16 04:57:09.492735 kernel: audit: type=1403 audit(1757998623.932:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 16 04:57:09.492748 systemd[1]: Successfully loaded SELinux policy in 198.800ms.
Sep 16 04:57:09.492762 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 4.250ms.
Sep 16 04:57:09.492776 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 16 04:57:09.492791 systemd[1]: Detected virtualization microsoft.
Sep 16 04:57:09.492803 systemd[1]: Detected architecture x86-64.
Sep 16 04:57:09.492816 systemd[1]: Detected first boot.
Sep 16 04:57:09.492828 systemd[1]: Hostname set to .
Sep 16 04:57:09.492844 systemd[1]: Initializing machine ID from random generator.
Sep 16 04:57:09.492858 zram_generator::config[1180]: No configuration found.
Sep 16 04:57:09.492870 kernel: Guest personality initialized and is inactive
Sep 16 04:57:09.492883 kernel: VMCI host device registered (name=vmci, major=10, minor=124)
Sep 16 04:57:09.492895 kernel: Initialized host personality
Sep 16 04:57:09.492907 kernel: NET: Registered PF_VSOCK protocol family
Sep 16 04:57:09.492919 systemd[1]: Populated /etc with preset unit settings.
Sep 16 04:57:09.492934 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Sep 16 04:57:09.492945 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 16 04:57:09.492960 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 16 04:57:09.492971 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 16 04:57:09.492997 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 16 04:57:09.493012 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 16 04:57:09.493023 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 16 04:57:09.493034 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 16 04:57:09.493049 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 16 04:57:09.493062 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 16 04:57:09.493075 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 16 04:57:09.493087 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 16 04:57:09.493100 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 16 04:57:09.493114 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 16 04:57:09.493127 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 16 04:57:09.493141 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 16 04:57:09.493154 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 16 04:57:09.493166 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 16 04:57:09.493177 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Sep 16 04:57:09.493185 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 16 04:57:09.493195 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 16 04:57:09.493204 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 16 04:57:09.493214 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 16 04:57:09.493225 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 16 04:57:09.493235 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 16 04:57:09.493245 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 16 04:57:09.493255 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 16 04:57:09.493265 systemd[1]: Reached target slices.target - Slice Units.
Sep 16 04:57:09.493275 systemd[1]: Reached target swap.target - Swaps.
Sep 16 04:57:09.493285 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 16 04:57:09.493295 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 16 04:57:09.493308 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Sep 16 04:57:09.493318 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 16 04:57:09.493329 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 16 04:57:09.493339 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 16 04:57:09.493349 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 16 04:57:09.493361 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 16 04:57:09.493372 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 16 04:57:09.493382 systemd[1]: Mounting media.mount - External Media Directory...
Sep 16 04:57:09.493393 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 16 04:57:09.493403 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 16 04:57:09.493414 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 16 04:57:09.493424 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 16 04:57:09.493434 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 16 04:57:09.493444 systemd[1]: Reached target machines.target - Containers.
Sep 16 04:57:09.493457 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 16 04:57:09.493467 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 16 04:57:09.493478 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 16 04:57:09.493488 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 16 04:57:09.493498 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 16 04:57:09.493508 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 16 04:57:09.493518 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 16 04:57:09.493528 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 16 04:57:09.493540 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 16 04:57:09.493551 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 16 04:57:09.493562 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 16 04:57:09.493573 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 16 04:57:09.493583 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 16 04:57:09.493593 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 16 04:57:09.493604 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 16 04:57:09.493614 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 16 04:57:09.493626 kernel: fuse: init (API version 7.41)
Sep 16 04:57:09.493636 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 16 04:57:09.493646 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 16 04:57:09.493656 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 16 04:57:09.493667 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Sep 16 04:57:09.493677 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 16 04:57:09.493686 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 16 04:57:09.493697 systemd[1]: Stopped verity-setup.service.
Sep 16 04:57:09.493707 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 16 04:57:09.493720 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 16 04:57:09.493730 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 16 04:57:09.493740 systemd[1]: Mounted media.mount - External Media Directory.
Sep 16 04:57:09.493766 systemd-journald[1263]: Collecting audit messages is disabled.
Sep 16 04:57:09.493793 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 16 04:57:09.493804 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 16 04:57:09.493816 systemd-journald[1263]: Journal started
Sep 16 04:57:09.493839 systemd-journald[1263]: Runtime Journal (/run/log/journal/3521569847f944739fe004ff0d607d88) is 8M, max 158.9M, 150.9M free.
Sep 16 04:57:09.020299 systemd[1]: Queued start job for default target multi-user.target.
Sep 16 04:57:09.499066 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 16 04:57:09.033572 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Sep 16 04:57:09.033916 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 16 04:57:09.501847 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 16 04:57:09.504306 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 16 04:57:09.508691 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 16 04:57:09.508877 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 16 04:57:09.513120 kernel: loop: module loaded
Sep 16 04:57:09.514353 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 16 04:57:09.514533 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 16 04:57:09.517168 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 16 04:57:09.517339 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 16 04:57:09.519205 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 16 04:57:09.519356 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 16 04:57:09.521180 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 16 04:57:09.521340 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 16 04:57:09.524362 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 16 04:57:09.527618 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 16 04:57:09.537234 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 16 04:57:09.541157 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 16 04:57:09.545144 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 16 04:57:09.548565 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 16 04:57:09.548594 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 16 04:57:09.553976 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Sep 16 04:57:09.566223 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 16 04:57:09.571206 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 16 04:57:09.587100 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 16 04:57:09.590931 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 16 04:57:09.594324 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 16 04:57:09.599106 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 16 04:57:09.601319 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 16 04:57:09.603102 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 16 04:57:09.607573 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 16 04:57:09.612390 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 16 04:57:09.616336 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Sep 16 04:57:09.619229 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 16 04:57:09.622285 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 16 04:57:09.628288 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 16 04:57:09.633247 systemd-journald[1263]: Time spent on flushing to /var/log/journal/3521569847f944739fe004ff0d607d88 is 41.628ms for 987 entries.
Sep 16 04:57:09.633247 systemd-journald[1263]: System Journal (/var/log/journal/3521569847f944739fe004ff0d607d88) is 11.8M, max 2.6G, 2.6G free.
Sep 16 04:57:09.761161 systemd-journald[1263]: Received client request to flush runtime journal.
Sep 16 04:57:09.761216 kernel: loop0: detected capacity change from 0 to 27936
Sep 16 04:57:09.761238 kernel: ACPI: bus type drm_connector registered
Sep 16 04:57:09.761253 systemd-journald[1263]: /var/log/journal/3521569847f944739fe004ff0d607d88/system.journal: Realtime clock jumped backwards relative to last journal entry, rotating.
Sep 16 04:57:09.761278 systemd-journald[1263]: Rotating system journal.
Sep 16 04:57:09.637398 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 16 04:57:09.644120 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 16 04:57:09.646723 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 16 04:57:09.649514 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 16 04:57:09.655304 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Sep 16 04:57:09.678774 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 16 04:57:09.678929 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 16 04:57:09.763179 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 16 04:57:09.822768 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 16 04:57:09.961466 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Sep 16 04:57:10.035166 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 16 04:57:10.170388 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 16 04:57:10.257005 kernel: loop1: detected capacity change from 0 to 224512
Sep 16 04:57:10.329007 kernel: loop2: detected capacity change from 0 to 128016
Sep 16 04:57:10.378754 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 16 04:57:10.381780 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 16 04:57:10.581776 systemd-tmpfiles[1341]: ACLs are not supported, ignoring.
Sep 16 04:57:10.581792 systemd-tmpfiles[1341]: ACLs are not supported, ignoring.
Sep 16 04:57:10.584610 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 16 04:57:10.849010 kernel: loop3: detected capacity change from 0 to 110984
Sep 16 04:57:11.361014 kernel: loop4: detected capacity change from 0 to 27936
Sep 16 04:57:11.374008 kernel: loop5: detected capacity change from 0 to 224512
Sep 16 04:57:11.396008 kernel: loop6: detected capacity change from 0 to 128016
Sep 16 04:57:11.406025 kernel: loop7: detected capacity change from 0 to 110984
Sep 16 04:57:11.415967 (sd-merge)[1346]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'.
Sep 16 04:57:11.416407 (sd-merge)[1346]: Merged extensions into '/usr'.
Sep 16 04:57:11.421830 systemd[1]: Reload requested from client PID 1316 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 16 04:57:11.421845 systemd[1]: Reloading...
Sep 16 04:57:11.469011 zram_generator::config[1372]: No configuration found.
Sep 16 04:57:11.668103 systemd[1]: Reloading finished in 245 ms.
Sep 16 04:57:11.686669 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 16 04:57:11.691502 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 16 04:57:11.705756 systemd[1]: Starting ensure-sysext.service...
Sep 16 04:57:11.710112 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 16 04:57:11.714931 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 16 04:57:11.749625 systemd[1]: Reload requested from client PID 1431 ('systemctl') (unit ensure-sysext.service)...
Sep 16 04:57:11.749716 systemd[1]: Reloading...
Sep 16 04:57:11.754638 systemd-udevd[1433]: Using default interface naming scheme 'v255'.
Sep 16 04:57:11.765684 systemd-tmpfiles[1432]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Sep 16 04:57:11.765711 systemd-tmpfiles[1432]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Sep 16 04:57:11.765977 systemd-tmpfiles[1432]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 16 04:57:11.766233 systemd-tmpfiles[1432]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 16 04:57:11.766971 systemd-tmpfiles[1432]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 16 04:57:11.767252 systemd-tmpfiles[1432]: ACLs are not supported, ignoring.
Sep 16 04:57:11.767311 systemd-tmpfiles[1432]: ACLs are not supported, ignoring.
Sep 16 04:57:11.799014 zram_generator::config[1458]: No configuration found.
Sep 16 04:57:11.819493 systemd-tmpfiles[1432]: Detected autofs mount point /boot during canonicalization of boot.
Sep 16 04:57:11.819507 systemd-tmpfiles[1432]: Skipping /boot
Sep 16 04:57:11.825510 systemd-tmpfiles[1432]: Detected autofs mount point /boot during canonicalization of boot.
Sep 16 04:57:11.825520 systemd-tmpfiles[1432]: Skipping /boot
Sep 16 04:57:11.969353 systemd[1]: Reloading finished in 219 ms.
Sep 16 04:57:11.998128 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 16 04:57:12.006234 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 16 04:57:12.037167 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep 16 04:57:12.041363 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 16 04:57:12.046357 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 16 04:57:12.052089 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep 16 04:57:12.057884 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 16 04:57:12.058558 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 16 04:57:12.060404 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 16 04:57:12.067851 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 16 04:57:12.070362 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 16 04:57:12.074191 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 16 04:57:12.074319 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 16 04:57:12.074415 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 16 04:57:12.075378 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 16 04:57:12.075524 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 16 04:57:12.081401 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 16 04:57:12.081651 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 16 04:57:12.083708 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 16 04:57:12.083866 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 16 04:57:12.090725 systemd[1]: Expecting device dev-ptp_hyperv.device - /dev/ptp_hyperv...
Sep 16 04:57:12.094174 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 16 04:57:12.094362 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 16 04:57:12.095075 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 16 04:57:12.109318 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 16 04:57:12.119344 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 16 04:57:12.123726 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 16 04:57:12.126238 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 16 04:57:12.126354 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 16 04:57:12.126423 systemd[1]: Reached target time-set.target - System Time Set.
Sep 16 04:57:12.131086 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 16 04:57:12.133296 systemd[1]: Finished ensure-sysext.service.
Sep 16 04:57:12.135232 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 16 04:57:12.135371 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 16 04:57:12.137216 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 16 04:57:12.137415 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 16 04:57:12.141238 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 16 04:57:12.141352 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 16 04:57:12.143831 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 16 04:57:12.144316 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 16 04:57:12.152022 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 16 04:57:12.152170 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 16 04:57:12.153249 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Sep 16 04:57:12.191634 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep 16 04:57:12.240885 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep 16 04:57:12.328588 systemd-resolved[1524]: Positive Trust Anchors:
Sep 16 04:57:12.328604 systemd-resolved[1524]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 16 04:57:12.328636 systemd-resolved[1524]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 16 04:57:12.375799 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep 16 04:57:12.387360 augenrules[1568]: No rules
Sep 16 04:57:12.388277 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 16 04:57:12.388458 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 16 04:57:12.390826 systemd-resolved[1524]: Using system hostname 'ci-4459.0.0-n-f9a9538521'.
Sep 16 04:57:12.405582 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 16 04:57:12.409106 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 16 04:57:12.504648 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 16 04:57:12.508604 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 16 04:57:12.648525 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Sep 16 04:57:12.700655 systemd-networkd[1578]: lo: Link UP
Sep 16 04:57:12.701138 systemd-networkd[1578]: lo: Gained carrier
Sep 16 04:57:12.702601 systemd-networkd[1578]: Enumeration completed
Sep 16 04:57:12.702788 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 16 04:57:12.705699 systemd[1]: Reached target network.target - Network.
Sep 16 04:57:12.711749 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Sep 16 04:57:12.715943 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Sep 16 04:57:12.726022 kernel: mousedev: PS/2 mouse device common for all mice
Sep 16 04:57:12.743107 kernel: hv_vmbus: registering driver hyperv_fb
Sep 16 04:57:12.748015 kernel: hv_vmbus: registering driver hv_balloon
Sep 16 04:57:12.762398 kernel: hyperv_fb: Synthvid Version major 3, minor 5
Sep 16 04:57:12.762456 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
Sep 16 04:57:12.760785 systemd-networkd[1578]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 16 04:57:12.762678 systemd-networkd[1578]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 16 04:57:12.765175 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
Sep 16 04:57:12.765224 kernel: Console: switching to colour dummy device 80x25
Sep 16 04:57:12.766082 kernel: mana 7870:00:00.0 enP30832s1: Configured vPort 0 PD 18 DB 16
Sep 16 04:57:12.769017 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64
Sep 16 04:57:12.771073 kernel: Console: switching to colour frame buffer device 128x48
Sep 16 04:57:12.772379 kernel: hv_netvsc f8615163-0000-1000-2000-7ced8d41aacc eth0: Data path switched to VF: enP30832s1
Sep 16 04:57:12.772552 systemd-networkd[1578]: enP30832s1: Link UP
Sep 16 04:57:12.772636 systemd-networkd[1578]: eth0: Link UP
Sep 16 04:57:12.772645 systemd-networkd[1578]: eth0: Gained carrier
Sep 16 04:57:12.772665 systemd-networkd[1578]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 16 04:57:12.777081 systemd-networkd[1578]: enP30832s1: Gained carrier
Sep 16 04:57:12.782071 systemd-networkd[1578]: eth0: DHCPv4 address 10.200.8.40/24, gateway 10.200.8.1 acquired from 168.63.129.16
Sep 16 04:57:12.794501 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Sep 16 04:57:12.806471 systemd[1]: Condition check resulted in dev-ptp_hyperv.device - /dev/ptp_hyperv being skipped.
Sep 16 04:57:12.812026 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#42 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Sep 16 04:57:12.870346 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 16 04:57:12.887188 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 16 04:57:12.887680 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 16 04:57:12.892211 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 16 04:57:12.928727 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 16 04:57:12.929332 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 16 04:57:12.933826 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Sep 16 04:57:12.940156 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 16 04:57:13.034301 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - MSFT NVMe Accelerator v1.0 OEM.
Sep 16 04:57:13.036685 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Sep 16 04:57:13.083029 kernel: kvm_intel: Using Hyper-V Enlightened VMCS
Sep 16 04:57:13.120357 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Sep 16 04:57:14.300659 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 16 04:57:14.407949 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Sep 16 04:57:14.412554 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 16 04:57:14.465079 systemd-networkd[1578]: eth0: Gained IPv6LL
Sep 16 04:57:14.467101 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Sep 16 04:57:14.472235 systemd[1]: Reached target network-online.target - Network is Online.
Sep 16 04:57:18.516257 ldconfig[1309]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 16 04:57:18.526087 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 16 04:57:18.530218 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Sep 16 04:57:18.564163 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Sep 16 04:57:18.567220 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 16 04:57:18.570145 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Sep 16 04:57:18.571908 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Sep 16 04:57:18.575038 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Sep 16 04:57:18.576709 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Sep 16 04:57:18.580100 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Sep 16 04:57:18.583038 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Sep 16 04:57:18.586030 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 16 04:57:18.586062 systemd[1]: Reached target paths.target - Path Units.
Sep 16 04:57:18.587156 systemd[1]: Reached target timers.target - Timer Units.
Sep 16 04:57:18.628496 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Sep 16 04:57:18.633014 systemd[1]: Starting docker.socket - Docker Socket for the API...
Sep 16 04:57:18.636169 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Sep 16 04:57:18.640160 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Sep 16 04:57:18.641958 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Sep 16 04:57:18.646417 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Sep 16 04:57:18.650276 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Sep 16 04:57:18.653550 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Sep 16 04:57:18.656680 systemd[1]: Reached target sockets.target - Socket Units.
Sep 16 04:57:18.659038 systemd[1]: Reached target basic.target - Basic System.
Sep 16 04:57:18.660356 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Sep 16 04:57:18.660381 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Sep 16 04:57:18.674680 systemd[1]: Starting chronyd.service - NTP client/server...
Sep 16 04:57:18.680082 systemd[1]: Starting containerd.service - containerd container runtime...
Sep 16 04:57:18.686174 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Sep 16 04:57:18.691040 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Sep 16 04:57:18.695149 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Sep 16 04:57:18.702140 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Sep 16 04:57:18.708096 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Sep 16 04:57:18.710438 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Sep 16 04:57:18.715859 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Sep 16 04:57:18.718316 systemd[1]: hv_fcopy_uio_daemon.service - Hyper-V FCOPY UIO daemon was skipped because of an unmet condition check (ConditionPathExists=/sys/bus/vmbus/devices/eb765408-105f-49b6-b4aa-c123b64d17d4/uio).
Sep 16 04:57:18.719860 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon.
Sep 16 04:57:18.722411 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss).
Sep 16 04:57:18.724649 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 16 04:57:18.729186 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Sep 16 04:57:18.734780 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Sep 16 04:57:18.739969 jq[1683]: false
Sep 16 04:57:18.743122 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Sep 16 04:57:18.748598 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Sep 16 04:57:18.753104 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Sep 16 04:57:18.759869 systemd[1]: Starting systemd-logind.service - User Login Management...
Sep 16 04:57:18.762623 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Sep 16 04:57:18.765210 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Sep 16 04:57:18.766754 KVP[1686]: KVP starting; pid is:1686
Sep 16 04:57:18.769284 google_oslogin_nss_cache[1685]: oslogin_cache_refresh[1685]: Refreshing passwd entry cache
Sep 16 04:57:18.768205 oslogin_cache_refresh[1685]: Refreshing passwd entry cache
Sep 16 04:57:18.769750 systemd[1]: Starting update-engine.service - Update Engine...
Sep 16 04:57:18.777125 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Sep 16 04:57:18.783859 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Sep 16 04:57:18.786605 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Sep 16 04:57:18.790528 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Sep 16 04:57:18.791764 chronyd[1675]: chronyd version 4.7 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +NTS +SECHASH +IPV6 -DEBUG)
Sep 16 04:57:18.795625 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Sep 16 04:57:18.796169 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Sep 16 04:57:18.799073 google_oslogin_nss_cache[1685]: oslogin_cache_refresh[1685]: Failure getting users, quitting
Sep 16 04:57:18.799073 google_oslogin_nss_cache[1685]: oslogin_cache_refresh[1685]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Sep 16 04:57:18.799073 google_oslogin_nss_cache[1685]: oslogin_cache_refresh[1685]: Refreshing group entry cache
Sep 16 04:57:18.798490 oslogin_cache_refresh[1685]: Failure getting users, quitting
Sep 16 04:57:18.798505 oslogin_cache_refresh[1685]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Sep 16 04:57:18.798546 oslogin_cache_refresh[1685]: Refreshing group entry cache
Sep 16 04:57:18.807284 KVP[1686]: KVP LIC Version: 3.1
Sep 16 04:57:18.808171 kernel: hv_utils: KVP IC version 4.0
Sep 16 04:57:18.808689 google_oslogin_nss_cache[1685]: oslogin_cache_refresh[1685]: Failure getting groups, quitting
Sep 16 04:57:18.808689 google_oslogin_nss_cache[1685]: oslogin_cache_refresh[1685]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Sep 16 04:57:18.808686 oslogin_cache_refresh[1685]: Failure getting groups, quitting
Sep 16 04:57:18.808696 oslogin_cache_refresh[1685]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Sep 16 04:57:18.810322 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Sep 16 04:57:18.810529 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Sep 16 04:57:18.816269 extend-filesystems[1684]: Found /dev/nvme0n1p6
Sep 16 04:57:18.819948 jq[1695]: true
Sep 16 04:57:18.836598 chronyd[1675]: Timezone right/UTC failed leap second check, ignoring
Sep 16 04:57:18.836743 chronyd[1675]: Loaded seccomp filter (level 2)
Sep 16 04:57:18.836813 systemd[1]: Started chronyd.service - NTP client/server.
Sep 16 04:57:18.841518 systemd[1]: motdgen.service: Deactivated successfully.
Sep 16 04:57:18.843658 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Sep 16 04:57:18.854409 extend-filesystems[1684]: Found /dev/nvme0n1p9
Sep 16 04:57:18.858275 jq[1715]: true
Sep 16 04:57:18.858670 extend-filesystems[1684]: Checking size of /dev/nvme0n1p9
Sep 16 04:57:18.861023 (ntainerd)[1722]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Sep 16 04:57:18.875403 update_engine[1694]: I20250916 04:57:18.872766 1694 main.cc:92] Flatcar Update Engine starting
Sep 16 04:57:18.877556 tar[1704]: linux-amd64/LICENSE
Sep 16 04:57:18.877556 tar[1704]: linux-amd64/helm
Sep 16 04:57:18.881532 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Sep 16 04:57:18.893287 extend-filesystems[1684]: Old size kept for /dev/nvme0n1p9
Sep 16 04:57:18.898722 systemd[1]: extend-filesystems.service: Deactivated successfully.
Sep 16 04:57:18.899504 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Sep 16 04:57:18.980838 systemd-logind[1693]: New seat seat0.
Sep 16 04:57:18.983211 systemd-logind[1693]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Sep 16 04:57:18.983359 systemd[1]: Started systemd-logind.service - User Login Management.
Sep 16 04:57:19.004262 bash[1755]: Updated "/home/core/.ssh/authorized_keys"
Sep 16 04:57:19.005211 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Sep 16 04:57:19.008467 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Sep 16 04:57:19.247181 dbus-daemon[1678]: [system] SELinux support is enabled
Sep 16 04:57:19.250435 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Sep 16 04:57:19.258198 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Sep 16 04:57:19.258228 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Sep 16 04:57:19.263104 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Sep 16 04:57:19.263124 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Sep 16 04:57:19.265942 update_engine[1694]: I20250916 04:57:19.265892 1694 update_check_scheduler.cc:74] Next update check in 2m26s
Sep 16 04:57:19.269409 systemd[1]: Started update-engine.service - Update Engine.
Sep 16 04:57:19.269749 dbus-daemon[1678]: [system] Successfully activated service 'org.freedesktop.systemd1'
Sep 16 04:57:19.274170 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Sep 16 04:57:19.423013 coreos-metadata[1677]: Sep 16 04:57:19.422 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Sep 16 04:57:19.428113 coreos-metadata[1677]: Sep 16 04:57:19.427 INFO Fetch successful
Sep 16 04:57:19.428113 coreos-metadata[1677]: Sep 16 04:57:19.428 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1
Sep 16 04:57:19.432695 coreos-metadata[1677]: Sep 16 04:57:19.432 INFO Fetch successful
Sep 16 04:57:19.434106 coreos-metadata[1677]: Sep 16 04:57:19.434 INFO Fetching http://168.63.129.16/machine/89192310-db70-49ca-acbc-1579038b8de9/e3e76ff7%2Dbaaa%2D4c4a%2Da14b%2D35e173a73bbf.%5Fci%2D4459.0.0%2Dn%2Df9a9538521?comp=config&type=sharedConfig&incarnation=1: Attempt #1
Sep 16 04:57:19.435772 coreos-metadata[1677]: Sep 16 04:57:19.435 INFO Fetch successful
Sep 16 04:57:19.436405 coreos-metadata[1677]: Sep 16 04:57:19.436 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1
Sep 16 04:57:19.451004 coreos-metadata[1677]: Sep 16 04:57:19.449 INFO Fetch successful
Sep 16 04:57:19.462242 sshd_keygen[1730]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Sep 16 04:57:19.494805 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Sep 16 04:57:19.500380 systemd[1]: Starting issuegen.service - Generate /run/issue...
Sep 16 04:57:19.509320 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent...
Sep 16 04:57:19.518097 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Sep 16 04:57:19.520610 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Sep 16 04:57:19.548610 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent.
Sep 16 04:57:19.551627 systemd[1]: issuegen.service: Deactivated successfully.
Sep 16 04:57:19.551837 systemd[1]: Finished issuegen.service - Generate /run/issue.
Sep 16 04:57:19.557208 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Sep 16 04:57:19.567746 tar[1704]: linux-amd64/README.md
Sep 16 04:57:19.582038 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Sep 16 04:57:19.594415 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Sep 16 04:57:19.599934 systemd[1]: Started getty@tty1.service - Getty on tty1.
Sep 16 04:57:19.609373 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Sep 16 04:57:19.612204 systemd[1]: Reached target getty.target - Login Prompts.
Sep 16 04:57:19.620353 locksmithd[1785]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Sep 16 04:57:19.857275 containerd[1722]: time="2025-09-16T04:57:19Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Sep 16 04:57:19.858448 containerd[1722]: time="2025-09-16T04:57:19.858421860Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5
Sep 16 04:57:19.868814 containerd[1722]: time="2025-09-16T04:57:19.868599238Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="59.26µs"
Sep 16 04:57:19.868814 containerd[1722]: time="2025-09-16T04:57:19.868809444Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Sep 16 04:57:19.868913 containerd[1722]: time="2025-09-16T04:57:19.868827940Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Sep 16 04:57:19.868975 containerd[1722]: time="2025-09-16T04:57:19.868961123Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Sep 16 04:57:19.869022 containerd[1722]: time="2025-09-16T04:57:19.868979919Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Sep 16 04:57:19.869022 containerd[1722]: time="2025-09-16T04:57:19.869017334Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Sep 16 04:57:19.869081 containerd[1722]: time="2025-09-16T04:57:19.869066929Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Sep 16 04:57:19.869105 containerd[1722]: time="2025-09-16T04:57:19.869081936Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Sep 16 04:57:19.869353 containerd[1722]: time="2025-09-16T04:57:19.869332250Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Sep 16 04:57:19.869383 containerd[1722]: time="2025-09-16T04:57:19.869353682Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Sep 16 04:57:19.869383 containerd[1722]: time="2025-09-16T04:57:19.869373956Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Sep 16 04:57:19.869427 containerd[1722]: time="2025-09-16T04:57:19.869385699Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Sep 16 04:57:19.869480 containerd[1722]: time="2025-09-16T04:57:19.869462244Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Sep 16 04:57:19.869678 containerd[1722]: time="2025-09-16T04:57:19.869653868Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Sep 16 04:57:19.869714 containerd[1722]: time="2025-09-16T04:57:19.869682058Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Sep 16 04:57:19.869714 containerd[1722]: time="2025-09-16T04:57:19.869693072Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Sep 16 04:57:19.869752 containerd[1722]: time="2025-09-16T04:57:19.869733478Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Sep 16 04:57:19.869971 containerd[1722]: time="2025-09-16T04:57:19.869958621Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Sep 16 04:57:19.870722 containerd[1722]: time="2025-09-16T04:57:19.870703877Z" level=info msg="metadata content store policy set" policy=shared
Sep 16 04:57:19.884140 containerd[1722]: time="2025-09-16T04:57:19.884076011Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Sep 16 04:57:19.884305 containerd[1722]: time="2025-09-16T04:57:19.884232229Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Sep 16 04:57:19.884305 containerd[1722]: time="2025-09-16T04:57:19.884250842Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Sep 16 04:57:19.884305 containerd[1722]: time="2025-09-16T04:57:19.884261712Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Sep 16 04:57:19.884305 containerd[1722]: time="2025-09-16T04:57:19.884279575Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Sep 16 04:57:19.884305 containerd[1722]: time="2025-09-16T04:57:19.884291091Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Sep 16 04:57:19.884576 containerd[1722]: time="2025-09-16T04:57:19.884432684Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Sep 16 04:57:19.884576 containerd[1722]: time="2025-09-16T04:57:19.884447500Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Sep 16 04:57:19.884576 containerd[1722]: time="2025-09-16T04:57:19.884459179Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Sep 16 04:57:19.884576 containerd[1722]: time="2025-09-16T04:57:19.884469686Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Sep 16 04:57:19.884576 containerd[1722]: time="2025-09-16T04:57:19.884478850Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Sep 16 04:57:19.884576 containerd[1722]: time="2025-09-16T04:57:19.884508128Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Sep 16 04:57:19.884771 containerd[1722]: time="2025-09-16T04:57:19.884761710Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Sep 16 04:57:19.884881 containerd[1722]: time="2025-09-16T04:57:19.884805179Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Sep 16 04:57:19.884881 containerd[1722]: time="2025-09-16T04:57:19.884820387Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Sep 16 04:57:19.884881 containerd[1722]: time="2025-09-16T04:57:19.884835433Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Sep 16 04:57:19.884881 containerd[1722]: time="2025-09-16T04:57:19.884845878Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Sep 16 04:57:19.884997 containerd[1722]: time="2025-09-16T04:57:19.884857167Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Sep 16 04:57:19.884997 containerd[1722]: time="2025-09-16T04:57:19.884976608Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Sep 16 04:57:19.885138 containerd[1722]: time="2025-09-16T04:57:19.885072417Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Sep 16 04:57:19.885138 containerd[1722]: time="2025-09-16T04:57:19.885089383Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Sep 16 04:57:19.885138 containerd[1722]: time="2025-09-16T04:57:19.885105341Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Sep 16 04:57:19.885138 containerd[1722]: time="2025-09-16T04:57:19.885116130Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Sep 16 04:57:19.885292 containerd[1722]: time="2025-09-16T04:57:19.885281246Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Sep 16 04:57:19.885345 containerd[1722]: time="2025-09-16T04:57:19.885338520Z" level=info msg="Start snapshots syncer"
Sep 16 04:57:19.885536 containerd[1722]: time="2025-09-16T04:57:19.885397108Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Sep 16 04:57:19.885728 containerd[1722]: time="2025-09-16T04:57:19.885697750Z" level=info msg="starting cri plugin"
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Sep 16 04:57:19.885933 containerd[1722]: time="2025-09-16T04:57:19.885884045Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Sep 16 04:57:19.886082 containerd[1722]: time="2025-09-16T04:57:19.886030949Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Sep 16 04:57:19.886201 containerd[1722]: time="2025-09-16T04:57:19.886191776Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Sep 16 04:57:19.886258 containerd[1722]: time="2025-09-16T04:57:19.886249953Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Sep 16 04:57:19.886311 containerd[1722]: time="2025-09-16T04:57:19.886293522Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Sep 16 04:57:19.886414 containerd[1722]: time="2025-09-16T04:57:19.886342097Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Sep 16 04:57:19.886414 containerd[1722]: time="2025-09-16T04:57:19.886369872Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Sep 16 04:57:19.886414 containerd[1722]: time="2025-09-16T04:57:19.886382149Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Sep 16 04:57:19.886414 containerd[1722]: time="2025-09-16T04:57:19.886397706Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Sep 16 04:57:19.886527 containerd[1722]: time="2025-09-16T04:57:19.886519574Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Sep 16 04:57:19.886579 containerd[1722]: time="2025-09-16T04:57:19.886561915Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Sep 16 04:57:19.886643 containerd[1722]: time="2025-09-16T04:57:19.886609698Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Sep 16 04:57:19.886750 containerd[1722]: time="2025-09-16T04:57:19.886697573Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 16 04:57:19.886750 containerd[1722]: time="2025-09-16T04:57:19.886717213Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 16 04:57:19.886750 containerd[1722]: time="2025-09-16T04:57:19.886726888Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 16 04:57:19.886894 containerd[1722]: time="2025-09-16T04:57:19.886736649Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 16 04:57:19.886894 containerd[1722]: time="2025-09-16T04:57:19.886857802Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Sep 16 04:57:19.886894 containerd[1722]: time="2025-09-16T04:57:19.886868462Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Sep 16 04:57:19.886894 containerd[1722]: time="2025-09-16T04:57:19.886879664Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Sep 16 04:57:19.887060 containerd[1722]: time="2025-09-16T04:57:19.887001754Z" level=info msg="runtime interface created" Sep 16 04:57:19.887060 containerd[1722]: time="2025-09-16T04:57:19.887016940Z" level=info msg="created NRI interface" Sep 16 04:57:19.887060 containerd[1722]: time="2025-09-16T04:57:19.887025637Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Sep 16 04:57:19.887060 containerd[1722]: time="2025-09-16T04:57:19.887038438Z" level=info msg="Connect containerd service" Sep 16 04:57:19.887198 containerd[1722]: time="2025-09-16T04:57:19.887165266Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 16 04:57:19.888108 
containerd[1722]: time="2025-09-16T04:57:19.888085424Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 16 04:57:20.133063 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 16 04:57:20.331917 (kubelet)[1841]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 16 04:57:20.774194 containerd[1722]: time="2025-09-16T04:57:20.774101821Z" level=info msg="Start subscribing containerd event" Sep 16 04:57:20.774339 containerd[1722]: time="2025-09-16T04:57:20.774208200Z" level=info msg="Start recovering state" Sep 16 04:57:20.774732 containerd[1722]: time="2025-09-16T04:57:20.774563091Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 16 04:57:20.774732 containerd[1722]: time="2025-09-16T04:57:20.774622210Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 16 04:57:20.776090 containerd[1722]: time="2025-09-16T04:57:20.776063075Z" level=info msg="Start event monitor" Sep 16 04:57:20.777465 containerd[1722]: time="2025-09-16T04:57:20.776180954Z" level=info msg="Start cni network conf syncer for default" Sep 16 04:57:20.777465 containerd[1722]: time="2025-09-16T04:57:20.776197039Z" level=info msg="Start streaming server" Sep 16 04:57:20.777465 containerd[1722]: time="2025-09-16T04:57:20.776219042Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Sep 16 04:57:20.777465 containerd[1722]: time="2025-09-16T04:57:20.776227995Z" level=info msg="runtime interface starting up..." Sep 16 04:57:20.777465 containerd[1722]: time="2025-09-16T04:57:20.776238682Z" level=info msg="starting plugins..." 
Sep 16 04:57:20.777465 containerd[1722]: time="2025-09-16T04:57:20.776252989Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Sep 16 04:57:20.778607 systemd[1]: Started containerd.service - containerd container runtime. Sep 16 04:57:20.779531 containerd[1722]: time="2025-09-16T04:57:20.779510426Z" level=info msg="containerd successfully booted in 0.922731s" Sep 16 04:57:20.783004 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 16 04:57:20.785386 systemd[1]: Startup finished in 3.387s (kernel) + 13.443s (initrd) + 17.050s (userspace) = 33.881s. Sep 16 04:57:20.845241 kubelet[1841]: E0916 04:57:20.845177 1841 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 16 04:57:20.848617 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 16 04:57:20.848745 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 16 04:57:20.849179 systemd[1]: kubelet.service: Consumed 960ms CPU time, 265.2M memory peak. Sep 16 04:57:21.376507 login[1825]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Sep 16 04:57:21.379511 login[1827]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Sep 16 04:57:21.386810 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 16 04:57:21.391222 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 16 04:57:21.400435 systemd-logind[1693]: New session 2 of user core. Sep 16 04:57:21.405280 systemd-logind[1693]: New session 1 of user core. Sep 16 04:57:21.421585 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. 
Sep 16 04:57:21.424057 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 16 04:57:21.444907 (systemd)[1860]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 16 04:57:21.446555 systemd-logind[1693]: New session c1 of user core. Sep 16 04:57:21.757442 waagent[1815]: 2025-09-16T04:57:21.757315Z INFO Daemon Daemon Azure Linux Agent Version: 2.12.0.4 Sep 16 04:57:21.759860 waagent[1815]: 2025-09-16T04:57:21.759811Z INFO Daemon Daemon OS: flatcar 4459.0.0 Sep 16 04:57:21.762030 waagent[1815]: 2025-09-16T04:57:21.761981Z INFO Daemon Daemon Python: 3.11.13 Sep 16 04:57:21.763898 waagent[1815]: 2025-09-16T04:57:21.763857Z INFO Daemon Daemon Run daemon Sep 16 04:57:21.765775 waagent[1815]: 2025-09-16T04:57:21.765736Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4459.0.0' Sep 16 04:57:21.769679 waagent[1815]: 2025-09-16T04:57:21.769625Z INFO Daemon Daemon Using waagent for provisioning Sep 16 04:57:21.771775 waagent[1815]: 2025-09-16T04:57:21.771736Z INFO Daemon Daemon Activate resource disk Sep 16 04:57:21.773851 waagent[1815]: 2025-09-16T04:57:21.773811Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Sep 16 04:57:21.778096 systemd[1860]: Queued start job for default target default.target. Sep 16 04:57:21.778797 waagent[1815]: 2025-09-16T04:57:21.778748Z INFO Daemon Daemon Found device: None Sep 16 04:57:21.779566 waagent[1815]: 2025-09-16T04:57:21.779527Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Sep 16 04:57:21.782287 waagent[1815]: 2025-09-16T04:57:21.780518Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Sep 16 04:57:21.782230 systemd[1860]: Created slice app.slice - User Application Slice. 
Sep 16 04:57:21.782257 systemd[1860]: Reached target paths.target - Paths. Sep 16 04:57:21.782470 systemd[1860]: Reached target timers.target - Timers. Sep 16 04:57:21.785398 systemd[1860]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 16 04:57:21.786375 waagent[1815]: 2025-09-16T04:57:21.786324Z INFO Daemon Daemon Clean protocol and wireserver endpoint Sep 16 04:57:21.788952 waagent[1815]: 2025-09-16T04:57:21.788895Z INFO Daemon Daemon Running default provisioning handler Sep 16 04:57:21.795495 systemd[1860]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 16 04:57:21.796314 systemd[1860]: Reached target sockets.target - Sockets. Sep 16 04:57:21.796359 systemd[1860]: Reached target basic.target - Basic System. Sep 16 04:57:21.796385 systemd[1860]: Reached target default.target - Main User Target. Sep 16 04:57:21.796408 systemd[1860]: Startup finished in 345ms. Sep 16 04:57:21.796786 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 16 04:57:21.804290 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 16 04:57:21.805364 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 16 04:57:21.808310 waagent[1815]: 2025-09-16T04:57:21.807137Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Sep 16 04:57:21.815491 waagent[1815]: 2025-09-16T04:57:21.815450Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Sep 16 04:57:21.819440 waagent[1815]: 2025-09-16T04:57:21.819395Z INFO Daemon Daemon cloud-init is enabled: False Sep 16 04:57:21.819616 waagent[1815]: 2025-09-16T04:57:21.819590Z INFO Daemon Daemon Copying ovf-env.xml Sep 16 04:57:21.941377 waagent[1815]: 2025-09-16T04:57:21.941178Z INFO Daemon Daemon Successfully mounted dvd Sep 16 04:57:21.983921 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. 
Sep 16 04:57:21.986605 waagent[1815]: 2025-09-16T04:57:21.985158Z INFO Daemon Daemon Detect protocol endpoint Sep 16 04:57:21.986605 waagent[1815]: 2025-09-16T04:57:21.985363Z INFO Daemon Daemon Clean protocol and wireserver endpoint Sep 16 04:57:21.986605 waagent[1815]: 2025-09-16T04:57:21.985603Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler Sep 16 04:57:21.986605 waagent[1815]: 2025-09-16T04:57:21.985904Z INFO Daemon Daemon Test for route to 168.63.129.16 Sep 16 04:57:21.986605 waagent[1815]: 2025-09-16T04:57:21.986078Z INFO Daemon Daemon Route to 168.63.129.16 exists Sep 16 04:57:21.986605 waagent[1815]: 2025-09-16T04:57:21.986195Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Sep 16 04:57:21.998774 waagent[1815]: 2025-09-16T04:57:21.998728Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Sep 16 04:57:22.000969 waagent[1815]: 2025-09-16T04:57:21.999042Z INFO Daemon Daemon Wire protocol version:2012-11-30 Sep 16 04:57:22.000969 waagent[1815]: 2025-09-16T04:57:22.000188Z INFO Daemon Daemon Server preferred version:2015-04-05 Sep 16 04:57:22.111428 waagent[1815]: 2025-09-16T04:57:22.111327Z INFO Daemon Daemon Initializing goal state during protocol detection Sep 16 04:57:22.112831 waagent[1815]: 2025-09-16T04:57:22.112792Z INFO Daemon Daemon Forcing an update of the goal state. 
Sep 16 04:57:22.118781 waagent[1815]: 2025-09-16T04:57:22.118747Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Sep 16 04:57:22.133890 waagent[1815]: 2025-09-16T04:57:22.133857Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.175 Sep 16 04:57:22.135584 waagent[1815]: 2025-09-16T04:57:22.135549Z INFO Daemon Sep 16 04:57:22.136408 waagent[1815]: 2025-09-16T04:57:22.136332Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 2ecfc55c-2310-4222-8549-6f8fa294b4de eTag: 14535115344979826650 source: Fabric] Sep 16 04:57:22.139561 waagent[1815]: 2025-09-16T04:57:22.139532Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Sep 16 04:57:22.141378 waagent[1815]: 2025-09-16T04:57:22.141349Z INFO Daemon Sep 16 04:57:22.142221 waagent[1815]: 2025-09-16T04:57:22.141624Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Sep 16 04:57:22.148709 waagent[1815]: 2025-09-16T04:57:22.148680Z INFO Daemon Daemon Downloading artifacts profile blob Sep 16 04:57:22.224802 waagent[1815]: 2025-09-16T04:57:22.224756Z INFO Daemon Downloaded certificate {'thumbprint': '1D41FBACDAA333BCB5EA6908567E06B0BDAF35D1', 'hasPrivateKey': True} Sep 16 04:57:22.227351 waagent[1815]: 2025-09-16T04:57:22.227311Z INFO Daemon Fetch goal state completed Sep 16 04:57:22.234997 waagent[1815]: 2025-09-16T04:57:22.234964Z INFO Daemon Daemon Starting provisioning Sep 16 04:57:22.235497 waagent[1815]: 2025-09-16T04:57:22.235414Z INFO Daemon Daemon Handle ovf-env.xml. 
Sep 16 04:57:22.237090 waagent[1815]: 2025-09-16T04:57:22.236256Z INFO Daemon Daemon Set hostname [ci-4459.0.0-n-f9a9538521] Sep 16 04:57:22.268380 waagent[1815]: 2025-09-16T04:57:22.268338Z INFO Daemon Daemon Publish hostname [ci-4459.0.0-n-f9a9538521] Sep 16 04:57:22.269918 waagent[1815]: 2025-09-16T04:57:22.269880Z INFO Daemon Daemon Examine /proc/net/route for primary interface Sep 16 04:57:22.271334 waagent[1815]: 2025-09-16T04:57:22.271303Z INFO Daemon Daemon Primary interface is [eth0] Sep 16 04:57:22.293504 systemd-networkd[1578]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 16 04:57:22.293511 systemd-networkd[1578]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 16 04:57:22.293533 systemd-networkd[1578]: eth0: DHCP lease lost Sep 16 04:57:22.294360 waagent[1815]: 2025-09-16T04:57:22.294313Z INFO Daemon Daemon Create user account if not exists Sep 16 04:57:22.294689 waagent[1815]: 2025-09-16T04:57:22.294529Z INFO Daemon Daemon User core already exists, skip useradd Sep 16 04:57:22.295045 waagent[1815]: 2025-09-16T04:57:22.294695Z INFO Daemon Daemon Configure sudoer Sep 16 04:57:22.299333 waagent[1815]: 2025-09-16T04:57:22.299290Z INFO Daemon Daemon Configure sshd Sep 16 04:57:22.304579 waagent[1815]: 2025-09-16T04:57:22.304538Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Sep 16 04:57:22.304942 waagent[1815]: 2025-09-16T04:57:22.304857Z INFO Daemon Daemon Deploy ssh public key. 
Sep 16 04:57:22.309629 systemd-networkd[1578]: eth0: DHCPv4 address 10.200.8.40/24, gateway 10.200.8.1 acquired from 168.63.129.16 Sep 16 04:57:23.461346 waagent[1815]: 2025-09-16T04:57:23.461302Z INFO Daemon Daemon Provisioning complete Sep 16 04:57:23.474007 waagent[1815]: 2025-09-16T04:57:23.473973Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Sep 16 04:57:23.474851 waagent[1815]: 2025-09-16T04:57:23.474816Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Sep 16 04:57:23.477638 waagent[1815]: 2025-09-16T04:57:23.476648Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.12.0.4 is the most current agent Sep 16 04:57:23.583430 waagent[1910]: 2025-09-16T04:57:23.583348Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.12.0.4) Sep 16 04:57:23.583775 waagent[1910]: 2025-09-16T04:57:23.583456Z INFO ExtHandler ExtHandler OS: flatcar 4459.0.0 Sep 16 04:57:23.583775 waagent[1910]: 2025-09-16T04:57:23.583500Z INFO ExtHandler ExtHandler Python: 3.11.13 Sep 16 04:57:23.583775 waagent[1910]: 2025-09-16T04:57:23.583539Z INFO ExtHandler ExtHandler CPU Arch: x86_64 Sep 16 04:57:23.634176 waagent[1910]: 2025-09-16T04:57:23.634118Z INFO ExtHandler ExtHandler Distro: flatcar-4459.0.0; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.13; Arch: x86_64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.22.0; Sep 16 04:57:23.634320 waagent[1910]: 2025-09-16T04:57:23.634280Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Sep 16 04:57:23.634371 waagent[1910]: 2025-09-16T04:57:23.634349Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Sep 16 04:57:23.640257 waagent[1910]: 2025-09-16T04:57:23.640207Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Sep 16 04:57:23.647091 waagent[1910]: 2025-09-16T04:57:23.647060Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.175 Sep 16 
04:57:23.647439 waagent[1910]: 2025-09-16T04:57:23.647410Z INFO ExtHandler Sep 16 04:57:23.647483 waagent[1910]: 2025-09-16T04:57:23.647464Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: c5cdee0c-4b0d-42b8-94e2-c41681d98227 eTag: 14535115344979826650 source: Fabric] Sep 16 04:57:23.647687 waagent[1910]: 2025-09-16T04:57:23.647661Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. Sep 16 04:57:23.648067 waagent[1910]: 2025-09-16T04:57:23.648038Z INFO ExtHandler Sep 16 04:57:23.648105 waagent[1910]: 2025-09-16T04:57:23.648083Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Sep 16 04:57:23.656424 waagent[1910]: 2025-09-16T04:57:23.656399Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Sep 16 04:57:23.714225 waagent[1910]: 2025-09-16T04:57:23.714141Z INFO ExtHandler Downloaded certificate {'thumbprint': '1D41FBACDAA333BCB5EA6908567E06B0BDAF35D1', 'hasPrivateKey': True} Sep 16 04:57:23.714556 waagent[1910]: 2025-09-16T04:57:23.714526Z INFO ExtHandler Fetch goal state completed Sep 16 04:57:23.727173 waagent[1910]: 2025-09-16T04:57:23.727127Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.4.2 1 Jul 2025 (Library: OpenSSL 3.4.2 1 Jul 2025) Sep 16 04:57:23.731359 waagent[1910]: 2025-09-16T04:57:23.731313Z INFO ExtHandler ExtHandler WALinuxAgent-2.12.0.4 running as process 1910 Sep 16 04:57:23.731471 waagent[1910]: 2025-09-16T04:57:23.731447Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Sep 16 04:57:23.731702 waagent[1910]: 2025-09-16T04:57:23.731680Z INFO ExtHandler ExtHandler ******** AutoUpdate.UpdateToLatestVersion is set to False, not processing the operation ******** Sep 16 04:57:23.732766 waagent[1910]: 2025-09-16T04:57:23.732733Z INFO ExtHandler ExtHandler [CGI] Cgroup monitoring is not supported on ['flatcar', '4459.0.0', '', 'Flatcar Container Linux by Kinvolk'] Sep 16 04:57:23.733081 waagent[1910]: 
2025-09-16T04:57:23.733049Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case distro: ['flatcar', '4459.0.0', '', 'Flatcar Container Linux by Kinvolk'] went from supported to unsupported Sep 16 04:57:23.733196 waagent[1910]: 2025-09-16T04:57:23.733172Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False Sep 16 04:57:23.733590 waagent[1910]: 2025-09-16T04:57:23.733562Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Sep 16 04:57:23.811835 waagent[1910]: 2025-09-16T04:57:23.811805Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Sep 16 04:57:23.811999 waagent[1910]: 2025-09-16T04:57:23.811965Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Sep 16 04:57:23.817691 waagent[1910]: 2025-09-16T04:57:23.817316Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Sep 16 04:57:23.822575 systemd[1]: Reload requested from client PID 1925 ('systemctl') (unit waagent.service)... Sep 16 04:57:23.822613 systemd[1]: Reloading... Sep 16 04:57:23.893019 zram_generator::config[1967]: No configuration found. Sep 16 04:57:24.003318 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#50 cmd 0x4a status: scsi 0x0 srb 0x20 hv 0xc0000001 Sep 16 04:57:24.076363 systemd[1]: Reloading finished in 253 ms. Sep 16 04:57:24.088347 waagent[1910]: 2025-09-16T04:57:24.087550Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Sep 16 04:57:24.088347 waagent[1910]: 2025-09-16T04:57:24.087696Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Sep 16 04:57:24.493281 waagent[1910]: 2025-09-16T04:57:24.493207Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. 
Sep 16 04:57:24.493545 waagent[1910]: 2025-09-16T04:57:24.493517Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True] Sep 16 04:57:24.494254 waagent[1910]: 2025-09-16T04:57:24.494191Z INFO ExtHandler ExtHandler Starting env monitor service. Sep 16 04:57:24.494598 waagent[1910]: 2025-09-16T04:57:24.494572Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Sep 16 04:57:24.494655 waagent[1910]: 2025-09-16T04:57:24.494623Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Sep 16 04:57:24.494699 waagent[1910]: 2025-09-16T04:57:24.494679Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Sep 16 04:57:24.494873 waagent[1910]: 2025-09-16T04:57:24.494852Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. 
Sep 16 04:57:24.495063 waagent[1910]: 2025-09-16T04:57:24.495014Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Sep 16 04:57:24.495214 waagent[1910]: 2025-09-16T04:57:24.495193Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Sep 16 04:57:24.495214 waagent[1910]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Sep 16 04:57:24.495214 waagent[1910]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0 Sep 16 04:57:24.495214 waagent[1910]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Sep 16 04:57:24.495214 waagent[1910]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Sep 16 04:57:24.495214 waagent[1910]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Sep 16 04:57:24.495214 waagent[1910]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Sep 16 04:57:24.495498 waagent[1910]: 2025-09-16T04:57:24.495460Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Sep 16 04:57:24.495764 waagent[1910]: 2025-09-16T04:57:24.495739Z INFO EnvHandler ExtHandler Configure routes Sep 16 04:57:24.495832 waagent[1910]: 2025-09-16T04:57:24.495798Z INFO EnvHandler ExtHandler Gateway:None Sep 16 04:57:24.495885 waagent[1910]: 2025-09-16T04:57:24.495855Z INFO EnvHandler ExtHandler Routes:None Sep 16 04:57:24.496161 waagent[1910]: 2025-09-16T04:57:24.496128Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Sep 16 04:57:24.496371 waagent[1910]: 2025-09-16T04:57:24.496213Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Sep 16 04:57:24.496456 waagent[1910]: 2025-09-16T04:57:24.496412Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Sep 16 04:57:24.496518 waagent[1910]: 2025-09-16T04:57:24.496501Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Sep 16 04:57:24.496620 waagent[1910]: 2025-09-16T04:57:24.496598Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. 
Sep 16 04:57:24.502492 waagent[1910]: 2025-09-16T04:57:24.502455Z INFO ExtHandler ExtHandler Sep 16 04:57:24.502560 waagent[1910]: 2025-09-16T04:57:24.502517Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 6552e4e2-df75-48d1-80b6-36f36cc1319b correlation de3b0eda-a19a-4365-9e96-ec32cc6553bd created: 2025-09-16T04:56:04.435753Z] Sep 16 04:57:24.502818 waagent[1910]: 2025-09-16T04:57:24.502792Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Sep 16 04:57:24.503213 waagent[1910]: 2025-09-16T04:57:24.503190Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 0 ms] Sep 16 04:57:24.530074 waagent[1910]: 2025-09-16T04:57:24.530027Z WARNING ExtHandler ExtHandler Failed to get firewall packets: 'iptables -w -t security -L OUTPUT --zero OUTPUT -nxv' failed: 2 (iptables v1.8.11 (nf_tables): Illegal option `--numeric' with this command Sep 16 04:57:24.530074 waagent[1910]: Try `iptables -h' or 'iptables --help' for more information.)
Sep 16 04:57:24.530423 waagent[1910]: 2025-09-16T04:57:24.530396Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.12.0.4 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 0914F8FA-D01D-4C62-80AA-BDAC2C6D91D8;DroppedPackets: -1;UpdateGSErrors: 0;AutoUpdate: 0;UpdateMode: SelfUpdate;] Sep 16 04:57:24.600841 waagent[1910]: 2025-09-16T04:57:24.600801Z INFO MonitorHandler ExtHandler Network interfaces: Sep 16 04:57:24.600841 waagent[1910]: Executing ['ip', '-a', '-o', 'link']: Sep 16 04:57:24.600841 waagent[1910]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Sep 16 04:57:24.600841 waagent[1910]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 7c:ed:8d:41:aa:cc brd ff:ff:ff:ff:ff:ff\ alias Network Device Sep 16 04:57:24.600841 waagent[1910]: 3: enP30832s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 7c:ed:8d:41:aa:cc brd ff:ff:ff:ff:ff:ff\ altname enP30832p0s0 Sep 16 04:57:24.600841 waagent[1910]: Executing ['ip', '-4', '-a', '-o', 'address']: Sep 16 04:57:24.600841 waagent[1910]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Sep 16 04:57:24.600841 waagent[1910]: 2: eth0 inet 10.200.8.40/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever Sep 16 04:57:24.600841 waagent[1910]: Executing ['ip', '-6', '-a', '-o', 'address']: Sep 16 04:57:24.600841 waagent[1910]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Sep 16 04:57:24.600841 waagent[1910]: 2: eth0 inet6 fe80::7eed:8dff:fe41:aacc/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Sep 16 04:57:24.642371 waagent[1910]: 2025-09-16T04:57:24.642320Z INFO EnvHandler ExtHandler Created firewall rules for the Azure Fabric: Sep 16 04:57:24.642371 waagent[1910]: Chain INPUT (policy ACCEPT 0 
packets, 0 bytes) Sep 16 04:57:24.642371 waagent[1910]: pkts bytes target prot opt in out source destination Sep 16 04:57:24.642371 waagent[1910]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Sep 16 04:57:24.642371 waagent[1910]: pkts bytes target prot opt in out source destination Sep 16 04:57:24.642371 waagent[1910]: Chain OUTPUT (policy ACCEPT 3 packets, 164 bytes) Sep 16 04:57:24.642371 waagent[1910]: pkts bytes target prot opt in out source destination Sep 16 04:57:24.642371 waagent[1910]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Sep 16 04:57:24.642371 waagent[1910]: 7 940 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Sep 16 04:57:24.642371 waagent[1910]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Sep 16 04:57:24.645514 waagent[1910]: 2025-09-16T04:57:24.645465Z INFO EnvHandler ExtHandler Current Firewall rules: Sep 16 04:57:24.645514 waagent[1910]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Sep 16 04:57:24.645514 waagent[1910]: pkts bytes target prot opt in out source destination Sep 16 04:57:24.645514 waagent[1910]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Sep 16 04:57:24.645514 waagent[1910]: pkts bytes target prot opt in out source destination Sep 16 04:57:24.645514 waagent[1910]: Chain OUTPUT (policy ACCEPT 3 packets, 164 bytes) Sep 16 04:57:24.645514 waagent[1910]: pkts bytes target prot opt in out source destination Sep 16 04:57:24.645514 waagent[1910]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Sep 16 04:57:24.645514 waagent[1910]: 9 1052 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Sep 16 04:57:24.645514 waagent[1910]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Sep 16 04:57:30.866686 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 16 04:57:30.868150 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Sep 16 04:57:31.299023 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 16 04:57:31.308194 (kubelet)[2062]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 16 04:57:31.341522 kubelet[2062]: E0916 04:57:31.341441 2062 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 16 04:57:31.344426 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 16 04:57:31.344558 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 16 04:57:31.344862 systemd[1]: kubelet.service: Consumed 129ms CPU time, 110.5M memory peak. Sep 16 04:57:41.366767 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 16 04:57:41.368287 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 16 04:57:41.851025 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 16 04:57:41.857197 (kubelet)[2077]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 16 04:57:41.899328 kubelet[2077]: E0916 04:57:41.899303 2077 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 16 04:57:41.900981 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 16 04:57:41.901228 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
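The kubelet crash loop above (and its repeats further down) is the expected pre-bootstrap state: the unit exits with status 1 until something, typically `kubeadm init` or `kubeadm join`, writes `/var/lib/kubelet/config.yaml`, and systemd keeps rescheduling the restart. A minimal check for that state, assuming only a POSIX shell; the path comes straight from the error message in the log:

```shell
# Report whether the kubelet config file the failing unit wants is present.
cfg=/var/lib/kubelet/config.yaml
if [ -f "$cfg" ]; then
  echo "kubelet config present: $cfg"
else
  echo "kubelet config missing: $cfg (node not yet bootstrapped)"
fi
```

Once the file exists, the next scheduled restart of kubelet.service succeeds without further intervention.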
Sep 16 04:57:41.901648 systemd[1]: kubelet.service: Consumed 127ms CPU time, 108M memory peak. Sep 16 04:57:42.625058 chronyd[1675]: Selected source PHC0 Sep 16 04:57:51.336456 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 16 04:57:51.337555 systemd[1]: Started sshd@0-10.200.8.40:22-10.200.16.10:51462.service - OpenSSH per-connection server daemon (10.200.16.10:51462). Sep 16 04:57:51.966445 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Sep 16 04:57:51.968043 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 16 04:57:52.074844 sshd[2085]: Accepted publickey for core from 10.200.16.10 port 51462 ssh2: RSA SHA256:Gt12JRTkVM33b5eHTclAVwi2ZfrwSLC40ZPSMLLbmPQ Sep 16 04:57:52.075922 sshd-session[2085]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:57:52.080331 systemd-logind[1693]: New session 3 of user core. Sep 16 04:57:52.086147 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 16 04:57:52.586093 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 16 04:57:52.592249 (kubelet)[2098]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 16 04:57:52.623884 systemd[1]: Started sshd@1-10.200.8.40:22-10.200.16.10:51478.service - OpenSSH per-connection server daemon (10.200.16.10:51478). 
Sep 16 04:57:52.628736 kubelet[2098]: E0916 04:57:52.628713 2098 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 16 04:57:52.630435 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 16 04:57:52.630537 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 16 04:57:52.635368 systemd[1]: kubelet.service: Consumed 127ms CPU time, 109.7M memory peak. Sep 16 04:57:53.254023 sshd[2106]: Accepted publickey for core from 10.200.16.10 port 51478 ssh2: RSA SHA256:Gt12JRTkVM33b5eHTclAVwi2ZfrwSLC40ZPSMLLbmPQ Sep 16 04:57:53.255223 sshd-session[2106]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:57:53.259796 systemd-logind[1693]: New session 4 of user core. Sep 16 04:57:53.271127 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 16 04:57:53.695838 sshd[2110]: Connection closed by 10.200.16.10 port 51478 Sep 16 04:57:53.696427 sshd-session[2106]: pam_unix(sshd:session): session closed for user core Sep 16 04:57:53.699770 systemd[1]: sshd@1-10.200.8.40:22-10.200.16.10:51478.service: Deactivated successfully. Sep 16 04:57:53.701393 systemd[1]: session-4.scope: Deactivated successfully. Sep 16 04:57:53.702074 systemd-logind[1693]: Session 4 logged out. Waiting for processes to exit. Sep 16 04:57:53.703170 systemd-logind[1693]: Removed session 4. Sep 16 04:57:53.809683 systemd[1]: Started sshd@2-10.200.8.40:22-10.200.16.10:51482.service - OpenSSH per-connection server daemon (10.200.16.10:51482). 
Sep 16 04:57:54.435310 sshd[2116]: Accepted publickey for core from 10.200.16.10 port 51482 ssh2: RSA SHA256:Gt12JRTkVM33b5eHTclAVwi2ZfrwSLC40ZPSMLLbmPQ Sep 16 04:57:54.444343 sshd-session[2116]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:57:54.448838 systemd-logind[1693]: New session 5 of user core. Sep 16 04:57:54.454153 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 16 04:57:54.880772 sshd[2119]: Connection closed by 10.200.16.10 port 51482 Sep 16 04:57:54.881313 sshd-session[2116]: pam_unix(sshd:session): session closed for user core Sep 16 04:57:54.884530 systemd[1]: sshd@2-10.200.8.40:22-10.200.16.10:51482.service: Deactivated successfully. Sep 16 04:57:54.886160 systemd[1]: session-5.scope: Deactivated successfully. Sep 16 04:57:54.886857 systemd-logind[1693]: Session 5 logged out. Waiting for processes to exit. Sep 16 04:57:54.888070 systemd-logind[1693]: Removed session 5. Sep 16 04:57:54.994558 systemd[1]: Started sshd@3-10.200.8.40:22-10.200.16.10:51484.service - OpenSSH per-connection server daemon (10.200.16.10:51484). Sep 16 04:57:55.626069 sshd[2125]: Accepted publickey for core from 10.200.16.10 port 51484 ssh2: RSA SHA256:Gt12JRTkVM33b5eHTclAVwi2ZfrwSLC40ZPSMLLbmPQ Sep 16 04:57:55.627148 sshd-session[2125]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:57:55.631752 systemd-logind[1693]: New session 6 of user core. Sep 16 04:57:55.637153 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 16 04:57:56.069510 sshd[2128]: Connection closed by 10.200.16.10 port 51484 Sep 16 04:57:56.070028 sshd-session[2125]: pam_unix(sshd:session): session closed for user core Sep 16 04:57:56.072823 systemd[1]: sshd@3-10.200.8.40:22-10.200.16.10:51484.service: Deactivated successfully. Sep 16 04:57:56.074459 systemd[1]: session-6.scope: Deactivated successfully. Sep 16 04:57:56.076370 systemd-logind[1693]: Session 6 logged out. 
Waiting for processes to exit. Sep 16 04:57:56.077316 systemd-logind[1693]: Removed session 6. Sep 16 04:57:56.183677 systemd[1]: Started sshd@4-10.200.8.40:22-10.200.16.10:51492.service - OpenSSH per-connection server daemon (10.200.16.10:51492). Sep 16 04:57:56.814965 sshd[2134]: Accepted publickey for core from 10.200.16.10 port 51492 ssh2: RSA SHA256:Gt12JRTkVM33b5eHTclAVwi2ZfrwSLC40ZPSMLLbmPQ Sep 16 04:57:56.816131 sshd-session[2134]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:57:56.820754 systemd-logind[1693]: New session 7 of user core. Sep 16 04:57:56.826151 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 16 04:57:57.367722 sudo[2138]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 16 04:57:57.367952 sudo[2138]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 16 04:57:57.391838 sudo[2138]: pam_unix(sudo:session): session closed for user root Sep 16 04:57:57.491674 sshd[2137]: Connection closed by 10.200.16.10 port 51492 Sep 16 04:57:57.492341 sshd-session[2134]: pam_unix(sshd:session): session closed for user core Sep 16 04:57:57.496076 systemd[1]: sshd@4-10.200.8.40:22-10.200.16.10:51492.service: Deactivated successfully. Sep 16 04:57:57.497541 systemd[1]: session-7.scope: Deactivated successfully. Sep 16 04:57:57.498329 systemd-logind[1693]: Session 7 logged out. Waiting for processes to exit. Sep 16 04:57:57.499540 systemd-logind[1693]: Removed session 7. Sep 16 04:57:57.602715 systemd[1]: Started sshd@5-10.200.8.40:22-10.200.16.10:51498.service - OpenSSH per-connection server daemon (10.200.16.10:51498). 
Sep 16 04:57:58.234414 sshd[2144]: Accepted publickey for core from 10.200.16.10 port 51498 ssh2: RSA SHA256:Gt12JRTkVM33b5eHTclAVwi2ZfrwSLC40ZPSMLLbmPQ Sep 16 04:57:58.235560 sshd-session[2144]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:57:58.240166 systemd-logind[1693]: New session 8 of user core. Sep 16 04:57:58.250119 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 16 04:57:58.577370 sudo[2149]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 16 04:57:58.577779 sudo[2149]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 16 04:57:58.589132 sudo[2149]: pam_unix(sudo:session): session closed for user root Sep 16 04:57:58.593140 sudo[2148]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Sep 16 04:57:58.593357 sudo[2148]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 16 04:57:58.601158 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 16 04:57:58.632609 augenrules[2171]: No rules Sep 16 04:57:58.633595 systemd[1]: audit-rules.service: Deactivated successfully. Sep 16 04:57:58.633795 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 16 04:57:58.634513 sudo[2148]: pam_unix(sudo:session): session closed for user root Sep 16 04:57:58.737072 sshd[2147]: Connection closed by 10.200.16.10 port 51498 Sep 16 04:57:58.737572 sshd-session[2144]: pam_unix(sshd:session): session closed for user core Sep 16 04:57:58.741023 systemd[1]: sshd@5-10.200.8.40:22-10.200.16.10:51498.service: Deactivated successfully. Sep 16 04:57:58.742523 systemd[1]: session-8.scope: Deactivated successfully. Sep 16 04:57:58.743336 systemd-logind[1693]: Session 8 logged out. Waiting for processes to exit. Sep 16 04:57:58.744380 systemd-logind[1693]: Removed session 8. 
Sep 16 04:57:58.847936 systemd[1]: Started sshd@6-10.200.8.40:22-10.200.16.10:51512.service - OpenSSH per-connection server daemon (10.200.16.10:51512). Sep 16 04:57:59.478784 sshd[2180]: Accepted publickey for core from 10.200.16.10 port 51512 ssh2: RSA SHA256:Gt12JRTkVM33b5eHTclAVwi2ZfrwSLC40ZPSMLLbmPQ Sep 16 04:57:59.479861 sshd-session[2180]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:57:59.484071 systemd-logind[1693]: New session 9 of user core. Sep 16 04:57:59.494144 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 16 04:57:59.820815 sudo[2184]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 16 04:57:59.821059 sudo[2184]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 16 04:58:00.908680 kernel: hv_balloon: Max. dynamic memory size: 8192 MB Sep 16 04:58:01.405114 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 16 04:58:01.414325 (dockerd)[2201]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 16 04:58:02.771975 dockerd[2201]: time="2025-09-16T04:58:02.771920303Z" level=info msg="Starting up" Sep 16 04:58:02.775764 dockerd[2201]: time="2025-09-16T04:58:02.775684420Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Sep 16 04:58:02.776861 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Sep 16 04:58:02.778041 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 16 04:58:02.788836 dockerd[2201]: time="2025-09-16T04:58:02.788804285Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Sep 16 04:58:03.163089 systemd[1]: var-lib-docker-metacopy\x2dcheck840277787-merged.mount: Deactivated successfully. 
Sep 16 04:58:03.212247 dockerd[2201]: time="2025-09-16T04:58:03.212200131Z" level=info msg="Loading containers: start." Sep 16 04:58:03.365014 kernel: Initializing XFRM netlink socket Sep 16 04:58:03.730120 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 16 04:58:03.739412 (kubelet)[2315]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 16 04:58:03.778186 kubelet[2315]: E0916 04:58:03.778133 2315 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 16 04:58:03.779672 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 16 04:58:03.779805 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 16 04:58:03.780113 systemd[1]: kubelet.service: Consumed 136ms CPU time, 110.3M memory peak. Sep 16 04:58:03.887522 systemd-networkd[1578]: docker0: Link UP Sep 16 04:58:03.905043 dockerd[2201]: time="2025-09-16T04:58:03.905008050Z" level=info msg="Loading containers: done." Sep 16 04:58:03.916685 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1314377063-merged.mount: Deactivated successfully. 
Sep 16 04:58:04.001122 dockerd[2201]: time="2025-09-16T04:58:04.001016856Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 16 04:58:04.001122 dockerd[2201]: time="2025-09-16T04:58:04.001117598Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Sep 16 04:58:04.001271 dockerd[2201]: time="2025-09-16T04:58:04.001209565Z" level=info msg="Initializing buildkit" Sep 16 04:58:04.044111 dockerd[2201]: time="2025-09-16T04:58:04.044073549Z" level=info msg="Completed buildkit initialization" Sep 16 04:58:04.051324 dockerd[2201]: time="2025-09-16T04:58:04.051288385Z" level=info msg="Daemon has completed initialization" Sep 16 04:58:04.051671 dockerd[2201]: time="2025-09-16T04:58:04.051429952Z" level=info msg="API listen on /run/docker.sock" Sep 16 04:58:04.051538 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 16 04:58:04.905628 update_engine[1694]: I20250916 04:58:04.905555 1694 update_attempter.cc:509] Updating boot flags... Sep 16 04:58:05.421743 containerd[1722]: time="2025-09-16T04:58:05.421701520Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\"" Sep 16 04:58:06.205496 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1777437036.mount: Deactivated successfully. 
Sep 16 04:58:07.430002 containerd[1722]: time="2025-09-16T04:58:07.429952509Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:58:07.432281 containerd[1722]: time="2025-09-16T04:58:07.432241764Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.9: active requests=0, bytes read=28837924" Sep 16 04:58:07.435653 containerd[1722]: time="2025-09-16T04:58:07.435594829Z" level=info msg="ImageCreate event name:\"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:58:07.439841 containerd[1722]: time="2025-09-16T04:58:07.439797936Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:58:07.440608 containerd[1722]: time="2025-09-16T04:58:07.440464576Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.9\" with image id \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\", size \"28834515\" in 2.018726526s" Sep 16 04:58:07.440608 containerd[1722]: time="2025-09-16T04:58:07.440496869Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\" returns image reference \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\"" Sep 16 04:58:07.441252 containerd[1722]: time="2025-09-16T04:58:07.441229125Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\"" Sep 16 04:58:08.766092 containerd[1722]: time="2025-09-16T04:58:08.766044288Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.9\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:58:08.768595 containerd[1722]: time="2025-09-16T04:58:08.768560400Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.9: active requests=0, bytes read=24787035" Sep 16 04:58:08.771268 containerd[1722]: time="2025-09-16T04:58:08.771239715Z" level=info msg="ImageCreate event name:\"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:58:08.775188 containerd[1722]: time="2025-09-16T04:58:08.775127733Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:58:08.776131 containerd[1722]: time="2025-09-16T04:58:08.775803120Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.9\" with image id \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\", size \"26421706\" in 1.334545363s" Sep 16 04:58:08.776131 containerd[1722]: time="2025-09-16T04:58:08.775835269Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\" returns image reference \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\"" Sep 16 04:58:08.776694 containerd[1722]: time="2025-09-16T04:58:08.776661190Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\"" Sep 16 04:58:09.907955 containerd[1722]: time="2025-09-16T04:58:09.907913023Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:58:09.911250 containerd[1722]: time="2025-09-16T04:58:09.911065327Z" 
level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.9: active requests=0, bytes read=19176297" Sep 16 04:58:09.914364 containerd[1722]: time="2025-09-16T04:58:09.914340816Z" level=info msg="ImageCreate event name:\"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:58:09.918400 containerd[1722]: time="2025-09-16T04:58:09.918370013Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:58:09.919177 containerd[1722]: time="2025-09-16T04:58:09.919027765Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.9\" with image id \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\", size \"20810986\" in 1.14233675s" Sep 16 04:58:09.919177 containerd[1722]: time="2025-09-16T04:58:09.919058897Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\" returns image reference \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\"" Sep 16 04:58:09.919593 containerd[1722]: time="2025-09-16T04:58:09.919561596Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\"" Sep 16 04:58:10.801647 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2190182877.mount: Deactivated successfully. 
Sep 16 04:58:11.165453 containerd[1722]: time="2025-09-16T04:58:11.165402527Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:58:11.168025 containerd[1722]: time="2025-09-16T04:58:11.168000368Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.9: active requests=0, bytes read=30924214" Sep 16 04:58:11.178010 containerd[1722]: time="2025-09-16T04:58:11.177567016Z" level=info msg="ImageCreate event name:\"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:58:11.181422 containerd[1722]: time="2025-09-16T04:58:11.181381794Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:58:11.182007 containerd[1722]: time="2025-09-16T04:58:11.181758083Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.9\" with image id \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\", repo tag \"registry.k8s.io/kube-proxy:v1.32.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\", size \"30923225\" in 1.262168735s" Sep 16 04:58:11.182007 containerd[1722]: time="2025-09-16T04:58:11.181790307Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\" returns image reference \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\"" Sep 16 04:58:11.182417 containerd[1722]: time="2025-09-16T04:58:11.182395901Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 16 04:58:11.725742 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2086308453.mount: Deactivated successfully. 
Sep 16 04:58:12.660655 containerd[1722]: time="2025-09-16T04:58:12.660608278Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:58:12.663768 containerd[1722]: time="2025-09-16T04:58:12.663582661Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565249" Sep 16 04:58:12.666264 containerd[1722]: time="2025-09-16T04:58:12.666230224Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:58:12.669684 containerd[1722]: time="2025-09-16T04:58:12.669652048Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:58:12.671131 containerd[1722]: time="2025-09-16T04:58:12.671093309Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.488669603s" Sep 16 04:58:12.671442 containerd[1722]: time="2025-09-16T04:58:12.671231858Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Sep 16 04:58:12.671688 containerd[1722]: time="2025-09-16T04:58:12.671650932Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 16 04:58:13.227434 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1957696372.mount: Deactivated successfully. 
Sep 16 04:58:13.244091 containerd[1722]: time="2025-09-16T04:58:13.244052091Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 16 04:58:13.246599 containerd[1722]: time="2025-09-16T04:58:13.246566046Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146" Sep 16 04:58:13.249590 containerd[1722]: time="2025-09-16T04:58:13.249552341Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 16 04:58:13.252871 containerd[1722]: time="2025-09-16T04:58:13.252831491Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 16 04:58:13.253630 containerd[1722]: time="2025-09-16T04:58:13.253272478Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 581.590514ms" Sep 16 04:58:13.253630 containerd[1722]: time="2025-09-16T04:58:13.253301014Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Sep 16 04:58:13.253883 containerd[1722]: time="2025-09-16T04:58:13.253863753Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Sep 16 04:58:13.793564 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. 
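The restart counter ticking roughly every 10 seconds (1 at 04:57:30, 2 at 04:57:41, 3 at 04:57:51, 4 at 04:58:02, 5 at 04:58:13) is consistent with an always-restart policy and a ~10 s delay. A sketch of the service settings that would produce this cadence; these are assumed values for illustration, the actual Flatcar kubelet unit file may differ:

```ini
# Hypothetical [Service] fragment matching the observed restart cadence.
[Service]
Restart=always
RestartSec=10
```

With no `StartLimitIntervalSec`/`StartLimitBurst` cap taking effect, systemd will keep scheduling these restarts indefinitely, which is what the log shows.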
Sep 16 04:58:13.795352 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 16 04:58:13.816394 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1224337919.mount: Deactivated successfully. Sep 16 04:58:14.248014 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 16 04:58:14.251280 (kubelet)[2585]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 16 04:58:14.285530 kubelet[2585]: E0916 04:58:14.285497 2585 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 16 04:58:14.287223 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 16 04:58:14.287350 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 16 04:58:14.287624 systemd[1]: kubelet.service: Consumed 133ms CPU time, 110.4M memory peak. 
Sep 16 04:58:16.128267 containerd[1722]: time="2025-09-16T04:58:16.128218741Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:58:16.131636 containerd[1722]: time="2025-09-16T04:58:16.131599402Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682064" Sep 16 04:58:16.134795 containerd[1722]: time="2025-09-16T04:58:16.134754028Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:58:16.138521 containerd[1722]: time="2025-09-16T04:58:16.138295059Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:58:16.139013 containerd[1722]: time="2025-09-16T04:58:16.138979854Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 2.885092085s" Sep 16 04:58:16.139055 containerd[1722]: time="2025-09-16T04:58:16.139021750Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Sep 16 04:58:18.965862 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 16 04:58:18.966038 systemd[1]: kubelet.service: Consumed 133ms CPU time, 110.4M memory peak. Sep 16 04:58:18.968082 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 16 04:58:18.990656 systemd[1]: Reload requested from client PID 2668 ('systemctl') (unit session-9.scope)... 
Sep 16 04:58:18.990669 systemd[1]: Reloading... Sep 16 04:58:19.059955 zram_generator::config[2711]: No configuration found. Sep 16 04:58:19.269662 systemd[1]: Reloading finished in 278 ms. Sep 16 04:58:19.370660 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 16 04:58:19.370750 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 16 04:58:19.371031 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 16 04:58:19.371092 systemd[1]: kubelet.service: Consumed 82ms CPU time, 83.2M memory peak. Sep 16 04:58:19.372545 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 16 04:58:19.997190 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 16 04:58:20.001045 (kubelet)[2782]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 16 04:58:20.043007 kubelet[2782]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 16 04:58:20.043007 kubelet[2782]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 16 04:58:20.043007 kubelet[2782]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 16 04:58:20.043007 kubelet[2782]: I0916 04:58:20.042172 2782 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 16 04:58:20.270042 kubelet[2782]: I0916 04:58:20.269725 2782 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 16 04:58:20.270042 kubelet[2782]: I0916 04:58:20.269760 2782 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 16 04:58:20.270270 kubelet[2782]: I0916 04:58:20.270212 2782 server.go:954] "Client rotation is on, will bootstrap in background" Sep 16 04:58:20.299971 kubelet[2782]: I0916 04:58:20.299941 2782 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 16 04:58:20.300232 kubelet[2782]: E0916 04:58:20.300205 2782 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.8.40:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.8.40:6443: connect: connection refused" logger="UnhandledError" Sep 16 04:58:20.308296 kubelet[2782]: I0916 04:58:20.308270 2782 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 16 04:58:20.310846 kubelet[2782]: I0916 04:58:20.310820 2782 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 16 04:58:20.311047 kubelet[2782]: I0916 04:58:20.311017 2782 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 16 04:58:20.311189 kubelet[2782]: I0916 04:58:20.311046 2782 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459.0.0-n-f9a9538521","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 16 04:58:20.311311 kubelet[2782]: I0916 04:58:20.311194 2782 topology_manager.go:138] "Creating topology manager 
with none policy" Sep 16 04:58:20.311311 kubelet[2782]: I0916 04:58:20.311204 2782 container_manager_linux.go:304] "Creating device plugin manager" Sep 16 04:58:20.311311 kubelet[2782]: I0916 04:58:20.311307 2782 state_mem.go:36] "Initialized new in-memory state store" Sep 16 04:58:20.314321 kubelet[2782]: I0916 04:58:20.314305 2782 kubelet.go:446] "Attempting to sync node with API server" Sep 16 04:58:20.314394 kubelet[2782]: I0916 04:58:20.314331 2782 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 16 04:58:20.314394 kubelet[2782]: I0916 04:58:20.314353 2782 kubelet.go:352] "Adding apiserver pod source" Sep 16 04:58:20.314394 kubelet[2782]: I0916 04:58:20.314364 2782 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 16 04:58:20.317428 kubelet[2782]: W0916 04:58:20.317373 2782 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.8.40:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.8.40:6443: connect: connection refused Sep 16 04:58:20.317500 kubelet[2782]: E0916 04:58:20.317443 2782 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.8.40:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.8.40:6443: connect: connection refused" logger="UnhandledError" Sep 16 04:58:20.317529 kubelet[2782]: W0916 04:58:20.317507 2782 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.8.40:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459.0.0-n-f9a9538521&limit=500&resourceVersion=0": dial tcp 10.200.8.40:6443: connect: connection refused Sep 16 04:58:20.317551 kubelet[2782]: E0916 04:58:20.317533 2782 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to 
watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.8.40:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459.0.0-n-f9a9538521&limit=500&resourceVersion=0\": dial tcp 10.200.8.40:6443: connect: connection refused" logger="UnhandledError" Sep 16 04:58:20.317891 kubelet[2782]: I0916 04:58:20.317875 2782 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 16 04:58:20.318276 kubelet[2782]: I0916 04:58:20.318264 2782 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 16 04:58:20.319899 kubelet[2782]: W0916 04:58:20.319873 2782 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 16 04:58:20.322358 kubelet[2782]: I0916 04:58:20.322337 2782 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 16 04:58:20.322416 kubelet[2782]: I0916 04:58:20.322368 2782 server.go:1287] "Started kubelet" Sep 16 04:58:20.324637 kubelet[2782]: I0916 04:58:20.324614 2782 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 16 04:58:20.327028 kubelet[2782]: I0916 04:58:20.326307 2782 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 16 04:58:20.327028 kubelet[2782]: I0916 04:58:20.326539 2782 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 16 04:58:20.329175 kubelet[2782]: I0916 04:58:20.329154 2782 server.go:479] "Adding debug handlers to kubelet server" Sep 16 04:58:20.332174 kubelet[2782]: I0916 04:58:20.332126 2782 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 16 04:58:20.332348 kubelet[2782]: I0916 04:58:20.332335 2782 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 16 04:58:20.332794 kubelet[2782]: I0916 
04:58:20.332770 2782 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 16 04:58:20.334009 kubelet[2782]: E0916 04:58:20.333775 2782 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.0.0-n-f9a9538521\" not found" Sep 16 04:58:20.338068 kubelet[2782]: I0916 04:58:20.338054 2782 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 16 04:58:20.338212 kubelet[2782]: I0916 04:58:20.338204 2782 reconciler.go:26] "Reconciler: start to sync state" Sep 16 04:58:20.338768 kubelet[2782]: E0916 04:58:20.338736 2782 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.40:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.0.0-n-f9a9538521?timeout=10s\": dial tcp 10.200.8.40:6443: connect: connection refused" interval="200ms" Sep 16 04:58:20.340839 kubelet[2782]: I0916 04:58:20.340807 2782 factory.go:221] Registration of the systemd container factory successfully Sep 16 04:58:20.340909 kubelet[2782]: I0916 04:58:20.340877 2782 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 16 04:58:20.342936 kubelet[2782]: I0916 04:58:20.342916 2782 factory.go:221] Registration of the containerd container factory successfully Sep 16 04:58:20.347922 kubelet[2782]: W0916 04:58:20.347894 2782 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.8.40:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.40:6443: connect: connection refused Sep 16 04:58:20.348055 kubelet[2782]: E0916 04:58:20.347925 2782 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get 
\"https://10.200.8.40:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.8.40:6443: connect: connection refused" logger="UnhandledError" Sep 16 04:58:20.348055 kubelet[2782]: E0916 04:58:20.348043 2782 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 16 04:58:20.349092 kubelet[2782]: E0916 04:58:20.348099 2782 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.8.40:6443/api/v1/namespaces/default/events\": dial tcp 10.200.8.40:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4459.0.0-n-f9a9538521.1865aa80f04fe194 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4459.0.0-n-f9a9538521,UID:ci-4459.0.0-n-f9a9538521,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4459.0.0-n-f9a9538521,},FirstTimestamp:2025-09-16 04:58:20.32234946 +0000 UTC m=+0.318033638,LastTimestamp:2025-09-16 04:58:20.32234946 +0000 UTC m=+0.318033638,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459.0.0-n-f9a9538521,}" Sep 16 04:58:20.369738 kubelet[2782]: I0916 04:58:20.369718 2782 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 16 04:58:20.369738 kubelet[2782]: I0916 04:58:20.369731 2782 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 16 04:58:20.369830 kubelet[2782]: I0916 04:58:20.369747 2782 state_mem.go:36] "Initialized new in-memory state store" Sep 16 04:58:20.375444 kubelet[2782]: I0916 04:58:20.375427 2782 policy_none.go:49] "None policy: Start" Sep 16 04:58:20.375444 kubelet[2782]: I0916 04:58:20.375445 2782 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 16 04:58:20.375520 kubelet[2782]: I0916 04:58:20.375455 
2782 state_mem.go:35] "Initializing new in-memory state store" Sep 16 04:58:20.383360 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 16 04:58:20.392785 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 16 04:58:20.395585 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 16 04:58:20.401715 kubelet[2782]: I0916 04:58:20.401662 2782 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 16 04:58:20.402095 kubelet[2782]: I0916 04:58:20.401905 2782 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 16 04:58:20.402095 kubelet[2782]: I0916 04:58:20.401917 2782 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 16 04:58:20.402184 kubelet[2782]: I0916 04:58:20.402151 2782 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 16 04:58:20.403301 kubelet[2782]: E0916 04:58:20.403288 2782 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 16 04:58:20.403431 kubelet[2782]: E0916 04:58:20.403401 2782 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4459.0.0-n-f9a9538521\" not found" Sep 16 04:58:20.407758 kubelet[2782]: I0916 04:58:20.407735 2782 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 16 04:58:20.409028 kubelet[2782]: I0916 04:58:20.409010 2782 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 16 04:58:20.409028 kubelet[2782]: I0916 04:58:20.409031 2782 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 16 04:58:20.409197 kubelet[2782]: I0916 04:58:20.409048 2782 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 16 04:58:20.409197 kubelet[2782]: I0916 04:58:20.409055 2782 kubelet.go:2382] "Starting kubelet main sync loop" Sep 16 04:58:20.409197 kubelet[2782]: E0916 04:58:20.409093 2782 kubelet.go:2406] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Sep 16 04:58:20.410322 kubelet[2782]: W0916 04:58:20.410303 2782 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.8.40:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.40:6443: connect: connection refused Sep 16 04:58:20.410693 kubelet[2782]: E0916 04:58:20.410670 2782 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.8.40:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.8.40:6443: connect: connection refused" logger="UnhandledError" Sep 16 04:58:20.499719 kubelet[2782]: E0916 04:58:20.499611 2782 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.8.40:6443/api/v1/namespaces/default/events\": dial tcp 10.200.8.40:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4459.0.0-n-f9a9538521.1865aa80f04fe194 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4459.0.0-n-f9a9538521,UID:ci-4459.0.0-n-f9a9538521,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting 
kubelet.,Source:EventSource{Component:kubelet,Host:ci-4459.0.0-n-f9a9538521,},FirstTimestamp:2025-09-16 04:58:20.32234946 +0000 UTC m=+0.318033638,LastTimestamp:2025-09-16 04:58:20.32234946 +0000 UTC m=+0.318033638,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459.0.0-n-f9a9538521,}" Sep 16 04:58:20.504048 kubelet[2782]: I0916 04:58:20.504019 2782 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.0.0-n-f9a9538521" Sep 16 04:58:20.504325 kubelet[2782]: E0916 04:58:20.504307 2782 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.40:6443/api/v1/nodes\": dial tcp 10.200.8.40:6443: connect: connection refused" node="ci-4459.0.0-n-f9a9538521" Sep 16 04:58:20.517533 systemd[1]: Created slice kubepods-burstable-poddaf639147ce72a73ea9567824d9612bf.slice - libcontainer container kubepods-burstable-poddaf639147ce72a73ea9567824d9612bf.slice. Sep 16 04:58:20.525666 kubelet[2782]: E0916 04:58:20.525587 2782 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.0.0-n-f9a9538521\" not found" node="ci-4459.0.0-n-f9a9538521" Sep 16 04:58:20.528735 systemd[1]: Created slice kubepods-burstable-podac0285fb5bfbf4ac6e3e334048df951b.slice - libcontainer container kubepods-burstable-podac0285fb5bfbf4ac6e3e334048df951b.slice. Sep 16 04:58:20.535961 kubelet[2782]: E0916 04:58:20.535942 2782 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.0.0-n-f9a9538521\" not found" node="ci-4459.0.0-n-f9a9538521" Sep 16 04:58:20.538227 systemd[1]: Created slice kubepods-burstable-pod58b2ed02d0cbfdbd8069b67108bd1e8d.slice - libcontainer container kubepods-burstable-pod58b2ed02d0cbfdbd8069b67108bd1e8d.slice. 
Sep 16 04:58:20.539278 kubelet[2782]: I0916 04:58:20.539261 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/daf639147ce72a73ea9567824d9612bf-ca-certs\") pod \"kube-controller-manager-ci-4459.0.0-n-f9a9538521\" (UID: \"daf639147ce72a73ea9567824d9612bf\") " pod="kube-system/kube-controller-manager-ci-4459.0.0-n-f9a9538521" Sep 16 04:58:20.539747 kubelet[2782]: I0916 04:58:20.539393 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/daf639147ce72a73ea9567824d9612bf-k8s-certs\") pod \"kube-controller-manager-ci-4459.0.0-n-f9a9538521\" (UID: \"daf639147ce72a73ea9567824d9612bf\") " pod="kube-system/kube-controller-manager-ci-4459.0.0-n-f9a9538521" Sep 16 04:58:20.539747 kubelet[2782]: I0916 04:58:20.539433 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/daf639147ce72a73ea9567824d9612bf-kubeconfig\") pod \"kube-controller-manager-ci-4459.0.0-n-f9a9538521\" (UID: \"daf639147ce72a73ea9567824d9612bf\") " pod="kube-system/kube-controller-manager-ci-4459.0.0-n-f9a9538521" Sep 16 04:58:20.539747 kubelet[2782]: I0916 04:58:20.539468 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/58b2ed02d0cbfdbd8069b67108bd1e8d-k8s-certs\") pod \"kube-apiserver-ci-4459.0.0-n-f9a9538521\" (UID: \"58b2ed02d0cbfdbd8069b67108bd1e8d\") " pod="kube-system/kube-apiserver-ci-4459.0.0-n-f9a9538521" Sep 16 04:58:20.539747 kubelet[2782]: I0916 04:58:20.539488 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/58b2ed02d0cbfdbd8069b67108bd1e8d-usr-share-ca-certificates\") pod 
\"kube-apiserver-ci-4459.0.0-n-f9a9538521\" (UID: \"58b2ed02d0cbfdbd8069b67108bd1e8d\") " pod="kube-system/kube-apiserver-ci-4459.0.0-n-f9a9538521" Sep 16 04:58:20.539747 kubelet[2782]: I0916 04:58:20.539514 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/daf639147ce72a73ea9567824d9612bf-flexvolume-dir\") pod \"kube-controller-manager-ci-4459.0.0-n-f9a9538521\" (UID: \"daf639147ce72a73ea9567824d9612bf\") " pod="kube-system/kube-controller-manager-ci-4459.0.0-n-f9a9538521" Sep 16 04:58:20.539907 kubelet[2782]: I0916 04:58:20.539532 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/daf639147ce72a73ea9567824d9612bf-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459.0.0-n-f9a9538521\" (UID: \"daf639147ce72a73ea9567824d9612bf\") " pod="kube-system/kube-controller-manager-ci-4459.0.0-n-f9a9538521" Sep 16 04:58:20.539907 kubelet[2782]: I0916 04:58:20.539551 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ac0285fb5bfbf4ac6e3e334048df951b-kubeconfig\") pod \"kube-scheduler-ci-4459.0.0-n-f9a9538521\" (UID: \"ac0285fb5bfbf4ac6e3e334048df951b\") " pod="kube-system/kube-scheduler-ci-4459.0.0-n-f9a9538521" Sep 16 04:58:20.539907 kubelet[2782]: I0916 04:58:20.539568 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/58b2ed02d0cbfdbd8069b67108bd1e8d-ca-certs\") pod \"kube-apiserver-ci-4459.0.0-n-f9a9538521\" (UID: \"58b2ed02d0cbfdbd8069b67108bd1e8d\") " pod="kube-system/kube-apiserver-ci-4459.0.0-n-f9a9538521" Sep 16 04:58:20.539907 kubelet[2782]: E0916 04:58:20.539719 2782 controller.go:145] "Failed to ensure lease exists, will retry" 
err="Get \"https://10.200.8.40:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.0.0-n-f9a9538521?timeout=10s\": dial tcp 10.200.8.40:6443: connect: connection refused" interval="400ms" Sep 16 04:58:20.540691 kubelet[2782]: E0916 04:58:20.540658 2782 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.0.0-n-f9a9538521\" not found" node="ci-4459.0.0-n-f9a9538521" Sep 16 04:58:20.706774 kubelet[2782]: I0916 04:58:20.706742 2782 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.0.0-n-f9a9538521" Sep 16 04:58:20.707137 kubelet[2782]: E0916 04:58:20.707117 2782 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.40:6443/api/v1/nodes\": dial tcp 10.200.8.40:6443: connect: connection refused" node="ci-4459.0.0-n-f9a9538521" Sep 16 04:58:20.828068 containerd[1722]: time="2025-09-16T04:58:20.827941236Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459.0.0-n-f9a9538521,Uid:daf639147ce72a73ea9567824d9612bf,Namespace:kube-system,Attempt:0,}" Sep 16 04:58:20.837578 containerd[1722]: time="2025-09-16T04:58:20.837532505Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459.0.0-n-f9a9538521,Uid:ac0285fb5bfbf4ac6e3e334048df951b,Namespace:kube-system,Attempt:0,}" Sep 16 04:58:20.841339 containerd[1722]: time="2025-09-16T04:58:20.841310205Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459.0.0-n-f9a9538521,Uid:58b2ed02d0cbfdbd8069b67108bd1e8d,Namespace:kube-system,Attempt:0,}" Sep 16 04:58:20.895704 containerd[1722]: time="2025-09-16T04:58:20.894944631Z" level=info msg="connecting to shim e4e8427ded45c2e0660a9ef6d7adb714c8a137c44e828419ce56566061205525" address="unix:///run/containerd/s/91b0f8cc42a9e74e37c60e5913aa048efda57a84400a8b3ce9e3d0e2e0be3724" namespace=k8s.io protocol=ttrpc version=3 Sep 16 04:58:20.919231 
systemd[1]: Started cri-containerd-e4e8427ded45c2e0660a9ef6d7adb714c8a137c44e828419ce56566061205525.scope - libcontainer container e4e8427ded45c2e0660a9ef6d7adb714c8a137c44e828419ce56566061205525. Sep 16 04:58:20.940138 containerd[1722]: time="2025-09-16T04:58:20.940093645Z" level=info msg="connecting to shim b128c2cd82f7b5134c7e17dfa90016950f5d6acf02da373f668c9c0afd90536e" address="unix:///run/containerd/s/d86925ba155f1eb0a12fc3ad1fb2fb779b1aa5e12ba3215e40da98c0ee0f3d43" namespace=k8s.io protocol=ttrpc version=3 Sep 16 04:58:20.941061 kubelet[2782]: E0916 04:58:20.940964 2782 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.40:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.0.0-n-f9a9538521?timeout=10s\": dial tcp 10.200.8.40:6443: connect: connection refused" interval="800ms" Sep 16 04:58:20.961871 containerd[1722]: time="2025-09-16T04:58:20.961837268Z" level=info msg="connecting to shim f4fbbe348ef836a834d01cebef3f5645542047d9f3497e84bb078e62f050409d" address="unix:///run/containerd/s/713d46e51b16809d58d7508d0d328cd0cef61f705b3a6856d360d88851b25144" namespace=k8s.io protocol=ttrpc version=3 Sep 16 04:58:20.964224 systemd[1]: Started cri-containerd-b128c2cd82f7b5134c7e17dfa90016950f5d6acf02da373f668c9c0afd90536e.scope - libcontainer container b128c2cd82f7b5134c7e17dfa90016950f5d6acf02da373f668c9c0afd90536e. Sep 16 04:58:20.990295 systemd[1]: Started cri-containerd-f4fbbe348ef836a834d01cebef3f5645542047d9f3497e84bb078e62f050409d.scope - libcontainer container f4fbbe348ef836a834d01cebef3f5645542047d9f3497e84bb078e62f050409d. 
Sep 16 04:58:21.012380 containerd[1722]: time="2025-09-16T04:58:21.012353247Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459.0.0-n-f9a9538521,Uid:daf639147ce72a73ea9567824d9612bf,Namespace:kube-system,Attempt:0,} returns sandbox id \"e4e8427ded45c2e0660a9ef6d7adb714c8a137c44e828419ce56566061205525\"" Sep 16 04:58:21.015864 containerd[1722]: time="2025-09-16T04:58:21.015683974Z" level=info msg="CreateContainer within sandbox \"e4e8427ded45c2e0660a9ef6d7adb714c8a137c44e828419ce56566061205525\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 16 04:58:21.046588 containerd[1722]: time="2025-09-16T04:58:21.045092795Z" level=info msg="Container c2042ad5aa0e78d243d0c904bfbca3f5e3489f21b0a169a57e770f6749582fae: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:58:21.047217 containerd[1722]: time="2025-09-16T04:58:21.047188351Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459.0.0-n-f9a9538521,Uid:58b2ed02d0cbfdbd8069b67108bd1e8d,Namespace:kube-system,Attempt:0,} returns sandbox id \"b128c2cd82f7b5134c7e17dfa90016950f5d6acf02da373f668c9c0afd90536e\"" Sep 16 04:58:21.051122 containerd[1722]: time="2025-09-16T04:58:21.051095285Z" level=info msg="CreateContainer within sandbox \"b128c2cd82f7b5134c7e17dfa90016950f5d6acf02da373f668c9c0afd90536e\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 16 04:58:21.067156 containerd[1722]: time="2025-09-16T04:58:21.067130037Z" level=info msg="CreateContainer within sandbox \"e4e8427ded45c2e0660a9ef6d7adb714c8a137c44e828419ce56566061205525\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"c2042ad5aa0e78d243d0c904bfbca3f5e3489f21b0a169a57e770f6749582fae\"" Sep 16 04:58:21.067607 containerd[1722]: time="2025-09-16T04:58:21.067588427Z" level=info msg="StartContainer for \"c2042ad5aa0e78d243d0c904bfbca3f5e3489f21b0a169a57e770f6749582fae\"" Sep 16 04:58:21.068589 
containerd[1722]: time="2025-09-16T04:58:21.068566358Z" level=info msg="connecting to shim c2042ad5aa0e78d243d0c904bfbca3f5e3489f21b0a169a57e770f6749582fae" address="unix:///run/containerd/s/91b0f8cc42a9e74e37c60e5913aa048efda57a84400a8b3ce9e3d0e2e0be3724" protocol=ttrpc version=3 Sep 16 04:58:21.069940 containerd[1722]: time="2025-09-16T04:58:21.069916857Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459.0.0-n-f9a9538521,Uid:ac0285fb5bfbf4ac6e3e334048df951b,Namespace:kube-system,Attempt:0,} returns sandbox id \"f4fbbe348ef836a834d01cebef3f5645542047d9f3497e84bb078e62f050409d\"" Sep 16 04:58:21.072627 containerd[1722]: time="2025-09-16T04:58:21.072221378Z" level=info msg="CreateContainer within sandbox \"f4fbbe348ef836a834d01cebef3f5645542047d9f3497e84bb078e62f050409d\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 16 04:58:21.086113 systemd[1]: Started cri-containerd-c2042ad5aa0e78d243d0c904bfbca3f5e3489f21b0a169a57e770f6749582fae.scope - libcontainer container c2042ad5aa0e78d243d0c904bfbca3f5e3489f21b0a169a57e770f6749582fae. 
Sep 16 04:58:21.108669 kubelet[2782]: I0916 04:58:21.108347 2782 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.0.0-n-f9a9538521"
Sep 16 04:58:21.108669 kubelet[2782]: E0916 04:58:21.108649 2782 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.40:6443/api/v1/nodes\": dial tcp 10.200.8.40:6443: connect: connection refused" node="ci-4459.0.0-n-f9a9538521"
Sep 16 04:58:21.119401 containerd[1722]: time="2025-09-16T04:58:21.119348222Z" level=info msg="Container 9544b33157161795eda08e521167f4e2798dc2abd1080d312ddf84993917c023: CDI devices from CRI Config.CDIDevices: []"
Sep 16 04:58:21.125431 containerd[1722]: time="2025-09-16T04:58:21.125337858Z" level=info msg="Container 75f0df0bc4bfc9c3e450616fb0bc7f96d6a051952b168adf236b94bb29cd371e: CDI devices from CRI Config.CDIDevices: []"
Sep 16 04:58:21.134848 containerd[1722]: time="2025-09-16T04:58:21.134787470Z" level=info msg="StartContainer for \"c2042ad5aa0e78d243d0c904bfbca3f5e3489f21b0a169a57e770f6749582fae\" returns successfully"
Sep 16 04:58:21.147256 containerd[1722]: time="2025-09-16T04:58:21.147185840Z" level=info msg="CreateContainer within sandbox \"b128c2cd82f7b5134c7e17dfa90016950f5d6acf02da373f668c9c0afd90536e\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"9544b33157161795eda08e521167f4e2798dc2abd1080d312ddf84993917c023\""
Sep 16 04:58:21.147664 containerd[1722]: time="2025-09-16T04:58:21.147572941Z" level=info msg="StartContainer for \"9544b33157161795eda08e521167f4e2798dc2abd1080d312ddf84993917c023\""
Sep 16 04:58:21.149095 containerd[1722]: time="2025-09-16T04:58:21.149070701Z" level=info msg="connecting to shim 9544b33157161795eda08e521167f4e2798dc2abd1080d312ddf84993917c023" address="unix:///run/containerd/s/d86925ba155f1eb0a12fc3ad1fb2fb779b1aa5e12ba3215e40da98c0ee0f3d43" protocol=ttrpc version=3
Sep 16 04:58:21.153920 containerd[1722]: time="2025-09-16T04:58:21.153885121Z" level=info msg="CreateContainer within sandbox \"f4fbbe348ef836a834d01cebef3f5645542047d9f3497e84bb078e62f050409d\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"75f0df0bc4bfc9c3e450616fb0bc7f96d6a051952b168adf236b94bb29cd371e\""
Sep 16 04:58:21.154261 containerd[1722]: time="2025-09-16T04:58:21.154242920Z" level=info msg="StartContainer for \"75f0df0bc4bfc9c3e450616fb0bc7f96d6a051952b168adf236b94bb29cd371e\""
Sep 16 04:58:21.156176 containerd[1722]: time="2025-09-16T04:58:21.156143486Z" level=info msg="connecting to shim 75f0df0bc4bfc9c3e450616fb0bc7f96d6a051952b168adf236b94bb29cd371e" address="unix:///run/containerd/s/713d46e51b16809d58d7508d0d328cd0cef61f705b3a6856d360d88851b25144" protocol=ttrpc version=3
Sep 16 04:58:21.170282 systemd[1]: Started cri-containerd-9544b33157161795eda08e521167f4e2798dc2abd1080d312ddf84993917c023.scope - libcontainer container 9544b33157161795eda08e521167f4e2798dc2abd1080d312ddf84993917c023.
Sep 16 04:58:21.195236 systemd[1]: Started cri-containerd-75f0df0bc4bfc9c3e450616fb0bc7f96d6a051952b168adf236b94bb29cd371e.scope - libcontainer container 75f0df0bc4bfc9c3e450616fb0bc7f96d6a051952b168adf236b94bb29cd371e.
Sep 16 04:58:21.231261 kubelet[2782]: W0916 04:58:21.231175 2782 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.8.40:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.8.40:6443: connect: connection refused
Sep 16 04:58:21.231261 kubelet[2782]: E0916 04:58:21.231238 2782 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.8.40:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.8.40:6443: connect: connection refused" logger="UnhandledError"
Sep 16 04:58:21.273180 containerd[1722]: time="2025-09-16T04:58:21.273138767Z" level=info msg="StartContainer for \"9544b33157161795eda08e521167f4e2798dc2abd1080d312ddf84993917c023\" returns successfully"
Sep 16 04:58:21.357217 containerd[1722]: time="2025-09-16T04:58:21.357133928Z" level=info msg="StartContainer for \"75f0df0bc4bfc9c3e450616fb0bc7f96d6a051952b168adf236b94bb29cd371e\" returns successfully"
Sep 16 04:58:21.419867 kubelet[2782]: E0916 04:58:21.419844 2782 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.0.0-n-f9a9538521\" not found" node="ci-4459.0.0-n-f9a9538521"
Sep 16 04:58:21.423894 kubelet[2782]: E0916 04:58:21.423874 2782 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.0.0-n-f9a9538521\" not found" node="ci-4459.0.0-n-f9a9538521"
Sep 16 04:58:21.426793 kubelet[2782]: E0916 04:58:21.426774 2782 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.0.0-n-f9a9538521\" not found" node="ci-4459.0.0-n-f9a9538521"
Sep 16 04:58:21.911467 kubelet[2782]: I0916 04:58:21.911435 2782 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.0.0-n-f9a9538521"
Sep 16 04:58:22.429661 kubelet[2782]: E0916 04:58:22.429629 2782 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.0.0-n-f9a9538521\" not found" node="ci-4459.0.0-n-f9a9538521"
Sep 16 04:58:22.430367 kubelet[2782]: E0916 04:58:22.430349 2782 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.0.0-n-f9a9538521\" not found" node="ci-4459.0.0-n-f9a9538521"
Sep 16 04:58:22.958194 kubelet[2782]: E0916 04:58:22.958151 2782 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4459.0.0-n-f9a9538521\" not found" node="ci-4459.0.0-n-f9a9538521"
Sep 16 04:58:23.058146 kubelet[2782]: I0916 04:58:23.057948 2782 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459.0.0-n-f9a9538521"
Sep 16 04:58:23.058146 kubelet[2782]: E0916 04:58:23.057982 2782 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4459.0.0-n-f9a9538521\": node \"ci-4459.0.0-n-f9a9538521\" not found"
Sep 16 04:58:23.134553 kubelet[2782]: I0916 04:58:23.134512 2782 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459.0.0-n-f9a9538521"
Sep 16 04:58:23.138505 kubelet[2782]: E0916 04:58:23.138467 2782 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4459.0.0-n-f9a9538521\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4459.0.0-n-f9a9538521"
Sep 16 04:58:23.138505 kubelet[2782]: I0916 04:58:23.138491 2782 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459.0.0-n-f9a9538521"
Sep 16 04:58:23.141783 kubelet[2782]: E0916 04:58:23.141378 2782 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459.0.0-n-f9a9538521\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4459.0.0-n-f9a9538521"
Sep 16 04:58:23.141783 kubelet[2782]: I0916 04:58:23.141415 2782 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.0.0-n-f9a9538521"
Sep 16 04:58:23.143261 kubelet[2782]: E0916 04:58:23.143210 2782 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459.0.0-n-f9a9538521\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4459.0.0-n-f9a9538521"
Sep 16 04:58:23.318608 kubelet[2782]: I0916 04:58:23.318515 2782 apiserver.go:52] "Watching apiserver"
Sep 16 04:58:23.338617 kubelet[2782]: I0916 04:58:23.338588 2782 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Sep 16 04:58:23.428975 kubelet[2782]: I0916 04:58:23.428952 2782 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459.0.0-n-f9a9538521"
Sep 16 04:58:23.430796 kubelet[2782]: E0916 04:58:23.430766 2782 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459.0.0-n-f9a9538521\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4459.0.0-n-f9a9538521"
Sep 16 04:58:23.675440 kubelet[2782]: I0916 04:58:23.675403 2782 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.0.0-n-f9a9538521"
Sep 16 04:58:23.677352 kubelet[2782]: E0916 04:58:23.677318 2782 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459.0.0-n-f9a9538521\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4459.0.0-n-f9a9538521"
Sep 16 04:58:24.953937 systemd[1]: Reload requested from client PID 3054 ('systemctl') (unit session-9.scope)...
Sep 16 04:58:24.953951 systemd[1]: Reloading...
Sep 16 04:58:25.015017 zram_generator::config[3097]: No configuration found.
Sep 16 04:58:25.227731 systemd[1]: Reloading finished in 273 ms.
Sep 16 04:58:25.264064 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 16 04:58:25.279138 systemd[1]: kubelet.service: Deactivated successfully.
Sep 16 04:58:25.279361 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 16 04:58:25.279412 systemd[1]: kubelet.service: Consumed 632ms CPU time, 128.7M memory peak.
Sep 16 04:58:25.280843 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 16 04:58:25.685110 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 16 04:58:25.695256 (kubelet)[3168]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 16 04:58:25.740725 kubelet[3168]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 16 04:58:25.740725 kubelet[3168]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Sep 16 04:58:25.740725 kubelet[3168]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 16 04:58:25.740725 kubelet[3168]: I0916 04:58:25.740646 3168 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 16 04:58:25.748146 kubelet[3168]: I0916 04:58:25.748121 3168 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Sep 16 04:58:25.749021 kubelet[3168]: I0916 04:58:25.748255 3168 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 16 04:58:25.749021 kubelet[3168]: I0916 04:58:25.748810 3168 server.go:954] "Client rotation is on, will bootstrap in background"
Sep 16 04:58:25.750190 kubelet[3168]: I0916 04:58:25.750170 3168 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Sep 16 04:58:25.753752 kubelet[3168]: I0916 04:58:25.753724 3168 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 16 04:58:25.756932 kubelet[3168]: I0916 04:58:25.756919 3168 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Sep 16 04:58:25.760181 kubelet[3168]: I0916 04:58:25.760128 3168 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 16 04:58:25.760749 kubelet[3168]: I0916 04:58:25.760463 3168 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 16 04:58:25.760749 kubelet[3168]: I0916 04:58:25.760491 3168 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459.0.0-n-f9a9538521","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 16 04:58:25.760749 kubelet[3168]: I0916 04:58:25.760669 3168 topology_manager.go:138] "Creating topology manager with none policy"
Sep 16 04:58:25.760749 kubelet[3168]: I0916 04:58:25.760677 3168 container_manager_linux.go:304] "Creating device plugin manager"
Sep 16 04:58:25.760910 kubelet[3168]: I0916 04:58:25.760726 3168 state_mem.go:36] "Initialized new in-memory state store"
Sep 16 04:58:25.761047 kubelet[3168]: I0916 04:58:25.761041 3168 kubelet.go:446] "Attempting to sync node with API server"
Sep 16 04:58:25.761765 kubelet[3168]: I0916 04:58:25.761753 3168 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 16 04:58:25.761857 kubelet[3168]: I0916 04:58:25.761851 3168 kubelet.go:352] "Adding apiserver pod source"
Sep 16 04:58:25.761900 kubelet[3168]: I0916 04:58:25.761895 3168 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 16 04:58:25.769754 kubelet[3168]: I0916 04:58:25.768185 3168 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1"
Sep 16 04:58:25.769754 kubelet[3168]: I0916 04:58:25.768541 3168 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Sep 16 04:58:25.769754 kubelet[3168]: I0916 04:58:25.768971 3168 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Sep 16 04:58:25.771572 kubelet[3168]: I0916 04:58:25.771558 3168 server.go:1287] "Started kubelet"
Sep 16 04:58:25.774409 kubelet[3168]: I0916 04:58:25.774396 3168 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 16 04:58:25.783075 kubelet[3168]: I0916 04:58:25.782686 3168 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Sep 16 04:58:25.785057 kubelet[3168]: I0916 04:58:25.785042 3168 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Sep 16 04:58:25.785147 kubelet[3168]: I0916 04:58:25.785078 3168 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 16 04:58:25.785359 kubelet[3168]: I0916 04:58:25.785349 3168 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 16 04:58:25.785429 kubelet[3168]: I0916 04:58:25.785352 3168 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 16 04:58:25.787969 kubelet[3168]: I0916 04:58:25.787953 3168 server.go:479] "Adding debug handlers to kubelet server"
Sep 16 04:58:25.789052 kubelet[3168]: I0916 04:58:25.788970 3168 volume_manager.go:297] "Starting Kubelet Volume Manager"
Sep 16 04:58:25.790141 kubelet[3168]: I0916 04:58:25.790128 3168 reconciler.go:26] "Reconciler: start to sync state"
Sep 16 04:58:25.790865 kubelet[3168]: I0916 04:58:25.790821 3168 factory.go:221] Registration of the systemd container factory successfully
Sep 16 04:58:25.791019 kubelet[3168]: I0916 04:58:25.790977 3168 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 16 04:58:25.791582 kubelet[3168]: I0916 04:58:25.791557 3168 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Sep 16 04:58:25.792687 kubelet[3168]: I0916 04:58:25.792653 3168 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Sep 16 04:58:25.792757 kubelet[3168]: I0916 04:58:25.792698 3168 status_manager.go:227] "Starting to sync pod status with apiserver"
Sep 16 04:58:25.792757 kubelet[3168]: I0916 04:58:25.792715 3168 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Sep 16 04:58:25.792757 kubelet[3168]: I0916 04:58:25.792721 3168 kubelet.go:2382] "Starting kubelet main sync loop"
Sep 16 04:58:25.792831 kubelet[3168]: E0916 04:58:25.792774 3168 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 16 04:58:25.797249 kubelet[3168]: E0916 04:58:25.797226 3168 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 16 04:58:25.798979 kubelet[3168]: I0916 04:58:25.798871 3168 factory.go:221] Registration of the containerd container factory successfully
Sep 16 04:58:25.841695 kubelet[3168]: I0916 04:58:25.841681 3168 cpu_manager.go:221] "Starting CPU manager" policy="none"
Sep 16 04:58:25.842011 kubelet[3168]: I0916 04:58:25.841798 3168 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Sep 16 04:58:25.842011 kubelet[3168]: I0916 04:58:25.841813 3168 state_mem.go:36] "Initialized new in-memory state store"
Sep 16 04:58:25.842011 kubelet[3168]: I0916 04:58:25.841923 3168 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Sep 16 04:58:25.842011 kubelet[3168]: I0916 04:58:25.841929 3168 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Sep 16 04:58:25.842011 kubelet[3168]: I0916 04:58:25.841942 3168 policy_none.go:49] "None policy: Start"
Sep 16 04:58:25.842011 kubelet[3168]: I0916 04:58:25.841949 3168 memory_manager.go:186] "Starting memorymanager" policy="None"
Sep 16 04:58:25.842011 kubelet[3168]: I0916 04:58:25.841955 3168 state_mem.go:35] "Initializing new in-memory state store"
Sep 16 04:58:25.842219 kubelet[3168]: I0916 04:58:25.842212 3168 state_mem.go:75] "Updated machine memory state"
Sep 16 04:58:25.845143 kubelet[3168]: I0916 04:58:25.845129 3168 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Sep 16 04:58:25.845581 kubelet[3168]: I0916 04:58:25.845571 3168 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 16 04:58:25.845669 kubelet[3168]: I0916 04:58:25.845645 3168 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 16 04:58:25.845864 kubelet[3168]: I0916 04:58:25.845854 3168 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 16 04:58:25.850001 kubelet[3168]: E0916 04:58:25.848378 3168 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Sep 16 04:58:25.894175 kubelet[3168]: I0916 04:58:25.894153 3168 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459.0.0-n-f9a9538521"
Sep 16 04:58:25.895840 kubelet[3168]: I0916 04:58:25.895818 3168 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459.0.0-n-f9a9538521"
Sep 16 04:58:25.896526 kubelet[3168]: I0916 04:58:25.896172 3168 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.0.0-n-f9a9538521"
Sep 16 04:58:25.900971 kubelet[3168]: W0916 04:58:25.900953 3168 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Sep 16 04:58:25.905251 kubelet[3168]: W0916 04:58:25.905179 3168 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Sep 16 04:58:25.905356 kubelet[3168]: W0916 04:58:25.905339 3168 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Sep 16 04:58:25.953008 kubelet[3168]: I0916 04:58:25.952836 3168 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.0.0-n-f9a9538521"
Sep 16 04:58:25.964111 kubelet[3168]: I0916 04:58:25.964039 3168 kubelet_node_status.go:124] "Node was previously registered" node="ci-4459.0.0-n-f9a9538521"
Sep 16 04:58:25.964245 kubelet[3168]: I0916 04:58:25.964095 3168 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459.0.0-n-f9a9538521"
Sep 16 04:58:25.991285 kubelet[3168]: I0916 04:58:25.991185 3168 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ac0285fb5bfbf4ac6e3e334048df951b-kubeconfig\") pod \"kube-scheduler-ci-4459.0.0-n-f9a9538521\" (UID: \"ac0285fb5bfbf4ac6e3e334048df951b\") " pod="kube-system/kube-scheduler-ci-4459.0.0-n-f9a9538521"
Sep 16 04:58:25.991285 kubelet[3168]: I0916 04:58:25.991220 3168 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/58b2ed02d0cbfdbd8069b67108bd1e8d-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459.0.0-n-f9a9538521\" (UID: \"58b2ed02d0cbfdbd8069b67108bd1e8d\") " pod="kube-system/kube-apiserver-ci-4459.0.0-n-f9a9538521"
Sep 16 04:58:25.991285 kubelet[3168]: I0916 04:58:25.991249 3168 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/daf639147ce72a73ea9567824d9612bf-k8s-certs\") pod \"kube-controller-manager-ci-4459.0.0-n-f9a9538521\" (UID: \"daf639147ce72a73ea9567824d9612bf\") " pod="kube-system/kube-controller-manager-ci-4459.0.0-n-f9a9538521"
Sep 16 04:58:25.991285 kubelet[3168]: I0916 04:58:25.991269 3168 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/58b2ed02d0cbfdbd8069b67108bd1e8d-ca-certs\") pod \"kube-apiserver-ci-4459.0.0-n-f9a9538521\" (UID: \"58b2ed02d0cbfdbd8069b67108bd1e8d\") " pod="kube-system/kube-apiserver-ci-4459.0.0-n-f9a9538521"
Sep 16 04:58:25.991285 kubelet[3168]: I0916 04:58:25.991289 3168 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/58b2ed02d0cbfdbd8069b67108bd1e8d-k8s-certs\") pod \"kube-apiserver-ci-4459.0.0-n-f9a9538521\" (UID: \"58b2ed02d0cbfdbd8069b67108bd1e8d\") " pod="kube-system/kube-apiserver-ci-4459.0.0-n-f9a9538521"
Sep 16 04:58:25.991468 kubelet[3168]: I0916 04:58:25.991307 3168 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/daf639147ce72a73ea9567824d9612bf-ca-certs\") pod \"kube-controller-manager-ci-4459.0.0-n-f9a9538521\" (UID: \"daf639147ce72a73ea9567824d9612bf\") " pod="kube-system/kube-controller-manager-ci-4459.0.0-n-f9a9538521"
Sep 16 04:58:25.991468 kubelet[3168]: I0916 04:58:25.991324 3168 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/daf639147ce72a73ea9567824d9612bf-flexvolume-dir\") pod \"kube-controller-manager-ci-4459.0.0-n-f9a9538521\" (UID: \"daf639147ce72a73ea9567824d9612bf\") " pod="kube-system/kube-controller-manager-ci-4459.0.0-n-f9a9538521"
Sep 16 04:58:25.991468 kubelet[3168]: I0916 04:58:25.991344 3168 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/daf639147ce72a73ea9567824d9612bf-kubeconfig\") pod \"kube-controller-manager-ci-4459.0.0-n-f9a9538521\" (UID: \"daf639147ce72a73ea9567824d9612bf\") " pod="kube-system/kube-controller-manager-ci-4459.0.0-n-f9a9538521"
Sep 16 04:58:25.991468 kubelet[3168]: I0916 04:58:25.991363 3168 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/daf639147ce72a73ea9567824d9612bf-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459.0.0-n-f9a9538521\" (UID: \"daf639147ce72a73ea9567824d9612bf\") " pod="kube-system/kube-controller-manager-ci-4459.0.0-n-f9a9538521"
Sep 16 04:58:26.763774 kubelet[3168]: I0916 04:58:26.763535 3168 apiserver.go:52] "Watching apiserver"
Sep 16 04:58:26.785921 kubelet[3168]: I0916 04:58:26.785879 3168 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Sep 16 04:58:26.826344 kubelet[3168]: I0916 04:58:26.826319 3168 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.0.0-n-f9a9538521"
Sep 16 04:58:26.827263 kubelet[3168]: I0916 04:58:26.827239 3168 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459.0.0-n-f9a9538521"
Sep 16 04:58:26.837115 kubelet[3168]: W0916 04:58:26.837092 3168 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Sep 16 04:58:26.837215 kubelet[3168]: E0916 04:58:26.837174 3168 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459.0.0-n-f9a9538521\" already exists" pod="kube-system/kube-apiserver-ci-4459.0.0-n-f9a9538521"
Sep 16 04:58:26.840228 kubelet[3168]: W0916 04:58:26.840206 3168 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Sep 16 04:58:26.840304 kubelet[3168]: E0916 04:58:26.840274 3168 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459.0.0-n-f9a9538521\" already exists" pod="kube-system/kube-scheduler-ci-4459.0.0-n-f9a9538521"
Sep 16 04:58:26.852430 kubelet[3168]: I0916 04:58:26.852384 3168 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4459.0.0-n-f9a9538521" podStartSLOduration=1.852373558 podStartE2EDuration="1.852373558s" podCreationTimestamp="2025-09-16 04:58:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-16 04:58:26.85215741 +0000 UTC m=+1.153338540" watchObservedRunningTime="2025-09-16 04:58:26.852373558 +0000 UTC m=+1.153554672"
Sep 16 04:58:26.873397 kubelet[3168]: I0916 04:58:26.873288 3168 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4459.0.0-n-f9a9538521" podStartSLOduration=1.873271984 podStartE2EDuration="1.873271984s" podCreationTimestamp="2025-09-16 04:58:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-16 04:58:26.863829099 +0000 UTC m=+1.165010233" watchObservedRunningTime="2025-09-16 04:58:26.873271984 +0000 UTC m=+1.174453113"
Sep 16 04:58:26.885429 kubelet[3168]: I0916 04:58:26.885378 3168 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4459.0.0-n-f9a9538521" podStartSLOduration=1.885363202 podStartE2EDuration="1.885363202s" podCreationTimestamp="2025-09-16 04:58:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-16 04:58:26.873595641 +0000 UTC m=+1.174776775" watchObservedRunningTime="2025-09-16 04:58:26.885363202 +0000 UTC m=+1.186544332"
Sep 16 04:58:30.702442 kubelet[3168]: I0916 04:58:30.702411 3168 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Sep 16 04:58:30.702982 containerd[1722]: time="2025-09-16T04:58:30.702944729Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Sep 16 04:58:30.703266 kubelet[3168]: I0916 04:58:30.703203 3168 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Sep 16 04:58:31.477296 systemd[1]: Created slice kubepods-besteffort-podfe0d8f5a_00cd_4a96_b01f_aaa4fea5932a.slice - libcontainer container kubepods-besteffort-podfe0d8f5a_00cd_4a96_b01f_aaa4fea5932a.slice.
Sep 16 04:58:31.524546 kubelet[3168]: I0916 04:58:31.524439 3168 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fe0d8f5a-00cd-4a96-b01f-aaa4fea5932a-xtables-lock\") pod \"kube-proxy-t85f6\" (UID: \"fe0d8f5a-00cd-4a96-b01f-aaa4fea5932a\") " pod="kube-system/kube-proxy-t85f6"
Sep 16 04:58:31.524546 kubelet[3168]: I0916 04:58:31.524468 3168 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fe0d8f5a-00cd-4a96-b01f-aaa4fea5932a-lib-modules\") pod \"kube-proxy-t85f6\" (UID: \"fe0d8f5a-00cd-4a96-b01f-aaa4fea5932a\") " pod="kube-system/kube-proxy-t85f6"
Sep 16 04:58:31.524546 kubelet[3168]: I0916 04:58:31.524488 3168 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tf5kc\" (UniqueName: \"kubernetes.io/projected/fe0d8f5a-00cd-4a96-b01f-aaa4fea5932a-kube-api-access-tf5kc\") pod \"kube-proxy-t85f6\" (UID: \"fe0d8f5a-00cd-4a96-b01f-aaa4fea5932a\") " pod="kube-system/kube-proxy-t85f6"
Sep 16 04:58:31.524546 kubelet[3168]: I0916 04:58:31.524502 3168 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/fe0d8f5a-00cd-4a96-b01f-aaa4fea5932a-kube-proxy\") pod \"kube-proxy-t85f6\" (UID: \"fe0d8f5a-00cd-4a96-b01f-aaa4fea5932a\") " pod="kube-system/kube-proxy-t85f6"
Sep 16 04:58:31.785412 containerd[1722]: time="2025-09-16T04:58:31.785307524Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-t85f6,Uid:fe0d8f5a-00cd-4a96-b01f-aaa4fea5932a,Namespace:kube-system,Attempt:0,}"
Sep 16 04:58:31.837660 containerd[1722]: time="2025-09-16T04:58:31.837620989Z" level=info msg="connecting to shim d0b9b4259852a216a8384ebd2cbdacd8cae0a6c585ae94592f19651c81bf8b3a" address="unix:///run/containerd/s/4745bf0a3d77418335ef27960cd4ac1529c97b7e6dc2692c09ab6b4468a71662" namespace=k8s.io protocol=ttrpc version=3
Sep 16 04:58:31.864199 systemd[1]: Started cri-containerd-d0b9b4259852a216a8384ebd2cbdacd8cae0a6c585ae94592f19651c81bf8b3a.scope - libcontainer container d0b9b4259852a216a8384ebd2cbdacd8cae0a6c585ae94592f19651c81bf8b3a.
Sep 16 04:58:31.901004 containerd[1722]: time="2025-09-16T04:58:31.900832911Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-t85f6,Uid:fe0d8f5a-00cd-4a96-b01f-aaa4fea5932a,Namespace:kube-system,Attempt:0,} returns sandbox id \"d0b9b4259852a216a8384ebd2cbdacd8cae0a6c585ae94592f19651c81bf8b3a\""
Sep 16 04:58:31.905589 containerd[1722]: time="2025-09-16T04:58:31.905557493Z" level=info msg="CreateContainer within sandbox \"d0b9b4259852a216a8384ebd2cbdacd8cae0a6c585ae94592f19651c81bf8b3a\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Sep 16 04:58:31.915206 systemd[1]: Created slice kubepods-besteffort-pod4ab298b6_82fd_49bc_8e12_8abe60d3a709.slice - libcontainer container kubepods-besteffort-pod4ab298b6_82fd_49bc_8e12_8abe60d3a709.slice.
Sep 16 04:58:31.927270 kubelet[3168]: I0916 04:58:31.926909 3168 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxlkk\" (UniqueName: \"kubernetes.io/projected/4ab298b6-82fd-49bc-8e12-8abe60d3a709-kube-api-access-kxlkk\") pod \"tigera-operator-755d956888-rhv7v\" (UID: \"4ab298b6-82fd-49bc-8e12-8abe60d3a709\") " pod="tigera-operator/tigera-operator-755d956888-rhv7v"
Sep 16 04:58:31.927270 kubelet[3168]: I0916 04:58:31.926950 3168 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/4ab298b6-82fd-49bc-8e12-8abe60d3a709-var-lib-calico\") pod \"tigera-operator-755d956888-rhv7v\" (UID: \"4ab298b6-82fd-49bc-8e12-8abe60d3a709\") " pod="tigera-operator/tigera-operator-755d956888-rhv7v"
Sep 16 04:58:31.931553 containerd[1722]: time="2025-09-16T04:58:31.931489284Z" level=info msg="Container 6b302d20a22eb6cc99b76e1c93a750f55bbf186f0919067273f96d6c58c1d9fa: CDI devices from CRI Config.CDIDevices: []"
Sep 16 04:58:31.933759 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3169600289.mount: Deactivated successfully.
Sep 16 04:58:31.949299 containerd[1722]: time="2025-09-16T04:58:31.949277966Z" level=info msg="CreateContainer within sandbox \"d0b9b4259852a216a8384ebd2cbdacd8cae0a6c585ae94592f19651c81bf8b3a\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"6b302d20a22eb6cc99b76e1c93a750f55bbf186f0919067273f96d6c58c1d9fa\"" Sep 16 04:58:31.949744 containerd[1722]: time="2025-09-16T04:58:31.949722762Z" level=info msg="StartContainer for \"6b302d20a22eb6cc99b76e1c93a750f55bbf186f0919067273f96d6c58c1d9fa\"" Sep 16 04:58:31.950844 containerd[1722]: time="2025-09-16T04:58:31.950814891Z" level=info msg="connecting to shim 6b302d20a22eb6cc99b76e1c93a750f55bbf186f0919067273f96d6c58c1d9fa" address="unix:///run/containerd/s/4745bf0a3d77418335ef27960cd4ac1529c97b7e6dc2692c09ab6b4468a71662" protocol=ttrpc version=3 Sep 16 04:58:31.969117 systemd[1]: Started cri-containerd-6b302d20a22eb6cc99b76e1c93a750f55bbf186f0919067273f96d6c58c1d9fa.scope - libcontainer container 6b302d20a22eb6cc99b76e1c93a750f55bbf186f0919067273f96d6c58c1d9fa. 
Sep 16 04:58:32.000353 containerd[1722]: time="2025-09-16T04:58:32.000271817Z" level=info msg="StartContainer for \"6b302d20a22eb6cc99b76e1c93a750f55bbf186f0919067273f96d6c58c1d9fa\" returns successfully" Sep 16 04:58:32.220709 containerd[1722]: time="2025-09-16T04:58:32.220678370Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-rhv7v,Uid:4ab298b6-82fd-49bc-8e12-8abe60d3a709,Namespace:tigera-operator,Attempt:0,}" Sep 16 04:58:32.254364 containerd[1722]: time="2025-09-16T04:58:32.254324933Z" level=info msg="connecting to shim 0dabeb9658f92f027c5fa4b81d967a802446bc797204030e1c2b6314de357b97" address="unix:///run/containerd/s/72e4c9f1aef183125d636c933b910429c6d0e23c45695a3080329f1d93182b03" namespace=k8s.io protocol=ttrpc version=3 Sep 16 04:58:32.279192 systemd[1]: Started cri-containerd-0dabeb9658f92f027c5fa4b81d967a802446bc797204030e1c2b6314de357b97.scope - libcontainer container 0dabeb9658f92f027c5fa4b81d967a802446bc797204030e1c2b6314de357b97. Sep 16 04:58:32.321614 containerd[1722]: time="2025-09-16T04:58:32.321576379Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-rhv7v,Uid:4ab298b6-82fd-49bc-8e12-8abe60d3a709,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"0dabeb9658f92f027c5fa4b81d967a802446bc797204030e1c2b6314de357b97\"" Sep 16 04:58:32.323306 containerd[1722]: time="2025-09-16T04:58:32.323280455Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\"" Sep 16 04:58:32.850565 kubelet[3168]: I0916 04:58:32.850343 3168 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-t85f6" podStartSLOduration=1.85032521 podStartE2EDuration="1.85032521s" podCreationTimestamp="2025-09-16 04:58:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-16 04:58:32.850139059 +0000 UTC m=+7.151320188" watchObservedRunningTime="2025-09-16 
04:58:32.85032521 +0000 UTC m=+7.151506336" Sep 16 04:58:34.265528 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1327104206.mount: Deactivated successfully. Sep 16 04:58:35.092107 containerd[1722]: time="2025-09-16T04:58:35.092067109Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:58:35.094324 containerd[1722]: time="2025-09-16T04:58:35.094295124Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=25062609" Sep 16 04:58:35.097228 containerd[1722]: time="2025-09-16T04:58:35.097187991Z" level=info msg="ImageCreate event name:\"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:58:35.101203 containerd[1722]: time="2025-09-16T04:58:35.100738691Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:58:35.101203 containerd[1722]: time="2025-09-16T04:58:35.101107214Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"25058604\" in 2.777791851s" Sep 16 04:58:35.101203 containerd[1722]: time="2025-09-16T04:58:35.101133955Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\"" Sep 16 04:58:35.104019 containerd[1722]: time="2025-09-16T04:58:35.103972309Z" level=info msg="CreateContainer within sandbox \"0dabeb9658f92f027c5fa4b81d967a802446bc797204030e1c2b6314de357b97\" for container 
&ContainerMetadata{Name:tigera-operator,Attempt:0,}" Sep 16 04:58:35.132566 containerd[1722]: time="2025-09-16T04:58:35.132538766Z" level=info msg="Container ad999a8da5aaae525008a9dcb466c08e9b6941315fdcf5c129f7689a734ace16: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:58:35.147844 containerd[1722]: time="2025-09-16T04:58:35.147815975Z" level=info msg="CreateContainer within sandbox \"0dabeb9658f92f027c5fa4b81d967a802446bc797204030e1c2b6314de357b97\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"ad999a8da5aaae525008a9dcb466c08e9b6941315fdcf5c129f7689a734ace16\"" Sep 16 04:58:35.148331 containerd[1722]: time="2025-09-16T04:58:35.148295664Z" level=info msg="StartContainer for \"ad999a8da5aaae525008a9dcb466c08e9b6941315fdcf5c129f7689a734ace16\"" Sep 16 04:58:35.149522 containerd[1722]: time="2025-09-16T04:58:35.149449783Z" level=info msg="connecting to shim ad999a8da5aaae525008a9dcb466c08e9b6941315fdcf5c129f7689a734ace16" address="unix:///run/containerd/s/72e4c9f1aef183125d636c933b910429c6d0e23c45695a3080329f1d93182b03" protocol=ttrpc version=3 Sep 16 04:58:35.171137 systemd[1]: Started cri-containerd-ad999a8da5aaae525008a9dcb466c08e9b6941315fdcf5c129f7689a734ace16.scope - libcontainer container ad999a8da5aaae525008a9dcb466c08e9b6941315fdcf5c129f7689a734ace16. 
Sep 16 04:58:35.200901 containerd[1722]: time="2025-09-16T04:58:35.200878140Z" level=info msg="StartContainer for \"ad999a8da5aaae525008a9dcb466c08e9b6941315fdcf5c129f7689a734ace16\" returns successfully" Sep 16 04:58:36.808481 kubelet[3168]: I0916 04:58:36.808369 3168 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-755d956888-rhv7v" podStartSLOduration=3.02910336 podStartE2EDuration="5.808351484s" podCreationTimestamp="2025-09-16 04:58:31 +0000 UTC" firstStartedPulling="2025-09-16 04:58:32.322630649 +0000 UTC m=+6.623811778" lastFinishedPulling="2025-09-16 04:58:35.101878787 +0000 UTC m=+9.403059902" observedRunningTime="2025-09-16 04:58:35.872960452 +0000 UTC m=+10.174141580" watchObservedRunningTime="2025-09-16 04:58:36.808351484 +0000 UTC m=+11.109532617" Sep 16 04:58:40.897223 sudo[2184]: pam_unix(sudo:session): session closed for user root Sep 16 04:58:41.000921 sshd[2183]: Connection closed by 10.200.16.10 port 51512 Sep 16 04:58:41.004156 sshd-session[2180]: pam_unix(sshd:session): session closed for user core Sep 16 04:58:41.007618 systemd[1]: sshd@6-10.200.8.40:22-10.200.16.10:51512.service: Deactivated successfully. Sep 16 04:58:41.010507 systemd[1]: session-9.scope: Deactivated successfully. Sep 16 04:58:41.010900 systemd[1]: session-9.scope: Consumed 3.696s CPU time, 223M memory peak. Sep 16 04:58:41.016741 systemd-logind[1693]: Session 9 logged out. Waiting for processes to exit. Sep 16 04:58:41.017834 systemd-logind[1693]: Removed session 9. Sep 16 04:58:44.613792 systemd[1]: Created slice kubepods-besteffort-podcdbe35be_87c1_4219_95e7_431251e29e77.slice - libcontainer container kubepods-besteffort-podcdbe35be_87c1_4219_95e7_431251e29e77.slice. 
Sep 16 04:58:44.710199 kubelet[3168]: I0916 04:58:44.710165 3168 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cdbe35be-87c1-4219-95e7-431251e29e77-tigera-ca-bundle\") pod \"calico-typha-cd77d98f8-qkb9s\" (UID: \"cdbe35be-87c1-4219-95e7-431251e29e77\") " pod="calico-system/calico-typha-cd77d98f8-qkb9s" Sep 16 04:58:44.710199 kubelet[3168]: I0916 04:58:44.710211 3168 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqgx8\" (UniqueName: \"kubernetes.io/projected/cdbe35be-87c1-4219-95e7-431251e29e77-kube-api-access-mqgx8\") pod \"calico-typha-cd77d98f8-qkb9s\" (UID: \"cdbe35be-87c1-4219-95e7-431251e29e77\") " pod="calico-system/calico-typha-cd77d98f8-qkb9s" Sep 16 04:58:44.710563 kubelet[3168]: I0916 04:58:44.710326 3168 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/cdbe35be-87c1-4219-95e7-431251e29e77-typha-certs\") pod \"calico-typha-cd77d98f8-qkb9s\" (UID: \"cdbe35be-87c1-4219-95e7-431251e29e77\") " pod="calico-system/calico-typha-cd77d98f8-qkb9s" Sep 16 04:58:44.920737 containerd[1722]: time="2025-09-16T04:58:44.920450671Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-cd77d98f8-qkb9s,Uid:cdbe35be-87c1-4219-95e7-431251e29e77,Namespace:calico-system,Attempt:0,}" Sep 16 04:58:44.968642 systemd[1]: Created slice kubepods-besteffort-pod8082818c_4ac2_4403_917f_5e1b86b22223.slice - libcontainer container kubepods-besteffort-pod8082818c_4ac2_4403_917f_5e1b86b22223.slice. 
Sep 16 04:58:44.977005 containerd[1722]: time="2025-09-16T04:58:44.976102286Z" level=info msg="connecting to shim 3ab823c140cf5d024cda80ee6df239147a754060a9de4cea91c3a878b6108c88" address="unix:///run/containerd/s/8e5aefbb02af6055e0c075d19e390adffc624df20f781c203fa320f7fb02e196" namespace=k8s.io protocol=ttrpc version=3 Sep 16 04:58:45.006136 systemd[1]: Started cri-containerd-3ab823c140cf5d024cda80ee6df239147a754060a9de4cea91c3a878b6108c88.scope - libcontainer container 3ab823c140cf5d024cda80ee6df239147a754060a9de4cea91c3a878b6108c88. Sep 16 04:58:45.016327 kubelet[3168]: I0916 04:58:45.015816 3168 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/8082818c-4ac2-4403-917f-5e1b86b22223-node-certs\") pod \"calico-node-hb4vn\" (UID: \"8082818c-4ac2-4403-917f-5e1b86b22223\") " pod="calico-system/calico-node-hb4vn" Sep 16 04:58:45.016417 kubelet[3168]: I0916 04:58:45.016331 3168 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8082818c-4ac2-4403-917f-5e1b86b22223-tigera-ca-bundle\") pod \"calico-node-hb4vn\" (UID: \"8082818c-4ac2-4403-917f-5e1b86b22223\") " pod="calico-system/calico-node-hb4vn" Sep 16 04:58:45.016417 kubelet[3168]: I0916 04:58:45.016365 3168 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8082818c-4ac2-4403-917f-5e1b86b22223-xtables-lock\") pod \"calico-node-hb4vn\" (UID: \"8082818c-4ac2-4403-917f-5e1b86b22223\") " pod="calico-system/calico-node-hb4vn" Sep 16 04:58:45.016417 kubelet[3168]: I0916 04:58:45.016387 3168 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2pfd\" (UniqueName: \"kubernetes.io/projected/8082818c-4ac2-4403-917f-5e1b86b22223-kube-api-access-q2pfd\") pod 
\"calico-node-hb4vn\" (UID: \"8082818c-4ac2-4403-917f-5e1b86b22223\") " pod="calico-system/calico-node-hb4vn" Sep 16 04:58:45.016417 kubelet[3168]: I0916 04:58:45.016411 3168 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8082818c-4ac2-4403-917f-5e1b86b22223-lib-modules\") pod \"calico-node-hb4vn\" (UID: \"8082818c-4ac2-4403-917f-5e1b86b22223\") " pod="calico-system/calico-node-hb4vn" Sep 16 04:58:45.016520 kubelet[3168]: I0916 04:58:45.016434 3168 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/8082818c-4ac2-4403-917f-5e1b86b22223-cni-bin-dir\") pod \"calico-node-hb4vn\" (UID: \"8082818c-4ac2-4403-917f-5e1b86b22223\") " pod="calico-system/calico-node-hb4vn" Sep 16 04:58:45.016520 kubelet[3168]: I0916 04:58:45.016461 3168 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/8082818c-4ac2-4403-917f-5e1b86b22223-var-lib-calico\") pod \"calico-node-hb4vn\" (UID: \"8082818c-4ac2-4403-917f-5e1b86b22223\") " pod="calico-system/calico-node-hb4vn" Sep 16 04:58:45.016520 kubelet[3168]: I0916 04:58:45.016479 3168 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/8082818c-4ac2-4403-917f-5e1b86b22223-var-run-calico\") pod \"calico-node-hb4vn\" (UID: \"8082818c-4ac2-4403-917f-5e1b86b22223\") " pod="calico-system/calico-node-hb4vn" Sep 16 04:58:45.016520 kubelet[3168]: I0916 04:58:45.016502 3168 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/8082818c-4ac2-4403-917f-5e1b86b22223-cni-log-dir\") pod \"calico-node-hb4vn\" (UID: \"8082818c-4ac2-4403-917f-5e1b86b22223\") " 
pod="calico-system/calico-node-hb4vn" Sep 16 04:58:45.016621 kubelet[3168]: I0916 04:58:45.016527 3168 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/8082818c-4ac2-4403-917f-5e1b86b22223-cni-net-dir\") pod \"calico-node-hb4vn\" (UID: \"8082818c-4ac2-4403-917f-5e1b86b22223\") " pod="calico-system/calico-node-hb4vn" Sep 16 04:58:45.016621 kubelet[3168]: I0916 04:58:45.016552 3168 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/8082818c-4ac2-4403-917f-5e1b86b22223-flexvol-driver-host\") pod \"calico-node-hb4vn\" (UID: \"8082818c-4ac2-4403-917f-5e1b86b22223\") " pod="calico-system/calico-node-hb4vn" Sep 16 04:58:45.016621 kubelet[3168]: I0916 04:58:45.016572 3168 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/8082818c-4ac2-4403-917f-5e1b86b22223-policysync\") pod \"calico-node-hb4vn\" (UID: \"8082818c-4ac2-4403-917f-5e1b86b22223\") " pod="calico-system/calico-node-hb4vn" Sep 16 04:58:45.064302 containerd[1722]: time="2025-09-16T04:58:45.064252935Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-cd77d98f8-qkb9s,Uid:cdbe35be-87c1-4219-95e7-431251e29e77,Namespace:calico-system,Attempt:0,} returns sandbox id \"3ab823c140cf5d024cda80ee6df239147a754060a9de4cea91c3a878b6108c88\"" Sep 16 04:58:45.067330 containerd[1722]: time="2025-09-16T04:58:45.067228204Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\"" Sep 16 04:58:45.117682 kubelet[3168]: E0916 04:58:45.117660 3168 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:58:45.117682 kubelet[3168]: W0916 04:58:45.117681 3168 driver-call.go:149] FlexVolume: driver call failed: executable: 
/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:58:45.117796 kubelet[3168]: E0916 04:58:45.117735 3168 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:58:45.118027 kubelet[3168]: E0916 04:58:45.117942 3168 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:58:45.118027 kubelet[3168]: W0916 04:58:45.117950 3168 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:58:45.118027 kubelet[3168]: E0916 04:58:45.117966 3168 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:58:45.118130 kubelet[3168]: E0916 04:58:45.118113 3168 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:58:45.118130 kubelet[3168]: W0916 04:58:45.118120 3168 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:58:45.118180 kubelet[3168]: E0916 04:58:45.118133 3168 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 04:58:45.118376 kubelet[3168]: E0916 04:58:45.118250 3168 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:58:45.118376 kubelet[3168]: W0916 04:58:45.118260 3168 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:58:45.118376 kubelet[3168]: E0916 04:58:45.118269 3168 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:58:45.118376 kubelet[3168]: E0916 04:58:45.118361 3168 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:58:45.118376 kubelet[3168]: W0916 04:58:45.118367 3168 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:58:45.118376 kubelet[3168]: E0916 04:58:45.118374 3168 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 04:58:45.118531 kubelet[3168]: E0916 04:58:45.118472 3168 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:58:45.118531 kubelet[3168]: W0916 04:58:45.118478 3168 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:58:45.118531 kubelet[3168]: E0916 04:58:45.118484 3168 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:58:45.119006 kubelet[3168]: E0916 04:58:45.118651 3168 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:58:45.119006 kubelet[3168]: W0916 04:58:45.118659 3168 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:58:45.119006 kubelet[3168]: E0916 04:58:45.118671 3168 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 04:58:45.119006 kubelet[3168]: E0916 04:58:45.118800 3168 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:58:45.119006 kubelet[3168]: W0916 04:58:45.118805 3168 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:58:45.119006 kubelet[3168]: E0916 04:58:45.118813 3168 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:58:45.121965 kubelet[3168]: E0916 04:58:45.119839 3168 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:58:45.121965 kubelet[3168]: W0916 04:58:45.120016 3168 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:58:45.121965 kubelet[3168]: E0916 04:58:45.120038 3168 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 04:58:45.122176 kubelet[3168]: E0916 04:58:45.122164 3168 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:58:45.122243 kubelet[3168]: W0916 04:58:45.122234 3168 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:58:45.122303 kubelet[3168]: E0916 04:58:45.122278 3168 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:58:45.124099 kubelet[3168]: E0916 04:58:45.124082 3168 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:58:45.124289 kubelet[3168]: W0916 04:58:45.124221 3168 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:58:45.124289 kubelet[3168]: E0916 04:58:45.124241 3168 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 04:58:45.129221 kubelet[3168]: E0916 04:58:45.129052 3168 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:58:45.129221 kubelet[3168]: W0916 04:58:45.129069 3168 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:58:45.129221 kubelet[3168]: E0916 04:58:45.129085 3168 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:58:45.130060 kubelet[3168]: E0916 04:58:45.130040 3168 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:58:45.130060 kubelet[3168]: W0916 04:58:45.130056 3168 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:58:45.130157 kubelet[3168]: E0916 04:58:45.130071 3168 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 04:58:45.130219 kubelet[3168]: E0916 04:58:45.130207 3168 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:58:45.130219 kubelet[3168]: W0916 04:58:45.130215 3168 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:58:45.130334 kubelet[3168]: E0916 04:58:45.130224 3168 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:58:45.130674 kubelet[3168]: E0916 04:58:45.130659 3168 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:58:45.130674 kubelet[3168]: W0916 04:58:45.130673 3168 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:58:45.130748 kubelet[3168]: E0916 04:58:45.130683 3168 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 04:58:45.131031 kubelet[3168]: E0916 04:58:45.131015 3168 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:58:45.131031 kubelet[3168]: W0916 04:58:45.131027 3168 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:58:45.131119 kubelet[3168]: E0916 04:58:45.131040 3168 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:58:45.131204 kubelet[3168]: E0916 04:58:45.131194 3168 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:58:45.131204 kubelet[3168]: W0916 04:58:45.131202 3168 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:58:45.131270 kubelet[3168]: E0916 04:58:45.131211 3168 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 04:58:45.131382 kubelet[3168]: E0916 04:58:45.131358 3168 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:58:45.131433 kubelet[3168]: W0916 04:58:45.131424 3168 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:58:45.131491 kubelet[3168]: E0916 04:58:45.131483 3168 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:58:45.132504 kubelet[3168]: E0916 04:58:45.132484 3168 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:58:45.132504 kubelet[3168]: W0916 04:58:45.132500 3168 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:58:45.132587 kubelet[3168]: E0916 04:58:45.132513 3168 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 04:58:45.245558 kubelet[3168]: E0916 04:58:45.245302 3168 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-f7dxh" podUID="8024408a-1213-49b8-a3b9-01ab011892e6" Sep 16 04:58:45.277369 containerd[1722]: time="2025-09-16T04:58:45.277143340Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-hb4vn,Uid:8082818c-4ac2-4403-917f-5e1b86b22223,Namespace:calico-system,Attempt:0,}" Sep 16 04:58:45.298610 kubelet[3168]: E0916 04:58:45.298581 3168 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:58:45.298610 kubelet[3168]: W0916 04:58:45.298606 3168 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:58:45.298717 kubelet[3168]: E0916 04:58:45.298621 3168 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:58:45.298746 kubelet[3168]: E0916 04:58:45.298736 3168 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:58:45.298746 kubelet[3168]: W0916 04:58:45.298744 3168 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:58:45.298799 kubelet[3168]: E0916 04:58:45.298752 3168 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 04:58:45.298864 kubelet[3168]: E0916 04:58:45.298852 3168 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:58:45.298864 kubelet[3168]: W0916 04:58:45.298860 3168 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:58:45.298933 kubelet[3168]: E0916 04:58:45.298867 3168 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:58:45.299014 kubelet[3168]: E0916 04:58:45.299005 3168 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:58:45.299043 kubelet[3168]: W0916 04:58:45.299015 3168 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:58:45.299043 kubelet[3168]: E0916 04:58:45.299024 3168 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 04:58:45.318684 kubelet[3168]: E0916 04:58:45.318665 3168 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:58:45.318684 kubelet[3168]: W0916 04:58:45.318680 3168 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:58:45.318797 kubelet[3168]: E0916 04:58:45.318694 3168 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:58:45.318797 kubelet[3168]: I0916 04:58:45.318717 3168 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8024408a-1213-49b8-a3b9-01ab011892e6-kubelet-dir\") pod \"csi-node-driver-f7dxh\" (UID: \"8024408a-1213-49b8-a3b9-01ab011892e6\") " pod="calico-system/csi-node-driver-f7dxh" Sep 16 04:58:45.318878 kubelet[3168]: E0916 04:58:45.318848 3168 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:58:45.318878 kubelet[3168]: W0916 04:58:45.318855 3168 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:58:45.318878 kubelet[3168]: E0916 04:58:45.318864 3168 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 04:58:45.319010 kubelet[3168]: I0916 04:58:45.318970 3168 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/8024408a-1213-49b8-a3b9-01ab011892e6-socket-dir\") pod \"csi-node-driver-f7dxh\" (UID: \"8024408a-1213-49b8-a3b9-01ab011892e6\") " pod="calico-system/csi-node-driver-f7dxh" Sep 16 04:58:45.319045 kubelet[3168]: E0916 04:58:45.319042 3168 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:58:45.319100 kubelet[3168]: W0916 04:58:45.319048 3168 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:58:45.319100 kubelet[3168]: E0916 04:58:45.319060 3168 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:58:45.319210 kubelet[3168]: E0916 04:58:45.319154 3168 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:58:45.319210 kubelet[3168]: W0916 04:58:45.319160 3168 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:58:45.319210 kubelet[3168]: E0916 04:58:45.319171 3168 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 04:58:45.319322 kubelet[3168]: E0916 04:58:45.319274 3168 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:58:45.319322 kubelet[3168]: W0916 04:58:45.319279 3168 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:58:45.319322 kubelet[3168]: E0916 04:58:45.319297 3168 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:58:45.319322 kubelet[3168]: I0916 04:58:45.319314 3168 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ms54z\" (UniqueName: \"kubernetes.io/projected/8024408a-1213-49b8-a3b9-01ab011892e6-kube-api-access-ms54z\") pod \"csi-node-driver-f7dxh\" (UID: \"8024408a-1213-49b8-a3b9-01ab011892e6\") " pod="calico-system/csi-node-driver-f7dxh" Sep 16 04:58:45.319463 kubelet[3168]: E0916 04:58:45.319434 3168 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:58:45.319463 kubelet[3168]: W0916 04:58:45.319441 3168 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:58:45.319463 kubelet[3168]: E0916 04:58:45.319454 3168 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 04:58:45.319583 kubelet[3168]: I0916 04:58:45.319468 3168 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/8024408a-1213-49b8-a3b9-01ab011892e6-registration-dir\") pod \"csi-node-driver-f7dxh\" (UID: \"8024408a-1213-49b8-a3b9-01ab011892e6\") " pod="calico-system/csi-node-driver-f7dxh" Sep 16 04:58:45.319619 kubelet[3168]: E0916 04:58:45.319597 3168 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:58:45.319619 kubelet[3168]: W0916 04:58:45.319604 3168 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:58:45.319619 kubelet[3168]: E0916 04:58:45.319615 3168 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 04:58:45.319776 kubelet[3168]: I0916 04:58:45.319629 3168 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/8024408a-1213-49b8-a3b9-01ab011892e6-varrun\") pod \"csi-node-driver-f7dxh\" (UID: \"8024408a-1213-49b8-a3b9-01ab011892e6\") " pod="calico-system/csi-node-driver-f7dxh" Sep 16 04:58:45.319879 kubelet[3168]: E0916 04:58:45.319864 3168 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:58:45.319921 kubelet[3168]: W0916 04:58:45.319876 3168 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:58:45.319921 kubelet[3168]: E0916 04:58:45.319895 3168 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:58:45.320050 kubelet[3168]: E0916 04:58:45.320039 3168 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:58:45.320050 kubelet[3168]: W0916 04:58:45.320048 3168 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:58:45.320128 kubelet[3168]: E0916 04:58:45.320060 3168 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 04:58:45.320739 kubelet[3168]: E0916 04:58:45.320729 3168 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:58:45.320739 kubelet[3168]: W0916 04:58:45.320737 3168 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:58:45.320829 kubelet[3168]: E0916 04:58:45.320744 3168 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:58:45.320856 kubelet[3168]: E0916 04:58:45.320836 3168 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:58:45.320856 kubelet[3168]: W0916 04:58:45.320844 3168 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:58:45.320856 kubelet[3168]: E0916 04:58:45.320850 3168 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:58:45.329330 containerd[1722]: time="2025-09-16T04:58:45.329301251Z" level=info msg="connecting to shim d42aa0171c8be1326269e061bef28b581797146d52c2e22c5cb5f18ad067d7bf" address="unix:///run/containerd/s/760a6db2185b92df0b8aa148d55f15b5ed7183180227cb3a6a2c653f26ff4b7a" namespace=k8s.io protocol=ttrpc version=3 Sep 16 04:58:45.346165 systemd[1]: Started cri-containerd-d42aa0171c8be1326269e061bef28b581797146d52c2e22c5cb5f18ad067d7bf.scope - libcontainer container d42aa0171c8be1326269e061bef28b581797146d52c2e22c5cb5f18ad067d7bf. 
Sep 16 04:58:45.369005 containerd[1722]: time="2025-09-16T04:58:45.368949481Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-hb4vn,Uid:8082818c-4ac2-4403-917f-5e1b86b22223,Namespace:calico-system,Attempt:0,} returns sandbox id \"d42aa0171c8be1326269e061bef28b581797146d52c2e22c5cb5f18ad067d7bf\"" Sep 16 04:58:45.421089 kubelet[3168]: E0916 04:58:45.421061 3168 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:58:45.421262 kubelet[3168]: W0916 04:58:45.421079 3168 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:58:45.421262 kubelet[3168]: E0916 04:58:45.421177 3168 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:58:45.421365 kubelet[3168]: E0916 04:58:45.421360 3168 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:58:45.421407 kubelet[3168]: W0916 04:58:45.421401 3168 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:58:45.421473 kubelet[3168]: E0916 04:58:45.421432 3168 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 04:58:45.430333 kubelet[3168]: E0916 04:58:45.430321 3168 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:58:45.430333 kubelet[3168]: W0916 04:58:45.430331 3168 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:58:45.430385 kubelet[3168]: E0916 04:58:45.430347 3168 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:58:45.430492 kubelet[3168]: E0916 04:58:45.430483 3168 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:58:45.430492 kubelet[3168]: W0916 04:58:45.430491 3168 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:58:45.430539 kubelet[3168]: E0916 04:58:45.430505 3168 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 04:58:45.430790 kubelet[3168]: E0916 04:58:45.430779 3168 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:58:45.430790 kubelet[3168]: W0916 04:58:45.430789 3168 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:58:45.431139 kubelet[3168]: E0916 04:58:45.430900 3168 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:58:45.431191 kubelet[3168]: E0916 04:58:45.431159 3168 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:58:45.431191 kubelet[3168]: W0916 04:58:45.431168 3168 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:58:45.431191 kubelet[3168]: E0916 04:58:45.431181 3168 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 04:58:45.431602 kubelet[3168]: E0916 04:58:45.431529 3168 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:58:45.431602 kubelet[3168]: W0916 04:58:45.431545 3168 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:58:45.431602 kubelet[3168]: E0916 04:58:45.431557 3168 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:58:45.431781 kubelet[3168]: E0916 04:58:45.431773 3168 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:58:45.431810 kubelet[3168]: W0916 04:58:45.431784 3168 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:58:45.431841 kubelet[3168]: E0916 04:58:45.431836 3168 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 04:58:45.432120 kubelet[3168]: E0916 04:58:45.432099 3168 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:58:45.432120 kubelet[3168]: W0916 04:58:45.432109 3168 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:58:45.432120 kubelet[3168]: E0916 04:58:45.432119 3168 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:58:45.454291 kubelet[3168]: E0916 04:58:45.454275 3168 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:58:45.454291 kubelet[3168]: W0916 04:58:45.454291 3168 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:58:45.454385 kubelet[3168]: E0916 04:58:45.454303 3168 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:58:46.581235 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3171053357.mount: Deactivated successfully. 
Sep 16 04:58:46.793393 kubelet[3168]: E0916 04:58:46.793349 3168 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-f7dxh" podUID="8024408a-1213-49b8-a3b9-01ab011892e6" Sep 16 04:58:47.217290 containerd[1722]: time="2025-09-16T04:58:47.217249443Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:58:47.219847 containerd[1722]: time="2025-09-16T04:58:47.219727482Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=35237389" Sep 16 04:58:47.222211 containerd[1722]: time="2025-09-16T04:58:47.222185594Z" level=info msg="ImageCreate event name:\"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:58:47.227177 containerd[1722]: time="2025-09-16T04:58:47.227147122Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:58:47.227610 containerd[1722]: time="2025-09-16T04:58:47.227555716Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"35237243\" in 2.160278085s" Sep 16 04:58:47.227688 containerd[1722]: time="2025-09-16T04:58:47.227675163Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference 
\"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\"" Sep 16 04:58:47.228668 containerd[1722]: time="2025-09-16T04:58:47.228645359Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\"" Sep 16 04:58:47.241742 containerd[1722]: time="2025-09-16T04:58:47.241713910Z" level=info msg="CreateContainer within sandbox \"3ab823c140cf5d024cda80ee6df239147a754060a9de4cea91c3a878b6108c88\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 16 04:58:47.272819 containerd[1722]: time="2025-09-16T04:58:47.272793038Z" level=info msg="Container 76264bebffeef59c17b63920e5b91b2168f849466f1e51b049ace4276299a8fa: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:58:47.297522 containerd[1722]: time="2025-09-16T04:58:47.297487336Z" level=info msg="CreateContainer within sandbox \"3ab823c140cf5d024cda80ee6df239147a754060a9de4cea91c3a878b6108c88\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"76264bebffeef59c17b63920e5b91b2168f849466f1e51b049ace4276299a8fa\"" Sep 16 04:58:47.298032 containerd[1722]: time="2025-09-16T04:58:47.298013098Z" level=info msg="StartContainer for \"76264bebffeef59c17b63920e5b91b2168f849466f1e51b049ace4276299a8fa\"" Sep 16 04:58:47.299878 containerd[1722]: time="2025-09-16T04:58:47.299846209Z" level=info msg="connecting to shim 76264bebffeef59c17b63920e5b91b2168f849466f1e51b049ace4276299a8fa" address="unix:///run/containerd/s/8e5aefbb02af6055e0c075d19e390adffc624df20f781c203fa320f7fb02e196" protocol=ttrpc version=3 Sep 16 04:58:47.334127 systemd[1]: Started cri-containerd-76264bebffeef59c17b63920e5b91b2168f849466f1e51b049ace4276299a8fa.scope - libcontainer container 76264bebffeef59c17b63920e5b91b2168f849466f1e51b049ace4276299a8fa. 
Sep 16 04:58:47.386956 containerd[1722]: time="2025-09-16T04:58:47.386926185Z" level=info msg="StartContainer for \"76264bebffeef59c17b63920e5b91b2168f849466f1e51b049ace4276299a8fa\" returns successfully" Sep 16 04:58:47.914461 kubelet[3168]: E0916 04:58:47.914433 3168 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:58:47.914461 kubelet[3168]: W0916 04:58:47.914452 3168 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:58:47.914978 kubelet[3168]: E0916 04:58:47.914471 3168 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:58:47.914978 kubelet[3168]: E0916 04:58:47.914590 3168 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:58:47.914978 kubelet[3168]: W0916 04:58:47.914597 3168 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:58:47.914978 kubelet[3168]: E0916 04:58:47.914604 3168 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 04:58:47.914978 kubelet[3168]: E0916 04:58:47.914698 3168 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:58:47.914978 kubelet[3168]: W0916 04:58:47.914702 3168 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:58:47.914978 kubelet[3168]: E0916 04:58:47.914709 3168 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:58:47.914978 kubelet[3168]: E0916 04:58:47.914834 3168 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:58:47.914978 kubelet[3168]: W0916 04:58:47.914839 3168 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:58:47.914978 kubelet[3168]: E0916 04:58:47.914846 3168 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 04:58:47.915279 kubelet[3168]: E0916 04:58:47.914938 3168 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:58:47.915279 kubelet[3168]: W0916 04:58:47.914942 3168 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:58:47.915279 kubelet[3168]: E0916 04:58:47.914948 3168 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:58:47.915279 kubelet[3168]: E0916 04:58:47.915058 3168 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:58:47.915279 kubelet[3168]: W0916 04:58:47.915064 3168 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:58:47.915279 kubelet[3168]: E0916 04:58:47.915071 3168 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 04:58:47.915279 kubelet[3168]: E0916 04:58:47.915165 3168 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:58:47.915279 kubelet[3168]: W0916 04:58:47.915171 3168 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:58:47.915279 kubelet[3168]: E0916 04:58:47.915177 3168 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:58:47.915279 kubelet[3168]: E0916 04:58:47.915266 3168 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:58:47.915515 kubelet[3168]: W0916 04:58:47.915272 3168 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:58:47.915515 kubelet[3168]: E0916 04:58:47.915277 3168 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 04:58:47.915515 kubelet[3168]: E0916 04:58:47.915370 3168 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:58:47.915515 kubelet[3168]: W0916 04:58:47.915375 3168 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:58:47.915515 kubelet[3168]: E0916 04:58:47.915380 3168 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:58:47.915515 kubelet[3168]: E0916 04:58:47.915465 3168 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:58:47.915515 kubelet[3168]: W0916 04:58:47.915470 3168 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:58:47.915515 kubelet[3168]: E0916 04:58:47.915478 3168 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 04:58:47.915701 kubelet[3168]: E0916 04:58:47.915564 3168 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:58:47.915701 kubelet[3168]: W0916 04:58:47.915569 3168 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:58:47.915701 kubelet[3168]: E0916 04:58:47.915575 3168 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:58:47.915701 kubelet[3168]: E0916 04:58:47.915660 3168 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:58:47.915701 kubelet[3168]: W0916 04:58:47.915665 3168 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:58:47.915701 kubelet[3168]: E0916 04:58:47.915671 3168 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 04:58:47.915840 kubelet[3168]: E0916 04:58:47.915758 3168 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:58:47.915840 kubelet[3168]: W0916 04:58:47.915762 3168 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:58:47.915840 kubelet[3168]: E0916 04:58:47.915768 3168 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:58:47.915912 kubelet[3168]: E0916 04:58:47.915876 3168 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:58:47.915912 kubelet[3168]: W0916 04:58:47.915882 3168 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:58:47.915912 kubelet[3168]: E0916 04:58:47.915887 3168 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 04:58:47.915981 kubelet[3168]: E0916 04:58:47.915972 3168 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:58:47.915981 kubelet[3168]: W0916 04:58:47.915977 3168 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:58:47.916060 kubelet[3168]: E0916 04:58:47.915983 3168 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:58:47.944552 kubelet[3168]: E0916 04:58:47.944526 3168 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:58:47.944552 kubelet[3168]: W0916 04:58:47.944544 3168 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:58:47.944709 kubelet[3168]: E0916 04:58:47.944559 3168 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 04:58:47.944748 kubelet[3168]: E0916 04:58:47.944740 3168 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:58:47.944787 kubelet[3168]: W0916 04:58:47.944749 3168 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:58:47.944787 kubelet[3168]: E0916 04:58:47.944765 3168 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:58:47.944967 kubelet[3168]: E0916 04:58:47.944937 3168 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:58:47.944967 kubelet[3168]: W0916 04:58:47.944962 3168 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:58:47.945200 kubelet[3168]: E0916 04:58:47.944976 3168 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 04:58:47.945345 kubelet[3168]: E0916 04:58:47.945249 3168 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:58:47.945345 kubelet[3168]: W0916 04:58:47.945258 3168 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:58:47.945345 kubelet[3168]: E0916 04:58:47.945273 3168 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:58:47.945635 kubelet[3168]: E0916 04:58:47.945610 3168 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:58:47.945635 kubelet[3168]: W0916 04:58:47.945622 3168 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:58:47.945805 kubelet[3168]: E0916 04:58:47.945685 3168 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 04:58:47.945998 kubelet[3168]: E0916 04:58:47.945938 3168 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:58:47.945998 kubelet[3168]: W0916 04:58:47.945948 3168 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:58:47.946067 kubelet[3168]: E0916 04:58:47.946006 3168 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:58:47.946304 kubelet[3168]: E0916 04:58:47.946252 3168 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:58:47.946304 kubelet[3168]: W0916 04:58:47.946265 3168 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:58:47.946304 kubelet[3168]: E0916 04:58:47.946293 3168 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 04:58:47.946543 kubelet[3168]: E0916 04:58:47.946505 3168 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:58:47.946543 kubelet[3168]: W0916 04:58:47.946515 3168 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:58:47.946607 kubelet[3168]: E0916 04:58:47.946551 3168 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:58:47.946890 kubelet[3168]: E0916 04:58:47.946798 3168 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:58:47.946890 kubelet[3168]: W0916 04:58:47.946814 3168 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:58:47.946890 kubelet[3168]: E0916 04:58:47.946828 3168 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 04:58:47.947412 kubelet[3168]: E0916 04:58:47.947401 3168 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:58:47.947569 kubelet[3168]: W0916 04:58:47.947478 3168 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:58:47.947569 kubelet[3168]: E0916 04:58:47.947492 3168 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:58:47.947855 kubelet[3168]: E0916 04:58:47.947832 3168 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:58:47.947855 kubelet[3168]: W0916 04:58:47.947842 3168 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:58:47.948026 kubelet[3168]: E0916 04:58:47.947954 3168 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 04:58:47.948236 kubelet[3168]: E0916 04:58:47.948212 3168 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:58:47.948236 kubelet[3168]: W0916 04:58:47.948223 3168 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:58:47.948526 kubelet[3168]: E0916 04:58:47.948378 3168 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:58:47.948666 kubelet[3168]: E0916 04:58:47.948656 3168 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:58:47.948715 kubelet[3168]: W0916 04:58:47.948707 3168 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:58:47.948854 kubelet[3168]: E0916 04:58:47.948844 3168 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 04:58:47.948949 kubelet[3168]: E0916 04:58:47.948930 3168 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:58:47.948949 kubelet[3168]: W0916 04:58:47.948938 3168 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:58:47.949183 kubelet[3168]: E0916 04:58:47.949149 3168 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:58:47.949183 kubelet[3168]: E0916 04:58:47.949213 3168 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:58:47.949183 kubelet[3168]: W0916 04:58:47.949223 3168 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:58:47.949183 kubelet[3168]: E0916 04:58:47.949235 3168 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 04:58:47.949441 kubelet[3168]: E0916 04:58:47.949366 3168 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:58:47.949441 kubelet[3168]: W0916 04:58:47.949373 3168 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:58:47.949441 kubelet[3168]: E0916 04:58:47.949382 3168 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:58:47.949520 kubelet[3168]: E0916 04:58:47.949514 3168 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:58:47.949520 kubelet[3168]: W0916 04:58:47.949519 3168 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:58:47.949573 kubelet[3168]: E0916 04:58:47.949526 3168 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 04:58:47.950066 kubelet[3168]: E0916 04:58:47.949923 3168 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:58:47.950066 kubelet[3168]: W0916 04:58:47.949938 3168 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:58:47.950066 kubelet[3168]: E0916 04:58:47.949950 3168 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:58:48.642233 containerd[1722]: time="2025-09-16T04:58:48.642189386Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:58:48.644789 containerd[1722]: time="2025-09-16T04:58:48.644744763Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4446660" Sep 16 04:58:48.648392 containerd[1722]: time="2025-09-16T04:58:48.648231650Z" level=info msg="ImageCreate event name:\"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:58:48.653675 containerd[1722]: time="2025-09-16T04:58:48.653528954Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:58:48.654322 containerd[1722]: time="2025-09-16T04:58:48.654299002Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5939323\" in 1.425455677s" Sep 16 04:58:48.654525 containerd[1722]: time="2025-09-16T04:58:48.654506887Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\"" Sep 16 04:58:48.656966 containerd[1722]: time="2025-09-16T04:58:48.656930952Z" level=info msg="CreateContainer within sandbox \"d42aa0171c8be1326269e061bef28b581797146d52c2e22c5cb5f18ad067d7bf\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 16 04:58:48.677753 containerd[1722]: time="2025-09-16T04:58:48.677724132Z" level=info msg="Container 84517f4cfe99de1ace7ca5003468ae90e2ded571ad328727e84183a5405096f1: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:58:48.691639 containerd[1722]: time="2025-09-16T04:58:48.691609275Z" level=info msg="CreateContainer within sandbox \"d42aa0171c8be1326269e061bef28b581797146d52c2e22c5cb5f18ad067d7bf\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"84517f4cfe99de1ace7ca5003468ae90e2ded571ad328727e84183a5405096f1\"" Sep 16 04:58:48.692198 containerd[1722]: time="2025-09-16T04:58:48.692173257Z" level=info msg="StartContainer for \"84517f4cfe99de1ace7ca5003468ae90e2ded571ad328727e84183a5405096f1\"" Sep 16 04:58:48.693517 containerd[1722]: time="2025-09-16T04:58:48.693491719Z" level=info msg="connecting to shim 84517f4cfe99de1ace7ca5003468ae90e2ded571ad328727e84183a5405096f1" address="unix:///run/containerd/s/760a6db2185b92df0b8aa148d55f15b5ed7183180227cb3a6a2c653f26ff4b7a" protocol=ttrpc version=3 Sep 16 04:58:48.716130 systemd[1]: Started cri-containerd-84517f4cfe99de1ace7ca5003468ae90e2ded571ad328727e84183a5405096f1.scope - libcontainer container 
84517f4cfe99de1ace7ca5003468ae90e2ded571ad328727e84183a5405096f1. Sep 16 04:58:48.751839 systemd[1]: cri-containerd-84517f4cfe99de1ace7ca5003468ae90e2ded571ad328727e84183a5405096f1.scope: Deactivated successfully. Sep 16 04:58:48.753613 containerd[1722]: time="2025-09-16T04:58:48.753572615Z" level=info msg="StartContainer for \"84517f4cfe99de1ace7ca5003468ae90e2ded571ad328727e84183a5405096f1\" returns successfully" Sep 16 04:58:48.759567 containerd[1722]: time="2025-09-16T04:58:48.759535118Z" level=info msg="received exit event container_id:\"84517f4cfe99de1ace7ca5003468ae90e2ded571ad328727e84183a5405096f1\" id:\"84517f4cfe99de1ace7ca5003468ae90e2ded571ad328727e84183a5405096f1\" pid:3845 exited_at:{seconds:1757998728 nanos:757142444}" Sep 16 04:58:48.759720 containerd[1722]: time="2025-09-16T04:58:48.759701592Z" level=info msg="TaskExit event in podsandbox handler container_id:\"84517f4cfe99de1ace7ca5003468ae90e2ded571ad328727e84183a5405096f1\" id:\"84517f4cfe99de1ace7ca5003468ae90e2ded571ad328727e84183a5405096f1\" pid:3845 exited_at:{seconds:1757998728 nanos:757142444}" Sep 16 04:58:48.791345 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-84517f4cfe99de1ace7ca5003468ae90e2ded571ad328727e84183a5405096f1-rootfs.mount: Deactivated successfully. 
Sep 16 04:58:48.793826 kubelet[3168]: E0916 04:58:48.793760 3168 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-f7dxh" podUID="8024408a-1213-49b8-a3b9-01ab011892e6" Sep 16 04:58:48.877905 kubelet[3168]: I0916 04:58:48.877742 3168 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 16 04:58:48.922492 kubelet[3168]: I0916 04:58:48.893386 3168 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-cd77d98f8-qkb9s" podStartSLOduration=2.730828455 podStartE2EDuration="4.893367088s" podCreationTimestamp="2025-09-16 04:58:44 +0000 UTC" firstStartedPulling="2025-09-16 04:58:45.065834718 +0000 UTC m=+19.367015839" lastFinishedPulling="2025-09-16 04:58:47.228373343 +0000 UTC m=+21.529554472" observedRunningTime="2025-09-16 04:58:47.889176542 +0000 UTC m=+22.190357671" watchObservedRunningTime="2025-09-16 04:58:48.893367088 +0000 UTC m=+23.194548219" Sep 16 04:58:50.793154 kubelet[3168]: E0916 04:58:50.793096 3168 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-f7dxh" podUID="8024408a-1213-49b8-a3b9-01ab011892e6" Sep 16 04:58:50.887491 containerd[1722]: time="2025-09-16T04:58:50.887405748Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Sep 16 04:58:52.793316 kubelet[3168]: E0916 04:58:52.793270 3168 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" 
pod="calico-system/csi-node-driver-f7dxh" podUID="8024408a-1213-49b8-a3b9-01ab011892e6" Sep 16 04:58:54.793424 kubelet[3168]: E0916 04:58:54.793381 3168 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-f7dxh" podUID="8024408a-1213-49b8-a3b9-01ab011892e6" Sep 16 04:58:55.834121 kubelet[3168]: I0916 04:58:55.834091 3168 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 16 04:58:56.053306 containerd[1722]: time="2025-09-16T04:58:56.053264921Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:58:56.056983 containerd[1722]: time="2025-09-16T04:58:56.056794972Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=70440613" Sep 16 04:58:56.059710 containerd[1722]: time="2025-09-16T04:58:56.059684806Z" level=info msg="ImageCreate event name:\"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:58:56.064304 containerd[1722]: time="2025-09-16T04:58:56.064274705Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:58:56.064770 containerd[1722]: time="2025-09-16T04:58:56.064748719Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"71933316\" in 5.177299413s" Sep 16 
04:58:56.064850 containerd[1722]: time="2025-09-16T04:58:56.064837791Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\"" Sep 16 04:58:56.066859 containerd[1722]: time="2025-09-16T04:58:56.066810554Z" level=info msg="CreateContainer within sandbox \"d42aa0171c8be1326269e061bef28b581797146d52c2e22c5cb5f18ad067d7bf\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 16 04:58:56.090318 containerd[1722]: time="2025-09-16T04:58:56.090051413Z" level=info msg="Container 4effe41b0312ebd46a1de4b0d1665b990bc07695fc3dc6b6b6f7b8d8a9cda91f: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:58:56.111629 containerd[1722]: time="2025-09-16T04:58:56.111599952Z" level=info msg="CreateContainer within sandbox \"d42aa0171c8be1326269e061bef28b581797146d52c2e22c5cb5f18ad067d7bf\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"4effe41b0312ebd46a1de4b0d1665b990bc07695fc3dc6b6b6f7b8d8a9cda91f\"" Sep 16 04:58:56.112101 containerd[1722]: time="2025-09-16T04:58:56.112073368Z" level=info msg="StartContainer for \"4effe41b0312ebd46a1de4b0d1665b990bc07695fc3dc6b6b6f7b8d8a9cda91f\"" Sep 16 04:58:56.113581 containerd[1722]: time="2025-09-16T04:58:56.113552175Z" level=info msg="connecting to shim 4effe41b0312ebd46a1de4b0d1665b990bc07695fc3dc6b6b6f7b8d8a9cda91f" address="unix:///run/containerd/s/760a6db2185b92df0b8aa148d55f15b5ed7183180227cb3a6a2c653f26ff4b7a" protocol=ttrpc version=3 Sep 16 04:58:56.133150 systemd[1]: Started cri-containerd-4effe41b0312ebd46a1de4b0d1665b990bc07695fc3dc6b6b6f7b8d8a9cda91f.scope - libcontainer container 4effe41b0312ebd46a1de4b0d1665b990bc07695fc3dc6b6b6f7b8d8a9cda91f. 
Sep 16 04:58:56.169547 containerd[1722]: time="2025-09-16T04:58:56.169518521Z" level=info msg="StartContainer for \"4effe41b0312ebd46a1de4b0d1665b990bc07695fc3dc6b6b6f7b8d8a9cda91f\" returns successfully" Sep 16 04:58:56.793456 kubelet[3168]: E0916 04:58:56.793390 3168 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-f7dxh" podUID="8024408a-1213-49b8-a3b9-01ab011892e6" Sep 16 04:58:58.793417 kubelet[3168]: E0916 04:58:58.793376 3168 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-f7dxh" podUID="8024408a-1213-49b8-a3b9-01ab011892e6" Sep 16 04:59:00.794183 kubelet[3168]: E0916 04:59:00.793295 3168 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-f7dxh" podUID="8024408a-1213-49b8-a3b9-01ab011892e6" Sep 16 04:59:02.793640 kubelet[3168]: E0916 04:59:02.793602 3168 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-f7dxh" podUID="8024408a-1213-49b8-a3b9-01ab011892e6" Sep 16 04:59:02.917365 containerd[1722]: time="2025-09-16T04:59:02.917297627Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in 
/etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 16 04:59:02.919210 systemd[1]: cri-containerd-4effe41b0312ebd46a1de4b0d1665b990bc07695fc3dc6b6b6f7b8d8a9cda91f.scope: Deactivated successfully. Sep 16 04:59:02.920287 systemd[1]: cri-containerd-4effe41b0312ebd46a1de4b0d1665b990bc07695fc3dc6b6b6f7b8d8a9cda91f.scope: Consumed 418ms CPU time, 196.8M memory peak, 171.3M written to disk. Sep 16 04:59:02.921892 containerd[1722]: time="2025-09-16T04:59:02.921856381Z" level=info msg="received exit event container_id:\"4effe41b0312ebd46a1de4b0d1665b990bc07695fc3dc6b6b6f7b8d8a9cda91f\" id:\"4effe41b0312ebd46a1de4b0d1665b990bc07695fc3dc6b6b6f7b8d8a9cda91f\" pid:3909 exited_at:{seconds:1757998742 nanos:921670234}" Sep 16 04:59:02.922727 containerd[1722]: time="2025-09-16T04:59:02.922116206Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4effe41b0312ebd46a1de4b0d1665b990bc07695fc3dc6b6b6f7b8d8a9cda91f\" id:\"4effe41b0312ebd46a1de4b0d1665b990bc07695fc3dc6b6b6f7b8d8a9cda91f\" pid:3909 exited_at:{seconds:1757998742 nanos:921670234}" Sep 16 04:59:02.940526 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4effe41b0312ebd46a1de4b0d1665b990bc07695fc3dc6b6b6f7b8d8a9cda91f-rootfs.mount: Deactivated successfully. Sep 16 04:59:03.008630 kubelet[3168]: I0916 04:59:03.007687 3168 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 16 04:59:03.064365 systemd[1]: Created slice kubepods-burstable-pod33e9df7e_07bb_4b00_8672_4a43dc40d07a.slice - libcontainer container kubepods-burstable-pod33e9df7e_07bb_4b00_8672_4a43dc40d07a.slice. Sep 16 04:59:03.081945 systemd[1]: Created slice kubepods-burstable-pod10e3ae01_e0e2_43e2_bb04_56dda152ea83.slice - libcontainer container kubepods-burstable-pod10e3ae01_e0e2_43e2_bb04_56dda152ea83.slice. 
Sep 16 04:59:03.091326 systemd[1]: Created slice kubepods-besteffort-podd5c57cb5_d83b_4987_b370_b0775dc8ba8b.slice - libcontainer container kubepods-besteffort-podd5c57cb5_d83b_4987_b370_b0775dc8ba8b.slice. Sep 16 04:59:03.096220 systemd[1]: Created slice kubepods-besteffort-podcbd62246_8f86_4b8d_8663_ee4d91537759.slice - libcontainer container kubepods-besteffort-podcbd62246_8f86_4b8d_8663_ee4d91537759.slice. Sep 16 04:59:03.109253 systemd[1]: Created slice kubepods-besteffort-podc39eb9d2_7f98_45ea_8394_a2a170878bca.slice - libcontainer container kubepods-besteffort-podc39eb9d2_7f98_45ea_8394_a2a170878bca.slice. Sep 16 04:59:03.116921 systemd[1]: Created slice kubepods-besteffort-podd189562f_463a_410b_8021_6ce20324af99.slice - libcontainer container kubepods-besteffort-podd189562f_463a_410b_8021_6ce20324af99.slice. Sep 16 04:59:03.126874 systemd[1]: Created slice kubepods-besteffort-pod8bb5af6f_a661_4521_9532_6371a28b796e.slice - libcontainer container kubepods-besteffort-pod8bb5af6f_a661_4521_9532_6371a28b796e.slice. 
Sep 16 04:59:03.149017 kubelet[3168]: I0916 04:59:03.147546 3168 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c39eb9d2-7f98-45ea-8394-a2a170878bca-config\") pod \"goldmane-54d579b49d-5ns66\" (UID: \"c39eb9d2-7f98-45ea-8394-a2a170878bca\") " pod="calico-system/goldmane-54d579b49d-5ns66" Sep 16 04:59:03.149017 kubelet[3168]: I0916 04:59:03.147590 3168 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/c39eb9d2-7f98-45ea-8394-a2a170878bca-goldmane-key-pair\") pod \"goldmane-54d579b49d-5ns66\" (UID: \"c39eb9d2-7f98-45ea-8394-a2a170878bca\") " pod="calico-system/goldmane-54d579b49d-5ns66" Sep 16 04:59:03.149017 kubelet[3168]: I0916 04:59:03.147619 3168 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ff9c\" (UniqueName: \"kubernetes.io/projected/c39eb9d2-7f98-45ea-8394-a2a170878bca-kube-api-access-2ff9c\") pod \"goldmane-54d579b49d-5ns66\" (UID: \"c39eb9d2-7f98-45ea-8394-a2a170878bca\") " pod="calico-system/goldmane-54d579b49d-5ns66" Sep 16 04:59:03.149017 kubelet[3168]: I0916 04:59:03.147644 3168 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/10e3ae01-e0e2-43e2-bb04-56dda152ea83-config-volume\") pod \"coredns-668d6bf9bc-2vrpt\" (UID: \"10e3ae01-e0e2-43e2-bb04-56dda152ea83\") " pod="kube-system/coredns-668d6bf9bc-2vrpt" Sep 16 04:59:03.149017 kubelet[3168]: I0916 04:59:03.147670 3168 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ffcdv\" (UniqueName: \"kubernetes.io/projected/8bb5af6f-a661-4521-9532-6371a28b796e-kube-api-access-ffcdv\") pod \"whisker-776bd7685f-gch6g\" (UID: \"8bb5af6f-a661-4521-9532-6371a28b796e\") " 
pod="calico-system/whisker-776bd7685f-gch6g" Sep 16 04:59:03.149217 kubelet[3168]: I0916 04:59:03.147693 3168 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qvbr\" (UniqueName: \"kubernetes.io/projected/d189562f-463a-410b-8021-6ce20324af99-kube-api-access-8qvbr\") pod \"calico-apiserver-5d86454cc5-clxmf\" (UID: \"d189562f-463a-410b-8021-6ce20324af99\") " pod="calico-apiserver/calico-apiserver-5d86454cc5-clxmf" Sep 16 04:59:03.149217 kubelet[3168]: I0916 04:59:03.147713 3168 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/d5c57cb5-d83b-4987-b370-b0775dc8ba8b-calico-apiserver-certs\") pod \"calico-apiserver-5d86454cc5-22sb8\" (UID: \"d5c57cb5-d83b-4987-b370-b0775dc8ba8b\") " pod="calico-apiserver/calico-apiserver-5d86454cc5-22sb8" Sep 16 04:59:03.149217 kubelet[3168]: I0916 04:59:03.147736 3168 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdwmw\" (UniqueName: \"kubernetes.io/projected/10e3ae01-e0e2-43e2-bb04-56dda152ea83-kube-api-access-kdwmw\") pod \"coredns-668d6bf9bc-2vrpt\" (UID: \"10e3ae01-e0e2-43e2-bb04-56dda152ea83\") " pod="kube-system/coredns-668d6bf9bc-2vrpt" Sep 16 04:59:03.149217 kubelet[3168]: I0916 04:59:03.147759 3168 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cbd62246-8f86-4b8d-8663-ee4d91537759-tigera-ca-bundle\") pod \"calico-kube-controllers-5864779d58-m7sl2\" (UID: \"cbd62246-8f86-4b8d-8663-ee4d91537759\") " pod="calico-system/calico-kube-controllers-5864779d58-m7sl2" Sep 16 04:59:03.149217 kubelet[3168]: I0916 04:59:03.147782 3168 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/8bb5af6f-a661-4521-9532-6371a28b796e-whisker-ca-bundle\") pod \"whisker-776bd7685f-gch6g\" (UID: \"8bb5af6f-a661-4521-9532-6371a28b796e\") " pod="calico-system/whisker-776bd7685f-gch6g" Sep 16 04:59:03.149347 kubelet[3168]: I0916 04:59:03.147807 3168 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c39eb9d2-7f98-45ea-8394-a2a170878bca-goldmane-ca-bundle\") pod \"goldmane-54d579b49d-5ns66\" (UID: \"c39eb9d2-7f98-45ea-8394-a2a170878bca\") " pod="calico-system/goldmane-54d579b49d-5ns66" Sep 16 04:59:03.149347 kubelet[3168]: I0916 04:59:03.147829 3168 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/d189562f-463a-410b-8021-6ce20324af99-calico-apiserver-certs\") pod \"calico-apiserver-5d86454cc5-clxmf\" (UID: \"d189562f-463a-410b-8021-6ce20324af99\") " pod="calico-apiserver/calico-apiserver-5d86454cc5-clxmf" Sep 16 04:59:03.149347 kubelet[3168]: I0916 04:59:03.147864 3168 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/33e9df7e-07bb-4b00-8672-4a43dc40d07a-config-volume\") pod \"coredns-668d6bf9bc-fwjps\" (UID: \"33e9df7e-07bb-4b00-8672-4a43dc40d07a\") " pod="kube-system/coredns-668d6bf9bc-fwjps" Sep 16 04:59:03.149347 kubelet[3168]: I0916 04:59:03.147887 3168 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w669t\" (UniqueName: \"kubernetes.io/projected/33e9df7e-07bb-4b00-8672-4a43dc40d07a-kube-api-access-w669t\") pod \"coredns-668d6bf9bc-fwjps\" (UID: \"33e9df7e-07bb-4b00-8672-4a43dc40d07a\") " pod="kube-system/coredns-668d6bf9bc-fwjps" Sep 16 04:59:03.149347 kubelet[3168]: I0916 04:59:03.147914 3168 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9htpc\" (UniqueName: \"kubernetes.io/projected/cbd62246-8f86-4b8d-8663-ee4d91537759-kube-api-access-9htpc\") pod \"calico-kube-controllers-5864779d58-m7sl2\" (UID: \"cbd62246-8f86-4b8d-8663-ee4d91537759\") " pod="calico-system/calico-kube-controllers-5864779d58-m7sl2" Sep 16 04:59:03.149479 kubelet[3168]: I0916 04:59:03.147944 3168 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/8bb5af6f-a661-4521-9532-6371a28b796e-whisker-backend-key-pair\") pod \"whisker-776bd7685f-gch6g\" (UID: \"8bb5af6f-a661-4521-9532-6371a28b796e\") " pod="calico-system/whisker-776bd7685f-gch6g" Sep 16 04:59:03.149479 kubelet[3168]: I0916 04:59:03.147968 3168 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-knmmj\" (UniqueName: \"kubernetes.io/projected/d5c57cb5-d83b-4987-b370-b0775dc8ba8b-kube-api-access-knmmj\") pod \"calico-apiserver-5d86454cc5-22sb8\" (UID: \"d5c57cb5-d83b-4987-b370-b0775dc8ba8b\") " pod="calico-apiserver/calico-apiserver-5d86454cc5-22sb8" Sep 16 04:59:03.370410 containerd[1722]: time="2025-09-16T04:59:03.370371768Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-fwjps,Uid:33e9df7e-07bb-4b00-8672-4a43dc40d07a,Namespace:kube-system,Attempt:0,}" Sep 16 04:59:03.386919 containerd[1722]: time="2025-09-16T04:59:03.386890614Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-2vrpt,Uid:10e3ae01-e0e2-43e2-bb04-56dda152ea83,Namespace:kube-system,Attempt:0,}" Sep 16 04:59:03.394496 containerd[1722]: time="2025-09-16T04:59:03.394457486Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d86454cc5-22sb8,Uid:d5c57cb5-d83b-4987-b370-b0775dc8ba8b,Namespace:calico-apiserver,Attempt:0,}" Sep 16 04:59:03.400990 containerd[1722]: 
time="2025-09-16T04:59:03.400962443Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5864779d58-m7sl2,Uid:cbd62246-8f86-4b8d-8663-ee4d91537759,Namespace:calico-system,Attempt:0,}" Sep 16 04:59:03.414631 containerd[1722]: time="2025-09-16T04:59:03.414606109Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-5ns66,Uid:c39eb9d2-7f98-45ea-8394-a2a170878bca,Namespace:calico-system,Attempt:0,}" Sep 16 04:59:03.421082 containerd[1722]: time="2025-09-16T04:59:03.421047687Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d86454cc5-clxmf,Uid:d189562f-463a-410b-8021-6ce20324af99,Namespace:calico-apiserver,Attempt:0,}" Sep 16 04:59:03.430574 containerd[1722]: time="2025-09-16T04:59:03.430530986Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-776bd7685f-gch6g,Uid:8bb5af6f-a661-4521-9532-6371a28b796e,Namespace:calico-system,Attempt:0,}" Sep 16 04:59:04.797083 systemd[1]: Created slice kubepods-besteffort-pod8024408a_1213_49b8_a3b9_01ab011892e6.slice - libcontainer container kubepods-besteffort-pod8024408a_1213_49b8_a3b9_01ab011892e6.slice. 
Sep 16 04:59:04.799426 containerd[1722]: time="2025-09-16T04:59:04.799391199Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-f7dxh,Uid:8024408a-1213-49b8-a3b9-01ab011892e6,Namespace:calico-system,Attempt:0,}" Sep 16 04:59:07.648948 containerd[1722]: time="2025-09-16T04:59:07.648886911Z" level=error msg="Failed to destroy network for sandbox \"ffc6889531574e443cf5f1502c34b2099e50bc046fe831d511084089ab280838\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 04:59:07.654078 containerd[1722]: time="2025-09-16T04:59:07.654030218Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-fwjps,Uid:33e9df7e-07bb-4b00-8672-4a43dc40d07a,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ffc6889531574e443cf5f1502c34b2099e50bc046fe831d511084089ab280838\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 04:59:07.654436 kubelet[3168]: E0916 04:59:07.654356 3168 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ffc6889531574e443cf5f1502c34b2099e50bc046fe831d511084089ab280838\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 04:59:07.654709 kubelet[3168]: E0916 04:59:07.654442 3168 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ffc6889531574e443cf5f1502c34b2099e50bc046fe831d511084089ab280838\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file 
or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-fwjps" Sep 16 04:59:07.654709 kubelet[3168]: E0916 04:59:07.654463 3168 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ffc6889531574e443cf5f1502c34b2099e50bc046fe831d511084089ab280838\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-fwjps" Sep 16 04:59:07.654709 kubelet[3168]: E0916 04:59:07.654511 3168 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-fwjps_kube-system(33e9df7e-07bb-4b00-8672-4a43dc40d07a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-fwjps_kube-system(33e9df7e-07bb-4b00-8672-4a43dc40d07a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ffc6889531574e443cf5f1502c34b2099e50bc046fe831d511084089ab280838\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-fwjps" podUID="33e9df7e-07bb-4b00-8672-4a43dc40d07a" Sep 16 04:59:07.656544 containerd[1722]: time="2025-09-16T04:59:07.656463072Z" level=error msg="Failed to destroy network for sandbox \"24580620e0dae3d09c0f4ee10de5d6fc0521063105f068cbfc07727675edb1bd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 04:59:07.658848 containerd[1722]: time="2025-09-16T04:59:07.658816478Z" level=error msg="Failed to destroy network for sandbox 
\"b49d5e3a5a6ffa38112e76c9d93d70ef96f877071825e1f89eec543058a76314\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 04:59:07.660718 containerd[1722]: time="2025-09-16T04:59:07.660558340Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d86454cc5-22sb8,Uid:d5c57cb5-d83b-4987-b370-b0775dc8ba8b,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"24580620e0dae3d09c0f4ee10de5d6fc0521063105f068cbfc07727675edb1bd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 04:59:07.661470 kubelet[3168]: E0916 04:59:07.660944 3168 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"24580620e0dae3d09c0f4ee10de5d6fc0521063105f068cbfc07727675edb1bd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 04:59:07.661470 kubelet[3168]: E0916 04:59:07.661113 3168 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"24580620e0dae3d09c0f4ee10de5d6fc0521063105f068cbfc07727675edb1bd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d86454cc5-22sb8" Sep 16 04:59:07.661470 kubelet[3168]: E0916 04:59:07.661144 3168 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"24580620e0dae3d09c0f4ee10de5d6fc0521063105f068cbfc07727675edb1bd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d86454cc5-22sb8" Sep 16 04:59:07.661615 kubelet[3168]: E0916 04:59:07.661191 3168 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5d86454cc5-22sb8_calico-apiserver(d5c57cb5-d83b-4987-b370-b0775dc8ba8b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5d86454cc5-22sb8_calico-apiserver(d5c57cb5-d83b-4987-b370-b0775dc8ba8b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"24580620e0dae3d09c0f4ee10de5d6fc0521063105f068cbfc07727675edb1bd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5d86454cc5-22sb8" podUID="d5c57cb5-d83b-4987-b370-b0775dc8ba8b" Sep 16 04:59:07.666808 containerd[1722]: time="2025-09-16T04:59:07.666771841Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-2vrpt,Uid:10e3ae01-e0e2-43e2-bb04-56dda152ea83,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b49d5e3a5a6ffa38112e76c9d93d70ef96f877071825e1f89eec543058a76314\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 04:59:07.667277 kubelet[3168]: E0916 04:59:07.667253 3168 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b49d5e3a5a6ffa38112e76c9d93d70ef96f877071825e1f89eec543058a76314\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 04:59:07.667405 kubelet[3168]: E0916 04:59:07.667391 3168 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b49d5e3a5a6ffa38112e76c9d93d70ef96f877071825e1f89eec543058a76314\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-2vrpt" Sep 16 04:59:07.667478 kubelet[3168]: E0916 04:59:07.667467 3168 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b49d5e3a5a6ffa38112e76c9d93d70ef96f877071825e1f89eec543058a76314\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-2vrpt" Sep 16 04:59:07.667586 kubelet[3168]: E0916 04:59:07.667567 3168 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-2vrpt_kube-system(10e3ae01-e0e2-43e2-bb04-56dda152ea83)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-2vrpt_kube-system(10e3ae01-e0e2-43e2-bb04-56dda152ea83)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b49d5e3a5a6ffa38112e76c9d93d70ef96f877071825e1f89eec543058a76314\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-2vrpt" podUID="10e3ae01-e0e2-43e2-bb04-56dda152ea83" Sep 16 04:59:07.680428 containerd[1722]: 
time="2025-09-16T04:59:07.680398414Z" level=error msg="Failed to destroy network for sandbox \"c5f0bf67371f7b2bed0401fa1e16318558251186270c805644e3db37d44b3a14\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 04:59:07.683827 containerd[1722]: time="2025-09-16T04:59:07.683785539Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-776bd7685f-gch6g,Uid:8bb5af6f-a661-4521-9532-6371a28b796e,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c5f0bf67371f7b2bed0401fa1e16318558251186270c805644e3db37d44b3a14\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 04:59:07.684743 kubelet[3168]: E0916 04:59:07.684070 3168 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c5f0bf67371f7b2bed0401fa1e16318558251186270c805644e3db37d44b3a14\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 04:59:07.684743 kubelet[3168]: E0916 04:59:07.684107 3168 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c5f0bf67371f7b2bed0401fa1e16318558251186270c805644e3db37d44b3a14\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-776bd7685f-gch6g" Sep 16 04:59:07.684743 kubelet[3168]: E0916 04:59:07.684127 3168 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc 
error: code = Unknown desc = failed to setup network for sandbox \"c5f0bf67371f7b2bed0401fa1e16318558251186270c805644e3db37d44b3a14\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-776bd7685f-gch6g" Sep 16 04:59:07.684876 kubelet[3168]: E0916 04:59:07.684161 3168 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-776bd7685f-gch6g_calico-system(8bb5af6f-a661-4521-9532-6371a28b796e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-776bd7685f-gch6g_calico-system(8bb5af6f-a661-4521-9532-6371a28b796e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c5f0bf67371f7b2bed0401fa1e16318558251186270c805644e3db37d44b3a14\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-776bd7685f-gch6g" podUID="8bb5af6f-a661-4521-9532-6371a28b796e" Sep 16 04:59:07.688369 containerd[1722]: time="2025-09-16T04:59:07.688339856Z" level=error msg="Failed to destroy network for sandbox \"70a1f302c2ac951f882ac13eec40157764111afa5d605a8d1fc77bf153a1b656\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 04:59:07.693272 containerd[1722]: time="2025-09-16T04:59:07.693216507Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5864779d58-m7sl2,Uid:cbd62246-8f86-4b8d-8663-ee4d91537759,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"70a1f302c2ac951f882ac13eec40157764111afa5d605a8d1fc77bf153a1b656\": plugin type=\"calico\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 04:59:07.693425 kubelet[3168]: E0916 04:59:07.693396 3168 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"70a1f302c2ac951f882ac13eec40157764111afa5d605a8d1fc77bf153a1b656\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 04:59:07.693472 kubelet[3168]: E0916 04:59:07.693439 3168 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"70a1f302c2ac951f882ac13eec40157764111afa5d605a8d1fc77bf153a1b656\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5864779d58-m7sl2" Sep 16 04:59:07.693472 kubelet[3168]: E0916 04:59:07.693458 3168 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"70a1f302c2ac951f882ac13eec40157764111afa5d605a8d1fc77bf153a1b656\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5864779d58-m7sl2" Sep 16 04:59:07.693573 containerd[1722]: time="2025-09-16T04:59:07.693548284Z" level=error msg="Failed to destroy network for sandbox \"3ba3cb3ea9a69b4d9b854e08333ae8eb39142ba70f213bdc46097486ae320fee\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
Sep 16 04:59:07.693666 kubelet[3168]: E0916 04:59:07.693646 3168 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5864779d58-m7sl2_calico-system(cbd62246-8f86-4b8d-8663-ee4d91537759)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5864779d58-m7sl2_calico-system(cbd62246-8f86-4b8d-8663-ee4d91537759)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"70a1f302c2ac951f882ac13eec40157764111afa5d605a8d1fc77bf153a1b656\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5864779d58-m7sl2" podUID="cbd62246-8f86-4b8d-8663-ee4d91537759" Sep 16 04:59:07.694105 containerd[1722]: time="2025-09-16T04:59:07.694067063Z" level=error msg="Failed to destroy network for sandbox \"a3baff35675aba342020585e6b7a526d36122d8fe1cde9a59ce4d44f65d4dff9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 04:59:07.695945 containerd[1722]: time="2025-09-16T04:59:07.695914565Z" level=error msg="Failed to destroy network for sandbox \"438fa0779091fc8dff6ee280507d1a6330ea6b8ccb701461d949eb563079e2a6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 04:59:07.696749 containerd[1722]: time="2025-09-16T04:59:07.696726436Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d86454cc5-clxmf,Uid:d189562f-463a-410b-8021-6ce20324af99,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"3ba3cb3ea9a69b4d9b854e08333ae8eb39142ba70f213bdc46097486ae320fee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 04:59:07.697065 kubelet[3168]: E0916 04:59:07.696935 3168 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3ba3cb3ea9a69b4d9b854e08333ae8eb39142ba70f213bdc46097486ae320fee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 04:59:07.697122 kubelet[3168]: E0916 04:59:07.697093 3168 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3ba3cb3ea9a69b4d9b854e08333ae8eb39142ba70f213bdc46097486ae320fee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d86454cc5-clxmf" Sep 16 04:59:07.697122 kubelet[3168]: E0916 04:59:07.697115 3168 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3ba3cb3ea9a69b4d9b854e08333ae8eb39142ba70f213bdc46097486ae320fee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d86454cc5-clxmf" Sep 16 04:59:07.697337 kubelet[3168]: E0916 04:59:07.697150 3168 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5d86454cc5-clxmf_calico-apiserver(d189562f-463a-410b-8021-6ce20324af99)\" with CreatePodSandboxError: \"Failed to create 
sandbox for pod \\\"calico-apiserver-5d86454cc5-clxmf_calico-apiserver(d189562f-463a-410b-8021-6ce20324af99)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3ba3cb3ea9a69b4d9b854e08333ae8eb39142ba70f213bdc46097486ae320fee\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5d86454cc5-clxmf" podUID="d189562f-463a-410b-8021-6ce20324af99" Sep 16 04:59:07.699910 containerd[1722]: time="2025-09-16T04:59:07.699833527Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-f7dxh,Uid:8024408a-1213-49b8-a3b9-01ab011892e6,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a3baff35675aba342020585e6b7a526d36122d8fe1cde9a59ce4d44f65d4dff9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 04:59:07.700057 kubelet[3168]: E0916 04:59:07.700031 3168 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a3baff35675aba342020585e6b7a526d36122d8fe1cde9a59ce4d44f65d4dff9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 04:59:07.700117 kubelet[3168]: E0916 04:59:07.700069 3168 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a3baff35675aba342020585e6b7a526d36122d8fe1cde9a59ce4d44f65d4dff9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="calico-system/csi-node-driver-f7dxh" Sep 16 04:59:07.700117 kubelet[3168]: E0916 04:59:07.700100 3168 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a3baff35675aba342020585e6b7a526d36122d8fe1cde9a59ce4d44f65d4dff9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-f7dxh" Sep 16 04:59:07.700273 kubelet[3168]: E0916 04:59:07.700138 3168 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-f7dxh_calico-system(8024408a-1213-49b8-a3b9-01ab011892e6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-f7dxh_calico-system(8024408a-1213-49b8-a3b9-01ab011892e6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a3baff35675aba342020585e6b7a526d36122d8fe1cde9a59ce4d44f65d4dff9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-f7dxh" podUID="8024408a-1213-49b8-a3b9-01ab011892e6" Sep 16 04:59:07.702346 containerd[1722]: time="2025-09-16T04:59:07.702314811Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-5ns66,Uid:c39eb9d2-7f98-45ea-8394-a2a170878bca,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"438fa0779091fc8dff6ee280507d1a6330ea6b8ccb701461d949eb563079e2a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 04:59:07.702522 kubelet[3168]: E0916 04:59:07.702498 3168 
log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"438fa0779091fc8dff6ee280507d1a6330ea6b8ccb701461d949eb563079e2a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 04:59:07.702567 kubelet[3168]: E0916 04:59:07.702533 3168 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"438fa0779091fc8dff6ee280507d1a6330ea6b8ccb701461d949eb563079e2a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-5ns66" Sep 16 04:59:07.702601 kubelet[3168]: E0916 04:59:07.702550 3168 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"438fa0779091fc8dff6ee280507d1a6330ea6b8ccb701461d949eb563079e2a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-5ns66" Sep 16 04:59:07.702626 kubelet[3168]: E0916 04:59:07.702601 3168 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-54d579b49d-5ns66_calico-system(c39eb9d2-7f98-45ea-8394-a2a170878bca)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-54d579b49d-5ns66_calico-system(c39eb9d2-7f98-45ea-8394-a2a170878bca)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"438fa0779091fc8dff6ee280507d1a6330ea6b8ccb701461d949eb563079e2a6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-5ns66" podUID="c39eb9d2-7f98-45ea-8394-a2a170878bca" Sep 16 04:59:07.925317 containerd[1722]: time="2025-09-16T04:59:07.924299527Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 16 04:59:08.323479 systemd[1]: run-netns-cni\x2d537badec\x2df218\x2d3e17\x2dd42e\x2d9517acc8f8b4.mount: Deactivated successfully. Sep 16 04:59:08.324033 systemd[1]: run-netns-cni\x2dbc85c7aa\x2dc460\x2d4758\x2dbdcd\x2d9b8060c19d71.mount: Deactivated successfully. Sep 16 04:59:08.324098 systemd[1]: run-netns-cni\x2d04e5a167\x2d0894\x2db2d3\x2d6742\x2d49620a9fce95.mount: Deactivated successfully. Sep 16 04:59:08.324148 systemd[1]: run-netns-cni\x2d22f0bf4f\x2d28cd\x2d2441\x2dfb6f\x2d23b824dc6281.mount: Deactivated successfully. Sep 16 04:59:08.324200 systemd[1]: run-netns-cni\x2df7e17a18\x2d462d\x2deec7\x2dfda9\x2d4b4a9686272e.mount: Deactivated successfully. Sep 16 04:59:08.324250 systemd[1]: run-netns-cni\x2d6a780e16\x2d9985\x2d1749\x2daca5\x2df9b95cad0790.mount: Deactivated successfully. Sep 16 04:59:08.324304 systemd[1]: run-netns-cni\x2d554629cc\x2d1476\x2dacbe\x2d8b89\x2d4c942b66bdad.mount: Deactivated successfully. Sep 16 04:59:12.959684 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3900484541.mount: Deactivated successfully. 
Sep 16 04:59:12.982924 containerd[1722]: time="2025-09-16T04:59:12.982880864Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:59:12.985132 containerd[1722]: time="2025-09-16T04:59:12.985102129Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=157078339" Sep 16 04:59:12.990010 containerd[1722]: time="2025-09-16T04:59:12.988182525Z" level=info msg="ImageCreate event name:\"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:59:12.993948 containerd[1722]: time="2025-09-16T04:59:12.993911108Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:59:12.995199 containerd[1722]: time="2025-09-16T04:59:12.995165954Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"157078201\" in 5.070829417s" Sep 16 04:59:12.995289 containerd[1722]: time="2025-09-16T04:59:12.995203937Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\"" Sep 16 04:59:13.005557 containerd[1722]: time="2025-09-16T04:59:13.005357124Z" level=info msg="CreateContainer within sandbox \"d42aa0171c8be1326269e061bef28b581797146d52c2e22c5cb5f18ad067d7bf\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 16 04:59:13.032020 containerd[1722]: time="2025-09-16T04:59:13.029745157Z" level=info msg="Container 
efd5efac5937e6a7f1c537a5fab96fff14641f555ce5322a34c96727a30522df: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:59:13.049414 containerd[1722]: time="2025-09-16T04:59:13.049385845Z" level=info msg="CreateContainer within sandbox \"d42aa0171c8be1326269e061bef28b581797146d52c2e22c5cb5f18ad067d7bf\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"efd5efac5937e6a7f1c537a5fab96fff14641f555ce5322a34c96727a30522df\"" Sep 16 04:59:13.051523 containerd[1722]: time="2025-09-16T04:59:13.050108932Z" level=info msg="StartContainer for \"efd5efac5937e6a7f1c537a5fab96fff14641f555ce5322a34c96727a30522df\"" Sep 16 04:59:13.051832 containerd[1722]: time="2025-09-16T04:59:13.051804306Z" level=info msg="connecting to shim efd5efac5937e6a7f1c537a5fab96fff14641f555ce5322a34c96727a30522df" address="unix:///run/containerd/s/760a6db2185b92df0b8aa148d55f15b5ed7183180227cb3a6a2c653f26ff4b7a" protocol=ttrpc version=3 Sep 16 04:59:13.068125 systemd[1]: Started cri-containerd-efd5efac5937e6a7f1c537a5fab96fff14641f555ce5322a34c96727a30522df.scope - libcontainer container efd5efac5937e6a7f1c537a5fab96fff14641f555ce5322a34c96727a30522df. Sep 16 04:59:13.105253 containerd[1722]: time="2025-09-16T04:59:13.105225576Z" level=info msg="StartContainer for \"efd5efac5937e6a7f1c537a5fab96fff14641f555ce5322a34c96727a30522df\" returns successfully" Sep 16 04:59:13.440592 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 16 04:59:13.440698 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Sep 16 04:59:13.704223 kubelet[3168]: I0916 04:59:13.704101 3168 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/8bb5af6f-a661-4521-9532-6371a28b796e-whisker-backend-key-pair\") pod \"8bb5af6f-a661-4521-9532-6371a28b796e\" (UID: \"8bb5af6f-a661-4521-9532-6371a28b796e\") " Sep 16 04:59:13.704223 kubelet[3168]: I0916 04:59:13.704145 3168 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ffcdv\" (UniqueName: \"kubernetes.io/projected/8bb5af6f-a661-4521-9532-6371a28b796e-kube-api-access-ffcdv\") pod \"8bb5af6f-a661-4521-9532-6371a28b796e\" (UID: \"8bb5af6f-a661-4521-9532-6371a28b796e\") " Sep 16 04:59:13.705238 kubelet[3168]: I0916 04:59:13.704391 3168 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8bb5af6f-a661-4521-9532-6371a28b796e-whisker-ca-bundle\") pod \"8bb5af6f-a661-4521-9532-6371a28b796e\" (UID: \"8bb5af6f-a661-4521-9532-6371a28b796e\") " Sep 16 04:59:13.705238 kubelet[3168]: I0916 04:59:13.704674 3168 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8bb5af6f-a661-4521-9532-6371a28b796e-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "8bb5af6f-a661-4521-9532-6371a28b796e" (UID: "8bb5af6f-a661-4521-9532-6371a28b796e"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 16 04:59:13.707220 kubelet[3168]: I0916 04:59:13.707190 3168 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8bb5af6f-a661-4521-9532-6371a28b796e-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "8bb5af6f-a661-4521-9532-6371a28b796e" (UID: "8bb5af6f-a661-4521-9532-6371a28b796e"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 16 04:59:13.707444 kubelet[3168]: I0916 04:59:13.707426 3168 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8bb5af6f-a661-4521-9532-6371a28b796e-kube-api-access-ffcdv" (OuterVolumeSpecName: "kube-api-access-ffcdv") pod "8bb5af6f-a661-4521-9532-6371a28b796e" (UID: "8bb5af6f-a661-4521-9532-6371a28b796e"). InnerVolumeSpecName "kube-api-access-ffcdv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 16 04:59:13.798317 systemd[1]: Removed slice kubepods-besteffort-pod8bb5af6f_a661_4521_9532_6371a28b796e.slice - libcontainer container kubepods-besteffort-pod8bb5af6f_a661_4521_9532_6371a28b796e.slice. Sep 16 04:59:13.805577 kubelet[3168]: I0916 04:59:13.805542 3168 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/8bb5af6f-a661-4521-9532-6371a28b796e-whisker-backend-key-pair\") on node \"ci-4459.0.0-n-f9a9538521\" DevicePath \"\"" Sep 16 04:59:13.805577 kubelet[3168]: I0916 04:59:13.805575 3168 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ffcdv\" (UniqueName: \"kubernetes.io/projected/8bb5af6f-a661-4521-9532-6371a28b796e-kube-api-access-ffcdv\") on node \"ci-4459.0.0-n-f9a9538521\" DevicePath \"\"" Sep 16 04:59:13.805696 kubelet[3168]: I0916 04:59:13.805587 3168 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8bb5af6f-a661-4521-9532-6371a28b796e-whisker-ca-bundle\") on node \"ci-4459.0.0-n-f9a9538521\" DevicePath \"\"" Sep 16 04:59:13.953263 kubelet[3168]: I0916 04:59:13.953071 3168 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-hb4vn" podStartSLOduration=2.327174792 podStartE2EDuration="29.953053817s" podCreationTimestamp="2025-09-16 04:58:44 +0000 UTC" firstStartedPulling="2025-09-16 04:58:45.369804126 +0000 UTC m=+19.670985247" 
lastFinishedPulling="2025-09-16 04:59:12.99568315 +0000 UTC m=+47.296864272" observedRunningTime="2025-09-16 04:59:13.951904551 +0000 UTC m=+48.253085682" watchObservedRunningTime="2025-09-16 04:59:13.953053817 +0000 UTC m=+48.254235022" Sep 16 04:59:13.960389 systemd[1]: var-lib-kubelet-pods-8bb5af6f\x2da661\x2d4521\x2d9532\x2d6371a28b796e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dffcdv.mount: Deactivated successfully. Sep 16 04:59:13.960478 systemd[1]: var-lib-kubelet-pods-8bb5af6f\x2da661\x2d4521\x2d9532\x2d6371a28b796e-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Sep 16 04:59:14.015358 systemd[1]: Created slice kubepods-besteffort-pod5ebc6646_6112_498b_89db_a93a0cde294e.slice - libcontainer container kubepods-besteffort-pod5ebc6646_6112_498b_89db_a93a0cde294e.slice. Sep 16 04:59:14.108212 kubelet[3168]: I0916 04:59:14.108187 3168 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5ebc6646-6112-498b-89db-a93a0cde294e-whisker-ca-bundle\") pod \"whisker-78bfbd7b86-6c2n7\" (UID: \"5ebc6646-6112-498b-89db-a93a0cde294e\") " pod="calico-system/whisker-78bfbd7b86-6c2n7" Sep 16 04:59:14.108212 kubelet[3168]: I0916 04:59:14.108220 3168 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/5ebc6646-6112-498b-89db-a93a0cde294e-whisker-backend-key-pair\") pod \"whisker-78bfbd7b86-6c2n7\" (UID: \"5ebc6646-6112-498b-89db-a93a0cde294e\") " pod="calico-system/whisker-78bfbd7b86-6c2n7" Sep 16 04:59:14.108377 kubelet[3168]: I0916 04:59:14.108239 3168 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjxv9\" (UniqueName: \"kubernetes.io/projected/5ebc6646-6112-498b-89db-a93a0cde294e-kube-api-access-fjxv9\") pod \"whisker-78bfbd7b86-6c2n7\" 
(UID: \"5ebc6646-6112-498b-89db-a93a0cde294e\") " pod="calico-system/whisker-78bfbd7b86-6c2n7" Sep 16 04:59:14.318700 containerd[1722]: time="2025-09-16T04:59:14.318604801Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-78bfbd7b86-6c2n7,Uid:5ebc6646-6112-498b-89db-a93a0cde294e,Namespace:calico-system,Attempt:0,}" Sep 16 04:59:14.443094 systemd-networkd[1578]: cali85c89bc29a8: Link UP Sep 16 04:59:14.443876 systemd-networkd[1578]: cali85c89bc29a8: Gained carrier Sep 16 04:59:14.461123 containerd[1722]: 2025-09-16 04:59:14.343 [INFO][4241] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 16 04:59:14.461123 containerd[1722]: 2025-09-16 04:59:14.350 [INFO][4241] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.0.0--n--f9a9538521-k8s-whisker--78bfbd7b86--6c2n7-eth0 whisker-78bfbd7b86- calico-system 5ebc6646-6112-498b-89db-a93a0cde294e 885 0 2025-09-16 04:59:13 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:78bfbd7b86 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4459.0.0-n-f9a9538521 whisker-78bfbd7b86-6c2n7 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali85c89bc29a8 [] [] }} ContainerID="3402caa4dc3c65d04d8b96519560d215e1da3e729a99fb378bf466116596b485" Namespace="calico-system" Pod="whisker-78bfbd7b86-6c2n7" WorkloadEndpoint="ci--4459.0.0--n--f9a9538521-k8s-whisker--78bfbd7b86--6c2n7-" Sep 16 04:59:14.461123 containerd[1722]: 2025-09-16 04:59:14.350 [INFO][4241] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3402caa4dc3c65d04d8b96519560d215e1da3e729a99fb378bf466116596b485" Namespace="calico-system" Pod="whisker-78bfbd7b86-6c2n7" WorkloadEndpoint="ci--4459.0.0--n--f9a9538521-k8s-whisker--78bfbd7b86--6c2n7-eth0" Sep 16 04:59:14.461123 containerd[1722]: 2025-09-16 04:59:14.370 [INFO][4253] 
ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3402caa4dc3c65d04d8b96519560d215e1da3e729a99fb378bf466116596b485" HandleID="k8s-pod-network.3402caa4dc3c65d04d8b96519560d215e1da3e729a99fb378bf466116596b485" Workload="ci--4459.0.0--n--f9a9538521-k8s-whisker--78bfbd7b86--6c2n7-eth0" Sep 16 04:59:14.461333 containerd[1722]: 2025-09-16 04:59:14.371 [INFO][4253] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3402caa4dc3c65d04d8b96519560d215e1da3e729a99fb378bf466116596b485" HandleID="k8s-pod-network.3402caa4dc3c65d04d8b96519560d215e1da3e729a99fb378bf466116596b485" Workload="ci--4459.0.0--n--f9a9538521-k8s-whisker--78bfbd7b86--6c2n7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f610), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459.0.0-n-f9a9538521", "pod":"whisker-78bfbd7b86-6c2n7", "timestamp":"2025-09-16 04:59:14.370952653 +0000 UTC"}, Hostname:"ci-4459.0.0-n-f9a9538521", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 16 04:59:14.461333 containerd[1722]: 2025-09-16 04:59:14.371 [INFO][4253] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 16 04:59:14.461333 containerd[1722]: 2025-09-16 04:59:14.371 [INFO][4253] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 16 04:59:14.461333 containerd[1722]: 2025-09-16 04:59:14.371 [INFO][4253] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.0.0-n-f9a9538521' Sep 16 04:59:14.461333 containerd[1722]: 2025-09-16 04:59:14.376 [INFO][4253] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3402caa4dc3c65d04d8b96519560d215e1da3e729a99fb378bf466116596b485" host="ci-4459.0.0-n-f9a9538521" Sep 16 04:59:14.461333 containerd[1722]: 2025-09-16 04:59:14.379 [INFO][4253] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.0.0-n-f9a9538521" Sep 16 04:59:14.461333 containerd[1722]: 2025-09-16 04:59:14.384 [INFO][4253] ipam/ipam.go 511: Trying affinity for 192.168.101.128/26 host="ci-4459.0.0-n-f9a9538521" Sep 16 04:59:14.461333 containerd[1722]: 2025-09-16 04:59:14.386 [INFO][4253] ipam/ipam.go 158: Attempting to load block cidr=192.168.101.128/26 host="ci-4459.0.0-n-f9a9538521" Sep 16 04:59:14.461333 containerd[1722]: 2025-09-16 04:59:14.388 [INFO][4253] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.101.128/26 host="ci-4459.0.0-n-f9a9538521" Sep 16 04:59:14.461551 containerd[1722]: 2025-09-16 04:59:14.388 [INFO][4253] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.101.128/26 handle="k8s-pod-network.3402caa4dc3c65d04d8b96519560d215e1da3e729a99fb378bf466116596b485" host="ci-4459.0.0-n-f9a9538521" Sep 16 04:59:14.461551 containerd[1722]: 2025-09-16 04:59:14.389 [INFO][4253] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.3402caa4dc3c65d04d8b96519560d215e1da3e729a99fb378bf466116596b485 Sep 16 04:59:14.461551 containerd[1722]: 2025-09-16 04:59:14.395 [INFO][4253] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.101.128/26 handle="k8s-pod-network.3402caa4dc3c65d04d8b96519560d215e1da3e729a99fb378bf466116596b485" host="ci-4459.0.0-n-f9a9538521" Sep 16 04:59:14.461551 containerd[1722]: 2025-09-16 04:59:14.401 [INFO][4253] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.101.129/26] block=192.168.101.128/26 handle="k8s-pod-network.3402caa4dc3c65d04d8b96519560d215e1da3e729a99fb378bf466116596b485" host="ci-4459.0.0-n-f9a9538521" Sep 16 04:59:14.461551 containerd[1722]: 2025-09-16 04:59:14.402 [INFO][4253] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.101.129/26] handle="k8s-pod-network.3402caa4dc3c65d04d8b96519560d215e1da3e729a99fb378bf466116596b485" host="ci-4459.0.0-n-f9a9538521" Sep 16 04:59:14.461551 containerd[1722]: 2025-09-16 04:59:14.402 [INFO][4253] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 16 04:59:14.461551 containerd[1722]: 2025-09-16 04:59:14.402 [INFO][4253] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.101.129/26] IPv6=[] ContainerID="3402caa4dc3c65d04d8b96519560d215e1da3e729a99fb378bf466116596b485" HandleID="k8s-pod-network.3402caa4dc3c65d04d8b96519560d215e1da3e729a99fb378bf466116596b485" Workload="ci--4459.0.0--n--f9a9538521-k8s-whisker--78bfbd7b86--6c2n7-eth0" Sep 16 04:59:14.461702 containerd[1722]: 2025-09-16 04:59:14.404 [INFO][4241] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3402caa4dc3c65d04d8b96519560d215e1da3e729a99fb378bf466116596b485" Namespace="calico-system" Pod="whisker-78bfbd7b86-6c2n7" WorkloadEndpoint="ci--4459.0.0--n--f9a9538521-k8s-whisker--78bfbd7b86--6c2n7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.0.0--n--f9a9538521-k8s-whisker--78bfbd7b86--6c2n7-eth0", GenerateName:"whisker-78bfbd7b86-", Namespace:"calico-system", SelfLink:"", UID:"5ebc6646-6112-498b-89db-a93a0cde294e", ResourceVersion:"885", Generation:0, CreationTimestamp:time.Date(2025, time.September, 16, 4, 59, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"78bfbd7b86", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.0.0-n-f9a9538521", ContainerID:"", Pod:"whisker-78bfbd7b86-6c2n7", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.101.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali85c89bc29a8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 16 04:59:14.461702 containerd[1722]: 2025-09-16 04:59:14.404 [INFO][4241] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.101.129/32] ContainerID="3402caa4dc3c65d04d8b96519560d215e1da3e729a99fb378bf466116596b485" Namespace="calico-system" Pod="whisker-78bfbd7b86-6c2n7" WorkloadEndpoint="ci--4459.0.0--n--f9a9538521-k8s-whisker--78bfbd7b86--6c2n7-eth0" Sep 16 04:59:14.461796 containerd[1722]: 2025-09-16 04:59:14.405 [INFO][4241] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali85c89bc29a8 ContainerID="3402caa4dc3c65d04d8b96519560d215e1da3e729a99fb378bf466116596b485" Namespace="calico-system" Pod="whisker-78bfbd7b86-6c2n7" WorkloadEndpoint="ci--4459.0.0--n--f9a9538521-k8s-whisker--78bfbd7b86--6c2n7-eth0" Sep 16 04:59:14.461796 containerd[1722]: 2025-09-16 04:59:14.444 [INFO][4241] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3402caa4dc3c65d04d8b96519560d215e1da3e729a99fb378bf466116596b485" Namespace="calico-system" Pod="whisker-78bfbd7b86-6c2n7" WorkloadEndpoint="ci--4459.0.0--n--f9a9538521-k8s-whisker--78bfbd7b86--6c2n7-eth0" Sep 16 04:59:14.461852 containerd[1722]: 2025-09-16 04:59:14.444 [INFO][4241] cni-plugin/k8s.go 
446: Added Mac, interface name, and active container ID to endpoint ContainerID="3402caa4dc3c65d04d8b96519560d215e1da3e729a99fb378bf466116596b485" Namespace="calico-system" Pod="whisker-78bfbd7b86-6c2n7" WorkloadEndpoint="ci--4459.0.0--n--f9a9538521-k8s-whisker--78bfbd7b86--6c2n7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.0.0--n--f9a9538521-k8s-whisker--78bfbd7b86--6c2n7-eth0", GenerateName:"whisker-78bfbd7b86-", Namespace:"calico-system", SelfLink:"", UID:"5ebc6646-6112-498b-89db-a93a0cde294e", ResourceVersion:"885", Generation:0, CreationTimestamp:time.Date(2025, time.September, 16, 4, 59, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"78bfbd7b86", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.0.0-n-f9a9538521", ContainerID:"3402caa4dc3c65d04d8b96519560d215e1da3e729a99fb378bf466116596b485", Pod:"whisker-78bfbd7b86-6c2n7", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.101.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali85c89bc29a8", MAC:"92:de:f1:bf:0c:61", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 16 04:59:14.461914 containerd[1722]: 2025-09-16 04:59:14.458 [INFO][4241] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="3402caa4dc3c65d04d8b96519560d215e1da3e729a99fb378bf466116596b485" Namespace="calico-system" Pod="whisker-78bfbd7b86-6c2n7" WorkloadEndpoint="ci--4459.0.0--n--f9a9538521-k8s-whisker--78bfbd7b86--6c2n7-eth0" Sep 16 04:59:14.498875 containerd[1722]: time="2025-09-16T04:59:14.498409570Z" level=info msg="connecting to shim 3402caa4dc3c65d04d8b96519560d215e1da3e729a99fb378bf466116596b485" address="unix:///run/containerd/s/d4564b3573133b2a8baec45dd72c07cb0de944f5bba0a4e7610dcd24b0b06b16" namespace=k8s.io protocol=ttrpc version=3 Sep 16 04:59:14.524160 systemd[1]: Started cri-containerd-3402caa4dc3c65d04d8b96519560d215e1da3e729a99fb378bf466116596b485.scope - libcontainer container 3402caa4dc3c65d04d8b96519560d215e1da3e729a99fb378bf466116596b485. Sep 16 04:59:14.562213 containerd[1722]: time="2025-09-16T04:59:14.562185226Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-78bfbd7b86-6c2n7,Uid:5ebc6646-6112-498b-89db-a93a0cde294e,Namespace:calico-system,Attempt:0,} returns sandbox id \"3402caa4dc3c65d04d8b96519560d215e1da3e729a99fb378bf466116596b485\"" Sep 16 04:59:14.563407 containerd[1722]: time="2025-09-16T04:59:14.563379662Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 16 04:59:15.331153 systemd-networkd[1578]: vxlan.calico: Link UP Sep 16 04:59:15.331161 systemd-networkd[1578]: vxlan.calico: Gained carrier Sep 16 04:59:15.800519 kubelet[3168]: I0916 04:59:15.800474 3168 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8bb5af6f-a661-4521-9532-6371a28b796e" path="/var/lib/kubelet/pods/8bb5af6f-a661-4521-9532-6371a28b796e/volumes" Sep 16 04:59:15.809153 systemd-networkd[1578]: cali85c89bc29a8: Gained IPv6LL Sep 16 04:59:15.894305 containerd[1722]: time="2025-09-16T04:59:15.894271482Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:59:15.897457 containerd[1722]: 
time="2025-09-16T04:59:15.897325115Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4661291" Sep 16 04:59:15.900607 containerd[1722]: time="2025-09-16T04:59:15.900574466Z" level=info msg="ImageCreate event name:\"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:59:15.904621 containerd[1722]: time="2025-09-16T04:59:15.904207720Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:59:15.904621 containerd[1722]: time="2025-09-16T04:59:15.904529870Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"6153986\" in 1.34102947s" Sep 16 04:59:15.904621 containerd[1722]: time="2025-09-16T04:59:15.904557277Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\"" Sep 16 04:59:15.906724 containerd[1722]: time="2025-09-16T04:59:15.906691814Z" level=info msg="CreateContainer within sandbox \"3402caa4dc3c65d04d8b96519560d215e1da3e729a99fb378bf466116596b485\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 16 04:59:15.923130 containerd[1722]: time="2025-09-16T04:59:15.923102120Z" level=info msg="Container 45560cfd9366916b2e137b3c2f8026c1d2899264b1701a9f0bfab302a9b8944a: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:59:15.940062 containerd[1722]: time="2025-09-16T04:59:15.940037372Z" level=info msg="CreateContainer within sandbox 
\"3402caa4dc3c65d04d8b96519560d215e1da3e729a99fb378bf466116596b485\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"45560cfd9366916b2e137b3c2f8026c1d2899264b1701a9f0bfab302a9b8944a\"" Sep 16 04:59:15.941408 containerd[1722]: time="2025-09-16T04:59:15.940675422Z" level=info msg="StartContainer for \"45560cfd9366916b2e137b3c2f8026c1d2899264b1701a9f0bfab302a9b8944a\"" Sep 16 04:59:15.941734 containerd[1722]: time="2025-09-16T04:59:15.941698186Z" level=info msg="connecting to shim 45560cfd9366916b2e137b3c2f8026c1d2899264b1701a9f0bfab302a9b8944a" address="unix:///run/containerd/s/d4564b3573133b2a8baec45dd72c07cb0de944f5bba0a4e7610dcd24b0b06b16" protocol=ttrpc version=3 Sep 16 04:59:15.966134 systemd[1]: Started cri-containerd-45560cfd9366916b2e137b3c2f8026c1d2899264b1701a9f0bfab302a9b8944a.scope - libcontainer container 45560cfd9366916b2e137b3c2f8026c1d2899264b1701a9f0bfab302a9b8944a. Sep 16 04:59:16.010062 containerd[1722]: time="2025-09-16T04:59:16.010026157Z" level=info msg="StartContainer for \"45560cfd9366916b2e137b3c2f8026c1d2899264b1701a9f0bfab302a9b8944a\" returns successfully" Sep 16 04:59:16.012885 containerd[1722]: time="2025-09-16T04:59:16.012859616Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 16 04:59:17.025139 systemd-networkd[1578]: vxlan.calico: Gained IPv6LL Sep 16 04:59:17.931595 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount422645812.mount: Deactivated successfully. 
Sep 16 04:59:17.985003 containerd[1722]: time="2025-09-16T04:59:17.984944827Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:59:17.987585 containerd[1722]: time="2025-09-16T04:59:17.987548939Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=33085545" Sep 16 04:59:17.991226 containerd[1722]: time="2025-09-16T04:59:17.991169566Z" level=info msg="ImageCreate event name:\"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:59:17.995180 containerd[1722]: time="2025-09-16T04:59:17.995147297Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:59:17.995715 containerd[1722]: time="2025-09-16T04:59:17.995680348Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"33085375\" in 1.982790136s" Sep 16 04:59:17.995773 containerd[1722]: time="2025-09-16T04:59:17.995736903Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\"" Sep 16 04:59:17.998024 containerd[1722]: time="2025-09-16T04:59:17.997724397Z" level=info msg="CreateContainer within sandbox \"3402caa4dc3c65d04d8b96519560d215e1da3e729a99fb378bf466116596b485\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 16 04:59:18.018441 
containerd[1722]: time="2025-09-16T04:59:18.018411286Z" level=info msg="Container a85509c359bb5aa45ece7253c2033e7d85d8e9b1c228e1b564446226a392d4c3: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:59:18.035198 containerd[1722]: time="2025-09-16T04:59:18.035174355Z" level=info msg="CreateContainer within sandbox \"3402caa4dc3c65d04d8b96519560d215e1da3e729a99fb378bf466116596b485\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"a85509c359bb5aa45ece7253c2033e7d85d8e9b1c228e1b564446226a392d4c3\"" Sep 16 04:59:18.035681 containerd[1722]: time="2025-09-16T04:59:18.035575941Z" level=info msg="StartContainer for \"a85509c359bb5aa45ece7253c2033e7d85d8e9b1c228e1b564446226a392d4c3\"" Sep 16 04:59:18.036944 containerd[1722]: time="2025-09-16T04:59:18.036912448Z" level=info msg="connecting to shim a85509c359bb5aa45ece7253c2033e7d85d8e9b1c228e1b564446226a392d4c3" address="unix:///run/containerd/s/d4564b3573133b2a8baec45dd72c07cb0de944f5bba0a4e7610dcd24b0b06b16" protocol=ttrpc version=3 Sep 16 04:59:18.059128 systemd[1]: Started cri-containerd-a85509c359bb5aa45ece7253c2033e7d85d8e9b1c228e1b564446226a392d4c3.scope - libcontainer container a85509c359bb5aa45ece7253c2033e7d85d8e9b1c228e1b564446226a392d4c3. 
Sep 16 04:59:18.143584 containerd[1722]: time="2025-09-16T04:59:18.143536699Z" level=info msg="StartContainer for \"a85509c359bb5aa45ece7253c2033e7d85d8e9b1c228e1b564446226a392d4c3\" returns successfully" Sep 16 04:59:18.793981 containerd[1722]: time="2025-09-16T04:59:18.793934715Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d86454cc5-clxmf,Uid:d189562f-463a-410b-8021-6ce20324af99,Namespace:calico-apiserver,Attempt:0,}" Sep 16 04:59:18.886636 systemd-networkd[1578]: cali5d41e13aac9: Link UP Sep 16 04:59:18.886838 systemd-networkd[1578]: cali5d41e13aac9: Gained carrier Sep 16 04:59:18.903084 containerd[1722]: 2025-09-16 04:59:18.829 [INFO][4587] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.0.0--n--f9a9538521-k8s-calico--apiserver--5d86454cc5--clxmf-eth0 calico-apiserver-5d86454cc5- calico-apiserver d189562f-463a-410b-8021-6ce20324af99 816 0 2025-09-16 04:58:41 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5d86454cc5 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4459.0.0-n-f9a9538521 calico-apiserver-5d86454cc5-clxmf eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali5d41e13aac9 [] [] }} ContainerID="2b18c969edf53b8e73d8656ef23261dcb23809b405e092a0458b1349214e06d9" Namespace="calico-apiserver" Pod="calico-apiserver-5d86454cc5-clxmf" WorkloadEndpoint="ci--4459.0.0--n--f9a9538521-k8s-calico--apiserver--5d86454cc5--clxmf-" Sep 16 04:59:18.903084 containerd[1722]: 2025-09-16 04:59:18.829 [INFO][4587] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2b18c969edf53b8e73d8656ef23261dcb23809b405e092a0458b1349214e06d9" Namespace="calico-apiserver" Pod="calico-apiserver-5d86454cc5-clxmf" 
WorkloadEndpoint="ci--4459.0.0--n--f9a9538521-k8s-calico--apiserver--5d86454cc5--clxmf-eth0" Sep 16 04:59:18.903084 containerd[1722]: 2025-09-16 04:59:18.852 [INFO][4599] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2b18c969edf53b8e73d8656ef23261dcb23809b405e092a0458b1349214e06d9" HandleID="k8s-pod-network.2b18c969edf53b8e73d8656ef23261dcb23809b405e092a0458b1349214e06d9" Workload="ci--4459.0.0--n--f9a9538521-k8s-calico--apiserver--5d86454cc5--clxmf-eth0" Sep 16 04:59:18.903281 containerd[1722]: 2025-09-16 04:59:18.853 [INFO][4599] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2b18c969edf53b8e73d8656ef23261dcb23809b405e092a0458b1349214e06d9" HandleID="k8s-pod-network.2b18c969edf53b8e73d8656ef23261dcb23809b405e092a0458b1349214e06d9" Workload="ci--4459.0.0--n--f9a9538521-k8s-calico--apiserver--5d86454cc5--clxmf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f870), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4459.0.0-n-f9a9538521", "pod":"calico-apiserver-5d86454cc5-clxmf", "timestamp":"2025-09-16 04:59:18.852954219 +0000 UTC"}, Hostname:"ci-4459.0.0-n-f9a9538521", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 16 04:59:18.903281 containerd[1722]: 2025-09-16 04:59:18.853 [INFO][4599] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 16 04:59:18.903281 containerd[1722]: 2025-09-16 04:59:18.853 [INFO][4599] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 16 04:59:18.903281 containerd[1722]: 2025-09-16 04:59:18.853 [INFO][4599] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.0.0-n-f9a9538521' Sep 16 04:59:18.903281 containerd[1722]: 2025-09-16 04:59:18.857 [INFO][4599] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2b18c969edf53b8e73d8656ef23261dcb23809b405e092a0458b1349214e06d9" host="ci-4459.0.0-n-f9a9538521" Sep 16 04:59:18.903281 containerd[1722]: 2025-09-16 04:59:18.860 [INFO][4599] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.0.0-n-f9a9538521" Sep 16 04:59:18.903281 containerd[1722]: 2025-09-16 04:59:18.864 [INFO][4599] ipam/ipam.go 511: Trying affinity for 192.168.101.128/26 host="ci-4459.0.0-n-f9a9538521" Sep 16 04:59:18.903281 containerd[1722]: 2025-09-16 04:59:18.865 [INFO][4599] ipam/ipam.go 158: Attempting to load block cidr=192.168.101.128/26 host="ci-4459.0.0-n-f9a9538521" Sep 16 04:59:18.903281 containerd[1722]: 2025-09-16 04:59:18.867 [INFO][4599] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.101.128/26 host="ci-4459.0.0-n-f9a9538521" Sep 16 04:59:18.903503 containerd[1722]: 2025-09-16 04:59:18.867 [INFO][4599] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.101.128/26 handle="k8s-pod-network.2b18c969edf53b8e73d8656ef23261dcb23809b405e092a0458b1349214e06d9" host="ci-4459.0.0-n-f9a9538521" Sep 16 04:59:18.903503 containerd[1722]: 2025-09-16 04:59:18.868 [INFO][4599] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.2b18c969edf53b8e73d8656ef23261dcb23809b405e092a0458b1349214e06d9 Sep 16 04:59:18.903503 containerd[1722]: 2025-09-16 04:59:18.873 [INFO][4599] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.101.128/26 handle="k8s-pod-network.2b18c969edf53b8e73d8656ef23261dcb23809b405e092a0458b1349214e06d9" host="ci-4459.0.0-n-f9a9538521" Sep 16 04:59:18.903503 containerd[1722]: 2025-09-16 04:59:18.881 [INFO][4599] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.101.130/26] block=192.168.101.128/26 handle="k8s-pod-network.2b18c969edf53b8e73d8656ef23261dcb23809b405e092a0458b1349214e06d9" host="ci-4459.0.0-n-f9a9538521" Sep 16 04:59:18.903503 containerd[1722]: 2025-09-16 04:59:18.881 [INFO][4599] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.101.130/26] handle="k8s-pod-network.2b18c969edf53b8e73d8656ef23261dcb23809b405e092a0458b1349214e06d9" host="ci-4459.0.0-n-f9a9538521" Sep 16 04:59:18.903503 containerd[1722]: 2025-09-16 04:59:18.881 [INFO][4599] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 16 04:59:18.903503 containerd[1722]: 2025-09-16 04:59:18.881 [INFO][4599] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.101.130/26] IPv6=[] ContainerID="2b18c969edf53b8e73d8656ef23261dcb23809b405e092a0458b1349214e06d9" HandleID="k8s-pod-network.2b18c969edf53b8e73d8656ef23261dcb23809b405e092a0458b1349214e06d9" Workload="ci--4459.0.0--n--f9a9538521-k8s-calico--apiserver--5d86454cc5--clxmf-eth0" Sep 16 04:59:18.903646 containerd[1722]: 2025-09-16 04:59:18.882 [INFO][4587] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2b18c969edf53b8e73d8656ef23261dcb23809b405e092a0458b1349214e06d9" Namespace="calico-apiserver" Pod="calico-apiserver-5d86454cc5-clxmf" WorkloadEndpoint="ci--4459.0.0--n--f9a9538521-k8s-calico--apiserver--5d86454cc5--clxmf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.0.0--n--f9a9538521-k8s-calico--apiserver--5d86454cc5--clxmf-eth0", GenerateName:"calico-apiserver-5d86454cc5-", Namespace:"calico-apiserver", SelfLink:"", UID:"d189562f-463a-410b-8021-6ce20324af99", ResourceVersion:"816", Generation:0, CreationTimestamp:time.Date(2025, time.September, 16, 4, 58, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", 
"app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d86454cc5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.0.0-n-f9a9538521", ContainerID:"", Pod:"calico-apiserver-5d86454cc5-clxmf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.101.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5d41e13aac9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 16 04:59:18.903706 containerd[1722]: 2025-09-16 04:59:18.882 [INFO][4587] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.101.130/32] ContainerID="2b18c969edf53b8e73d8656ef23261dcb23809b405e092a0458b1349214e06d9" Namespace="calico-apiserver" Pod="calico-apiserver-5d86454cc5-clxmf" WorkloadEndpoint="ci--4459.0.0--n--f9a9538521-k8s-calico--apiserver--5d86454cc5--clxmf-eth0" Sep 16 04:59:18.903706 containerd[1722]: 2025-09-16 04:59:18.882 [INFO][4587] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5d41e13aac9 ContainerID="2b18c969edf53b8e73d8656ef23261dcb23809b405e092a0458b1349214e06d9" Namespace="calico-apiserver" Pod="calico-apiserver-5d86454cc5-clxmf" WorkloadEndpoint="ci--4459.0.0--n--f9a9538521-k8s-calico--apiserver--5d86454cc5--clxmf-eth0" Sep 16 04:59:18.903706 containerd[1722]: 2025-09-16 04:59:18.885 [INFO][4587] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2b18c969edf53b8e73d8656ef23261dcb23809b405e092a0458b1349214e06d9" Namespace="calico-apiserver" 
Pod="calico-apiserver-5d86454cc5-clxmf" WorkloadEndpoint="ci--4459.0.0--n--f9a9538521-k8s-calico--apiserver--5d86454cc5--clxmf-eth0" Sep 16 04:59:18.903801 containerd[1722]: 2025-09-16 04:59:18.885 [INFO][4587] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2b18c969edf53b8e73d8656ef23261dcb23809b405e092a0458b1349214e06d9" Namespace="calico-apiserver" Pod="calico-apiserver-5d86454cc5-clxmf" WorkloadEndpoint="ci--4459.0.0--n--f9a9538521-k8s-calico--apiserver--5d86454cc5--clxmf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.0.0--n--f9a9538521-k8s-calico--apiserver--5d86454cc5--clxmf-eth0", GenerateName:"calico-apiserver-5d86454cc5-", Namespace:"calico-apiserver", SelfLink:"", UID:"d189562f-463a-410b-8021-6ce20324af99", ResourceVersion:"816", Generation:0, CreationTimestamp:time.Date(2025, time.September, 16, 4, 58, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d86454cc5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.0.0-n-f9a9538521", ContainerID:"2b18c969edf53b8e73d8656ef23261dcb23809b405e092a0458b1349214e06d9", Pod:"calico-apiserver-5d86454cc5-clxmf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.101.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, 
InterfaceName:"cali5d41e13aac9", MAC:"4e:83:25:53:e0:06", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 16 04:59:18.903859 containerd[1722]: 2025-09-16 04:59:18.899 [INFO][4587] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2b18c969edf53b8e73d8656ef23261dcb23809b405e092a0458b1349214e06d9" Namespace="calico-apiserver" Pod="calico-apiserver-5d86454cc5-clxmf" WorkloadEndpoint="ci--4459.0.0--n--f9a9538521-k8s-calico--apiserver--5d86454cc5--clxmf-eth0" Sep 16 04:59:18.951398 containerd[1722]: time="2025-09-16T04:59:18.951364991Z" level=info msg="connecting to shim 2b18c969edf53b8e73d8656ef23261dcb23809b405e092a0458b1349214e06d9" address="unix:///run/containerd/s/7d24851429ab757d3063659b8bff93f217f31897d4868c0bcefe8108b5e75a60" namespace=k8s.io protocol=ttrpc version=3 Sep 16 04:59:18.981165 kubelet[3168]: I0916 04:59:18.980670 3168 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-78bfbd7b86-6c2n7" podStartSLOduration=2.547384976 podStartE2EDuration="5.980653635s" podCreationTimestamp="2025-09-16 04:59:13 +0000 UTC" firstStartedPulling="2025-09-16 04:59:14.563138164 +0000 UTC m=+48.864319294" lastFinishedPulling="2025-09-16 04:59:17.996406838 +0000 UTC m=+52.297587953" observedRunningTime="2025-09-16 04:59:18.980048673 +0000 UTC m=+53.281229817" watchObservedRunningTime="2025-09-16 04:59:18.980653635 +0000 UTC m=+53.281834766" Sep 16 04:59:18.983714 systemd[1]: Started cri-containerd-2b18c969edf53b8e73d8656ef23261dcb23809b405e092a0458b1349214e06d9.scope - libcontainer container 2b18c969edf53b8e73d8656ef23261dcb23809b405e092a0458b1349214e06d9. 
Sep 16 04:59:19.036542 containerd[1722]: time="2025-09-16T04:59:19.036515178Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d86454cc5-clxmf,Uid:d189562f-463a-410b-8021-6ce20324af99,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"2b18c969edf53b8e73d8656ef23261dcb23809b405e092a0458b1349214e06d9\"" Sep 16 04:59:19.037960 containerd[1722]: time="2025-09-16T04:59:19.037936296Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 16 04:59:19.794834 containerd[1722]: time="2025-09-16T04:59:19.794548361Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-5ns66,Uid:c39eb9d2-7f98-45ea-8394-a2a170878bca,Namespace:calico-system,Attempt:0,}" Sep 16 04:59:19.895420 systemd-networkd[1578]: cali154189e417a: Link UP Sep 16 04:59:19.897042 systemd-networkd[1578]: cali154189e417a: Gained carrier Sep 16 04:59:19.913221 containerd[1722]: 2025-09-16 04:59:19.830 [INFO][4664] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.0.0--n--f9a9538521-k8s-goldmane--54d579b49d--5ns66-eth0 goldmane-54d579b49d- calico-system c39eb9d2-7f98-45ea-8394-a2a170878bca 818 0 2025-09-16 04:58:44 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:54d579b49d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4459.0.0-n-f9a9538521 goldmane-54d579b49d-5ns66 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali154189e417a [] [] }} ContainerID="10ed7974d7b5fab9944a04287c0f677343bdc64734117fc3cce617728cc9596b" Namespace="calico-system" Pod="goldmane-54d579b49d-5ns66" WorkloadEndpoint="ci--4459.0.0--n--f9a9538521-k8s-goldmane--54d579b49d--5ns66-" Sep 16 04:59:19.913221 containerd[1722]: 2025-09-16 04:59:19.830 [INFO][4664] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="10ed7974d7b5fab9944a04287c0f677343bdc64734117fc3cce617728cc9596b" Namespace="calico-system" Pod="goldmane-54d579b49d-5ns66" WorkloadEndpoint="ci--4459.0.0--n--f9a9538521-k8s-goldmane--54d579b49d--5ns66-eth0" Sep 16 04:59:19.913221 containerd[1722]: 2025-09-16 04:59:19.856 [INFO][4675] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="10ed7974d7b5fab9944a04287c0f677343bdc64734117fc3cce617728cc9596b" HandleID="k8s-pod-network.10ed7974d7b5fab9944a04287c0f677343bdc64734117fc3cce617728cc9596b" Workload="ci--4459.0.0--n--f9a9538521-k8s-goldmane--54d579b49d--5ns66-eth0" Sep 16 04:59:19.913442 containerd[1722]: 2025-09-16 04:59:19.856 [INFO][4675] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="10ed7974d7b5fab9944a04287c0f677343bdc64734117fc3cce617728cc9596b" HandleID="k8s-pod-network.10ed7974d7b5fab9944a04287c0f677343bdc64734117fc3cce617728cc9596b" Workload="ci--4459.0.0--n--f9a9538521-k8s-goldmane--54d579b49d--5ns66-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cd7f0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459.0.0-n-f9a9538521", "pod":"goldmane-54d579b49d-5ns66", "timestamp":"2025-09-16 04:59:19.856410225 +0000 UTC"}, Hostname:"ci-4459.0.0-n-f9a9538521", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 16 04:59:19.913442 containerd[1722]: 2025-09-16 04:59:19.856 [INFO][4675] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 16 04:59:19.913442 containerd[1722]: 2025-09-16 04:59:19.856 [INFO][4675] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 16 04:59:19.913442 containerd[1722]: 2025-09-16 04:59:19.856 [INFO][4675] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.0.0-n-f9a9538521' Sep 16 04:59:19.913442 containerd[1722]: 2025-09-16 04:59:19.862 [INFO][4675] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.10ed7974d7b5fab9944a04287c0f677343bdc64734117fc3cce617728cc9596b" host="ci-4459.0.0-n-f9a9538521" Sep 16 04:59:19.913442 containerd[1722]: 2025-09-16 04:59:19.865 [INFO][4675] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.0.0-n-f9a9538521" Sep 16 04:59:19.913442 containerd[1722]: 2025-09-16 04:59:19.868 [INFO][4675] ipam/ipam.go 511: Trying affinity for 192.168.101.128/26 host="ci-4459.0.0-n-f9a9538521" Sep 16 04:59:19.913442 containerd[1722]: 2025-09-16 04:59:19.870 [INFO][4675] ipam/ipam.go 158: Attempting to load block cidr=192.168.101.128/26 host="ci-4459.0.0-n-f9a9538521" Sep 16 04:59:19.913442 containerd[1722]: 2025-09-16 04:59:19.872 [INFO][4675] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.101.128/26 host="ci-4459.0.0-n-f9a9538521" Sep 16 04:59:19.913716 containerd[1722]: 2025-09-16 04:59:19.873 [INFO][4675] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.101.128/26 handle="k8s-pod-network.10ed7974d7b5fab9944a04287c0f677343bdc64734117fc3cce617728cc9596b" host="ci-4459.0.0-n-f9a9538521" Sep 16 04:59:19.913716 containerd[1722]: 2025-09-16 04:59:19.874 [INFO][4675] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.10ed7974d7b5fab9944a04287c0f677343bdc64734117fc3cce617728cc9596b Sep 16 04:59:19.913716 containerd[1722]: 2025-09-16 04:59:19.883 [INFO][4675] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.101.128/26 handle="k8s-pod-network.10ed7974d7b5fab9944a04287c0f677343bdc64734117fc3cce617728cc9596b" host="ci-4459.0.0-n-f9a9538521" Sep 16 04:59:19.913716 containerd[1722]: 2025-09-16 04:59:19.889 [INFO][4675] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.101.131/26] block=192.168.101.128/26 handle="k8s-pod-network.10ed7974d7b5fab9944a04287c0f677343bdc64734117fc3cce617728cc9596b" host="ci-4459.0.0-n-f9a9538521" Sep 16 04:59:19.913716 containerd[1722]: 2025-09-16 04:59:19.889 [INFO][4675] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.101.131/26] handle="k8s-pod-network.10ed7974d7b5fab9944a04287c0f677343bdc64734117fc3cce617728cc9596b" host="ci-4459.0.0-n-f9a9538521" Sep 16 04:59:19.913716 containerd[1722]: 2025-09-16 04:59:19.889 [INFO][4675] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 16 04:59:19.913716 containerd[1722]: 2025-09-16 04:59:19.889 [INFO][4675] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.101.131/26] IPv6=[] ContainerID="10ed7974d7b5fab9944a04287c0f677343bdc64734117fc3cce617728cc9596b" HandleID="k8s-pod-network.10ed7974d7b5fab9944a04287c0f677343bdc64734117fc3cce617728cc9596b" Workload="ci--4459.0.0--n--f9a9538521-k8s-goldmane--54d579b49d--5ns66-eth0" Sep 16 04:59:19.913897 containerd[1722]: 2025-09-16 04:59:19.892 [INFO][4664] cni-plugin/k8s.go 418: Populated endpoint ContainerID="10ed7974d7b5fab9944a04287c0f677343bdc64734117fc3cce617728cc9596b" Namespace="calico-system" Pod="goldmane-54d579b49d-5ns66" WorkloadEndpoint="ci--4459.0.0--n--f9a9538521-k8s-goldmane--54d579b49d--5ns66-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.0.0--n--f9a9538521-k8s-goldmane--54d579b49d--5ns66-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"c39eb9d2-7f98-45ea-8394-a2a170878bca", ResourceVersion:"818", Generation:0, CreationTimestamp:time.Date(2025, time.September, 16, 4, 58, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.0.0-n-f9a9538521", ContainerID:"", Pod:"goldmane-54d579b49d-5ns66", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.101.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali154189e417a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 16 04:59:19.914017 containerd[1722]: 2025-09-16 04:59:19.892 [INFO][4664] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.101.131/32] ContainerID="10ed7974d7b5fab9944a04287c0f677343bdc64734117fc3cce617728cc9596b" Namespace="calico-system" Pod="goldmane-54d579b49d-5ns66" WorkloadEndpoint="ci--4459.0.0--n--f9a9538521-k8s-goldmane--54d579b49d--5ns66-eth0" Sep 16 04:59:19.914017 containerd[1722]: 2025-09-16 04:59:19.892 [INFO][4664] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali154189e417a ContainerID="10ed7974d7b5fab9944a04287c0f677343bdc64734117fc3cce617728cc9596b" Namespace="calico-system" Pod="goldmane-54d579b49d-5ns66" WorkloadEndpoint="ci--4459.0.0--n--f9a9538521-k8s-goldmane--54d579b49d--5ns66-eth0" Sep 16 04:59:19.914017 containerd[1722]: 2025-09-16 04:59:19.897 [INFO][4664] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="10ed7974d7b5fab9944a04287c0f677343bdc64734117fc3cce617728cc9596b" Namespace="calico-system" Pod="goldmane-54d579b49d-5ns66" WorkloadEndpoint="ci--4459.0.0--n--f9a9538521-k8s-goldmane--54d579b49d--5ns66-eth0" Sep 16 04:59:19.914109 containerd[1722]: 2025-09-16 04:59:19.898 [INFO][4664] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="10ed7974d7b5fab9944a04287c0f677343bdc64734117fc3cce617728cc9596b" Namespace="calico-system" Pod="goldmane-54d579b49d-5ns66" WorkloadEndpoint="ci--4459.0.0--n--f9a9538521-k8s-goldmane--54d579b49d--5ns66-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.0.0--n--f9a9538521-k8s-goldmane--54d579b49d--5ns66-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"c39eb9d2-7f98-45ea-8394-a2a170878bca", ResourceVersion:"818", Generation:0, CreationTimestamp:time.Date(2025, time.September, 16, 4, 58, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.0.0-n-f9a9538521", ContainerID:"10ed7974d7b5fab9944a04287c0f677343bdc64734117fc3cce617728cc9596b", Pod:"goldmane-54d579b49d-5ns66", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.101.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali154189e417a", MAC:"d6:13:ba:b6:ab:8b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 16 04:59:19.914182 containerd[1722]: 2025-09-16 04:59:19.910 [INFO][4664] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="10ed7974d7b5fab9944a04287c0f677343bdc64734117fc3cce617728cc9596b" Namespace="calico-system" Pod="goldmane-54d579b49d-5ns66" WorkloadEndpoint="ci--4459.0.0--n--f9a9538521-k8s-goldmane--54d579b49d--5ns66-eth0" Sep 16 04:59:19.949484 containerd[1722]: time="2025-09-16T04:59:19.949383106Z" level=info msg="connecting to shim 10ed7974d7b5fab9944a04287c0f677343bdc64734117fc3cce617728cc9596b" address="unix:///run/containerd/s/c1e076549a14f4bbc31b62f45b68d7785257dd90e2c02bfb44d9ed5a6018920d" namespace=k8s.io protocol=ttrpc version=3 Sep 16 04:59:19.974164 systemd[1]: Started cri-containerd-10ed7974d7b5fab9944a04287c0f677343bdc64734117fc3cce617728cc9596b.scope - libcontainer container 10ed7974d7b5fab9944a04287c0f677343bdc64734117fc3cce617728cc9596b. Sep 16 04:59:20.020892 containerd[1722]: time="2025-09-16T04:59:20.020863446Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-5ns66,Uid:c39eb9d2-7f98-45ea-8394-a2a170878bca,Namespace:calico-system,Attempt:0,} returns sandbox id \"10ed7974d7b5fab9944a04287c0f677343bdc64734117fc3cce617728cc9596b\"" Sep 16 04:59:20.674289 systemd-networkd[1578]: cali5d41e13aac9: Gained IPv6LL Sep 16 04:59:21.185147 systemd-networkd[1578]: cali154189e417a: Gained IPv6LL Sep 16 04:59:21.263169 containerd[1722]: time="2025-09-16T04:59:21.263134183Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:59:21.265793 containerd[1722]: time="2025-09-16T04:59:21.265704784Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=47333864" Sep 16 04:59:21.268602 containerd[1722]: time="2025-09-16T04:59:21.268579946Z" level=info msg="ImageCreate event name:\"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:59:21.272567 containerd[1722]: 
time="2025-09-16T04:59:21.272521443Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:59:21.273040 containerd[1722]: time="2025-09-16T04:59:21.272917222Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 2.234951369s" Sep 16 04:59:21.273040 containerd[1722]: time="2025-09-16T04:59:21.272944561Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 16 04:59:21.274403 containerd[1722]: time="2025-09-16T04:59:21.274377165Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 16 04:59:21.275597 containerd[1722]: time="2025-09-16T04:59:21.275565991Z" level=info msg="CreateContainer within sandbox \"2b18c969edf53b8e73d8656ef23261dcb23809b405e092a0458b1349214e06d9\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 16 04:59:21.307034 containerd[1722]: time="2025-09-16T04:59:21.307006137Z" level=info msg="Container 948e87ad2eed1ebcd4e75ff881158b3765d8bd7093b9a7b96340986dc23beace: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:59:21.326473 containerd[1722]: time="2025-09-16T04:59:21.326448427Z" level=info msg="CreateContainer within sandbox \"2b18c969edf53b8e73d8656ef23261dcb23809b405e092a0458b1349214e06d9\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"948e87ad2eed1ebcd4e75ff881158b3765d8bd7093b9a7b96340986dc23beace\"" Sep 16 04:59:21.327814 containerd[1722]: 
time="2025-09-16T04:59:21.327769196Z" level=info msg="StartContainer for \"948e87ad2eed1ebcd4e75ff881158b3765d8bd7093b9a7b96340986dc23beace\"" Sep 16 04:59:21.328775 containerd[1722]: time="2025-09-16T04:59:21.328744254Z" level=info msg="connecting to shim 948e87ad2eed1ebcd4e75ff881158b3765d8bd7093b9a7b96340986dc23beace" address="unix:///run/containerd/s/7d24851429ab757d3063659b8bff93f217f31897d4868c0bcefe8108b5e75a60" protocol=ttrpc version=3 Sep 16 04:59:21.348163 systemd[1]: Started cri-containerd-948e87ad2eed1ebcd4e75ff881158b3765d8bd7093b9a7b96340986dc23beace.scope - libcontainer container 948e87ad2eed1ebcd4e75ff881158b3765d8bd7093b9a7b96340986dc23beace. Sep 16 04:59:21.396766 containerd[1722]: time="2025-09-16T04:59:21.396744593Z" level=info msg="StartContainer for \"948e87ad2eed1ebcd4e75ff881158b3765d8bd7093b9a7b96340986dc23beace\" returns successfully" Sep 16 04:59:21.795241 containerd[1722]: time="2025-09-16T04:59:21.794186433Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-2vrpt,Uid:10e3ae01-e0e2-43e2-bb04-56dda152ea83,Namespace:kube-system,Attempt:0,}" Sep 16 04:59:21.795241 containerd[1722]: time="2025-09-16T04:59:21.794909701Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d86454cc5-22sb8,Uid:d5c57cb5-d83b-4987-b370-b0775dc8ba8b,Namespace:calico-apiserver,Attempt:0,}" Sep 16 04:59:21.795241 containerd[1722]: time="2025-09-16T04:59:21.795143571Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-f7dxh,Uid:8024408a-1213-49b8-a3b9-01ab011892e6,Namespace:calico-system,Attempt:0,}" Sep 16 04:59:21.796168 containerd[1722]: time="2025-09-16T04:59:21.796133362Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5864779d58-m7sl2,Uid:cbd62246-8f86-4b8d-8663-ee4d91537759,Namespace:calico-system,Attempt:0,}" Sep 16 04:59:22.017246 kubelet[3168]: I0916 04:59:22.016961 3168 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="calico-apiserver/calico-apiserver-5d86454cc5-clxmf" podStartSLOduration=38.780661303 podStartE2EDuration="41.016944197s" podCreationTimestamp="2025-09-16 04:58:41 +0000 UTC" firstStartedPulling="2025-09-16 04:59:19.037448208 +0000 UTC m=+53.338629329" lastFinishedPulling="2025-09-16 04:59:21.2737311 +0000 UTC m=+55.574912223" observedRunningTime="2025-09-16 04:59:22.016393934 +0000 UTC m=+56.317575068" watchObservedRunningTime="2025-09-16 04:59:22.016944197 +0000 UTC m=+56.318125329" Sep 16 04:59:22.048073 systemd-networkd[1578]: calib56ca15ebda: Link UP Sep 16 04:59:22.050194 systemd-networkd[1578]: calib56ca15ebda: Gained carrier Sep 16 04:59:22.065663 containerd[1722]: 2025-09-16 04:59:21.914 [INFO][4804] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.0.0--n--f9a9538521-k8s-csi--node--driver--f7dxh-eth0 csi-node-driver- calico-system 8024408a-1213-49b8-a3b9-01ab011892e6 683 0 2025-09-16 04:58:45 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6c96d95cc7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4459.0.0-n-f9a9538521 csi-node-driver-f7dxh eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calib56ca15ebda [] [] }} ContainerID="472e11d5ce7b1ec3889a405f6d26733d524b814240aeed42dd3e1d9e30eb3b76" Namespace="calico-system" Pod="csi-node-driver-f7dxh" WorkloadEndpoint="ci--4459.0.0--n--f9a9538521-k8s-csi--node--driver--f7dxh-" Sep 16 04:59:22.065663 containerd[1722]: 2025-09-16 04:59:21.914 [INFO][4804] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="472e11d5ce7b1ec3889a405f6d26733d524b814240aeed42dd3e1d9e30eb3b76" Namespace="calico-system" Pod="csi-node-driver-f7dxh" 
WorkloadEndpoint="ci--4459.0.0--n--f9a9538521-k8s-csi--node--driver--f7dxh-eth0" Sep 16 04:59:22.065663 containerd[1722]: 2025-09-16 04:59:21.973 [INFO][4838] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="472e11d5ce7b1ec3889a405f6d26733d524b814240aeed42dd3e1d9e30eb3b76" HandleID="k8s-pod-network.472e11d5ce7b1ec3889a405f6d26733d524b814240aeed42dd3e1d9e30eb3b76" Workload="ci--4459.0.0--n--f9a9538521-k8s-csi--node--driver--f7dxh-eth0" Sep 16 04:59:22.065818 containerd[1722]: 2025-09-16 04:59:21.974 [INFO][4838] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="472e11d5ce7b1ec3889a405f6d26733d524b814240aeed42dd3e1d9e30eb3b76" HandleID="k8s-pod-network.472e11d5ce7b1ec3889a405f6d26733d524b814240aeed42dd3e1d9e30eb3b76" Workload="ci--4459.0.0--n--f9a9538521-k8s-csi--node--driver--f7dxh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f040), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459.0.0-n-f9a9538521", "pod":"csi-node-driver-f7dxh", "timestamp":"2025-09-16 04:59:21.973383904 +0000 UTC"}, Hostname:"ci-4459.0.0-n-f9a9538521", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 16 04:59:22.065818 containerd[1722]: 2025-09-16 04:59:21.975 [INFO][4838] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 16 04:59:22.065818 containerd[1722]: 2025-09-16 04:59:21.975 [INFO][4838] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 16 04:59:22.065818 containerd[1722]: 2025-09-16 04:59:21.975 [INFO][4838] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.0.0-n-f9a9538521' Sep 16 04:59:22.065818 containerd[1722]: 2025-09-16 04:59:21.983 [INFO][4838] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.472e11d5ce7b1ec3889a405f6d26733d524b814240aeed42dd3e1d9e30eb3b76" host="ci-4459.0.0-n-f9a9538521" Sep 16 04:59:22.065818 containerd[1722]: 2025-09-16 04:59:21.992 [INFO][4838] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.0.0-n-f9a9538521" Sep 16 04:59:22.065818 containerd[1722]: 2025-09-16 04:59:22.004 [INFO][4838] ipam/ipam.go 511: Trying affinity for 192.168.101.128/26 host="ci-4459.0.0-n-f9a9538521" Sep 16 04:59:22.065818 containerd[1722]: 2025-09-16 04:59:22.007 [INFO][4838] ipam/ipam.go 158: Attempting to load block cidr=192.168.101.128/26 host="ci-4459.0.0-n-f9a9538521" Sep 16 04:59:22.065818 containerd[1722]: 2025-09-16 04:59:22.021 [INFO][4838] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.101.128/26 host="ci-4459.0.0-n-f9a9538521" Sep 16 04:59:22.066008 containerd[1722]: 2025-09-16 04:59:22.021 [INFO][4838] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.101.128/26 handle="k8s-pod-network.472e11d5ce7b1ec3889a405f6d26733d524b814240aeed42dd3e1d9e30eb3b76" host="ci-4459.0.0-n-f9a9538521" Sep 16 04:59:22.066008 containerd[1722]: 2025-09-16 04:59:22.025 [INFO][4838] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.472e11d5ce7b1ec3889a405f6d26733d524b814240aeed42dd3e1d9e30eb3b76 Sep 16 04:59:22.066008 containerd[1722]: 2025-09-16 04:59:22.034 [INFO][4838] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.101.128/26 handle="k8s-pod-network.472e11d5ce7b1ec3889a405f6d26733d524b814240aeed42dd3e1d9e30eb3b76" host="ci-4459.0.0-n-f9a9538521" Sep 16 04:59:22.066008 containerd[1722]: 2025-09-16 04:59:22.040 [INFO][4838] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.101.132/26] block=192.168.101.128/26 handle="k8s-pod-network.472e11d5ce7b1ec3889a405f6d26733d524b814240aeed42dd3e1d9e30eb3b76" host="ci-4459.0.0-n-f9a9538521" Sep 16 04:59:22.066008 containerd[1722]: 2025-09-16 04:59:22.040 [INFO][4838] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.101.132/26] handle="k8s-pod-network.472e11d5ce7b1ec3889a405f6d26733d524b814240aeed42dd3e1d9e30eb3b76" host="ci-4459.0.0-n-f9a9538521" Sep 16 04:59:22.066008 containerd[1722]: 2025-09-16 04:59:22.040 [INFO][4838] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 16 04:59:22.066008 containerd[1722]: 2025-09-16 04:59:22.040 [INFO][4838] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.101.132/26] IPv6=[] ContainerID="472e11d5ce7b1ec3889a405f6d26733d524b814240aeed42dd3e1d9e30eb3b76" HandleID="k8s-pod-network.472e11d5ce7b1ec3889a405f6d26733d524b814240aeed42dd3e1d9e30eb3b76" Workload="ci--4459.0.0--n--f9a9538521-k8s-csi--node--driver--f7dxh-eth0" Sep 16 04:59:22.066171 containerd[1722]: 2025-09-16 04:59:22.041 [INFO][4804] cni-plugin/k8s.go 418: Populated endpoint ContainerID="472e11d5ce7b1ec3889a405f6d26733d524b814240aeed42dd3e1d9e30eb3b76" Namespace="calico-system" Pod="csi-node-driver-f7dxh" WorkloadEndpoint="ci--4459.0.0--n--f9a9538521-k8s-csi--node--driver--f7dxh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.0.0--n--f9a9538521-k8s-csi--node--driver--f7dxh-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8024408a-1213-49b8-a3b9-01ab011892e6", ResourceVersion:"683", Generation:0, CreationTimestamp:time.Date(2025, time.September, 16, 4, 58, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", 
"name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.0.0-n-f9a9538521", ContainerID:"", Pod:"csi-node-driver-f7dxh", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.101.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calib56ca15ebda", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 16 04:59:22.066232 containerd[1722]: 2025-09-16 04:59:22.043 [INFO][4804] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.101.132/32] ContainerID="472e11d5ce7b1ec3889a405f6d26733d524b814240aeed42dd3e1d9e30eb3b76" Namespace="calico-system" Pod="csi-node-driver-f7dxh" WorkloadEndpoint="ci--4459.0.0--n--f9a9538521-k8s-csi--node--driver--f7dxh-eth0" Sep 16 04:59:22.066232 containerd[1722]: 2025-09-16 04:59:22.043 [INFO][4804] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib56ca15ebda ContainerID="472e11d5ce7b1ec3889a405f6d26733d524b814240aeed42dd3e1d9e30eb3b76" Namespace="calico-system" Pod="csi-node-driver-f7dxh" WorkloadEndpoint="ci--4459.0.0--n--f9a9538521-k8s-csi--node--driver--f7dxh-eth0" Sep 16 04:59:22.066232 containerd[1722]: 2025-09-16 04:59:22.050 [INFO][4804] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="472e11d5ce7b1ec3889a405f6d26733d524b814240aeed42dd3e1d9e30eb3b76" Namespace="calico-system" Pod="csi-node-driver-f7dxh" WorkloadEndpoint="ci--4459.0.0--n--f9a9538521-k8s-csi--node--driver--f7dxh-eth0" Sep 16 04:59:22.066306 
containerd[1722]: 2025-09-16 04:59:22.051 [INFO][4804] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="472e11d5ce7b1ec3889a405f6d26733d524b814240aeed42dd3e1d9e30eb3b76" Namespace="calico-system" Pod="csi-node-driver-f7dxh" WorkloadEndpoint="ci--4459.0.0--n--f9a9538521-k8s-csi--node--driver--f7dxh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.0.0--n--f9a9538521-k8s-csi--node--driver--f7dxh-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8024408a-1213-49b8-a3b9-01ab011892e6", ResourceVersion:"683", Generation:0, CreationTimestamp:time.Date(2025, time.September, 16, 4, 58, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.0.0-n-f9a9538521", ContainerID:"472e11d5ce7b1ec3889a405f6d26733d524b814240aeed42dd3e1d9e30eb3b76", Pod:"csi-node-driver-f7dxh", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.101.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calib56ca15ebda", MAC:"fa:e6:8e:58:af:6a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 16 04:59:22.066775 containerd[1722]: 
2025-09-16 04:59:22.064 [INFO][4804] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="472e11d5ce7b1ec3889a405f6d26733d524b814240aeed42dd3e1d9e30eb3b76" Namespace="calico-system" Pod="csi-node-driver-f7dxh" WorkloadEndpoint="ci--4459.0.0--n--f9a9538521-k8s-csi--node--driver--f7dxh-eth0" Sep 16 04:59:22.107377 containerd[1722]: time="2025-09-16T04:59:22.107345984Z" level=info msg="connecting to shim 472e11d5ce7b1ec3889a405f6d26733d524b814240aeed42dd3e1d9e30eb3b76" address="unix:///run/containerd/s/355401eb16c348d19e4baede0f4ba6dee9e7014cfa9cadc9c40e0a14eb9ab8f2" namespace=k8s.io protocol=ttrpc version=3 Sep 16 04:59:22.131244 systemd[1]: Started cri-containerd-472e11d5ce7b1ec3889a405f6d26733d524b814240aeed42dd3e1d9e30eb3b76.scope - libcontainer container 472e11d5ce7b1ec3889a405f6d26733d524b814240aeed42dd3e1d9e30eb3b76. Sep 16 04:59:22.152746 systemd-networkd[1578]: cali8634a247766: Link UP Sep 16 04:59:22.159833 systemd-networkd[1578]: cali8634a247766: Gained carrier Sep 16 04:59:22.195046 containerd[1722]: 2025-09-16 04:59:21.914 [INFO][4815] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.0.0--n--f9a9538521-k8s-calico--kube--controllers--5864779d58--m7sl2-eth0 calico-kube-controllers-5864779d58- calico-system cbd62246-8f86-4b8d-8663-ee4d91537759 815 0 2025-09-16 04:58:45 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5864779d58 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4459.0.0-n-f9a9538521 calico-kube-controllers-5864779d58-m7sl2 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali8634a247766 [] [] }} ContainerID="653233208016ad7dfe1e71471022c68a16f5571e4179813b2388d81387a41c18" Namespace="calico-system" Pod="calico-kube-controllers-5864779d58-m7sl2" 
WorkloadEndpoint="ci--4459.0.0--n--f9a9538521-k8s-calico--kube--controllers--5864779d58--m7sl2-" Sep 16 04:59:22.195046 containerd[1722]: 2025-09-16 04:59:21.914 [INFO][4815] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="653233208016ad7dfe1e71471022c68a16f5571e4179813b2388d81387a41c18" Namespace="calico-system" Pod="calico-kube-controllers-5864779d58-m7sl2" WorkloadEndpoint="ci--4459.0.0--n--f9a9538521-k8s-calico--kube--controllers--5864779d58--m7sl2-eth0" Sep 16 04:59:22.195046 containerd[1722]: 2025-09-16 04:59:22.005 [INFO][4833] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="653233208016ad7dfe1e71471022c68a16f5571e4179813b2388d81387a41c18" HandleID="k8s-pod-network.653233208016ad7dfe1e71471022c68a16f5571e4179813b2388d81387a41c18" Workload="ci--4459.0.0--n--f9a9538521-k8s-calico--kube--controllers--5864779d58--m7sl2-eth0" Sep 16 04:59:22.195303 containerd[1722]: 2025-09-16 04:59:22.006 [INFO][4833] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="653233208016ad7dfe1e71471022c68a16f5571e4179813b2388d81387a41c18" HandleID="k8s-pod-network.653233208016ad7dfe1e71471022c68a16f5571e4179813b2388d81387a41c18" Workload="ci--4459.0.0--n--f9a9538521-k8s-calico--kube--controllers--5864779d58--m7sl2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000315850), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459.0.0-n-f9a9538521", "pod":"calico-kube-controllers-5864779d58-m7sl2", "timestamp":"2025-09-16 04:59:22.004917374 +0000 UTC"}, Hostname:"ci-4459.0.0-n-f9a9538521", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 16 04:59:22.195303 containerd[1722]: 2025-09-16 04:59:22.006 [INFO][4833] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Sep 16 04:59:22.195303 containerd[1722]: 2025-09-16 04:59:22.040 [INFO][4833] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 16 04:59:22.195303 containerd[1722]: 2025-09-16 04:59:22.040 [INFO][4833] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.0.0-n-f9a9538521' Sep 16 04:59:22.195303 containerd[1722]: 2025-09-16 04:59:22.083 [INFO][4833] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.653233208016ad7dfe1e71471022c68a16f5571e4179813b2388d81387a41c18" host="ci-4459.0.0-n-f9a9538521" Sep 16 04:59:22.195303 containerd[1722]: 2025-09-16 04:59:22.090 [INFO][4833] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.0.0-n-f9a9538521" Sep 16 04:59:22.195303 containerd[1722]: 2025-09-16 04:59:22.096 [INFO][4833] ipam/ipam.go 511: Trying affinity for 192.168.101.128/26 host="ci-4459.0.0-n-f9a9538521" Sep 16 04:59:22.195303 containerd[1722]: 2025-09-16 04:59:22.100 [INFO][4833] ipam/ipam.go 158: Attempting to load block cidr=192.168.101.128/26 host="ci-4459.0.0-n-f9a9538521" Sep 16 04:59:22.195303 containerd[1722]: 2025-09-16 04:59:22.102 [INFO][4833] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.101.128/26 host="ci-4459.0.0-n-f9a9538521" Sep 16 04:59:22.195564 containerd[1722]: 2025-09-16 04:59:22.102 [INFO][4833] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.101.128/26 handle="k8s-pod-network.653233208016ad7dfe1e71471022c68a16f5571e4179813b2388d81387a41c18" host="ci-4459.0.0-n-f9a9538521" Sep 16 04:59:22.195564 containerd[1722]: 2025-09-16 04:59:22.103 [INFO][4833] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.653233208016ad7dfe1e71471022c68a16f5571e4179813b2388d81387a41c18 Sep 16 04:59:22.195564 containerd[1722]: 2025-09-16 04:59:22.128 [INFO][4833] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.101.128/26 
handle="k8s-pod-network.653233208016ad7dfe1e71471022c68a16f5571e4179813b2388d81387a41c18" host="ci-4459.0.0-n-f9a9538521" Sep 16 04:59:22.195564 containerd[1722]: 2025-09-16 04:59:22.135 [INFO][4833] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.101.133/26] block=192.168.101.128/26 handle="k8s-pod-network.653233208016ad7dfe1e71471022c68a16f5571e4179813b2388d81387a41c18" host="ci-4459.0.0-n-f9a9538521" Sep 16 04:59:22.195564 containerd[1722]: 2025-09-16 04:59:22.135 [INFO][4833] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.101.133/26] handle="k8s-pod-network.653233208016ad7dfe1e71471022c68a16f5571e4179813b2388d81387a41c18" host="ci-4459.0.0-n-f9a9538521" Sep 16 04:59:22.195564 containerd[1722]: 2025-09-16 04:59:22.135 [INFO][4833] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 16 04:59:22.195564 containerd[1722]: 2025-09-16 04:59:22.135 [INFO][4833] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.101.133/26] IPv6=[] ContainerID="653233208016ad7dfe1e71471022c68a16f5571e4179813b2388d81387a41c18" HandleID="k8s-pod-network.653233208016ad7dfe1e71471022c68a16f5571e4179813b2388d81387a41c18" Workload="ci--4459.0.0--n--f9a9538521-k8s-calico--kube--controllers--5864779d58--m7sl2-eth0" Sep 16 04:59:22.195760 containerd[1722]: 2025-09-16 04:59:22.141 [INFO][4815] cni-plugin/k8s.go 418: Populated endpoint ContainerID="653233208016ad7dfe1e71471022c68a16f5571e4179813b2388d81387a41c18" Namespace="calico-system" Pod="calico-kube-controllers-5864779d58-m7sl2" WorkloadEndpoint="ci--4459.0.0--n--f9a9538521-k8s-calico--kube--controllers--5864779d58--m7sl2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.0.0--n--f9a9538521-k8s-calico--kube--controllers--5864779d58--m7sl2-eth0", GenerateName:"calico-kube-controllers-5864779d58-", Namespace:"calico-system", SelfLink:"", UID:"cbd62246-8f86-4b8d-8663-ee4d91537759", 
ResourceVersion:"815", Generation:0, CreationTimestamp:time.Date(2025, time.September, 16, 4, 58, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5864779d58", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.0.0-n-f9a9538521", ContainerID:"", Pod:"calico-kube-controllers-5864779d58-m7sl2", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.101.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8634a247766", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 16 04:59:22.195835 containerd[1722]: 2025-09-16 04:59:22.141 [INFO][4815] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.101.133/32] ContainerID="653233208016ad7dfe1e71471022c68a16f5571e4179813b2388d81387a41c18" Namespace="calico-system" Pod="calico-kube-controllers-5864779d58-m7sl2" WorkloadEndpoint="ci--4459.0.0--n--f9a9538521-k8s-calico--kube--controllers--5864779d58--m7sl2-eth0" Sep 16 04:59:22.195835 containerd[1722]: 2025-09-16 04:59:22.141 [INFO][4815] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8634a247766 ContainerID="653233208016ad7dfe1e71471022c68a16f5571e4179813b2388d81387a41c18" Namespace="calico-system" Pod="calico-kube-controllers-5864779d58-m7sl2" 
WorkloadEndpoint="ci--4459.0.0--n--f9a9538521-k8s-calico--kube--controllers--5864779d58--m7sl2-eth0" Sep 16 04:59:22.195835 containerd[1722]: 2025-09-16 04:59:22.162 [INFO][4815] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="653233208016ad7dfe1e71471022c68a16f5571e4179813b2388d81387a41c18" Namespace="calico-system" Pod="calico-kube-controllers-5864779d58-m7sl2" WorkloadEndpoint="ci--4459.0.0--n--f9a9538521-k8s-calico--kube--controllers--5864779d58--m7sl2-eth0" Sep 16 04:59:22.195947 containerd[1722]: 2025-09-16 04:59:22.163 [INFO][4815] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="653233208016ad7dfe1e71471022c68a16f5571e4179813b2388d81387a41c18" Namespace="calico-system" Pod="calico-kube-controllers-5864779d58-m7sl2" WorkloadEndpoint="ci--4459.0.0--n--f9a9538521-k8s-calico--kube--controllers--5864779d58--m7sl2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.0.0--n--f9a9538521-k8s-calico--kube--controllers--5864779d58--m7sl2-eth0", GenerateName:"calico-kube-controllers-5864779d58-", Namespace:"calico-system", SelfLink:"", UID:"cbd62246-8f86-4b8d-8663-ee4d91537759", ResourceVersion:"815", Generation:0, CreationTimestamp:time.Date(2025, time.September, 16, 4, 58, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5864779d58", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.0.0-n-f9a9538521", 
ContainerID:"653233208016ad7dfe1e71471022c68a16f5571e4179813b2388d81387a41c18", Pod:"calico-kube-controllers-5864779d58-m7sl2", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.101.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8634a247766", MAC:"6a:ea:e9:62:f6:a9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 16 04:59:22.196031 containerd[1722]: 2025-09-16 04:59:22.189 [INFO][4815] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="653233208016ad7dfe1e71471022c68a16f5571e4179813b2388d81387a41c18" Namespace="calico-system" Pod="calico-kube-controllers-5864779d58-m7sl2" WorkloadEndpoint="ci--4459.0.0--n--f9a9538521-k8s-calico--kube--controllers--5864779d58--m7sl2-eth0" Sep 16 04:59:22.218401 containerd[1722]: time="2025-09-16T04:59:22.218361494Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-f7dxh,Uid:8024408a-1213-49b8-a3b9-01ab011892e6,Namespace:calico-system,Attempt:0,} returns sandbox id \"472e11d5ce7b1ec3889a405f6d26733d524b814240aeed42dd3e1d9e30eb3b76\"" Sep 16 04:59:22.244095 systemd-networkd[1578]: cali8fce9f9c80b: Link UP Sep 16 04:59:22.245370 systemd-networkd[1578]: cali8fce9f9c80b: Gained carrier Sep 16 04:59:22.266374 containerd[1722]: 2025-09-16 04:59:21.929 [INFO][4794] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.0.0--n--f9a9538521-k8s-calico--apiserver--5d86454cc5--22sb8-eth0 calico-apiserver-5d86454cc5- calico-apiserver d5c57cb5-d83b-4987-b370-b0775dc8ba8b 812 0 2025-09-16 04:58:41 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5d86454cc5 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s 
projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4459.0.0-n-f9a9538521 calico-apiserver-5d86454cc5-22sb8 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali8fce9f9c80b [] [] }} ContainerID="fc421f8102e218c556112dfbd4f0bc00776ef58cb4ca6aaa01408e5c9a9b2a1c" Namespace="calico-apiserver" Pod="calico-apiserver-5d86454cc5-22sb8" WorkloadEndpoint="ci--4459.0.0--n--f9a9538521-k8s-calico--apiserver--5d86454cc5--22sb8-" Sep 16 04:59:22.266374 containerd[1722]: 2025-09-16 04:59:21.929 [INFO][4794] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="fc421f8102e218c556112dfbd4f0bc00776ef58cb4ca6aaa01408e5c9a9b2a1c" Namespace="calico-apiserver" Pod="calico-apiserver-5d86454cc5-22sb8" WorkloadEndpoint="ci--4459.0.0--n--f9a9538521-k8s-calico--apiserver--5d86454cc5--22sb8-eth0" Sep 16 04:59:22.266374 containerd[1722]: 2025-09-16 04:59:22.008 [INFO][4848] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fc421f8102e218c556112dfbd4f0bc00776ef58cb4ca6aaa01408e5c9a9b2a1c" HandleID="k8s-pod-network.fc421f8102e218c556112dfbd4f0bc00776ef58cb4ca6aaa01408e5c9a9b2a1c" Workload="ci--4459.0.0--n--f9a9538521-k8s-calico--apiserver--5d86454cc5--22sb8-eth0" Sep 16 04:59:22.266733 containerd[1722]: 2025-09-16 04:59:22.008 [INFO][4848] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="fc421f8102e218c556112dfbd4f0bc00776ef58cb4ca6aaa01408e5c9a9b2a1c" HandleID="k8s-pod-network.fc421f8102e218c556112dfbd4f0bc00776ef58cb4ca6aaa01408e5c9a9b2a1c" Workload="ci--4459.0.0--n--f9a9538521-k8s-calico--apiserver--5d86454cc5--22sb8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00031a400), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4459.0.0-n-f9a9538521", "pod":"calico-apiserver-5d86454cc5-22sb8", "timestamp":"2025-09-16 04:59:22.008132544 +0000 UTC"}, Hostname:"ci-4459.0.0-n-f9a9538521", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, 
MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 16 04:59:22.266733 containerd[1722]: 2025-09-16 04:59:22.008 [INFO][4848] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 16 04:59:22.266733 containerd[1722]: 2025-09-16 04:59:22.137 [INFO][4848] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 16 04:59:22.266733 containerd[1722]: 2025-09-16 04:59:22.137 [INFO][4848] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.0.0-n-f9a9538521' Sep 16 04:59:22.266733 containerd[1722]: 2025-09-16 04:59:22.187 [INFO][4848] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.fc421f8102e218c556112dfbd4f0bc00776ef58cb4ca6aaa01408e5c9a9b2a1c" host="ci-4459.0.0-n-f9a9538521" Sep 16 04:59:22.266733 containerd[1722]: 2025-09-16 04:59:22.200 [INFO][4848] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.0.0-n-f9a9538521" Sep 16 04:59:22.266733 containerd[1722]: 2025-09-16 04:59:22.205 [INFO][4848] ipam/ipam.go 511: Trying affinity for 192.168.101.128/26 host="ci-4459.0.0-n-f9a9538521" Sep 16 04:59:22.266733 containerd[1722]: 2025-09-16 04:59:22.207 [INFO][4848] ipam/ipam.go 158: Attempting to load block cidr=192.168.101.128/26 host="ci-4459.0.0-n-f9a9538521" Sep 16 04:59:22.266733 containerd[1722]: 2025-09-16 04:59:22.209 [INFO][4848] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.101.128/26 host="ci-4459.0.0-n-f9a9538521" Sep 16 04:59:22.266960 containerd[1722]: 2025-09-16 04:59:22.209 [INFO][4848] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.101.128/26 handle="k8s-pod-network.fc421f8102e218c556112dfbd4f0bc00776ef58cb4ca6aaa01408e5c9a9b2a1c" host="ci-4459.0.0-n-f9a9538521" Sep 16 04:59:22.266960 containerd[1722]: 2025-09-16 04:59:22.211 [INFO][4848] ipam/ipam.go 1764: Creating new handle: 
k8s-pod-network.fc421f8102e218c556112dfbd4f0bc00776ef58cb4ca6aaa01408e5c9a9b2a1c Sep 16 04:59:22.266960 containerd[1722]: 2025-09-16 04:59:22.216 [INFO][4848] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.101.128/26 handle="k8s-pod-network.fc421f8102e218c556112dfbd4f0bc00776ef58cb4ca6aaa01408e5c9a9b2a1c" host="ci-4459.0.0-n-f9a9538521" Sep 16 04:59:22.266960 containerd[1722]: 2025-09-16 04:59:22.234 [INFO][4848] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.101.134/26] block=192.168.101.128/26 handle="k8s-pod-network.fc421f8102e218c556112dfbd4f0bc00776ef58cb4ca6aaa01408e5c9a9b2a1c" host="ci-4459.0.0-n-f9a9538521" Sep 16 04:59:22.266960 containerd[1722]: 2025-09-16 04:59:22.235 [INFO][4848] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.101.134/26] handle="k8s-pod-network.fc421f8102e218c556112dfbd4f0bc00776ef58cb4ca6aaa01408e5c9a9b2a1c" host="ci-4459.0.0-n-f9a9538521" Sep 16 04:59:22.266960 containerd[1722]: 2025-09-16 04:59:22.235 [INFO][4848] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 16 04:59:22.266960 containerd[1722]: 2025-09-16 04:59:22.235 [INFO][4848] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.101.134/26] IPv6=[] ContainerID="fc421f8102e218c556112dfbd4f0bc00776ef58cb4ca6aaa01408e5c9a9b2a1c" HandleID="k8s-pod-network.fc421f8102e218c556112dfbd4f0bc00776ef58cb4ca6aaa01408e5c9a9b2a1c" Workload="ci--4459.0.0--n--f9a9538521-k8s-calico--apiserver--5d86454cc5--22sb8-eth0" Sep 16 04:59:22.268183 containerd[1722]: 2025-09-16 04:59:22.238 [INFO][4794] cni-plugin/k8s.go 418: Populated endpoint ContainerID="fc421f8102e218c556112dfbd4f0bc00776ef58cb4ca6aaa01408e5c9a9b2a1c" Namespace="calico-apiserver" Pod="calico-apiserver-5d86454cc5-22sb8" WorkloadEndpoint="ci--4459.0.0--n--f9a9538521-k8s-calico--apiserver--5d86454cc5--22sb8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.0.0--n--f9a9538521-k8s-calico--apiserver--5d86454cc5--22sb8-eth0", GenerateName:"calico-apiserver-5d86454cc5-", Namespace:"calico-apiserver", SelfLink:"", UID:"d5c57cb5-d83b-4987-b370-b0775dc8ba8b", ResourceVersion:"812", Generation:0, CreationTimestamp:time.Date(2025, time.September, 16, 4, 58, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d86454cc5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.0.0-n-f9a9538521", ContainerID:"", Pod:"calico-apiserver-5d86454cc5-22sb8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", 
IPNetworks:[]string{"192.168.101.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8fce9f9c80b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 16 04:59:22.268299 containerd[1722]: 2025-09-16 04:59:22.238 [INFO][4794] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.101.134/32] ContainerID="fc421f8102e218c556112dfbd4f0bc00776ef58cb4ca6aaa01408e5c9a9b2a1c" Namespace="calico-apiserver" Pod="calico-apiserver-5d86454cc5-22sb8" WorkloadEndpoint="ci--4459.0.0--n--f9a9538521-k8s-calico--apiserver--5d86454cc5--22sb8-eth0" Sep 16 04:59:22.268299 containerd[1722]: 2025-09-16 04:59:22.238 [INFO][4794] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8fce9f9c80b ContainerID="fc421f8102e218c556112dfbd4f0bc00776ef58cb4ca6aaa01408e5c9a9b2a1c" Namespace="calico-apiserver" Pod="calico-apiserver-5d86454cc5-22sb8" WorkloadEndpoint="ci--4459.0.0--n--f9a9538521-k8s-calico--apiserver--5d86454cc5--22sb8-eth0" Sep 16 04:59:22.268299 containerd[1722]: 2025-09-16 04:59:22.246 [INFO][4794] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fc421f8102e218c556112dfbd4f0bc00776ef58cb4ca6aaa01408e5c9a9b2a1c" Namespace="calico-apiserver" Pod="calico-apiserver-5d86454cc5-22sb8" WorkloadEndpoint="ci--4459.0.0--n--f9a9538521-k8s-calico--apiserver--5d86454cc5--22sb8-eth0" Sep 16 04:59:22.268383 containerd[1722]: 2025-09-16 04:59:22.247 [INFO][4794] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="fc421f8102e218c556112dfbd4f0bc00776ef58cb4ca6aaa01408e5c9a9b2a1c" Namespace="calico-apiserver" Pod="calico-apiserver-5d86454cc5-22sb8" WorkloadEndpoint="ci--4459.0.0--n--f9a9538521-k8s-calico--apiserver--5d86454cc5--22sb8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.0.0--n--f9a9538521-k8s-calico--apiserver--5d86454cc5--22sb8-eth0", GenerateName:"calico-apiserver-5d86454cc5-", Namespace:"calico-apiserver", SelfLink:"", UID:"d5c57cb5-d83b-4987-b370-b0775dc8ba8b", ResourceVersion:"812", Generation:0, CreationTimestamp:time.Date(2025, time.September, 16, 4, 58, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d86454cc5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.0.0-n-f9a9538521", ContainerID:"fc421f8102e218c556112dfbd4f0bc00776ef58cb4ca6aaa01408e5c9a9b2a1c", Pod:"calico-apiserver-5d86454cc5-22sb8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.101.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8fce9f9c80b", MAC:"7a:02:b6:82:80:20", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 16 04:59:22.268671 containerd[1722]: 2025-09-16 04:59:22.261 [INFO][4794] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="fc421f8102e218c556112dfbd4f0bc00776ef58cb4ca6aaa01408e5c9a9b2a1c" Namespace="calico-apiserver" Pod="calico-apiserver-5d86454cc5-22sb8" WorkloadEndpoint="ci--4459.0.0--n--f9a9538521-k8s-calico--apiserver--5d86454cc5--22sb8-eth0" Sep 16 04:59:22.327807 containerd[1722]: time="2025-09-16T04:59:22.327702365Z" 
level=info msg="connecting to shim 653233208016ad7dfe1e71471022c68a16f5571e4179813b2388d81387a41c18" address="unix:///run/containerd/s/10f83f6d731904a0ef3fba893cdc921cc961dd4ff51236ca9627c5670a566233" namespace=k8s.io protocol=ttrpc version=3 Sep 16 04:59:22.355151 systemd[1]: Started cri-containerd-653233208016ad7dfe1e71471022c68a16f5571e4179813b2388d81387a41c18.scope - libcontainer container 653233208016ad7dfe1e71471022c68a16f5571e4179813b2388d81387a41c18. Sep 16 04:59:22.366907 systemd-networkd[1578]: cali60815d99563: Link UP Sep 16 04:59:22.368058 systemd-networkd[1578]: cali60815d99563: Gained carrier Sep 16 04:59:22.383629 containerd[1722]: time="2025-09-16T04:59:22.383552591Z" level=info msg="connecting to shim fc421f8102e218c556112dfbd4f0bc00776ef58cb4ca6aaa01408e5c9a9b2a1c" address="unix:///run/containerd/s/0e7e8cbf3a3f413b477ff964beb0c715ae852c8f984e217f1e2e2472cf156727" namespace=k8s.io protocol=ttrpc version=3 Sep 16 04:59:22.403098 containerd[1722]: 2025-09-16 04:59:21.913 [INFO][4785] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.0.0--n--f9a9538521-k8s-coredns--668d6bf9bc--2vrpt-eth0 coredns-668d6bf9bc- kube-system 10e3ae01-e0e2-43e2-bb04-56dda152ea83 817 0 2025-09-16 04:58:31 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4459.0.0-n-f9a9538521 coredns-668d6bf9bc-2vrpt eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali60815d99563 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="00527918ac18767e58476c784f5460a3411ee2e19fad74fb46183f14f7cddcd5" Namespace="kube-system" Pod="coredns-668d6bf9bc-2vrpt" WorkloadEndpoint="ci--4459.0.0--n--f9a9538521-k8s-coredns--668d6bf9bc--2vrpt-" Sep 16 04:59:22.403098 containerd[1722]: 2025-09-16 04:59:21.914 [INFO][4785] cni-plugin/k8s.go 74: Extracted 
identifiers for CmdAddK8s ContainerID="00527918ac18767e58476c784f5460a3411ee2e19fad74fb46183f14f7cddcd5" Namespace="kube-system" Pod="coredns-668d6bf9bc-2vrpt" WorkloadEndpoint="ci--4459.0.0--n--f9a9538521-k8s-coredns--668d6bf9bc--2vrpt-eth0" Sep 16 04:59:22.403098 containerd[1722]: 2025-09-16 04:59:22.014 [INFO][4840] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="00527918ac18767e58476c784f5460a3411ee2e19fad74fb46183f14f7cddcd5" HandleID="k8s-pod-network.00527918ac18767e58476c784f5460a3411ee2e19fad74fb46183f14f7cddcd5" Workload="ci--4459.0.0--n--f9a9538521-k8s-coredns--668d6bf9bc--2vrpt-eth0" Sep 16 04:59:22.403590 containerd[1722]: 2025-09-16 04:59:22.014 [INFO][4840] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="00527918ac18767e58476c784f5460a3411ee2e19fad74fb46183f14f7cddcd5" HandleID="k8s-pod-network.00527918ac18767e58476c784f5460a3411ee2e19fad74fb46183f14f7cddcd5" Workload="ci--4459.0.0--n--f9a9538521-k8s-coredns--668d6bf9bc--2vrpt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004fb40), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4459.0.0-n-f9a9538521", "pod":"coredns-668d6bf9bc-2vrpt", "timestamp":"2025-09-16 04:59:22.014767959 +0000 UTC"}, Hostname:"ci-4459.0.0-n-f9a9538521", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 16 04:59:22.403590 containerd[1722]: 2025-09-16 04:59:22.014 [INFO][4840] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 16 04:59:22.403590 containerd[1722]: 2025-09-16 04:59:22.235 [INFO][4840] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 16 04:59:22.403590 containerd[1722]: 2025-09-16 04:59:22.235 [INFO][4840] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.0.0-n-f9a9538521' Sep 16 04:59:22.403590 containerd[1722]: 2025-09-16 04:59:22.287 [INFO][4840] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.00527918ac18767e58476c784f5460a3411ee2e19fad74fb46183f14f7cddcd5" host="ci-4459.0.0-n-f9a9538521" Sep 16 04:59:22.403590 containerd[1722]: 2025-09-16 04:59:22.298 [INFO][4840] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.0.0-n-f9a9538521" Sep 16 04:59:22.403590 containerd[1722]: 2025-09-16 04:59:22.307 [INFO][4840] ipam/ipam.go 511: Trying affinity for 192.168.101.128/26 host="ci-4459.0.0-n-f9a9538521" Sep 16 04:59:22.403590 containerd[1722]: 2025-09-16 04:59:22.312 [INFO][4840] ipam/ipam.go 158: Attempting to load block cidr=192.168.101.128/26 host="ci-4459.0.0-n-f9a9538521" Sep 16 04:59:22.403590 containerd[1722]: 2025-09-16 04:59:22.314 [INFO][4840] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.101.128/26 host="ci-4459.0.0-n-f9a9538521" Sep 16 04:59:22.403845 containerd[1722]: 2025-09-16 04:59:22.314 [INFO][4840] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.101.128/26 handle="k8s-pod-network.00527918ac18767e58476c784f5460a3411ee2e19fad74fb46183f14f7cddcd5" host="ci-4459.0.0-n-f9a9538521" Sep 16 04:59:22.403845 containerd[1722]: 2025-09-16 04:59:22.315 [INFO][4840] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.00527918ac18767e58476c784f5460a3411ee2e19fad74fb46183f14f7cddcd5 Sep 16 04:59:22.403845 containerd[1722]: 2025-09-16 04:59:22.323 [INFO][4840] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.101.128/26 handle="k8s-pod-network.00527918ac18767e58476c784f5460a3411ee2e19fad74fb46183f14f7cddcd5" host="ci-4459.0.0-n-f9a9538521" Sep 16 04:59:22.403845 containerd[1722]: 2025-09-16 04:59:22.354 [INFO][4840] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.101.135/26] block=192.168.101.128/26 handle="k8s-pod-network.00527918ac18767e58476c784f5460a3411ee2e19fad74fb46183f14f7cddcd5" host="ci-4459.0.0-n-f9a9538521" Sep 16 04:59:22.403845 containerd[1722]: 2025-09-16 04:59:22.356 [INFO][4840] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.101.135/26] handle="k8s-pod-network.00527918ac18767e58476c784f5460a3411ee2e19fad74fb46183f14f7cddcd5" host="ci-4459.0.0-n-f9a9538521" Sep 16 04:59:22.403845 containerd[1722]: 2025-09-16 04:59:22.357 [INFO][4840] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 16 04:59:22.403845 containerd[1722]: 2025-09-16 04:59:22.357 [INFO][4840] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.101.135/26] IPv6=[] ContainerID="00527918ac18767e58476c784f5460a3411ee2e19fad74fb46183f14f7cddcd5" HandleID="k8s-pod-network.00527918ac18767e58476c784f5460a3411ee2e19fad74fb46183f14f7cddcd5" Workload="ci--4459.0.0--n--f9a9538521-k8s-coredns--668d6bf9bc--2vrpt-eth0" Sep 16 04:59:22.404043 containerd[1722]: 2025-09-16 04:59:22.362 [INFO][4785] cni-plugin/k8s.go 418: Populated endpoint ContainerID="00527918ac18767e58476c784f5460a3411ee2e19fad74fb46183f14f7cddcd5" Namespace="kube-system" Pod="coredns-668d6bf9bc-2vrpt" WorkloadEndpoint="ci--4459.0.0--n--f9a9538521-k8s-coredns--668d6bf9bc--2vrpt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.0.0--n--f9a9538521-k8s-coredns--668d6bf9bc--2vrpt-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"10e3ae01-e0e2-43e2-bb04-56dda152ea83", ResourceVersion:"817", Generation:0, CreationTimestamp:time.Date(2025, time.September, 16, 4, 58, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.0.0-n-f9a9538521", ContainerID:"", Pod:"coredns-668d6bf9bc-2vrpt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.101.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali60815d99563", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 16 04:59:22.404043 containerd[1722]: 2025-09-16 04:59:22.363 [INFO][4785] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.101.135/32] ContainerID="00527918ac18767e58476c784f5460a3411ee2e19fad74fb46183f14f7cddcd5" Namespace="kube-system" Pod="coredns-668d6bf9bc-2vrpt" WorkloadEndpoint="ci--4459.0.0--n--f9a9538521-k8s-coredns--668d6bf9bc--2vrpt-eth0" Sep 16 04:59:22.404043 containerd[1722]: 2025-09-16 04:59:22.363 [INFO][4785] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60815d99563 ContainerID="00527918ac18767e58476c784f5460a3411ee2e19fad74fb46183f14f7cddcd5" Namespace="kube-system" Pod="coredns-668d6bf9bc-2vrpt" WorkloadEndpoint="ci--4459.0.0--n--f9a9538521-k8s-coredns--668d6bf9bc--2vrpt-eth0" Sep 16 04:59:22.404043 containerd[1722]: 2025-09-16 04:59:22.370 [INFO][4785] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="00527918ac18767e58476c784f5460a3411ee2e19fad74fb46183f14f7cddcd5" Namespace="kube-system" Pod="coredns-668d6bf9bc-2vrpt" WorkloadEndpoint="ci--4459.0.0--n--f9a9538521-k8s-coredns--668d6bf9bc--2vrpt-eth0" Sep 16 04:59:22.404043 containerd[1722]: 2025-09-16 04:59:22.372 [INFO][4785] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="00527918ac18767e58476c784f5460a3411ee2e19fad74fb46183f14f7cddcd5" Namespace="kube-system" Pod="coredns-668d6bf9bc-2vrpt" WorkloadEndpoint="ci--4459.0.0--n--f9a9538521-k8s-coredns--668d6bf9bc--2vrpt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.0.0--n--f9a9538521-k8s-coredns--668d6bf9bc--2vrpt-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"10e3ae01-e0e2-43e2-bb04-56dda152ea83", ResourceVersion:"817", Generation:0, CreationTimestamp:time.Date(2025, time.September, 16, 4, 58, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.0.0-n-f9a9538521", ContainerID:"00527918ac18767e58476c784f5460a3411ee2e19fad74fb46183f14f7cddcd5", Pod:"coredns-668d6bf9bc-2vrpt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.101.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali60815d99563", 
MAC:"86:e3:ae:3b:42:e5", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 16 04:59:22.404043 containerd[1722]: 2025-09-16 04:59:22.395 [INFO][4785] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="00527918ac18767e58476c784f5460a3411ee2e19fad74fb46183f14f7cddcd5" Namespace="kube-system" Pod="coredns-668d6bf9bc-2vrpt" WorkloadEndpoint="ci--4459.0.0--n--f9a9538521-k8s-coredns--668d6bf9bc--2vrpt-eth0" Sep 16 04:59:22.423165 systemd[1]: Started cri-containerd-fc421f8102e218c556112dfbd4f0bc00776ef58cb4ca6aaa01408e5c9a9b2a1c.scope - libcontainer container fc421f8102e218c556112dfbd4f0bc00776ef58cb4ca6aaa01408e5c9a9b2a1c. 
Sep 16 04:59:22.456713 containerd[1722]: time="2025-09-16T04:59:22.456504051Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5864779d58-m7sl2,Uid:cbd62246-8f86-4b8d-8663-ee4d91537759,Namespace:calico-system,Attempt:0,} returns sandbox id \"653233208016ad7dfe1e71471022c68a16f5571e4179813b2388d81387a41c18\""
Sep 16 04:59:22.459073 containerd[1722]: time="2025-09-16T04:59:22.459031361Z" level=info msg="connecting to shim 00527918ac18767e58476c784f5460a3411ee2e19fad74fb46183f14f7cddcd5" address="unix:///run/containerd/s/41ad7a5d8a0d19f3a25e6d53a3905fda312463cec3f8c9b20bd7c1a9f14320e5" namespace=k8s.io protocol=ttrpc version=3
Sep 16 04:59:22.497144 systemd[1]: Started cri-containerd-00527918ac18767e58476c784f5460a3411ee2e19fad74fb46183f14f7cddcd5.scope - libcontainer container 00527918ac18767e58476c784f5460a3411ee2e19fad74fb46183f14f7cddcd5.
Sep 16 04:59:22.542908 containerd[1722]: time="2025-09-16T04:59:22.542854512Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d86454cc5-22sb8,Uid:d5c57cb5-d83b-4987-b370-b0775dc8ba8b,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"fc421f8102e218c556112dfbd4f0bc00776ef58cb4ca6aaa01408e5c9a9b2a1c\""
Sep 16 04:59:22.549200 containerd[1722]: time="2025-09-16T04:59:22.549170695Z" level=info msg="CreateContainer within sandbox \"fc421f8102e218c556112dfbd4f0bc00776ef58cb4ca6aaa01408e5c9a9b2a1c\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Sep 16 04:59:22.569966 containerd[1722]: time="2025-09-16T04:59:22.569935661Z" level=info msg="Container 619d10faaa7f4adf22fbf79240d2a0ed1fb1891f6c00ee3a7354c00f4b44b679: CDI devices from CRI Config.CDIDevices: []"
Sep 16 04:59:22.592603 containerd[1722]: time="2025-09-16T04:59:22.592534359Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-2vrpt,Uid:10e3ae01-e0e2-43e2-bb04-56dda152ea83,Namespace:kube-system,Attempt:0,} returns sandbox id \"00527918ac18767e58476c784f5460a3411ee2e19fad74fb46183f14f7cddcd5\""
Sep 16 04:59:22.600164 containerd[1722]: time="2025-09-16T04:59:22.600120801Z" level=info msg="CreateContainer within sandbox \"00527918ac18767e58476c784f5460a3411ee2e19fad74fb46183f14f7cddcd5\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Sep 16 04:59:22.602813 containerd[1722]: time="2025-09-16T04:59:22.602713730Z" level=info msg="CreateContainer within sandbox \"fc421f8102e218c556112dfbd4f0bc00776ef58cb4ca6aaa01408e5c9a9b2a1c\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"619d10faaa7f4adf22fbf79240d2a0ed1fb1891f6c00ee3a7354c00f4b44b679\""
Sep 16 04:59:22.603414 containerd[1722]: time="2025-09-16T04:59:22.603393776Z" level=info msg="StartContainer for \"619d10faaa7f4adf22fbf79240d2a0ed1fb1891f6c00ee3a7354c00f4b44b679\""
Sep 16 04:59:22.609846 containerd[1722]: time="2025-09-16T04:59:22.609810258Z" level=info msg="connecting to shim 619d10faaa7f4adf22fbf79240d2a0ed1fb1891f6c00ee3a7354c00f4b44b679" address="unix:///run/containerd/s/0e7e8cbf3a3f413b477ff964beb0c715ae852c8f984e217f1e2e2472cf156727" protocol=ttrpc version=3
Sep 16 04:59:22.631179 containerd[1722]: time="2025-09-16T04:59:22.631146674Z" level=info msg="Container 476661f2f19ade2525d3cc8bf5a1a7a999c7cf0be41294e09b914c578e50e9aa: CDI devices from CRI Config.CDIDevices: []"
Sep 16 04:59:22.632449 systemd[1]: Started cri-containerd-619d10faaa7f4adf22fbf79240d2a0ed1fb1891f6c00ee3a7354c00f4b44b679.scope - libcontainer container 619d10faaa7f4adf22fbf79240d2a0ed1fb1891f6c00ee3a7354c00f4b44b679.
Sep 16 04:59:22.650703 containerd[1722]: time="2025-09-16T04:59:22.650324534Z" level=info msg="CreateContainer within sandbox \"00527918ac18767e58476c784f5460a3411ee2e19fad74fb46183f14f7cddcd5\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"476661f2f19ade2525d3cc8bf5a1a7a999c7cf0be41294e09b914c578e50e9aa\""
Sep 16 04:59:22.650957 containerd[1722]: time="2025-09-16T04:59:22.650933288Z" level=info msg="StartContainer for \"476661f2f19ade2525d3cc8bf5a1a7a999c7cf0be41294e09b914c578e50e9aa\""
Sep 16 04:59:22.652260 containerd[1722]: time="2025-09-16T04:59:22.652234674Z" level=info msg="connecting to shim 476661f2f19ade2525d3cc8bf5a1a7a999c7cf0be41294e09b914c578e50e9aa" address="unix:///run/containerd/s/41ad7a5d8a0d19f3a25e6d53a3905fda312463cec3f8c9b20bd7c1a9f14320e5" protocol=ttrpc version=3
Sep 16 04:59:22.679182 systemd[1]: Started cri-containerd-476661f2f19ade2525d3cc8bf5a1a7a999c7cf0be41294e09b914c578e50e9aa.scope - libcontainer container 476661f2f19ade2525d3cc8bf5a1a7a999c7cf0be41294e09b914c578e50e9aa.
Sep 16 04:59:22.794105 containerd[1722]: time="2025-09-16T04:59:22.793943187Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-fwjps,Uid:33e9df7e-07bb-4b00-8672-4a43dc40d07a,Namespace:kube-system,Attempt:0,}"
Sep 16 04:59:23.121339 containerd[1722]: time="2025-09-16T04:59:23.121246329Z" level=info msg="StartContainer for \"476661f2f19ade2525d3cc8bf5a1a7a999c7cf0be41294e09b914c578e50e9aa\" returns successfully"
Sep 16 04:59:23.124749 containerd[1722]: time="2025-09-16T04:59:23.124679986Z" level=info msg="StartContainer for \"619d10faaa7f4adf22fbf79240d2a0ed1fb1891f6c00ee3a7354c00f4b44b679\" returns successfully"
Sep 16 04:59:23.139351 kubelet[3168]: I0916 04:59:23.139331 3168 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 16 04:59:23.156404 kubelet[3168]: I0916 04:59:23.155807 3168 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-2vrpt" podStartSLOduration=52.155789867 podStartE2EDuration="52.155789867s" podCreationTimestamp="2025-09-16 04:58:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-16 04:59:23.154154119 +0000 UTC m=+57.455335249" watchObservedRunningTime="2025-09-16 04:59:23.155789867 +0000 UTC m=+57.456971034"
Sep 16 04:59:23.207256 kubelet[3168]: I0916 04:59:23.206383 3168 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5d86454cc5-22sb8" podStartSLOduration=42.206371031 podStartE2EDuration="42.206371031s" podCreationTimestamp="2025-09-16 04:58:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-16 04:59:23.181068794 +0000 UTC m=+57.482249944" watchObservedRunningTime="2025-09-16 04:59:23.206371031 +0000 UTC m=+57.507552203"
Sep 16 04:59:23.387152 systemd-networkd[1578]: cali41211361d67: Link UP
Sep 16 04:59:23.388680 systemd-networkd[1578]: cali41211361d67: Gained carrier
Sep 16 04:59:23.411128 containerd[1722]: 2025-09-16 04:59:23.207 [INFO][5155] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.0.0--n--f9a9538521-k8s-coredns--668d6bf9bc--fwjps-eth0 coredns-668d6bf9bc- kube-system 33e9df7e-07bb-4b00-8672-4a43dc40d07a 808 0 2025-09-16 04:58:31 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4459.0.0-n-f9a9538521 coredns-668d6bf9bc-fwjps eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali41211361d67 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="b5eea2e4dd6b4345d6d186ee709b38f8ff6b0b7fd9912d614543bc1b4400147a" Namespace="kube-system" Pod="coredns-668d6bf9bc-fwjps" WorkloadEndpoint="ci--4459.0.0--n--f9a9538521-k8s-coredns--668d6bf9bc--fwjps-"
Sep 16 04:59:23.411128 containerd[1722]: 2025-09-16 04:59:23.207 [INFO][5155] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b5eea2e4dd6b4345d6d186ee709b38f8ff6b0b7fd9912d614543bc1b4400147a" Namespace="kube-system" Pod="coredns-668d6bf9bc-fwjps" WorkloadEndpoint="ci--4459.0.0--n--f9a9538521-k8s-coredns--668d6bf9bc--fwjps-eth0"
Sep 16 04:59:23.411128 containerd[1722]: 2025-09-16 04:59:23.268 [INFO][5174] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b5eea2e4dd6b4345d6d186ee709b38f8ff6b0b7fd9912d614543bc1b4400147a" HandleID="k8s-pod-network.b5eea2e4dd6b4345d6d186ee709b38f8ff6b0b7fd9912d614543bc1b4400147a" Workload="ci--4459.0.0--n--f9a9538521-k8s-coredns--668d6bf9bc--fwjps-eth0"
Sep 16 04:59:23.411128 containerd[1722]: 2025-09-16 04:59:23.269 [INFO][5174] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b5eea2e4dd6b4345d6d186ee709b38f8ff6b0b7fd9912d614543bc1b4400147a" HandleID="k8s-pod-network.b5eea2e4dd6b4345d6d186ee709b38f8ff6b0b7fd9912d614543bc1b4400147a" Workload="ci--4459.0.0--n--f9a9538521-k8s-coredns--668d6bf9bc--fwjps-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c7840), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4459.0.0-n-f9a9538521", "pod":"coredns-668d6bf9bc-fwjps", "timestamp":"2025-09-16 04:59:23.268630524 +0000 UTC"}, Hostname:"ci-4459.0.0-n-f9a9538521", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Sep 16 04:59:23.411128 containerd[1722]: 2025-09-16 04:59:23.269 [INFO][5174] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 16 04:59:23.411128 containerd[1722]: 2025-09-16 04:59:23.269 [INFO][5174] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 16 04:59:23.411128 containerd[1722]: 2025-09-16 04:59:23.269 [INFO][5174] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.0.0-n-f9a9538521'
Sep 16 04:59:23.411128 containerd[1722]: 2025-09-16 04:59:23.283 [INFO][5174] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b5eea2e4dd6b4345d6d186ee709b38f8ff6b0b7fd9912d614543bc1b4400147a" host="ci-4459.0.0-n-f9a9538521"
Sep 16 04:59:23.411128 containerd[1722]: 2025-09-16 04:59:23.298 [INFO][5174] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.0.0-n-f9a9538521"
Sep 16 04:59:23.411128 containerd[1722]: 2025-09-16 04:59:23.312 [INFO][5174] ipam/ipam.go 511: Trying affinity for 192.168.101.128/26 host="ci-4459.0.0-n-f9a9538521"
Sep 16 04:59:23.411128 containerd[1722]: 2025-09-16 04:59:23.325 [INFO][5174] ipam/ipam.go 158: Attempting to load block cidr=192.168.101.128/26 host="ci-4459.0.0-n-f9a9538521"
Sep 16 04:59:23.411128 containerd[1722]: 2025-09-16 04:59:23.329 [INFO][5174] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.101.128/26 host="ci-4459.0.0-n-f9a9538521"
Sep 16 04:59:23.411128 containerd[1722]: 2025-09-16 04:59:23.329 [INFO][5174] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.101.128/26 handle="k8s-pod-network.b5eea2e4dd6b4345d6d186ee709b38f8ff6b0b7fd9912d614543bc1b4400147a" host="ci-4459.0.0-n-f9a9538521"
Sep 16 04:59:23.411128 containerd[1722]: 2025-09-16 04:59:23.336 [INFO][5174] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.b5eea2e4dd6b4345d6d186ee709b38f8ff6b0b7fd9912d614543bc1b4400147a
Sep 16 04:59:23.411128 containerd[1722]: 2025-09-16 04:59:23.346 [INFO][5174] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.101.128/26 handle="k8s-pod-network.b5eea2e4dd6b4345d6d186ee709b38f8ff6b0b7fd9912d614543bc1b4400147a" host="ci-4459.0.0-n-f9a9538521"
Sep 16 04:59:23.411128 containerd[1722]: 2025-09-16 04:59:23.373 [INFO][5174] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.101.136/26] block=192.168.101.128/26 handle="k8s-pod-network.b5eea2e4dd6b4345d6d186ee709b38f8ff6b0b7fd9912d614543bc1b4400147a" host="ci-4459.0.0-n-f9a9538521"
Sep 16 04:59:23.411128 containerd[1722]: 2025-09-16 04:59:23.373 [INFO][5174] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.101.136/26] handle="k8s-pod-network.b5eea2e4dd6b4345d6d186ee709b38f8ff6b0b7fd9912d614543bc1b4400147a" host="ci-4459.0.0-n-f9a9538521"
Sep 16 04:59:23.411128 containerd[1722]: 2025-09-16 04:59:23.374 [INFO][5174] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 16 04:59:23.411128 containerd[1722]: 2025-09-16 04:59:23.374 [INFO][5174] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.101.136/26] IPv6=[] ContainerID="b5eea2e4dd6b4345d6d186ee709b38f8ff6b0b7fd9912d614543bc1b4400147a" HandleID="k8s-pod-network.b5eea2e4dd6b4345d6d186ee709b38f8ff6b0b7fd9912d614543bc1b4400147a" Workload="ci--4459.0.0--n--f9a9538521-k8s-coredns--668d6bf9bc--fwjps-eth0"
Sep 16 04:59:23.413262 containerd[1722]: 2025-09-16 04:59:23.378 [INFO][5155] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b5eea2e4dd6b4345d6d186ee709b38f8ff6b0b7fd9912d614543bc1b4400147a" Namespace="kube-system" Pod="coredns-668d6bf9bc-fwjps" WorkloadEndpoint="ci--4459.0.0--n--f9a9538521-k8s-coredns--668d6bf9bc--fwjps-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.0.0--n--f9a9538521-k8s-coredns--668d6bf9bc--fwjps-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"33e9df7e-07bb-4b00-8672-4a43dc40d07a", ResourceVersion:"808", Generation:0, CreationTimestamp:time.Date(2025, time.September, 16, 4, 58, 31, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.0.0-n-f9a9538521", ContainerID:"", Pod:"coredns-668d6bf9bc-fwjps", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.101.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali41211361d67", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 16 04:59:23.413262 containerd[1722]: 2025-09-16 04:59:23.378 [INFO][5155] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.101.136/32] ContainerID="b5eea2e4dd6b4345d6d186ee709b38f8ff6b0b7fd9912d614543bc1b4400147a" Namespace="kube-system" Pod="coredns-668d6bf9bc-fwjps" WorkloadEndpoint="ci--4459.0.0--n--f9a9538521-k8s-coredns--668d6bf9bc--fwjps-eth0"
Sep 16 04:59:23.413262 containerd[1722]: 2025-09-16 04:59:23.379 [INFO][5155] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali41211361d67 ContainerID="b5eea2e4dd6b4345d6d186ee709b38f8ff6b0b7fd9912d614543bc1b4400147a" Namespace="kube-system" Pod="coredns-668d6bf9bc-fwjps" WorkloadEndpoint="ci--4459.0.0--n--f9a9538521-k8s-coredns--668d6bf9bc--fwjps-eth0"
Sep 16 04:59:23.413262 containerd[1722]: 2025-09-16 04:59:23.386 [INFO][5155] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b5eea2e4dd6b4345d6d186ee709b38f8ff6b0b7fd9912d614543bc1b4400147a" Namespace="kube-system" Pod="coredns-668d6bf9bc-fwjps" WorkloadEndpoint="ci--4459.0.0--n--f9a9538521-k8s-coredns--668d6bf9bc--fwjps-eth0"
Sep 16 04:59:23.413262 containerd[1722]: 2025-09-16 04:59:23.387 [INFO][5155] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b5eea2e4dd6b4345d6d186ee709b38f8ff6b0b7fd9912d614543bc1b4400147a" Namespace="kube-system" Pod="coredns-668d6bf9bc-fwjps" WorkloadEndpoint="ci--4459.0.0--n--f9a9538521-k8s-coredns--668d6bf9bc--fwjps-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.0.0--n--f9a9538521-k8s-coredns--668d6bf9bc--fwjps-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"33e9df7e-07bb-4b00-8672-4a43dc40d07a", ResourceVersion:"808", Generation:0, CreationTimestamp:time.Date(2025, time.September, 16, 4, 58, 31, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.0.0-n-f9a9538521", ContainerID:"b5eea2e4dd6b4345d6d186ee709b38f8ff6b0b7fd9912d614543bc1b4400147a", Pod:"coredns-668d6bf9bc-fwjps", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.101.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali41211361d67", MAC:"66:ae:12:4f:f7:1e", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 16 04:59:23.413262 containerd[1722]: 2025-09-16 04:59:23.406 [INFO][5155] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b5eea2e4dd6b4345d6d186ee709b38f8ff6b0b7fd9912d614543bc1b4400147a" Namespace="kube-system" Pod="coredns-668d6bf9bc-fwjps" WorkloadEndpoint="ci--4459.0.0--n--f9a9538521-k8s-coredns--668d6bf9bc--fwjps-eth0"
Sep 16 04:59:23.462905 containerd[1722]: time="2025-09-16T04:59:23.462746967Z" level=info msg="connecting to shim b5eea2e4dd6b4345d6d186ee709b38f8ff6b0b7fd9912d614543bc1b4400147a" address="unix:///run/containerd/s/e2d7a9692f548dec54f4c8b186652c590906f18ab93d091d1e328d80641ddbda" namespace=k8s.io protocol=ttrpc version=3
Sep 16 04:59:23.516383 systemd[1]: Started cri-containerd-b5eea2e4dd6b4345d6d186ee709b38f8ff6b0b7fd9912d614543bc1b4400147a.scope - libcontainer container b5eea2e4dd6b4345d6d186ee709b38f8ff6b0b7fd9912d614543bc1b4400147a.
Sep 16 04:59:23.590783 containerd[1722]: time="2025-09-16T04:59:23.590728640Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-fwjps,Uid:33e9df7e-07bb-4b00-8672-4a43dc40d07a,Namespace:kube-system,Attempt:0,} returns sandbox id \"b5eea2e4dd6b4345d6d186ee709b38f8ff6b0b7fd9912d614543bc1b4400147a\""
Sep 16 04:59:23.594563 containerd[1722]: time="2025-09-16T04:59:23.594534119Z" level=info msg="CreateContainer within sandbox \"b5eea2e4dd6b4345d6d186ee709b38f8ff6b0b7fd9912d614543bc1b4400147a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Sep 16 04:59:23.616157 containerd[1722]: time="2025-09-16T04:59:23.616122501Z" level=info msg="Container 6b2de00ad5ec220cd61dea197995de7edb9c5d56ff02999550be6d50f66f57c4: CDI devices from CRI Config.CDIDevices: []"
Sep 16 04:59:23.634453 containerd[1722]: time="2025-09-16T04:59:23.634424132Z" level=info msg="CreateContainer within sandbox \"b5eea2e4dd6b4345d6d186ee709b38f8ff6b0b7fd9912d614543bc1b4400147a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6b2de00ad5ec220cd61dea197995de7edb9c5d56ff02999550be6d50f66f57c4\""
Sep 16 04:59:23.635478 containerd[1722]: time="2025-09-16T04:59:23.635457942Z" level=info msg="StartContainer for \"6b2de00ad5ec220cd61dea197995de7edb9c5d56ff02999550be6d50f66f57c4\""
Sep 16 04:59:23.637085 containerd[1722]: time="2025-09-16T04:59:23.637032014Z" level=info msg="connecting to shim 6b2de00ad5ec220cd61dea197995de7edb9c5d56ff02999550be6d50f66f57c4" address="unix:///run/containerd/s/e2d7a9692f548dec54f4c8b186652c590906f18ab93d091d1e328d80641ddbda" protocol=ttrpc version=3
Sep 16 04:59:23.663214 systemd[1]: Started cri-containerd-6b2de00ad5ec220cd61dea197995de7edb9c5d56ff02999550be6d50f66f57c4.scope - libcontainer container 6b2de00ad5ec220cd61dea197995de7edb9c5d56ff02999550be6d50f66f57c4.
Sep 16 04:59:23.712898 containerd[1722]: time="2025-09-16T04:59:23.712868851Z" level=info msg="StartContainer for \"6b2de00ad5ec220cd61dea197995de7edb9c5d56ff02999550be6d50f66f57c4\" returns successfully"
Sep 16 04:59:24.066209 systemd-networkd[1578]: calib56ca15ebda: Gained IPv6LL
Sep 16 04:59:24.066521 systemd-networkd[1578]: cali8634a247766: Gained IPv6LL
Sep 16 04:59:24.066622 systemd-networkd[1578]: cali8fce9f9c80b: Gained IPv6LL
Sep 16 04:59:24.129174 systemd-networkd[1578]: cali60815d99563: Gained IPv6LL
Sep 16 04:59:24.145252 kubelet[3168]: I0916 04:59:24.144691 3168 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 16 04:59:24.168350 kubelet[3168]: I0916 04:59:24.168292 3168 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-fwjps" podStartSLOduration=53.168182065 podStartE2EDuration="53.168182065s" podCreationTimestamp="2025-09-16 04:58:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-16 04:59:24.164091791 +0000 UTC m=+58.465272951" watchObservedRunningTime="2025-09-16 04:59:24.168182065 +0000 UTC m=+58.469363197"
Sep 16 04:59:24.311488 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount592950723.mount: Deactivated successfully.
Sep 16 04:59:24.559587 containerd[1722]: time="2025-09-16T04:59:24.559550525Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 16 04:59:24.562030 containerd[1722]: time="2025-09-16T04:59:24.562003320Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=66357526"
Sep 16 04:59:24.564807 containerd[1722]: time="2025-09-16T04:59:24.564744955Z" level=info msg="ImageCreate event name:\"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 16 04:59:24.568531 containerd[1722]: time="2025-09-16T04:59:24.568484609Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 16 04:59:24.569166 containerd[1722]: time="2025-09-16T04:59:24.568954468Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"66357372\" in 3.2945439s"
Sep 16 04:59:24.569166 containerd[1722]: time="2025-09-16T04:59:24.569005715Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\""
Sep 16 04:59:24.570081 containerd[1722]: time="2025-09-16T04:59:24.570058628Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\""
Sep 16 04:59:24.572082 containerd[1722]: time="2025-09-16T04:59:24.572054373Z" level=info msg="CreateContainer within sandbox \"10ed7974d7b5fab9944a04287c0f677343bdc64734117fc3cce617728cc9596b\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}"
Sep 16 04:59:24.591264 containerd[1722]: time="2025-09-16T04:59:24.591234228Z" level=info msg="Container 11d3f8b9ca49d0d4197442098f64d392ee45432aa3cc8ddbf4253c1ba1f33f97: CDI devices from CRI Config.CDIDevices: []"
Sep 16 04:59:24.598406 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3012151549.mount: Deactivated successfully.
Sep 16 04:59:24.615278 containerd[1722]: time="2025-09-16T04:59:24.615252002Z" level=info msg="CreateContainer within sandbox \"10ed7974d7b5fab9944a04287c0f677343bdc64734117fc3cce617728cc9596b\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"11d3f8b9ca49d0d4197442098f64d392ee45432aa3cc8ddbf4253c1ba1f33f97\""
Sep 16 04:59:24.616660 containerd[1722]: time="2025-09-16T04:59:24.616612986Z" level=info msg="StartContainer for \"11d3f8b9ca49d0d4197442098f64d392ee45432aa3cc8ddbf4253c1ba1f33f97\""
Sep 16 04:59:24.617619 containerd[1722]: time="2025-09-16T04:59:24.617586359Z" level=info msg="connecting to shim 11d3f8b9ca49d0d4197442098f64d392ee45432aa3cc8ddbf4253c1ba1f33f97" address="unix:///run/containerd/s/c1e076549a14f4bbc31b62f45b68d7785257dd90e2c02bfb44d9ed5a6018920d" protocol=ttrpc version=3
Sep 16 04:59:24.639132 systemd[1]: Started cri-containerd-11d3f8b9ca49d0d4197442098f64d392ee45432aa3cc8ddbf4253c1ba1f33f97.scope - libcontainer container 11d3f8b9ca49d0d4197442098f64d392ee45432aa3cc8ddbf4253c1ba1f33f97.
Sep 16 04:59:24.695877 containerd[1722]: time="2025-09-16T04:59:24.695818923Z" level=info msg="StartContainer for \"11d3f8b9ca49d0d4197442098f64d392ee45432aa3cc8ddbf4253c1ba1f33f97\" returns successfully"
Sep 16 04:59:25.025119 systemd-networkd[1578]: cali41211361d67: Gained IPv6LL
Sep 16 04:59:25.166109 kubelet[3168]: I0916 04:59:25.165824 3168 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-54d579b49d-5ns66" podStartSLOduration=36.617660215 podStartE2EDuration="41.165806603s" podCreationTimestamp="2025-09-16 04:58:44 +0000 UTC" firstStartedPulling="2025-09-16 04:59:20.021745646 +0000 UTC m=+54.322926768" lastFinishedPulling="2025-09-16 04:59:24.569892028 +0000 UTC m=+58.871073156" observedRunningTime="2025-09-16 04:59:25.165387688 +0000 UTC m=+59.466568818" watchObservedRunningTime="2025-09-16 04:59:25.165806603 +0000 UTC m=+59.466987733"
Sep 16 04:59:25.237348 containerd[1722]: time="2025-09-16T04:59:25.237311418Z" level=info msg="TaskExit event in podsandbox handler container_id:\"11d3f8b9ca49d0d4197442098f64d392ee45432aa3cc8ddbf4253c1ba1f33f97\" id:\"4e526a66f467704062eb5b5ee7465c50ff29aad5fa48e5f4e4752a423ee94d80\" pid:5326 exit_status:1 exited_at:{seconds:1757998765 nanos:236914212}"
Sep 16 04:59:26.073637 containerd[1722]: time="2025-09-16T04:59:26.073593989Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 16 04:59:26.077376 containerd[1722]: time="2025-09-16T04:59:26.077339706Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8760527"
Sep 16 04:59:26.081046 containerd[1722]: time="2025-09-16T04:59:26.081000286Z" level=info msg="ImageCreate event name:\"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 16 04:59:26.084896 containerd[1722]: time="2025-09-16T04:59:26.084858384Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 16 04:59:26.085398 containerd[1722]: time="2025-09-16T04:59:26.085248156Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"10253230\" in 1.515159254s"
Sep 16 04:59:26.085398 containerd[1722]: time="2025-09-16T04:59:26.085284289Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\""
Sep 16 04:59:26.087427 containerd[1722]: time="2025-09-16T04:59:26.087260939Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\""
Sep 16 04:59:26.088208 containerd[1722]: time="2025-09-16T04:59:26.088183786Z" level=info msg="CreateContainer within sandbox \"472e11d5ce7b1ec3889a405f6d26733d524b814240aeed42dd3e1d9e30eb3b76\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}"
Sep 16 04:59:26.109253 containerd[1722]: time="2025-09-16T04:59:26.109224967Z" level=info msg="Container 0aa1c77d4f001b513469fb302d1cb61f8c889747fefe124c25cdb49fdc63122c: CDI devices from CRI Config.CDIDevices: []"
Sep 16 04:59:26.115938 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1828468808.mount: Deactivated successfully.
Sep 16 04:59:26.128990 containerd[1722]: time="2025-09-16T04:59:26.128963698Z" level=info msg="CreateContainer within sandbox \"472e11d5ce7b1ec3889a405f6d26733d524b814240aeed42dd3e1d9e30eb3b76\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"0aa1c77d4f001b513469fb302d1cb61f8c889747fefe124c25cdb49fdc63122c\""
Sep 16 04:59:26.129543 containerd[1722]: time="2025-09-16T04:59:26.129521851Z" level=info msg="StartContainer for \"0aa1c77d4f001b513469fb302d1cb61f8c889747fefe124c25cdb49fdc63122c\""
Sep 16 04:59:26.131070 containerd[1722]: time="2025-09-16T04:59:26.131041292Z" level=info msg="connecting to shim 0aa1c77d4f001b513469fb302d1cb61f8c889747fefe124c25cdb49fdc63122c" address="unix:///run/containerd/s/355401eb16c348d19e4baede0f4ba6dee9e7014cfa9cadc9c40e0a14eb9ab8f2" protocol=ttrpc version=3
Sep 16 04:59:26.154230 systemd[1]: Started cri-containerd-0aa1c77d4f001b513469fb302d1cb61f8c889747fefe124c25cdb49fdc63122c.scope - libcontainer container 0aa1c77d4f001b513469fb302d1cb61f8c889747fefe124c25cdb49fdc63122c.
Sep 16 04:59:26.217250 containerd[1722]: time="2025-09-16T04:59:26.217218789Z" level=info msg="StartContainer for \"0aa1c77d4f001b513469fb302d1cb61f8c889747fefe124c25cdb49fdc63122c\" returns successfully"
Sep 16 04:59:26.266694 containerd[1722]: time="2025-09-16T04:59:26.266665217Z" level=info msg="TaskExit event in podsandbox handler container_id:\"11d3f8b9ca49d0d4197442098f64d392ee45432aa3cc8ddbf4253c1ba1f33f97\" id:\"313c6baff0cb1a9fb6ed42c9d8405eeab3f8366e3ce37019ca60edb5e8566ab9\" pid:5382 exit_status:1 exited_at:{seconds:1757998766 nanos:266365665}"
Sep 16 04:59:27.245244 containerd[1722]: time="2025-09-16T04:59:27.245197751Z" level=info msg="TaskExit event in podsandbox handler container_id:\"11d3f8b9ca49d0d4197442098f64d392ee45432aa3cc8ddbf4253c1ba1f33f97\" id:\"f3e8b62bfb5f31c06d4bc69ea9a6b6c8ff5b647fb021674826ac5c0e0a5849dd\" pid:5418 exit_status:1 exited_at:{seconds:1757998767 nanos:244887872}"
Sep 16 04:59:28.863875 containerd[1722]: time="2025-09-16T04:59:28.863823111Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 16 04:59:28.866104 containerd[1722]: time="2025-09-16T04:59:28.865909910Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=51277746"
Sep 16 04:59:28.869257 containerd[1722]: time="2025-09-16T04:59:28.869229052Z" level=info msg="ImageCreate event name:\"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 16 04:59:28.872758 containerd[1722]: time="2025-09-16T04:59:28.872724362Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 16 04:59:28.873178 containerd[1722]: time="2025-09-16T04:59:28.873154728Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"52770417\" in 2.785860641s"
Sep 16 04:59:28.873256 containerd[1722]: time="2025-09-16T04:59:28.873243509Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\""
Sep 16 04:59:28.874132 containerd[1722]: time="2025-09-16T04:59:28.874106845Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\""
Sep 16 04:59:28.886152 containerd[1722]: time="2025-09-16T04:59:28.886093554Z" level=info msg="CreateContainer within sandbox \"653233208016ad7dfe1e71471022c68a16f5571e4179813b2388d81387a41c18\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}"
Sep 16 04:59:28.907957 containerd[1722]: time="2025-09-16T04:59:28.907101882Z" level=info msg="Container a9db73d278db7d439020fde75068c40a86b82ca0ce53d800f9b6840026259c42: CDI devices from CRI Config.CDIDevices: []"
Sep 16 04:59:28.923202 containerd[1722]: time="2025-09-16T04:59:28.923176294Z" level=info msg="CreateContainer within sandbox \"653233208016ad7dfe1e71471022c68a16f5571e4179813b2388d81387a41c18\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"a9db73d278db7d439020fde75068c40a86b82ca0ce53d800f9b6840026259c42\""
Sep 16 04:59:28.925729 containerd[1722]: time="2025-09-16T04:59:28.924046213Z" level=info msg="StartContainer for \"a9db73d278db7d439020fde75068c40a86b82ca0ce53d800f9b6840026259c42\""
Sep 16 04:59:28.925729 containerd[1722]: time="2025-09-16T04:59:28.924948786Z" level=info msg="connecting to shim a9db73d278db7d439020fde75068c40a86b82ca0ce53d800f9b6840026259c42" address="unix:///run/containerd/s/10f83f6d731904a0ef3fba893cdc921cc961dd4ff51236ca9627c5670a566233" protocol=ttrpc version=3
Sep 16 04:59:28.948121 systemd[1]: Started cri-containerd-a9db73d278db7d439020fde75068c40a86b82ca0ce53d800f9b6840026259c42.scope - libcontainer container a9db73d278db7d439020fde75068c40a86b82ca0ce53d800f9b6840026259c42.
Sep 16 04:59:29.002089 containerd[1722]: time="2025-09-16T04:59:29.002058197Z" level=info msg="StartContainer for \"a9db73d278db7d439020fde75068c40a86b82ca0ce53d800f9b6840026259c42\" returns successfully"
Sep 16 04:59:29.185179 kubelet[3168]: I0916 04:59:29.184535 3168 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-5864779d58-m7sl2" podStartSLOduration=37.771591449 podStartE2EDuration="44.184517266s" podCreationTimestamp="2025-09-16 04:58:45 +0000 UTC" firstStartedPulling="2025-09-16 04:59:22.46110347 +0000 UTC m=+56.762284595" lastFinishedPulling="2025-09-16 04:59:28.874029284 +0000 UTC m=+63.175210412" observedRunningTime="2025-09-16 04:59:29.183460428 +0000 UTC m=+63.484641556" watchObservedRunningTime="2025-09-16 04:59:29.184517266 +0000 UTC m=+63.485698398"
Sep 16 04:59:29.212353 containerd[1722]: time="2025-09-16T04:59:29.212318057Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a9db73d278db7d439020fde75068c40a86b82ca0ce53d800f9b6840026259c42\" id:\"0300ec3701c70e100a0d108cc4828c3f543b2a101e0ed4009fa78dd285e12b32\" pid:5487 exited_at:{seconds:1757998769 nanos:212071340}"
Sep 16 04:59:30.621098 containerd[1722]: time="2025-09-16T04:59:30.621052028Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 16 04:59:30.623357 containerd[1722]: time="2025-09-16T04:59:30.623280688Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=14698542"
Sep 16 04:59:30.625621 containerd[1722]: time="2025-09-16T04:59:30.625596133Z" level=info msg="ImageCreate event name:\"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 16 04:59:30.629016 containerd[1722]: time="2025-09-16T04:59:30.628794475Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 16 04:59:30.629408 containerd[1722]: time="2025-09-16T04:59:30.629243522Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"16191197\" in 1.755104161s"
Sep 16 04:59:30.629408 containerd[1722]: time="2025-09-16T04:59:30.629274159Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\""
Sep 16 04:59:30.632024 containerd[1722]: time="2025-09-16T04:59:30.631978273Z" level=info msg="CreateContainer within sandbox \"472e11d5ce7b1ec3889a405f6d26733d524b814240aeed42dd3e1d9e30eb3b76\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Sep 16 04:59:30.658012 containerd[1722]: time="2025-09-16T04:59:30.654630744Z" level=info msg="Container f460d4e2fb3de01ef997a13a3d80292cd7d5824d6ed86e6f11b9157e4c5eaa78: CDI devices from CRI Config.CDIDevices: []"
Sep 16 04:59:30.679815 containerd[1722]: time="2025-09-16T04:59:30.679775167Z" level=info msg="CreateContainer within sandbox
\"472e11d5ce7b1ec3889a405f6d26733d524b814240aeed42dd3e1d9e30eb3b76\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"f460d4e2fb3de01ef997a13a3d80292cd7d5824d6ed86e6f11b9157e4c5eaa78\"" Sep 16 04:59:30.680488 containerd[1722]: time="2025-09-16T04:59:30.680398142Z" level=info msg="StartContainer for \"f460d4e2fb3de01ef997a13a3d80292cd7d5824d6ed86e6f11b9157e4c5eaa78\"" Sep 16 04:59:30.681760 containerd[1722]: time="2025-09-16T04:59:30.681733038Z" level=info msg="connecting to shim f460d4e2fb3de01ef997a13a3d80292cd7d5824d6ed86e6f11b9157e4c5eaa78" address="unix:///run/containerd/s/355401eb16c348d19e4baede0f4ba6dee9e7014cfa9cadc9c40e0a14eb9ab8f2" protocol=ttrpc version=3 Sep 16 04:59:30.710126 systemd[1]: Started cri-containerd-f460d4e2fb3de01ef997a13a3d80292cd7d5824d6ed86e6f11b9157e4c5eaa78.scope - libcontainer container f460d4e2fb3de01ef997a13a3d80292cd7d5824d6ed86e6f11b9157e4c5eaa78. Sep 16 04:59:30.743671 containerd[1722]: time="2025-09-16T04:59:30.743609463Z" level=info msg="StartContainer for \"f460d4e2fb3de01ef997a13a3d80292cd7d5824d6ed86e6f11b9157e4c5eaa78\" returns successfully" Sep 16 04:59:30.889554 kubelet[3168]: I0916 04:59:30.888968 3168 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Sep 16 04:59:30.889554 kubelet[3168]: I0916 04:59:30.889047 3168 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Sep 16 04:59:31.188024 kubelet[3168]: I0916 04:59:31.186969 3168 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-f7dxh" podStartSLOduration=37.776937542 podStartE2EDuration="46.186949763s" podCreationTimestamp="2025-09-16 04:58:45 +0000 UTC" firstStartedPulling="2025-09-16 04:59:22.219950173 +0000 UTC m=+56.521131303" lastFinishedPulling="2025-09-16 
04:59:30.629962403 +0000 UTC m=+64.931143524" observedRunningTime="2025-09-16 04:59:31.186777324 +0000 UTC m=+65.487958458" watchObservedRunningTime="2025-09-16 04:59:31.186949763 +0000 UTC m=+65.488130887" Sep 16 04:59:44.000699 containerd[1722]: time="2025-09-16T04:59:44.000654555Z" level=info msg="TaskExit event in podsandbox handler container_id:\"efd5efac5937e6a7f1c537a5fab96fff14641f555ce5322a34c96727a30522df\" id:\"50ba29a844c72e9547bf7438081c02db8065dd3960413ec7276713fcefa2e47c\" pid:5564 exited_at:{seconds:1757998784 nanos:421153}" Sep 16 04:59:44.080718 containerd[1722]: time="2025-09-16T04:59:44.080653484Z" level=info msg="TaskExit event in podsandbox handler container_id:\"efd5efac5937e6a7f1c537a5fab96fff14641f555ce5322a34c96727a30522df\" id:\"3c3257c19b99ef92c0eba8b67cc236865d7cb3e88501aac6daf115303ac21a7d\" pid:5587 exited_at:{seconds:1757998784 nanos:80325733}" Sep 16 04:59:45.408834 systemd[1]: Started sshd@7-10.200.8.40:22-10.200.16.10:44650.service - OpenSSH per-connection server daemon (10.200.16.10:44650). Sep 16 04:59:45.919144 update_engine[1694]: I20250916 04:59:45.919090 1694 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Sep 16 04:59:45.919144 update_engine[1694]: I20250916 04:59:45.919142 1694 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Sep 16 04:59:45.919584 update_engine[1694]: I20250916 04:59:45.919291 1694 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Sep 16 04:59:45.920009 update_engine[1694]: I20250916 04:59:45.919641 1694 omaha_request_params.cc:62] Current group set to developer Sep 16 04:59:45.920009 update_engine[1694]: I20250916 04:59:45.919761 1694 update_attempter.cc:499] Already updated boot flags. Skipping. Sep 16 04:59:45.920009 update_engine[1694]: I20250916 04:59:45.919768 1694 update_attempter.cc:643] Scheduling an action processor start. 
Sep 16 04:59:45.920009 update_engine[1694]: I20250916 04:59:45.919786 1694 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Sep 16 04:59:45.920009 update_engine[1694]: I20250916 04:59:45.919811 1694 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Sep 16 04:59:45.921426 update_engine[1694]: I20250916 04:59:45.921272 1694 omaha_request_action.cc:271] Posting an Omaha request to disabled Sep 16 04:59:45.921426 update_engine[1694]: I20250916 04:59:45.921296 1694 omaha_request_action.cc:272] Request: Sep 16 04:59:45.921426 update_engine[1694]: Sep 16 04:59:45.921426 update_engine[1694]: Sep 16 04:59:45.921426 update_engine[1694]: Sep 16 04:59:45.921426 update_engine[1694]: Sep 16 04:59:45.921426 update_engine[1694]: Sep 16 04:59:45.921426 update_engine[1694]: Sep 16 04:59:45.921426 update_engine[1694]: Sep 16 04:59:45.921426 update_engine[1694]: Sep 16 04:59:45.921426 update_engine[1694]: I20250916 04:59:45.921303 1694 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 16 04:59:45.921721 locksmithd[1785]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Sep 16 04:59:45.922457 update_engine[1694]: I20250916 04:59:45.922417 1694 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 16 04:59:45.923237 update_engine[1694]: I20250916 04:59:45.923023 1694 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Sep 16 04:59:45.960191 update_engine[1694]: E20250916 04:59:45.960085 1694 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 16 04:59:45.960191 update_engine[1694]: I20250916 04:59:45.960166 1694 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Sep 16 04:59:46.046552 sshd[5604]: Accepted publickey for core from 10.200.16.10 port 44650 ssh2: RSA SHA256:Gt12JRTkVM33b5eHTclAVwi2ZfrwSLC40ZPSMLLbmPQ Sep 16 04:59:46.049451 sshd-session[5604]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:59:46.059556 systemd-logind[1693]: New session 10 of user core. Sep 16 04:59:46.063282 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 16 04:59:46.402195 kubelet[3168]: I0916 04:59:46.401123 3168 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 16 04:59:46.559465 containerd[1722]: time="2025-09-16T04:59:46.559424959Z" level=info msg="TaskExit event in podsandbox handler container_id:\"11d3f8b9ca49d0d4197442098f64d392ee45432aa3cc8ddbf4253c1ba1f33f97\" id:\"f3b582c19d76c83287dbe2d22eff0b9bf683a38701ae28225494f17157ecb515\" pid:5626 exited_at:{seconds:1757998786 nanos:559165997}" Sep 16 04:59:46.612211 sshd[5607]: Connection closed by 10.200.16.10 port 44650 Sep 16 04:59:46.612803 sshd-session[5604]: pam_unix(sshd:session): session closed for user core Sep 16 04:59:46.618156 systemd[1]: sshd@7-10.200.8.40:22-10.200.16.10:44650.service: Deactivated successfully. Sep 16 04:59:46.620631 systemd[1]: session-10.scope: Deactivated successfully. Sep 16 04:59:46.622066 systemd-logind[1693]: Session 10 logged out. Waiting for processes to exit. Sep 16 04:59:46.625461 systemd-logind[1693]: Removed session 10. 
Sep 16 04:59:50.425822 containerd[1722]: time="2025-09-16T04:59:50.425730014Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a9db73d278db7d439020fde75068c40a86b82ca0ce53d800f9b6840026259c42\" id:\"b3a22a2c12698c6bf02a434e549dde0c6c5663fc56f884c4a435b1f49eb4ffe5\" pid:5655 exited_at:{seconds:1757998790 nanos:425488403}" Sep 16 04:59:51.731300 systemd[1]: Started sshd@8-10.200.8.40:22-10.200.16.10:32872.service - OpenSSH per-connection server daemon (10.200.16.10:32872). Sep 16 04:59:52.320237 kubelet[3168]: I0916 04:59:52.320098 3168 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 16 04:59:52.362238 sshd[5666]: Accepted publickey for core from 10.200.16.10 port 32872 ssh2: RSA SHA256:Gt12JRTkVM33b5eHTclAVwi2ZfrwSLC40ZPSMLLbmPQ Sep 16 04:59:52.365658 sshd-session[5666]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:59:52.372282 systemd-logind[1693]: New session 11 of user core. Sep 16 04:59:52.378177 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 16 04:59:52.853120 sshd[5671]: Connection closed by 10.200.16.10 port 32872 Sep 16 04:59:52.852587 sshd-session[5666]: pam_unix(sshd:session): session closed for user core Sep 16 04:59:52.855862 systemd[1]: sshd@8-10.200.8.40:22-10.200.16.10:32872.service: Deactivated successfully. Sep 16 04:59:52.857917 systemd[1]: session-11.scope: Deactivated successfully. Sep 16 04:59:52.858753 systemd-logind[1693]: Session 11 logged out. Waiting for processes to exit. Sep 16 04:59:52.860910 systemd-logind[1693]: Removed session 11. 
Sep 16 04:59:55.906589 update_engine[1694]: I20250916 04:59:55.906533 1694 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 16 04:59:55.906971 update_engine[1694]: I20250916 04:59:55.906622 1694 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 16 04:59:55.907021 update_engine[1694]: I20250916 04:59:55.906972 1694 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Sep 16 04:59:55.977875 update_engine[1694]: E20250916 04:59:55.977820 1694 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 16 04:59:55.978027 update_engine[1694]: I20250916 04:59:55.977934 1694 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Sep 16 04:59:57.224635 containerd[1722]: time="2025-09-16T04:59:57.224571328Z" level=info msg="TaskExit event in podsandbox handler container_id:\"11d3f8b9ca49d0d4197442098f64d392ee45432aa3cc8ddbf4253c1ba1f33f97\" id:\"d3b5774dae0f281caf283ac852448d695c657a656c53a40b12f9d5139eda4cdd\" pid:5702 exited_at:{seconds:1757998797 nanos:224326468}" Sep 16 04:59:57.968213 systemd[1]: Started sshd@9-10.200.8.40:22-10.200.16.10:32880.service - OpenSSH per-connection server daemon (10.200.16.10:32880). Sep 16 04:59:58.599774 sshd[5716]: Accepted publickey for core from 10.200.16.10 port 32880 ssh2: RSA SHA256:Gt12JRTkVM33b5eHTclAVwi2ZfrwSLC40ZPSMLLbmPQ Sep 16 04:59:58.600947 sshd-session[5716]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:59:58.605348 systemd-logind[1693]: New session 12 of user core. Sep 16 04:59:58.611124 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 16 04:59:59.114258 sshd[5719]: Connection closed by 10.200.16.10 port 32880 Sep 16 04:59:59.114768 sshd-session[5716]: pam_unix(sshd:session): session closed for user core Sep 16 04:59:59.118058 systemd[1]: sshd@9-10.200.8.40:22-10.200.16.10:32880.service: Deactivated successfully. 
Sep 16 04:59:59.119887 systemd[1]: session-12.scope: Deactivated successfully. Sep 16 04:59:59.120843 systemd-logind[1693]: Session 12 logged out. Waiting for processes to exit. Sep 16 04:59:59.122620 systemd-logind[1693]: Removed session 12. Sep 16 04:59:59.200600 containerd[1722]: time="2025-09-16T04:59:59.200563257Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a9db73d278db7d439020fde75068c40a86b82ca0ce53d800f9b6840026259c42\" id:\"b3350cd123ef097b2e567bd91d70c4c87bdfc5e80e7d68a933875b237f1ef36e\" pid:5743 exited_at:{seconds:1757998799 nanos:200380326}" Sep 16 04:59:59.230363 systemd[1]: Started sshd@10-10.200.8.40:22-10.200.16.10:32896.service - OpenSSH per-connection server daemon (10.200.16.10:32896). Sep 16 04:59:59.858274 sshd[5753]: Accepted publickey for core from 10.200.16.10 port 32896 ssh2: RSA SHA256:Gt12JRTkVM33b5eHTclAVwi2ZfrwSLC40ZPSMLLbmPQ Sep 16 04:59:59.859594 sshd-session[5753]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:59:59.863861 systemd-logind[1693]: New session 13 of user core. Sep 16 04:59:59.868139 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 16 05:00:00.527770 sshd[5756]: Connection closed by 10.200.16.10 port 32896 Sep 16 05:00:00.528413 sshd-session[5753]: pam_unix(sshd:session): session closed for user core Sep 16 05:00:00.531860 systemd-logind[1693]: Session 13 logged out. Waiting for processes to exit. Sep 16 05:00:00.532414 systemd[1]: sshd@10-10.200.8.40:22-10.200.16.10:32896.service: Deactivated successfully. Sep 16 05:00:00.534728 systemd[1]: session-13.scope: Deactivated successfully. Sep 16 05:00:00.536218 systemd-logind[1693]: Removed session 13. Sep 16 05:00:00.653261 systemd[1]: Started sshd@11-10.200.8.40:22-10.200.16.10:53460.service - OpenSSH per-connection server daemon (10.200.16.10:53460). 
Sep 16 05:00:01.281046 sshd[5766]: Accepted publickey for core from 10.200.16.10 port 53460 ssh2: RSA SHA256:Gt12JRTkVM33b5eHTclAVwi2ZfrwSLC40ZPSMLLbmPQ Sep 16 05:00:01.282138 sshd-session[5766]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 05:00:01.286348 systemd-logind[1693]: New session 14 of user core. Sep 16 05:00:01.291124 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 16 05:00:01.774616 sshd[5769]: Connection closed by 10.200.16.10 port 53460 Sep 16 05:00:01.775161 sshd-session[5766]: pam_unix(sshd:session): session closed for user core Sep 16 05:00:01.780269 systemd[1]: sshd@11-10.200.8.40:22-10.200.16.10:53460.service: Deactivated successfully. Sep 16 05:00:01.782769 systemd[1]: session-14.scope: Deactivated successfully. Sep 16 05:00:01.783471 systemd-logind[1693]: Session 14 logged out. Waiting for processes to exit. Sep 16 05:00:01.785070 systemd-logind[1693]: Removed session 14. Sep 16 05:00:05.908092 update_engine[1694]: I20250916 05:00:05.908022 1694 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 16 05:00:05.908463 update_engine[1694]: I20250916 05:00:05.908116 1694 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 16 05:00:05.908514 update_engine[1694]: I20250916 05:00:05.908491 1694 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Sep 16 05:00:05.952758 update_engine[1694]: E20250916 05:00:05.952718 1694 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 16 05:00:05.952856 update_engine[1694]: I20250916 05:00:05.952807 1694 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Sep 16 05:00:06.891212 systemd[1]: Started sshd@12-10.200.8.40:22-10.200.16.10:53470.service - OpenSSH per-connection server daemon (10.200.16.10:53470). 
Sep 16 05:00:07.518775 sshd[5790]: Accepted publickey for core from 10.200.16.10 port 53470 ssh2: RSA SHA256:Gt12JRTkVM33b5eHTclAVwi2ZfrwSLC40ZPSMLLbmPQ Sep 16 05:00:07.519973 sshd-session[5790]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 05:00:07.524438 systemd-logind[1693]: New session 15 of user core. Sep 16 05:00:07.529130 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 16 05:00:08.010207 sshd[5793]: Connection closed by 10.200.16.10 port 53470 Sep 16 05:00:08.011684 sshd-session[5790]: pam_unix(sshd:session): session closed for user core Sep 16 05:00:08.014858 systemd-logind[1693]: Session 15 logged out. Waiting for processes to exit. Sep 16 05:00:08.015046 systemd[1]: sshd@12-10.200.8.40:22-10.200.16.10:53470.service: Deactivated successfully. Sep 16 05:00:08.017022 systemd[1]: session-15.scope: Deactivated successfully. Sep 16 05:00:08.018970 systemd-logind[1693]: Removed session 15. Sep 16 05:00:13.122150 systemd[1]: Started sshd@13-10.200.8.40:22-10.200.16.10:53222.service - OpenSSH per-connection server daemon (10.200.16.10:53222). Sep 16 05:00:13.751226 sshd[5805]: Accepted publickey for core from 10.200.16.10 port 53222 ssh2: RSA SHA256:Gt12JRTkVM33b5eHTclAVwi2ZfrwSLC40ZPSMLLbmPQ Sep 16 05:00:13.752336 sshd-session[5805]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 05:00:13.756672 systemd-logind[1693]: New session 16 of user core. Sep 16 05:00:13.763123 systemd[1]: Started session-16.scope - Session 16 of User core. 
Sep 16 05:00:14.065176 containerd[1722]: time="2025-09-16T05:00:14.065059081Z" level=info msg="TaskExit event in podsandbox handler container_id:\"efd5efac5937e6a7f1c537a5fab96fff14641f555ce5322a34c96727a30522df\" id:\"c02f85d5df6f1241c4c9ce13091fe98c95106915fe46bdad45c6fbd4f2e1cad7\" pid:5821 exited_at:{seconds:1757998814 nanos:64686486}" Sep 16 05:00:14.238804 sshd[5808]: Connection closed by 10.200.16.10 port 53222 Sep 16 05:00:14.240146 sshd-session[5805]: pam_unix(sshd:session): session closed for user core Sep 16 05:00:14.243072 systemd[1]: sshd@13-10.200.8.40:22-10.200.16.10:53222.service: Deactivated successfully. Sep 16 05:00:14.245126 systemd[1]: session-16.scope: Deactivated successfully. Sep 16 05:00:14.245972 systemd-logind[1693]: Session 16 logged out. Waiting for processes to exit. Sep 16 05:00:14.247609 systemd-logind[1693]: Removed session 16. Sep 16 05:00:15.905096 update_engine[1694]: I20250916 05:00:15.905038 1694 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 16 05:00:15.905543 update_engine[1694]: I20250916 05:00:15.905119 1694 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 16 05:00:15.905543 update_engine[1694]: I20250916 05:00:15.905483 1694 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Sep 16 05:00:15.911410 update_engine[1694]: E20250916 05:00:15.911381 1694 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 16 05:00:15.911496 update_engine[1694]: I20250916 05:00:15.911432 1694 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Sep 16 05:00:15.911496 update_engine[1694]: I20250916 05:00:15.911439 1694 omaha_request_action.cc:617] Omaha request response: Sep 16 05:00:15.911547 update_engine[1694]: E20250916 05:00:15.911512 1694 omaha_request_action.cc:636] Omaha request network transfer failed. 
Sep 16 05:00:15.911547 update_engine[1694]: I20250916 05:00:15.911530 1694 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Sep 16 05:00:15.911547 update_engine[1694]: I20250916 05:00:15.911533 1694 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Sep 16 05:00:15.911547 update_engine[1694]: I20250916 05:00:15.911538 1694 update_attempter.cc:306] Processing Done. Sep 16 05:00:15.911629 update_engine[1694]: E20250916 05:00:15.911553 1694 update_attempter.cc:619] Update failed. Sep 16 05:00:15.911629 update_engine[1694]: I20250916 05:00:15.911558 1694 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Sep 16 05:00:15.911629 update_engine[1694]: I20250916 05:00:15.911563 1694 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Sep 16 05:00:15.911629 update_engine[1694]: I20250916 05:00:15.911568 1694 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. 
Sep 16 05:00:15.911717 update_engine[1694]: I20250916 05:00:15.911643 1694 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Sep 16 05:00:15.911717 update_engine[1694]: I20250916 05:00:15.911667 1694 omaha_request_action.cc:271] Posting an Omaha request to disabled Sep 16 05:00:15.911717 update_engine[1694]: I20250916 05:00:15.911671 1694 omaha_request_action.cc:272] Request: Sep 16 05:00:15.911717 update_engine[1694]: Sep 16 05:00:15.911717 update_engine[1694]: Sep 16 05:00:15.911717 update_engine[1694]: Sep 16 05:00:15.911717 update_engine[1694]: Sep 16 05:00:15.911717 update_engine[1694]: Sep 16 05:00:15.911717 update_engine[1694]: Sep 16 05:00:15.911717 update_engine[1694]: I20250916 05:00:15.911677 1694 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 16 05:00:15.911717 update_engine[1694]: I20250916 05:00:15.911693 1694 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 16 05:00:15.911973 update_engine[1694]: I20250916 05:00:15.911947 1694 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Sep 16 05:00:15.912306 locksmithd[1785]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Sep 16 05:00:15.929780 update_engine[1694]: E20250916 05:00:15.929746 1694 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 16 05:00:15.929868 update_engine[1694]: I20250916 05:00:15.929827 1694 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Sep 16 05:00:15.929868 update_engine[1694]: I20250916 05:00:15.929837 1694 omaha_request_action.cc:617] Omaha request response: Sep 16 05:00:15.929868 update_engine[1694]: I20250916 05:00:15.929844 1694 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Sep 16 05:00:15.929868 update_engine[1694]: I20250916 05:00:15.929848 1694 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Sep 16 05:00:15.929868 update_engine[1694]: I20250916 05:00:15.929852 1694 update_attempter.cc:306] Processing Done. Sep 16 05:00:15.929868 update_engine[1694]: I20250916 05:00:15.929857 1694 update_attempter.cc:310] Error event sent. Sep 16 05:00:15.929868 update_engine[1694]: I20250916 05:00:15.929865 1694 update_check_scheduler.cc:74] Next update check in 49m19s Sep 16 05:00:15.930194 locksmithd[1785]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Sep 16 05:00:19.353333 systemd[1]: Started sshd@14-10.200.8.40:22-10.200.16.10:53238.service - OpenSSH per-connection server daemon (10.200.16.10:53238). Sep 16 05:00:19.982907 sshd[5845]: Accepted publickey for core from 10.200.16.10 port 53238 ssh2: RSA SHA256:Gt12JRTkVM33b5eHTclAVwi2ZfrwSLC40ZPSMLLbmPQ Sep 16 05:00:19.983940 sshd-session[5845]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 05:00:19.987840 systemd-logind[1693]: New session 17 of user core. 
Sep 16 05:00:19.992265 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 16 05:00:20.478654 sshd[5848]: Connection closed by 10.200.16.10 port 53238 Sep 16 05:00:20.479203 sshd-session[5845]: pam_unix(sshd:session): session closed for user core Sep 16 05:00:20.482931 systemd-logind[1693]: Session 17 logged out. Waiting for processes to exit. Sep 16 05:00:20.483622 systemd[1]: sshd@14-10.200.8.40:22-10.200.16.10:53238.service: Deactivated successfully. Sep 16 05:00:20.485668 systemd[1]: session-17.scope: Deactivated successfully. Sep 16 05:00:20.487730 systemd-logind[1693]: Removed session 17. Sep 16 05:00:25.591115 systemd[1]: Started sshd@15-10.200.8.40:22-10.200.16.10:34398.service - OpenSSH per-connection server daemon (10.200.16.10:34398). Sep 16 05:00:26.218072 sshd[5860]: Accepted publickey for core from 10.200.16.10 port 34398 ssh2: RSA SHA256:Gt12JRTkVM33b5eHTclAVwi2ZfrwSLC40ZPSMLLbmPQ Sep 16 05:00:26.219206 sshd-session[5860]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 05:00:26.223455 systemd-logind[1693]: New session 18 of user core. Sep 16 05:00:26.228129 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 16 05:00:26.712066 sshd[5865]: Connection closed by 10.200.16.10 port 34398 Sep 16 05:00:26.712583 sshd-session[5860]: pam_unix(sshd:session): session closed for user core Sep 16 05:00:26.715856 systemd[1]: sshd@15-10.200.8.40:22-10.200.16.10:34398.service: Deactivated successfully. Sep 16 05:00:26.717755 systemd[1]: session-18.scope: Deactivated successfully. Sep 16 05:00:26.718875 systemd-logind[1693]: Session 18 logged out. Waiting for processes to exit. Sep 16 05:00:26.720360 systemd-logind[1693]: Removed session 18. 
Sep 16 05:00:27.223421 containerd[1722]: time="2025-09-16T05:00:27.223374426Z" level=info msg="TaskExit event in podsandbox handler container_id:\"11d3f8b9ca49d0d4197442098f64d392ee45432aa3cc8ddbf4253c1ba1f33f97\" id:\"db708551bfdcfb7f2f92d4bb7b13fd771d1ca3189c55a1d090b20c606f600f61\" pid:5889 exited_at:{seconds:1757998827 nanos:222837062}" Sep 16 05:00:29.208191 containerd[1722]: time="2025-09-16T05:00:29.208149909Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a9db73d278db7d439020fde75068c40a86b82ca0ce53d800f9b6840026259c42\" id:\"45873a1f9389c73b038916f881f1210dd9e7ff6b8acba19a5146e074af4744b6\" pid:5912 exited_at:{seconds:1757998829 nanos:207893877}" Sep 16 05:00:31.824346 systemd[1]: Started sshd@16-10.200.8.40:22-10.200.16.10:59382.service - OpenSSH per-connection server daemon (10.200.16.10:59382). Sep 16 05:00:32.456119 sshd[5922]: Accepted publickey for core from 10.200.16.10 port 59382 ssh2: RSA SHA256:Gt12JRTkVM33b5eHTclAVwi2ZfrwSLC40ZPSMLLbmPQ Sep 16 05:00:32.457277 sshd-session[5922]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 05:00:32.461733 systemd-logind[1693]: New session 19 of user core. Sep 16 05:00:32.471119 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 16 05:00:32.956821 sshd[5927]: Connection closed by 10.200.16.10 port 59382 Sep 16 05:00:32.957356 sshd-session[5922]: pam_unix(sshd:session): session closed for user core Sep 16 05:00:32.960972 systemd-logind[1693]: Session 19 logged out. Waiting for processes to exit. Sep 16 05:00:32.961247 systemd[1]: sshd@16-10.200.8.40:22-10.200.16.10:59382.service: Deactivated successfully. Sep 16 05:00:32.963209 systemd[1]: session-19.scope: Deactivated successfully. Sep 16 05:00:32.965143 systemd-logind[1693]: Removed session 19. Sep 16 05:00:33.071042 systemd[1]: Started sshd@17-10.200.8.40:22-10.200.16.10:59398.service - OpenSSH per-connection server daemon (10.200.16.10:59398). 
Sep 16 05:00:33.700324 sshd[5939]: Accepted publickey for core from 10.200.16.10 port 59398 ssh2: RSA SHA256:Gt12JRTkVM33b5eHTclAVwi2ZfrwSLC40ZPSMLLbmPQ Sep 16 05:00:33.701359 sshd-session[5939]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 05:00:33.704955 systemd-logind[1693]: New session 20 of user core. Sep 16 05:00:33.711119 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 16 05:00:34.265629 sshd[5942]: Connection closed by 10.200.16.10 port 59398 Sep 16 05:00:34.266173 sshd-session[5939]: pam_unix(sshd:session): session closed for user core Sep 16 05:00:34.269568 systemd[1]: sshd@17-10.200.8.40:22-10.200.16.10:59398.service: Deactivated successfully. Sep 16 05:00:34.271514 systemd[1]: session-20.scope: Deactivated successfully. Sep 16 05:00:34.272230 systemd-logind[1693]: Session 20 logged out. Waiting for processes to exit. Sep 16 05:00:34.274041 systemd-logind[1693]: Removed session 20. Sep 16 05:00:34.382719 systemd[1]: Started sshd@18-10.200.8.40:22-10.200.16.10:59406.service - OpenSSH per-connection server daemon (10.200.16.10:59406). Sep 16 05:00:35.007810 sshd[5952]: Accepted publickey for core from 10.200.16.10 port 59406 ssh2: RSA SHA256:Gt12JRTkVM33b5eHTclAVwi2ZfrwSLC40ZPSMLLbmPQ Sep 16 05:00:35.009061 sshd-session[5952]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 05:00:35.013516 systemd-logind[1693]: New session 21 of user core. Sep 16 05:00:35.018107 systemd[1]: Started session-21.scope - Session 21 of User core. Sep 16 05:00:35.981613 sshd[5955]: Connection closed by 10.200.16.10 port 59406 Sep 16 05:00:35.982151 sshd-session[5952]: pam_unix(sshd:session): session closed for user core Sep 16 05:00:35.985125 systemd[1]: sshd@18-10.200.8.40:22-10.200.16.10:59406.service: Deactivated successfully. Sep 16 05:00:35.987162 systemd[1]: session-21.scope: Deactivated successfully. Sep 16 05:00:35.988600 systemd-logind[1693]: Session 21 logged out. 
Waiting for processes to exit. Sep 16 05:00:35.990699 systemd-logind[1693]: Removed session 21. Sep 16 05:00:36.091916 systemd[1]: Started sshd@19-10.200.8.40:22-10.200.16.10:59414.service - OpenSSH per-connection server daemon (10.200.16.10:59414). Sep 16 05:00:36.718543 sshd[5978]: Accepted publickey for core from 10.200.16.10 port 59414 ssh2: RSA SHA256:Gt12JRTkVM33b5eHTclAVwi2ZfrwSLC40ZPSMLLbmPQ Sep 16 05:00:36.719966 sshd-session[5978]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 05:00:36.727886 systemd-logind[1693]: New session 22 of user core. Sep 16 05:00:36.730758 systemd[1]: Started session-22.scope - Session 22 of User core. Sep 16 05:00:37.379885 sshd[5981]: Connection closed by 10.200.16.10 port 59414 Sep 16 05:00:37.382182 sshd-session[5978]: pam_unix(sshd:session): session closed for user core Sep 16 05:00:37.387160 systemd[1]: sshd@19-10.200.8.40:22-10.200.16.10:59414.service: Deactivated successfully. Sep 16 05:00:37.390502 systemd[1]: session-22.scope: Deactivated successfully. Sep 16 05:00:37.391951 systemd-logind[1693]: Session 22 logged out. Waiting for processes to exit. Sep 16 05:00:37.395038 systemd-logind[1693]: Removed session 22. Sep 16 05:00:37.492092 systemd[1]: Started sshd@20-10.200.8.40:22-10.200.16.10:59424.service - OpenSSH per-connection server daemon (10.200.16.10:59424). Sep 16 05:00:38.139245 sshd[5991]: Accepted publickey for core from 10.200.16.10 port 59424 ssh2: RSA SHA256:Gt12JRTkVM33b5eHTclAVwi2ZfrwSLC40ZPSMLLbmPQ Sep 16 05:00:38.141088 sshd-session[5991]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 05:00:38.146065 systemd-logind[1693]: New session 23 of user core. Sep 16 05:00:38.152508 systemd[1]: Started session-23.scope - Session 23 of User core. 
Sep 16 05:00:38.678434 sshd[5994]: Connection closed by 10.200.16.10 port 59424
Sep 16 05:00:38.679712 sshd-session[5991]: pam_unix(sshd:session): session closed for user core
Sep 16 05:00:38.685386 systemd[1]: sshd@20-10.200.8.40:22-10.200.16.10:59424.service: Deactivated successfully.
Sep 16 05:00:38.688495 systemd[1]: session-23.scope: Deactivated successfully.
Sep 16 05:00:38.690681 systemd-logind[1693]: Session 23 logged out. Waiting for processes to exit.
Sep 16 05:00:38.691828 systemd-logind[1693]: Removed session 23.
Sep 16 05:00:43.798355 systemd[1]: Started sshd@21-10.200.8.40:22-10.200.16.10:52714.service - OpenSSH per-connection server daemon (10.200.16.10:52714).
Sep 16 05:00:44.064958 containerd[1722]: time="2025-09-16T05:00:44.064835930Z" level=info msg="TaskExit event in podsandbox handler container_id:\"efd5efac5937e6a7f1c537a5fab96fff14641f555ce5322a34c96727a30522df\" id:\"bb81c1f9e64d8e2376e9d8eb6f48f8a0a464b45279d536a36d82de05f7632975\" pid:6025 exited_at:{seconds:1757998844 nanos:64420862}"
Sep 16 05:00:44.443382 sshd[6010]: Accepted publickey for core from 10.200.16.10 port 52714 ssh2: RSA SHA256:Gt12JRTkVM33b5eHTclAVwi2ZfrwSLC40ZPSMLLbmPQ
Sep 16 05:00:44.445577 sshd-session[6010]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 16 05:00:44.451765 systemd-logind[1693]: New session 24 of user core.
Sep 16 05:00:44.463155 systemd[1]: Started session-24.scope - Session 24 of User core.
Sep 16 05:00:44.973784 sshd[6038]: Connection closed by 10.200.16.10 port 52714
Sep 16 05:00:44.975271 sshd-session[6010]: pam_unix(sshd:session): session closed for user core
Sep 16 05:00:44.979824 systemd[1]: sshd@21-10.200.8.40:22-10.200.16.10:52714.service: Deactivated successfully.
Sep 16 05:00:44.980164 systemd-logind[1693]: Session 24 logged out. Waiting for processes to exit.
Sep 16 05:00:44.982439 systemd[1]: session-24.scope: Deactivated successfully.
Sep 16 05:00:44.984077 systemd-logind[1693]: Removed session 24.
Sep 16 05:00:46.513257 containerd[1722]: time="2025-09-16T05:00:46.513149182Z" level=info msg="TaskExit event in podsandbox handler container_id:\"11d3f8b9ca49d0d4197442098f64d392ee45432aa3cc8ddbf4253c1ba1f33f97\" id:\"4769210ad2d7c7286b2d77d44a7eae5cd5657d305dfee46292a2735af06e52e7\" pid:6064 exited_at:{seconds:1757998846 nanos:512909460}"
Sep 16 05:00:50.085545 systemd[1]: Started sshd@22-10.200.8.40:22-10.200.16.10:45216.service - OpenSSH per-connection server daemon (10.200.16.10:45216).
Sep 16 05:00:50.429912 containerd[1722]: time="2025-09-16T05:00:50.429870957Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a9db73d278db7d439020fde75068c40a86b82ca0ce53d800f9b6840026259c42\" id:\"d1c4799b511ba8b2d0eabdff0af56c5905d217eb739171b64367450c8b3851b4\" pid:6099 exited_at:{seconds:1757998850 nanos:429244006}"
Sep 16 05:00:50.716086 sshd[6076]: Accepted publickey for core from 10.200.16.10 port 45216 ssh2: RSA SHA256:Gt12JRTkVM33b5eHTclAVwi2ZfrwSLC40ZPSMLLbmPQ
Sep 16 05:00:50.717039 sshd-session[6076]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 16 05:00:50.721204 systemd-logind[1693]: New session 25 of user core.
Sep 16 05:00:50.728159 systemd[1]: Started session-25.scope - Session 25 of User core.
Sep 16 05:00:51.231736 sshd[6109]: Connection closed by 10.200.16.10 port 45216
Sep 16 05:00:51.232314 sshd-session[6076]: pam_unix(sshd:session): session closed for user core
Sep 16 05:00:51.239856 systemd[1]: sshd@22-10.200.8.40:22-10.200.16.10:45216.service: Deactivated successfully.
Sep 16 05:00:51.243529 systemd[1]: session-25.scope: Deactivated successfully.
Sep 16 05:00:51.245626 systemd-logind[1693]: Session 25 logged out. Waiting for processes to exit.
Sep 16 05:00:51.247897 systemd-logind[1693]: Removed session 25.
Sep 16 05:00:56.344263 systemd[1]: Started sshd@23-10.200.8.40:22-10.200.16.10:45224.service - OpenSSH per-connection server daemon (10.200.16.10:45224).
Sep 16 05:00:56.985741 sshd[6135]: Accepted publickey for core from 10.200.16.10 port 45224 ssh2: RSA SHA256:Gt12JRTkVM33b5eHTclAVwi2ZfrwSLC40ZPSMLLbmPQ
Sep 16 05:00:56.987517 sshd-session[6135]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 16 05:00:56.995796 systemd-logind[1693]: New session 26 of user core.
Sep 16 05:00:57.003165 systemd[1]: Started session-26.scope - Session 26 of User core.
Sep 16 05:00:57.240733 containerd[1722]: time="2025-09-16T05:00:57.240627833Z" level=info msg="TaskExit event in podsandbox handler container_id:\"11d3f8b9ca49d0d4197442098f64d392ee45432aa3cc8ddbf4253c1ba1f33f97\" id:\"1ac46818d763333e091e973a445fa520cfc951361a84ed874093be0d4c9ff4c6\" pid:6151 exited_at:{seconds:1757998857 nanos:240227640}"
Sep 16 05:00:57.550462 sshd[6138]: Connection closed by 10.200.16.10 port 45224
Sep 16 05:00:57.551378 sshd-session[6135]: pam_unix(sshd:session): session closed for user core
Sep 16 05:00:57.557476 systemd[1]: sshd@23-10.200.8.40:22-10.200.16.10:45224.service: Deactivated successfully.
Sep 16 05:00:57.557969 systemd-logind[1693]: Session 26 logged out. Waiting for processes to exit.
Sep 16 05:00:57.560600 systemd[1]: session-26.scope: Deactivated successfully.
Sep 16 05:00:57.563391 systemd-logind[1693]: Removed session 26.
Sep 16 05:00:59.216155 containerd[1722]: time="2025-09-16T05:00:59.216110329Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a9db73d278db7d439020fde75068c40a86b82ca0ce53d800f9b6840026259c42\" id:\"7c3a2fb29a2f6c90441c6b37fb7f11bb50e9fa015f727ab833a1a2aa10ee7680\" pid:6183 exited_at:{seconds:1757998859 nanos:215785629}"
Sep 16 05:01:02.666349 systemd[1]: Started sshd@24-10.200.8.40:22-10.200.16.10:59322.service - OpenSSH per-connection server daemon (10.200.16.10:59322).
Sep 16 05:01:03.305825 sshd[6195]: Accepted publickey for core from 10.200.16.10 port 59322 ssh2: RSA SHA256:Gt12JRTkVM33b5eHTclAVwi2ZfrwSLC40ZPSMLLbmPQ
Sep 16 05:01:03.306940 sshd-session[6195]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 16 05:01:03.311572 systemd-logind[1693]: New session 27 of user core.
Sep 16 05:01:03.316126 systemd[1]: Started session-27.scope - Session 27 of User core.
Sep 16 05:01:03.860695 sshd[6198]: Connection closed by 10.200.16.10 port 59322
Sep 16 05:01:03.862477 sshd-session[6195]: pam_unix(sshd:session): session closed for user core
Sep 16 05:01:03.865933 systemd[1]: sshd@24-10.200.8.40:22-10.200.16.10:59322.service: Deactivated successfully.
Sep 16 05:01:03.869181 systemd-logind[1693]: Session 27 logged out. Waiting for processes to exit.
Sep 16 05:01:03.869783 systemd[1]: session-27.scope: Deactivated successfully.
Sep 16 05:01:03.874242 systemd-logind[1693]: Removed session 27.
Sep 16 05:01:08.973450 systemd[1]: Started sshd@25-10.200.8.40:22-10.200.16.10:59336.service - OpenSSH per-connection server daemon (10.200.16.10:59336).
Sep 16 05:01:09.601730 sshd[6211]: Accepted publickey for core from 10.200.16.10 port 59336 ssh2: RSA SHA256:Gt12JRTkVM33b5eHTclAVwi2ZfrwSLC40ZPSMLLbmPQ
Sep 16 05:01:09.602828 sshd-session[6211]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 16 05:01:09.607352 systemd-logind[1693]: New session 28 of user core.
Sep 16 05:01:09.611135 systemd[1]: Started session-28.scope - Session 28 of User core.
Sep 16 05:01:10.139138 sshd[6215]: Connection closed by 10.200.16.10 port 59336
Sep 16 05:01:10.142173 sshd-session[6211]: pam_unix(sshd:session): session closed for user core
Sep 16 05:01:10.146594 systemd[1]: sshd@25-10.200.8.40:22-10.200.16.10:59336.service: Deactivated successfully.
Sep 16 05:01:10.149814 systemd[1]: session-28.scope: Deactivated successfully.
Sep 16 05:01:10.152231 systemd-logind[1693]: Session 28 logged out. Waiting for processes to exit.
Sep 16 05:01:10.154784 systemd-logind[1693]: Removed session 28.
Sep 16 05:01:14.064612 containerd[1722]: time="2025-09-16T05:01:14.064563519Z" level=info msg="TaskExit event in podsandbox handler container_id:\"efd5efac5937e6a7f1c537a5fab96fff14641f555ce5322a34c96727a30522df\" id:\"7f69e4e0ab7d19b744d5e256c965cefb4cfd5a3486da0d80dba0718b401fe14d\" pid:6238 exited_at:{seconds:1757998874 nanos:63645073}"