Jun 21 04:44:10.976879 kernel: Linux version 6.12.34-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Fri Jun 20 23:59:04 -00 2025 Jun 21 04:44:10.976911 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=d3c0be6f64121476b0313f5d7d7bbd73e21bc1a219aacd38b8006b291898eca1 Jun 21 04:44:10.976921 kernel: BIOS-provided physical RAM map: Jun 21 04:44:10.976929 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Jun 21 04:44:10.976936 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved Jun 21 04:44:10.976942 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable Jun 21 04:44:10.976952 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ffc4fff] reserved Jun 21 04:44:10.976959 kernel: BIOS-e820: [mem 0x000000003ffc5000-0x000000003ffd0fff] usable Jun 21 04:44:10.976966 kernel: BIOS-e820: [mem 0x000000003ffd1000-0x000000003fffafff] ACPI data Jun 21 04:44:10.976973 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS Jun 21 04:44:10.976980 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable Jun 21 04:44:10.976987 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable Jun 21 04:44:10.976994 kernel: printk: legacy bootconsole [earlyser0] enabled Jun 21 04:44:10.977002 kernel: NX (Execute Disable) protection: active Jun 21 04:44:10.977011 kernel: APIC: Static calls initialized Jun 21 04:44:10.977019 kernel: efi: EFI v2.7 by Microsoft Jun 21 04:44:10.977026 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3ebb9a98 
RNG=0x3ffd2018 Jun 21 04:44:10.977032 kernel: random: crng init done Jun 21 04:44:10.977039 kernel: secureboot: Secure boot disabled Jun 21 04:44:10.977045 kernel: SMBIOS 3.1.0 present. Jun 21 04:44:10.977052 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 11/21/2024 Jun 21 04:44:10.977058 kernel: DMI: Memory slots populated: 2/2 Jun 21 04:44:10.977066 kernel: Hypervisor detected: Microsoft Hyper-V Jun 21 04:44:10.977072 kernel: Hyper-V: privilege flags low 0xae7f, high 0x3b8030, hints 0x9e4e24, misc 0xe0bed7b2 Jun 21 04:44:10.977078 kernel: Hyper-V: Nested features: 0x3e0101 Jun 21 04:44:10.977085 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40 Jun 21 04:44:10.977091 kernel: Hyper-V: Using hypercall for remote TLB flush Jun 21 04:44:10.977098 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Jun 21 04:44:10.977104 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Jun 21 04:44:10.977111 kernel: tsc: Detected 2300.000 MHz processor Jun 21 04:44:10.977117 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jun 21 04:44:10.977124 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jun 21 04:44:10.977131 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x10000000000 Jun 21 04:44:10.977139 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Jun 21 04:44:10.977146 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jun 21 04:44:10.977153 kernel: e820: update [mem 0x48000000-0xffffffff] usable ==> reserved Jun 21 04:44:10.977159 kernel: last_pfn = 0x40000 max_arch_pfn = 0x10000000000 Jun 21 04:44:10.977166 kernel: Using GB pages for direct mapping Jun 21 04:44:10.977173 kernel: ACPI: Early table checksum verification disabled Jun 21 04:44:10.977180 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL) Jun 
21 04:44:10.977191 kernel: ACPI: XSDT 0x000000003FFF90E8 00005C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jun 21 04:44:10.977198 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Jun 21 04:44:10.977205 kernel: ACPI: DSDT 0x000000003FFD6000 01E11C (v02 MSFTVM DSDT01 00000001 INTL 20230628) Jun 21 04:44:10.977212 kernel: ACPI: FACS 0x000000003FFFE000 000040 Jun 21 04:44:10.977219 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jun 21 04:44:10.977226 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Jun 21 04:44:10.977234 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jun 21 04:44:10.977241 kernel: ACPI: APIC 0x000000003FFD5000 000052 (v05 HVLITE HVLITETB 00000000 MSHV 00000000) Jun 21 04:44:10.977248 kernel: ACPI: SRAT 0x000000003FFD4000 0000A0 (v03 HVLITE HVLITETB 00000000 MSHV 00000000) Jun 21 04:44:10.977255 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jun 21 04:44:10.977262 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113] Jun 21 04:44:10.977269 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff411b] Jun 21 04:44:10.977276 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f] Jun 21 04:44:10.977283 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063] Jun 21 04:44:10.977290 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f] Jun 21 04:44:10.977298 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027] Jun 21 04:44:10.977305 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5051] Jun 21 04:44:10.977312 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd409f] Jun 21 04:44:10.977319 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037] Jun 21 04:44:10.977326 kernel: ACPI: SRAT: Node 0 PXM 0 
[mem 0x00000000-0x3fffffff] Jun 21 04:44:10.977333 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] Jun 21 04:44:10.977340 kernel: NUMA: Node 0 [mem 0x00001000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00001000-0x2bfffffff] Jun 21 04:44:10.977347 kernel: NODE_DATA(0) allocated [mem 0x2bfff8dc0-0x2bfffffff] Jun 21 04:44:10.977354 kernel: Zone ranges: Jun 21 04:44:10.977362 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jun 21 04:44:10.977369 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Jun 21 04:44:10.977376 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff] Jun 21 04:44:10.977383 kernel: Device empty Jun 21 04:44:10.977390 kernel: Movable zone start for each node Jun 21 04:44:10.977397 kernel: Early memory node ranges Jun 21 04:44:10.977403 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Jun 21 04:44:10.977411 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff] Jun 21 04:44:10.977418 kernel: node 0: [mem 0x000000003ffc5000-0x000000003ffd0fff] Jun 21 04:44:10.977426 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff] Jun 21 04:44:10.977432 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff] Jun 21 04:44:10.977439 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff] Jun 21 04:44:10.977446 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jun 21 04:44:10.977453 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Jun 21 04:44:10.977460 kernel: On node 0, zone DMA32: 132 pages in unavailable ranges Jun 21 04:44:10.977467 kernel: On node 0, zone DMA32: 46 pages in unavailable ranges Jun 21 04:44:10.977474 kernel: ACPI: PM-Timer IO Port: 0x408 Jun 21 04:44:10.977481 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jun 21 04:44:10.977489 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jun 21 04:44:10.977496 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jun 21 04:44:10.977503 kernel: 
ACPI: SPCR: console: uart,io,0x3f8,115200 Jun 21 04:44:10.977510 kernel: TSC deadline timer available Jun 21 04:44:10.977517 kernel: CPU topo: Max. logical packages: 1 Jun 21 04:44:10.977524 kernel: CPU topo: Max. logical dies: 1 Jun 21 04:44:10.977530 kernel: CPU topo: Max. dies per package: 1 Jun 21 04:44:10.977537 kernel: CPU topo: Max. threads per core: 2 Jun 21 04:44:10.977544 kernel: CPU topo: Num. cores per package: 1 Jun 21 04:44:10.977552 kernel: CPU topo: Num. threads per package: 2 Jun 21 04:44:10.977559 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs Jun 21 04:44:10.977566 kernel: [mem 0x40000000-0xffffffff] available for PCI devices Jun 21 04:44:10.977573 kernel: Booting paravirtualized kernel on Hyper-V Jun 21 04:44:10.977580 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jun 21 04:44:10.977587 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jun 21 04:44:10.977594 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576 Jun 21 04:44:10.977601 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152 Jun 21 04:44:10.977608 kernel: pcpu-alloc: [0] 0 1 Jun 21 04:44:10.977616 kernel: Hyper-V: PV spinlocks enabled Jun 21 04:44:10.977623 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jun 21 04:44:10.977631 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=d3c0be6f64121476b0313f5d7d7bbd73e21bc1a219aacd38b8006b291898eca1 Jun 21 04:44:10.977639 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. 
Jun 21 04:44:10.977646 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Jun 21 04:44:10.977653 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jun 21 04:44:10.977660 kernel: Fallback order for Node 0: 0 Jun 21 04:44:10.977667 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2096877 Jun 21 04:44:10.977675 kernel: Policy zone: Normal Jun 21 04:44:10.977682 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jun 21 04:44:10.977689 kernel: software IO TLB: area num 2. Jun 21 04:44:10.977696 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jun 21 04:44:10.977703 kernel: ftrace: allocating 40093 entries in 157 pages Jun 21 04:44:10.977710 kernel: ftrace: allocated 157 pages with 5 groups Jun 21 04:44:10.977716 kernel: Dynamic Preempt: voluntary Jun 21 04:44:10.977723 kernel: rcu: Preemptible hierarchical RCU implementation. Jun 21 04:44:10.977731 kernel: rcu: RCU event tracing is enabled. Jun 21 04:44:10.977767 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jun 21 04:44:10.977783 kernel: Trampoline variant of Tasks RCU enabled. Jun 21 04:44:10.977791 kernel: Rude variant of Tasks RCU enabled. Jun 21 04:44:10.977801 kernel: Tracing variant of Tasks RCU enabled. Jun 21 04:44:10.977810 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jun 21 04:44:10.977819 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jun 21 04:44:10.977827 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jun 21 04:44:10.977836 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jun 21 04:44:10.977845 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. 
Jun 21 04:44:10.977853 kernel: Using NULL legacy PIC Jun 21 04:44:10.977862 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0 Jun 21 04:44:10.977886 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jun 21 04:44:10.977896 kernel: Console: colour dummy device 80x25 Jun 21 04:44:10.977904 kernel: printk: legacy console [tty1] enabled Jun 21 04:44:10.977913 kernel: printk: legacy console [ttyS0] enabled Jun 21 04:44:10.977921 kernel: printk: legacy bootconsole [earlyser0] disabled Jun 21 04:44:10.977930 kernel: ACPI: Core revision 20240827 Jun 21 04:44:10.977941 kernel: Failed to register legacy timer interrupt Jun 21 04:44:10.977950 kernel: APIC: Switch to symmetric I/O mode setup Jun 21 04:44:10.977958 kernel: x2apic enabled Jun 21 04:44:10.977967 kernel: APIC: Switched APIC routing to: physical x2apic Jun 21 04:44:10.977975 kernel: Hyper-V: Host Build 10.0.26100.1255-1-0 Jun 21 04:44:10.977984 kernel: Hyper-V: enabling crash_kexec_post_notifiers Jun 21 04:44:10.977993 kernel: Hyper-V: Disabling IBT because of Hyper-V bug Jun 21 04:44:10.978001 kernel: Hyper-V: Using IPI hypercalls Jun 21 04:44:10.978009 kernel: APIC: send_IPI() replaced with hv_send_ipi() Jun 21 04:44:10.978020 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask() Jun 21 04:44:10.978027 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself() Jun 21 04:44:10.978035 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself() Jun 21 04:44:10.978043 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all() Jun 21 04:44:10.978050 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self() Jun 21 04:44:10.978058 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212735223b2, max_idle_ns: 440795277976 ns Jun 21 04:44:10.978066 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 
4600.00 BogoMIPS (lpj=2300000) Jun 21 04:44:10.978074 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Jun 21 04:44:10.978081 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Jun 21 04:44:10.978090 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Jun 21 04:44:10.978098 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jun 21 04:44:10.978105 kernel: Spectre V2 : Mitigation: Retpolines Jun 21 04:44:10.978112 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Jun 21 04:44:10.978120 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! Jun 21 04:44:10.978127 kernel: RETBleed: Vulnerable Jun 21 04:44:10.978135 kernel: Speculative Store Bypass: Vulnerable Jun 21 04:44:10.978142 kernel: ITS: Mitigation: Aligned branch/return thunks Jun 21 04:44:10.978149 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jun 21 04:44:10.978157 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jun 21 04:44:10.978164 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jun 21 04:44:10.978172 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Jun 21 04:44:10.978180 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Jun 21 04:44:10.978187 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Jun 21 04:44:10.978195 kernel: x86/fpu: Supporting XSAVE feature 0x800: 'Control-flow User registers' Jun 21 04:44:10.978202 kernel: x86/fpu: Supporting XSAVE feature 0x20000: 'AMX Tile config' Jun 21 04:44:10.978209 kernel: x86/fpu: Supporting XSAVE feature 0x40000: 'AMX Tile data' Jun 21 04:44:10.978216 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jun 21 04:44:10.978224 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64 Jun 21 04:44:10.978231 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512 Jun 21 
04:44:10.978239 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024 Jun 21 04:44:10.978247 kernel: x86/fpu: xstate_offset[11]: 2432, xstate_sizes[11]: 16 Jun 21 04:44:10.978256 kernel: x86/fpu: xstate_offset[17]: 2496, xstate_sizes[17]: 64 Jun 21 04:44:10.978264 kernel: x86/fpu: xstate_offset[18]: 2560, xstate_sizes[18]: 8192 Jun 21 04:44:10.978273 kernel: x86/fpu: Enabled xstate features 0x608e7, context size is 10752 bytes, using 'compacted' format. Jun 21 04:44:10.978281 kernel: Freeing SMP alternatives memory: 32K Jun 21 04:44:10.978289 kernel: pid_max: default: 32768 minimum: 301 Jun 21 04:44:10.978297 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Jun 21 04:44:10.978306 kernel: landlock: Up and running. Jun 21 04:44:10.978314 kernel: SELinux: Initializing. Jun 21 04:44:10.978322 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Jun 21 04:44:10.978331 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Jun 21 04:44:10.978339 kernel: smpboot: CPU0: Intel INTEL(R) XEON(R) PLATINUM 8573C (family: 0x6, model: 0xcf, stepping: 0x2) Jun 21 04:44:10.978349 kernel: Performance Events: unsupported p6 CPU model 207 no PMU driver, software events only. Jun 21 04:44:10.978358 kernel: signal: max sigframe size: 11952 Jun 21 04:44:10.978366 kernel: rcu: Hierarchical SRCU implementation. Jun 21 04:44:10.978375 kernel: rcu: Max phase no-delay instances is 400. Jun 21 04:44:10.978383 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Jun 21 04:44:10.978392 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jun 21 04:44:10.978401 kernel: smp: Bringing up secondary CPUs ... Jun 21 04:44:10.978409 kernel: smpboot: x86: Booting SMP configuration: Jun 21 04:44:10.978418 kernel: .... 
node #0, CPUs: #1 Jun 21 04:44:10.978428 kernel: smp: Brought up 1 node, 2 CPUs Jun 21 04:44:10.978437 kernel: smpboot: Total of 2 processors activated (9200.00 BogoMIPS) Jun 21 04:44:10.978445 kernel: Memory: 8082308K/8387508K available (14336K kernel code, 2430K rwdata, 9956K rodata, 54424K init, 2544K bss, 299992K reserved, 0K cma-reserved) Jun 21 04:44:10.978454 kernel: devtmpfs: initialized Jun 21 04:44:10.978463 kernel: x86/mm: Memory block size: 128MB Jun 21 04:44:10.978471 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes) Jun 21 04:44:10.978480 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jun 21 04:44:10.978488 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jun 21 04:44:10.978497 kernel: pinctrl core: initialized pinctrl subsystem Jun 21 04:44:10.978507 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jun 21 04:44:10.978515 kernel: audit: initializing netlink subsys (disabled) Jun 21 04:44:10.978524 kernel: audit: type=2000 audit(1750481047.028:1): state=initialized audit_enabled=0 res=1 Jun 21 04:44:10.978532 kernel: thermal_sys: Registered thermal governor 'step_wise' Jun 21 04:44:10.978541 kernel: thermal_sys: Registered thermal governor 'user_space' Jun 21 04:44:10.978549 kernel: cpuidle: using governor menu Jun 21 04:44:10.978557 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jun 21 04:44:10.978566 kernel: dca service started, version 1.12.1 Jun 21 04:44:10.978574 kernel: e820: reserve RAM buffer [mem 0x3ff41000-0x3fffffff] Jun 21 04:44:10.978586 kernel: e820: reserve RAM buffer [mem 0x3ffd1000-0x3fffffff] Jun 21 04:44:10.978594 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jun 21 04:44:10.978603 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jun 21 04:44:10.978611 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jun 21 04:44:10.978620 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jun 21 04:44:10.978628 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jun 21 04:44:10.978637 kernel: ACPI: Added _OSI(Module Device) Jun 21 04:44:10.978645 kernel: ACPI: Added _OSI(Processor Device) Jun 21 04:44:10.978654 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jun 21 04:44:10.978664 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jun 21 04:44:10.978672 kernel: ACPI: Interpreter enabled Jun 21 04:44:10.978681 kernel: ACPI: PM: (supports S0 S5) Jun 21 04:44:10.978689 kernel: ACPI: Using IOAPIC for interrupt routing Jun 21 04:44:10.978698 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jun 21 04:44:10.978707 kernel: PCI: Ignoring E820 reservations for host bridge windows Jun 21 04:44:10.978716 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F Jun 21 04:44:10.978724 kernel: iommu: Default domain type: Translated Jun 21 04:44:10.978733 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jun 21 04:44:10.978742 kernel: efivars: Registered efivars operations Jun 21 04:44:10.978751 kernel: PCI: Using ACPI for IRQ routing Jun 21 04:44:10.978760 kernel: PCI: System does not support PCI Jun 21 04:44:10.978768 kernel: vgaarb: loaded Jun 21 04:44:10.978775 kernel: clocksource: Switched to clocksource tsc-early Jun 21 04:44:10.978783 kernel: VFS: Disk quotas dquot_6.6.0 Jun 21 04:44:10.978791 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jun 21 04:44:10.978798 kernel: pnp: PnP ACPI init Jun 21 04:44:10.978806 kernel: pnp: PnP ACPI: found 3 devices Jun 21 04:44:10.978815 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jun 21 
04:44:10.978823 kernel: NET: Registered PF_INET protocol family Jun 21 04:44:10.978831 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jun 21 04:44:10.978840 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Jun 21 04:44:10.978849 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jun 21 04:44:10.978857 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Jun 21 04:44:10.978866 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Jun 21 04:44:10.978889 kernel: TCP: Hash tables configured (established 65536 bind 65536) Jun 21 04:44:10.978898 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Jun 21 04:44:10.978908 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Jun 21 04:44:10.978916 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jun 21 04:44:10.978925 kernel: NET: Registered PF_XDP protocol family Jun 21 04:44:10.978933 kernel: PCI: CLS 0 bytes, default 64 Jun 21 04:44:10.978942 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jun 21 04:44:10.978950 kernel: software IO TLB: mapped [mem 0x000000003aa59000-0x000000003ea59000] (64MB) Jun 21 04:44:10.978959 kernel: RAPL PMU: API unit is 2^-32 Joules, 1 fixed counters, 10737418240 ms ovfl timer Jun 21 04:44:10.978967 kernel: RAPL PMU: hw unit of domain psys 2^-0 Joules Jun 21 04:44:10.978975 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212735223b2, max_idle_ns: 440795277976 ns Jun 21 04:44:10.980353 kernel: clocksource: Switched to clocksource tsc Jun 21 04:44:10.980363 kernel: Initialise system trusted keyrings Jun 21 04:44:10.980371 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Jun 21 04:44:10.980379 kernel: Key type asymmetric registered Jun 21 04:44:10.980388 kernel: Asymmetric key parser 'x509' registered Jun 21 04:44:10.980396 kernel: Block layer SCSI 
generic (bsg) driver version 0.4 loaded (major 250) Jun 21 04:44:10.980405 kernel: io scheduler mq-deadline registered Jun 21 04:44:10.980412 kernel: io scheduler kyber registered Jun 21 04:44:10.980420 kernel: io scheduler bfq registered Jun 21 04:44:10.980431 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jun 21 04:44:10.980439 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jun 21 04:44:10.980448 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jun 21 04:44:10.980456 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Jun 21 04:44:10.980464 kernel: serial8250: ttyS2 at I/O 0x3e8 (irq = 4, base_baud = 115200) is a 16550A Jun 21 04:44:10.980473 kernel: i8042: PNP: No PS/2 controller found. Jun 21 04:44:10.980599 kernel: rtc_cmos 00:02: registered as rtc0 Jun 21 04:44:10.980666 kernel: rtc_cmos 00:02: setting system clock to 2025-06-21T04:44:10 UTC (1750481050) Jun 21 04:44:10.980729 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram Jun 21 04:44:10.980738 kernel: intel_pstate: Intel P-state driver initializing Jun 21 04:44:10.980746 kernel: efifb: probing for efifb Jun 21 04:44:10.980754 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Jun 21 04:44:10.980761 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Jun 21 04:44:10.980769 kernel: efifb: scrolling: redraw Jun 21 04:44:10.980776 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Jun 21 04:44:10.980784 kernel: Console: switching to colour frame buffer device 128x48 Jun 21 04:44:10.980793 kernel: fb0: EFI VGA frame buffer device Jun 21 04:44:10.980801 kernel: pstore: Using crash dump compression: deflate Jun 21 04:44:10.980808 kernel: pstore: Registered efi_pstore as persistent store backend Jun 21 04:44:10.980815 kernel: NET: Registered PF_INET6 protocol family Jun 21 04:44:10.980822 kernel: Segment Routing with IPv6 Jun 21 04:44:10.980830 kernel: In-situ OAM (IOAM) with IPv6 Jun 21 
04:44:10.980838 kernel: NET: Registered PF_PACKET protocol family Jun 21 04:44:10.980845 kernel: Key type dns_resolver registered Jun 21 04:44:10.980852 kernel: IPI shorthand broadcast: enabled Jun 21 04:44:10.980861 kernel: sched_clock: Marking stable (2801003505, 92086936)->(3194839311, -301748870) Jun 21 04:44:10.980880 kernel: registered taskstats version 1 Jun 21 04:44:10.980888 kernel: Loading compiled-in X.509 certificates Jun 21 04:44:10.980895 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.34-flatcar: ec4617d162e00e1890f71f252cdf44036a7b66f7' Jun 21 04:44:10.980903 kernel: Demotion targets for Node 0: null Jun 21 04:44:10.980911 kernel: Key type .fscrypt registered Jun 21 04:44:10.980918 kernel: Key type fscrypt-provisioning registered Jun 21 04:44:10.980926 kernel: ima: No TPM chip found, activating TPM-bypass! Jun 21 04:44:10.980933 kernel: ima: Allocated hash algorithm: sha1 Jun 21 04:44:10.980942 kernel: ima: No architecture policies found Jun 21 04:44:10.980949 kernel: clk: Disabling unused clocks Jun 21 04:44:10.980961 kernel: Warning: unable to open an initial console. Jun 21 04:44:10.980969 kernel: Freeing unused kernel image (initmem) memory: 54424K Jun 21 04:44:10.980976 kernel: Write protecting the kernel read-only data: 24576k Jun 21 04:44:10.980984 kernel: Freeing unused kernel image (rodata/data gap) memory: 284K Jun 21 04:44:10.980992 kernel: Run /init as init process Jun 21 04:44:10.980999 kernel: with arguments: Jun 21 04:44:10.981006 kernel: /init Jun 21 04:44:10.981016 kernel: with environment: Jun 21 04:44:10.981022 kernel: HOME=/ Jun 21 04:44:10.981029 kernel: TERM=linux Jun 21 04:44:10.981036 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jun 21 04:44:10.981045 systemd[1]: Successfully made /usr/ read-only. 
Jun 21 04:44:10.981056 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jun 21 04:44:10.981065 systemd[1]: Detected virtualization microsoft. Jun 21 04:44:10.981073 systemd[1]: Detected architecture x86-64. Jun 21 04:44:10.981082 systemd[1]: Running in initrd. Jun 21 04:44:10.981090 systemd[1]: No hostname configured, using default hostname. Jun 21 04:44:10.981098 systemd[1]: Hostname set to . Jun 21 04:44:10.981105 systemd[1]: Initializing machine ID from random generator. Jun 21 04:44:10.981113 systemd[1]: Queued start job for default target initrd.target. Jun 21 04:44:10.981121 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jun 21 04:44:10.981129 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 21 04:44:10.981138 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jun 21 04:44:10.981148 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jun 21 04:44:10.981156 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jun 21 04:44:10.981165 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jun 21 04:44:10.981173 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jun 21 04:44:10.981181 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... 
Jun 21 04:44:10.981189 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jun 21 04:44:10.981249 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jun 21 04:44:10.981256 systemd[1]: Reached target paths.target - Path Units. Jun 21 04:44:10.981265 systemd[1]: Reached target slices.target - Slice Units. Jun 21 04:44:10.981273 systemd[1]: Reached target swap.target - Swaps. Jun 21 04:44:10.981281 systemd[1]: Reached target timers.target - Timer Units. Jun 21 04:44:10.981289 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jun 21 04:44:10.981297 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jun 21 04:44:10.981306 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jun 21 04:44:10.981313 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jun 21 04:44:10.981323 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jun 21 04:44:10.981331 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jun 21 04:44:10.981339 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jun 21 04:44:10.981347 systemd[1]: Reached target sockets.target - Socket Units. Jun 21 04:44:10.981355 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jun 21 04:44:10.981363 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jun 21 04:44:10.981371 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jun 21 04:44:10.981379 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jun 21 04:44:10.981389 systemd[1]: Starting systemd-fsck-usr.service... Jun 21 04:44:10.981397 systemd[1]: Starting systemd-journald.service - Journal Service... 
Jun 21 04:44:10.981405 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jun 21 04:44:10.981413 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 21 04:44:10.981430 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jun 21 04:44:10.981456 systemd-journald[206]: Collecting audit messages is disabled. Jun 21 04:44:10.981475 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jun 21 04:44:10.981539 systemd-journald[206]: Journal started Jun 21 04:44:10.981560 systemd-journald[206]: Runtime Journal (/run/log/journal/a3ad36fdf515406bad57d1e2d7f39da4) is 8M, max 159M, 151M free. Jun 21 04:44:10.985903 systemd[1]: Started systemd-journald.service - Journal Service. Jun 21 04:44:10.985649 systemd[1]: Finished systemd-fsck-usr.service. Jun 21 04:44:10.987384 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jun 21 04:44:10.991032 systemd-modules-load[207]: Inserted module 'overlay' Jun 21 04:44:10.999095 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jun 21 04:44:11.009506 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jun 21 04:44:11.017237 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jun 21 04:44:11.026972 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jun 21 04:44:11.027334 systemd-tmpfiles[219]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jun 21 04:44:11.032125 kernel: Bridge firewalling registered Jun 21 04:44:11.031883 systemd-modules-load[207]: Inserted module 'br_netfilter' Jun 21 04:44:11.033245 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. 
Jun 21 04:44:11.035148 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jun 21 04:44:11.035517 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jun 21 04:44:11.037971 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jun 21 04:44:11.039972 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jun 21 04:44:11.053791 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jun 21 04:44:11.054455 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 21 04:44:11.056974 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jun 21 04:44:11.069975 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 21 04:44:11.070790 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jun 21 04:44:11.084053 dracut-cmdline[245]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=d3c0be6f64121476b0313f5d7d7bbd73e21bc1a219aacd38b8006b291898eca1 Jun 21 04:44:11.114555 systemd-resolved[238]: Positive Trust Anchors: Jun 21 04:44:11.114570 systemd-resolved[238]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jun 21 04:44:11.114602 systemd-resolved[238]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jun 21 04:44:11.132630 systemd-resolved[238]: Defaulting to hostname 'linux'. Jun 21 04:44:11.135131 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jun 21 04:44:11.137507 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jun 21 04:44:11.153884 kernel: SCSI subsystem initialized Jun 21 04:44:11.161885 kernel: Loading iSCSI transport class v2.0-870. Jun 21 04:44:11.169887 kernel: iscsi: registered transport (tcp) Jun 21 04:44:11.185161 kernel: iscsi: registered transport (qla4xxx) Jun 21 04:44:11.185202 kernel: QLogic iSCSI HBA Driver Jun 21 04:44:11.196953 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jun 21 04:44:11.204853 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jun 21 04:44:11.209773 systemd[1]: Reached target network-pre.target - Preparation for Network. Jun 21 04:44:11.237660 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jun 21 04:44:11.241217 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... 
Jun 21 04:44:11.282886 kernel: raid6: avx512x4 gen() 46417 MB/s Jun 21 04:44:11.300882 kernel: raid6: avx512x2 gen() 46366 MB/s Jun 21 04:44:11.317880 kernel: raid6: avx512x1 gen() 29339 MB/s Jun 21 04:44:11.335881 kernel: raid6: avx2x4 gen() 41378 MB/s Jun 21 04:44:11.352880 kernel: raid6: avx2x2 gen() 44520 MB/s Jun 21 04:44:11.370112 kernel: raid6: avx2x1 gen() 31101 MB/s Jun 21 04:44:11.370193 kernel: raid6: using algorithm avx512x4 gen() 46417 MB/s Jun 21 04:44:11.389058 kernel: raid6: .... xor() 8030 MB/s, rmw enabled Jun 21 04:44:11.389083 kernel: raid6: using avx512x2 recovery algorithm Jun 21 04:44:11.405892 kernel: xor: automatically using best checksumming function avx Jun 21 04:44:11.509884 kernel: Btrfs loaded, zoned=no, fsverity=no Jun 21 04:44:11.513996 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jun 21 04:44:11.515972 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 21 04:44:11.532425 systemd-udevd[454]: Using default interface naming scheme 'v255'. Jun 21 04:44:11.535886 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 21 04:44:11.542382 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jun 21 04:44:11.557083 dracut-pre-trigger[469]: rd.md=0: removing MD RAID activation Jun 21 04:44:11.573377 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jun 21 04:44:11.575968 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jun 21 04:44:11.604056 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jun 21 04:44:11.610434 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
Jun 21 04:44:11.647890 kernel: cryptd: max_cpu_qlen set to 1000 Jun 21 04:44:11.655883 kernel: hv_vmbus: Vmbus version:5.3 Jun 21 04:44:11.668890 kernel: AES CTR mode by8 optimization enabled Jun 21 04:44:11.678355 kernel: pps_core: LinuxPPS API ver. 1 registered Jun 21 04:44:11.678384 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jun 21 04:44:11.682887 kernel: PTP clock support registered Jun 21 04:44:11.696227 kernel: hv_utils: Registering HyperV Utility Driver Jun 21 04:44:11.696271 kernel: hv_vmbus: registering driver hv_utils Jun 21 04:44:11.706890 kernel: hid: raw HID events driver (C) Jiri Kosina Jun 21 04:44:11.707540 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jun 21 04:44:11.760347 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jun 21 04:44:11.768943 kernel: hv_utils: Shutdown IC version 3.2 Jun 21 04:44:11.768972 kernel: hv_utils: Heartbeat IC version 3.0 Jun 21 04:44:11.768985 kernel: hv_vmbus: registering driver hyperv_keyboard Jun 21 04:44:11.768995 kernel: hv_utils: TimeSync IC version 4.0 Jun 21 04:44:11.769006 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Jun 21 04:44:11.624009 systemd-resolved[238]: Clock change detected. Flushing caches. Jun 21 04:44:11.637009 systemd-journald[206]: Time jumped backwards, rotating. Jun 21 04:44:11.637058 kernel: hv_vmbus: registering driver hid_hyperv Jun 21 04:44:11.637068 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Jun 21 04:44:11.637077 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Jun 21 04:44:11.642489 kernel: hv_vmbus: registering driver hv_pci Jun 21 04:44:11.629254 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... 
Jun 21 04:44:11.648226 kernel: hv_pci 7ad35d50-c05b-47ab-b3a0-56a9a845852b: PCI VMBus probing: Using version 0x10004 Jun 21 04:44:11.636562 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 21 04:44:11.649368 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jun 21 04:44:11.654252 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jun 21 04:44:11.663078 kernel: hv_vmbus: registering driver hv_storvsc Jun 21 04:44:11.663095 kernel: hv_vmbus: registering driver hv_netvsc Jun 21 04:44:11.661522 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 21 04:44:11.667171 kernel: scsi host0: storvsc_host_t Jun 21 04:44:11.667215 kernel: hv_pci 7ad35d50-c05b-47ab-b3a0-56a9a845852b: PCI host bridge to bus c05b:00 Jun 21 04:44:11.669761 kernel: pci_bus c05b:00: root bus resource [mem 0xfc0000000-0xfc007ffff window] Jun 21 04:44:11.669873 kernel: pci_bus c05b:00: No busn resource found for root bus, will use [bus 00-ff] Jun 21 04:44:11.671164 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5 Jun 21 04:44:11.673465 kernel: pci c05b:00:00.0: [1414:00a9] type 00 class 0x010802 PCIe Endpoint Jun 21 04:44:11.684011 kernel: pci c05b:00:00.0: BAR 0 [mem 0xfc0000000-0xfc007ffff 64bit] Jun 21 04:44:11.688175 kernel: hv_netvsc f8615163-0000-1000-2000-7ced8d4aa50e (unnamed net_device) (uninitialized): VF slot 1 added Jun 21 04:44:11.699153 kernel: pci c05b:00:00.0: 32.000 Gb/s available PCIe bandwidth, limited by 2.5 GT/s PCIe x16 link at c05b:00:00.0 (capable of 1024.000 Gb/s with 64.0 GT/s PCIe x16 link) Jun 21 04:44:11.709061 kernel: pci_bus c05b:00: busn_res: [bus 00-ff] end is updated to 00 Jun 21 04:44:11.709224 kernel: pci c05b:00:00.0: BAR 0 [mem 0xfc0000000-0xfc007ffff 64bit]: assigned Jun 21 04:44:11.715077 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Jun 21 04:44:11.715292 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jun 21 04:44:11.718148 kernel: sr 
0:0:0:2: Attached scsi CD-ROM sr0 Jun 21 04:44:11.720728 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jun 21 04:44:11.734038 kernel: nvme nvme0: pci function c05b:00:00.0 Jun 21 04:44:11.734243 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#274 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Jun 21 04:44:11.735627 kernel: nvme c05b:00:00.0: enabling device (0000 -> 0002) Jun 21 04:44:11.751159 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#298 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Jun 21 04:44:11.989149 kernel: nvme nvme0: 2/0/0 default/read/poll queues Jun 21 04:44:11.993147 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jun 21 04:44:12.228177 kernel: nvme nvme0: using unchecked data buffer Jun 21 04:44:12.398225 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - MSFT NVMe Accelerator v1.0 EFI-SYSTEM. Jun 21 04:44:12.408856 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - MSFT NVMe Accelerator v1.0 OEM. Jun 21 04:44:12.462370 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - MSFT NVMe Accelerator v1.0 ROOT. Jun 21 04:44:12.494038 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - MSFT NVMe Accelerator v1.0 USR-A. Jun 21 04:44:12.497501 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - MSFT NVMe Accelerator v1.0 USR-A. Jun 21 04:44:12.500514 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jun 21 04:44:12.504727 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jun 21 04:44:12.505327 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 21 04:44:12.505352 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jun 21 04:44:12.507246 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... 
Jun 21 04:44:12.508189 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jun 21 04:44:12.524385 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jun 21 04:44:12.534869 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jun 21 04:44:12.709867 kernel: hv_pci 00000001-7870-47b5-b203-907d12ca697e: PCI VMBus probing: Using version 0x10004 Jun 21 04:44:12.710064 kernel: hv_pci 00000001-7870-47b5-b203-907d12ca697e: PCI host bridge to bus 7870:00 Jun 21 04:44:12.712289 kernel: pci_bus 7870:00: root bus resource [mem 0xfc2000000-0xfc4007fff window] Jun 21 04:44:12.713757 kernel: pci_bus 7870:00: No busn resource found for root bus, will use [bus 00-ff] Jun 21 04:44:12.717322 kernel: pci 7870:00:00.0: [1414:00ba] type 00 class 0x020000 PCIe Endpoint Jun 21 04:44:12.721317 kernel: pci 7870:00:00.0: BAR 0 [mem 0xfc2000000-0xfc3ffffff 64bit pref] Jun 21 04:44:12.725160 kernel: pci 7870:00:00.0: BAR 4 [mem 0xfc4000000-0xfc4007fff 64bit pref] Jun 21 04:44:12.725179 kernel: pci 7870:00:00.0: enabling Extended Tags Jun 21 04:44:12.740173 kernel: pci_bus 7870:00: busn_res: [bus 00-ff] end is updated to 00 Jun 21 04:44:12.740329 kernel: pci 7870:00:00.0: BAR 0 [mem 0xfc2000000-0xfc3ffffff 64bit pref]: assigned Jun 21 04:44:12.741414 kernel: pci 7870:00:00.0: BAR 4 [mem 0xfc4000000-0xfc4007fff 64bit pref]: assigned Jun 21 04:44:12.747191 kernel: mana 7870:00:00.0: enabling device (0000 -> 0002) Jun 21 04:44:12.756919 kernel: mana 7870:00:00.0: Microsoft Azure Network Adapter protocol version: 0.1.1 Jun 21 04:44:12.757105 kernel: hv_netvsc f8615163-0000-1000-2000-7ced8d4aa50e eth0: VF registering: eth1 Jun 21 04:44:12.758622 kernel: mana 7870:00:00.0 eth1: joined to eth0 Jun 21 04:44:12.762156 kernel: mana 7870:00:00.0 enP30832s1: renamed from eth1 Jun 21 04:44:13.547313 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jun 21 04:44:13.547363 disk-uuid[689]: The operation has completed successfully. 
Jun 21 04:44:13.594504 systemd[1]: disk-uuid.service: Deactivated successfully. Jun 21 04:44:13.594582 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jun 21 04:44:13.623890 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jun 21 04:44:13.640031 sh[725]: Success Jun 21 04:44:13.665343 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jun 21 04:44:13.665397 kernel: device-mapper: uevent: version 1.0.3 Jun 21 04:44:13.666617 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jun 21 04:44:13.674152 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Jun 21 04:44:13.979074 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jun 21 04:44:13.982623 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jun 21 04:44:13.996818 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jun 21 04:44:14.009180 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' Jun 21 04:44:14.009238 kernel: BTRFS: device fsid bfb8168c-5be0-428c-83e7-820ccaf1f8e9 devid 1 transid 41 /dev/mapper/usr (254:0) scanned by mount (738) Jun 21 04:44:14.014571 kernel: BTRFS info (device dm-0): first mount of filesystem bfb8168c-5be0-428c-83e7-820ccaf1f8e9 Jun 21 04:44:14.014604 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jun 21 04:44:14.015532 kernel: BTRFS info (device dm-0): using free-space-tree Jun 21 04:44:14.327872 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jun 21 04:44:14.331038 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jun 21 04:44:14.336258 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. 
Jun 21 04:44:14.338240 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jun 21 04:44:14.350688 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jun 21 04:44:14.373164 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/nvme0n1p6 (259:5) scanned by mount (769) Jun 21 04:44:14.377790 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 57d2b200-37a8-4067-8765-910d3ed0182c Jun 21 04:44:14.377825 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jun 21 04:44:14.380006 kernel: BTRFS info (device nvme0n1p6): using free-space-tree Jun 21 04:44:14.414211 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jun 21 04:44:14.415679 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jun 21 04:44:14.428148 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 57d2b200-37a8-4067-8765-910d3ed0182c Jun 21 04:44:14.428190 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jun 21 04:44:14.432250 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jun 21 04:44:14.446833 systemd-networkd[901]: lo: Link UP Jun 21 04:44:14.446838 systemd-networkd[901]: lo: Gained carrier Jun 21 04:44:14.448241 systemd-networkd[901]: Enumeration completed Jun 21 04:44:14.459224 kernel: mana 7870:00:00.0 enP30832s1: Configured vPort 0 PD 18 DB 16 Jun 21 04:44:14.459351 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64 Jun 21 04:44:14.459422 kernel: hv_netvsc f8615163-0000-1000-2000-7ced8d4aa50e eth0: Data path switched to VF: enP30832s1 Jun 21 04:44:14.448570 systemd-networkd[901]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 21 04:44:14.448573 systemd-networkd[901]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jun 21 04:44:14.449002 systemd[1]: Started systemd-networkd.service - Network Configuration. Jun 21 04:44:14.453398 systemd[1]: Reached target network.target - Network. Jun 21 04:44:14.457327 systemd-networkd[901]: enP30832s1: Link UP Jun 21 04:44:14.457385 systemd-networkd[901]: eth0: Link UP Jun 21 04:44:14.457463 systemd-networkd[901]: eth0: Gained carrier Jun 21 04:44:14.457472 systemd-networkd[901]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 21 04:44:14.461295 systemd-networkd[901]: enP30832s1: Gained carrier Jun 21 04:44:14.472182 systemd-networkd[901]: eth0: DHCPv4 address 10.200.8.43/24, gateway 10.200.8.1 acquired from 168.63.129.16 Jun 21 04:44:15.415693 ignition[908]: Ignition 2.21.0 Jun 21 04:44:15.415704 ignition[908]: Stage: fetch-offline Jun 21 04:44:15.417612 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jun 21 04:44:15.415790 ignition[908]: no configs at "/usr/lib/ignition/base.d" Jun 21 04:44:15.421946 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Jun 21 04:44:15.415798 ignition[908]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 21 04:44:15.415886 ignition[908]: parsed url from cmdline: "" Jun 21 04:44:15.415889 ignition[908]: no config URL provided Jun 21 04:44:15.415893 ignition[908]: reading system config file "/usr/lib/ignition/user.ign" Jun 21 04:44:15.415899 ignition[908]: no config at "/usr/lib/ignition/user.ign" Jun 21 04:44:15.415904 ignition[908]: failed to fetch config: resource requires networking Jun 21 04:44:15.416116 ignition[908]: Ignition finished successfully Jun 21 04:44:15.439499 ignition[918]: Ignition 2.21.0 Jun 21 04:44:15.439509 ignition[918]: Stage: fetch Jun 21 04:44:15.439680 ignition[918]: no configs at "/usr/lib/ignition/base.d" Jun 21 04:44:15.439688 ignition[918]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 21 04:44:15.439748 ignition[918]: parsed url from cmdline: "" Jun 21 04:44:15.439750 ignition[918]: no config URL provided Jun 21 04:44:15.439754 ignition[918]: reading system config file "/usr/lib/ignition/user.ign" Jun 21 04:44:15.439758 ignition[918]: no config at "/usr/lib/ignition/user.ign" Jun 21 04:44:15.439785 ignition[918]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Jun 21 04:44:15.497864 ignition[918]: GET result: OK Jun 21 04:44:15.497926 ignition[918]: config has been read from IMDS userdata Jun 21 04:44:15.497949 ignition[918]: parsing config with SHA512: d38096995d66e5680fbdc01f60a307265d95b80c210bf844f8e93a2004fd9dc00e8b58f49328f2f4eaec03a130e43011b13c265ed58f7a09d4521dbaa1b8e0e0 Jun 21 04:44:15.503301 unknown[918]: fetched base config from "system" Jun 21 04:44:15.503308 unknown[918]: fetched base config from "system" Jun 21 04:44:15.503613 ignition[918]: fetch: fetch complete Jun 21 04:44:15.503312 unknown[918]: fetched user config from "azure" Jun 21 04:44:15.503617 ignition[918]: fetch: fetch passed Jun 21 04:44:15.505654 systemd[1]: Finished 
ignition-fetch.service - Ignition (fetch). Jun 21 04:44:15.503645 ignition[918]: Ignition finished successfully Jun 21 04:44:15.508261 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jun 21 04:44:15.529489 ignition[924]: Ignition 2.21.0 Jun 21 04:44:15.529497 ignition[924]: Stage: kargs Jun 21 04:44:15.529667 ignition[924]: no configs at "/usr/lib/ignition/base.d" Jun 21 04:44:15.529674 ignition[924]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 21 04:44:15.533887 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jun 21 04:44:15.532614 ignition[924]: kargs: kargs passed Jun 21 04:44:15.532655 ignition[924]: Ignition finished successfully Jun 21 04:44:15.540813 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jun 21 04:44:15.557684 ignition[930]: Ignition 2.21.0 Jun 21 04:44:15.557693 ignition[930]: Stage: disks Jun 21 04:44:15.557846 ignition[930]: no configs at "/usr/lib/ignition/base.d" Jun 21 04:44:15.559662 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jun 21 04:44:15.557853 ignition[930]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 21 04:44:15.562293 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jun 21 04:44:15.558481 ignition[930]: disks: disks passed Jun 21 04:44:15.565879 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jun 21 04:44:15.558509 ignition[930]: Ignition finished successfully Jun 21 04:44:15.574346 systemd[1]: Reached target local-fs.target - Local File Systems. Jun 21 04:44:15.577182 systemd[1]: Reached target sysinit.target - System Initialization. Jun 21 04:44:15.579574 systemd[1]: Reached target basic.target - Basic System. Jun 21 04:44:15.581523 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
Jun 21 04:44:15.661973 systemd-fsck[939]: ROOT: clean, 15/7326000 files, 477845/7359488 blocks Jun 21 04:44:15.673315 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jun 21 04:44:15.677199 systemd[1]: Mounting sysroot.mount - /sysroot... Jun 21 04:44:15.871373 systemd-networkd[901]: enP30832s1: Gained IPv6LL Jun 21 04:44:16.044154 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 6d18c974-0fd6-4e4a-98cf-62524fcf9e99 r/w with ordered data mode. Quota mode: none. Jun 21 04:44:16.044275 systemd[1]: Mounted sysroot.mount - /sysroot. Jun 21 04:44:16.046054 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jun 21 04:44:16.063225 systemd-networkd[901]: eth0: Gained IPv6LL Jun 21 04:44:16.066845 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jun 21 04:44:16.083468 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jun 21 04:44:16.088093 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jun 21 04:44:16.094243 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jun 21 04:44:16.102506 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/nvme0n1p6 (259:5) scanned by mount (948) Jun 21 04:44:16.102536 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 57d2b200-37a8-4067-8765-910d3ed0182c Jun 21 04:44:16.094273 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jun 21 04:44:16.107399 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jun 21 04:44:16.107418 kernel: BTRFS info (device nvme0n1p6): using free-space-tree Jun 21 04:44:16.100802 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jun 21 04:44:16.107127 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Jun 21 04:44:16.112240 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jun 21 04:44:16.658008 coreos-metadata[950]: Jun 21 04:44:16.657 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jun 21 04:44:16.661250 coreos-metadata[950]: Jun 21 04:44:16.661 INFO Fetch successful Jun 21 04:44:16.662363 coreos-metadata[950]: Jun 21 04:44:16.661 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Jun 21 04:44:16.671413 coreos-metadata[950]: Jun 21 04:44:16.671 INFO Fetch successful Jun 21 04:44:16.684448 coreos-metadata[950]: Jun 21 04:44:16.684 INFO wrote hostname ci-4372.0.0-a-c1262e9e80 to /sysroot/etc/hostname Jun 21 04:44:16.687527 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jun 21 04:44:16.718623 initrd-setup-root[978]: cut: /sysroot/etc/passwd: No such file or directory Jun 21 04:44:16.735271 initrd-setup-root[985]: cut: /sysroot/etc/group: No such file or directory Jun 21 04:44:16.739969 initrd-setup-root[992]: cut: /sysroot/etc/shadow: No such file or directory Jun 21 04:44:16.744061 initrd-setup-root[999]: cut: /sysroot/etc/gshadow: No such file or directory Jun 21 04:44:17.709069 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jun 21 04:44:17.712312 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jun 21 04:44:17.722243 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jun 21 04:44:17.728363 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
Jun 21 04:44:17.730720 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 57d2b200-37a8-4067-8765-910d3ed0182c Jun 21 04:44:17.752094 ignition[1066]: INFO : Ignition 2.21.0 Jun 21 04:44:17.752094 ignition[1066]: INFO : Stage: mount Jun 21 04:44:17.756632 ignition[1066]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 21 04:44:17.756632 ignition[1066]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 21 04:44:17.755671 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jun 21 04:44:17.766195 ignition[1066]: INFO : mount: mount passed Jun 21 04:44:17.766195 ignition[1066]: INFO : Ignition finished successfully Jun 21 04:44:17.761384 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jun 21 04:44:17.764213 systemd[1]: Starting ignition-files.service - Ignition (files)... Jun 21 04:44:17.776024 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jun 21 04:44:17.793471 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/nvme0n1p6 (259:5) scanned by mount (1079) Jun 21 04:44:17.793563 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 57d2b200-37a8-4067-8765-910d3ed0182c Jun 21 04:44:17.794417 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jun 21 04:44:17.795266 kernel: BTRFS info (device nvme0n1p6): using free-space-tree Jun 21 04:44:17.799873 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jun 21 04:44:17.820394 ignition[1095]: INFO : Ignition 2.21.0 Jun 21 04:44:17.820394 ignition[1095]: INFO : Stage: files Jun 21 04:44:17.822680 ignition[1095]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 21 04:44:17.822680 ignition[1095]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 21 04:44:17.822680 ignition[1095]: DEBUG : files: compiled without relabeling support, skipping Jun 21 04:44:17.835177 ignition[1095]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jun 21 04:44:17.837244 ignition[1095]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jun 21 04:44:17.872330 ignition[1095]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jun 21 04:44:17.875214 ignition[1095]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jun 21 04:44:17.875214 ignition[1095]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jun 21 04:44:17.873038 unknown[1095]: wrote ssh authorized keys file for user: core Jun 21 04:44:17.890538 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jun 21 04:44:17.894182 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Jun 21 04:44:18.187038 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jun 21 04:44:18.380597 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jun 21 04:44:18.383530 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jun 21 04:44:18.383530 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" 
Jun 21 04:44:18.383530 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jun 21 04:44:18.383530 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jun 21 04:44:18.383530 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jun 21 04:44:18.383530 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jun 21 04:44:18.383530 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jun 21 04:44:18.383530 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jun 21 04:44:18.410165 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jun 21 04:44:18.410165 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jun 21 04:44:18.410165 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jun 21 04:44:18.410165 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jun 21 04:44:18.410165 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jun 21 04:44:18.410165 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET 
https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Jun 21 04:44:19.212120 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jun 21 04:44:19.839945 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jun 21 04:44:19.842616 ignition[1095]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jun 21 04:44:19.868349 ignition[1095]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jun 21 04:44:19.874140 ignition[1095]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jun 21 04:44:19.874140 ignition[1095]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jun 21 04:44:19.880240 ignition[1095]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jun 21 04:44:19.880240 ignition[1095]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jun 21 04:44:19.880240 ignition[1095]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jun 21 04:44:19.880240 ignition[1095]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jun 21 04:44:19.880240 ignition[1095]: INFO : files: files passed Jun 21 04:44:19.880240 ignition[1095]: INFO : Ignition finished successfully Jun 21 04:44:19.877424 systemd[1]: Finished ignition-files.service - Ignition (files). Jun 21 04:44:19.885515 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jun 21 04:44:19.895567 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... 
Jun 21 04:44:19.899659 systemd[1]: ignition-quench.service: Deactivated successfully. Jun 21 04:44:19.899974 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jun 21 04:44:19.918430 initrd-setup-root-after-ignition[1126]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jun 21 04:44:19.918430 initrd-setup-root-after-ignition[1126]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jun 21 04:44:19.923122 initrd-setup-root-after-ignition[1130]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jun 21 04:44:19.925338 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jun 21 04:44:19.929514 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jun 21 04:44:19.931985 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jun 21 04:44:19.967763 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jun 21 04:44:19.967836 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jun 21 04:44:19.970366 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jun 21 04:44:19.974184 systemd[1]: Reached target initrd.target - Initrd Default Target. Jun 21 04:44:19.978222 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jun 21 04:44:19.978774 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jun 21 04:44:19.997101 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jun 21 04:44:19.999239 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jun 21 04:44:20.020175 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jun 21 04:44:20.020902 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. 
Jun 21 04:44:20.028302 systemd[1]: Stopped target timers.target - Timer Units. Jun 21 04:44:20.028467 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jun 21 04:44:20.028580 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jun 21 04:44:20.033277 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jun 21 04:44:20.035415 systemd[1]: Stopped target basic.target - Basic System. Jun 21 04:44:20.039564 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jun 21 04:44:20.042869 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jun 21 04:44:20.046727 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jun 21 04:44:20.049571 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jun 21 04:44:20.055122 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jun 21 04:44:20.058171 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jun 21 04:44:20.062108 systemd[1]: Stopped target sysinit.target - System Initialization. Jun 21 04:44:20.065440 systemd[1]: Stopped target local-fs.target - Local File Systems. Jun 21 04:44:20.076521 systemd[1]: Stopped target swap.target - Swaps. Jun 21 04:44:20.079464 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jun 21 04:44:20.079572 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jun 21 04:44:20.087404 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jun 21 04:44:20.087840 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jun 21 04:44:20.093730 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jun 21 04:44:20.093977 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jun 21 04:44:20.094042 systemd[1]: dracut-initqueue.service: Deactivated successfully. 
Jun 21 04:44:20.094150 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jun 21 04:44:20.097285 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jun 21 04:44:20.097416 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jun 21 04:44:20.100303 systemd[1]: ignition-files.service: Deactivated successfully. Jun 21 04:44:20.100413 systemd[1]: Stopped ignition-files.service - Ignition (files). Jun 21 04:44:20.104290 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jun 21 04:44:20.104398 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jun 21 04:44:20.109707 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jun 21 04:44:20.112730 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jun 21 04:44:20.112840 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jun 21 04:44:20.126290 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jun 21 04:44:20.133160 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jun 21 04:44:20.133287 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jun 21 04:44:20.141309 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jun 21 04:44:20.144564 ignition[1150]: INFO : Ignition 2.21.0 Jun 21 04:44:20.144564 ignition[1150]: INFO : Stage: umount Jun 21 04:44:20.144564 ignition[1150]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 21 04:44:20.144564 ignition[1150]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 21 04:44:20.144564 ignition[1150]: INFO : umount: umount passed Jun 21 04:44:20.144564 ignition[1150]: INFO : Ignition finished successfully Jun 21 04:44:20.141434 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jun 21 04:44:20.148960 systemd[1]: ignition-mount.service: Deactivated successfully. 
Jun 21 04:44:20.149820 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jun 21 04:44:20.160629 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jun 21 04:44:20.160698 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jun 21 04:44:20.166484 systemd[1]: ignition-disks.service: Deactivated successfully. Jun 21 04:44:20.166560 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jun 21 04:44:20.169221 systemd[1]: ignition-kargs.service: Deactivated successfully. Jun 21 04:44:20.169260 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jun 21 04:44:20.171518 systemd[1]: ignition-fetch.service: Deactivated successfully. Jun 21 04:44:20.171549 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jun 21 04:44:20.174209 systemd[1]: Stopped target network.target - Network. Jun 21 04:44:20.176058 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jun 21 04:44:20.176095 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jun 21 04:44:20.180194 systemd[1]: Stopped target paths.target - Path Units. Jun 21 04:44:20.182180 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jun 21 04:44:20.184038 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 21 04:44:20.187040 systemd[1]: Stopped target slices.target - Slice Units. Jun 21 04:44:20.190155 systemd[1]: Stopped target sockets.target - Socket Units. Jun 21 04:44:20.195481 systemd[1]: iscsid.socket: Deactivated successfully. Jun 21 04:44:20.195512 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jun 21 04:44:20.197315 systemd[1]: iscsiuio.socket: Deactivated successfully. Jun 21 04:44:20.197345 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jun 21 04:44:20.201829 systemd[1]: ignition-setup.service: Deactivated successfully. 
Jun 21 04:44:20.201881 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jun 21 04:44:20.204193 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jun 21 04:44:20.204226 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jun 21 04:44:20.208281 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jun 21 04:44:20.212229 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jun 21 04:44:20.219466 systemd[1]: systemd-networkd.service: Deactivated successfully. Jun 21 04:44:20.219537 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jun 21 04:44:20.223614 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jun 21 04:44:20.223766 systemd[1]: systemd-resolved.service: Deactivated successfully. Jun 21 04:44:20.223836 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jun 21 04:44:20.241041 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jun 21 04:44:20.241629 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jun 21 04:44:20.243879 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jun 21 04:44:20.243908 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jun 21 04:44:20.244610 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jun 21 04:44:20.256442 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jun 21 04:44:20.256494 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jun 21 04:44:20.259434 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jun 21 04:44:20.259480 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jun 21 04:44:20.263079 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jun 21 04:44:20.263125 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. 
Jun 21 04:44:20.266205 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jun 21 04:44:20.266241 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jun 21 04:44:20.270390 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 21 04:44:20.274406 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jun 21 04:44:20.274456 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jun 21 04:44:20.294149 kernel: hv_netvsc f8615163-0000-1000-2000-7ced8d4aa50e eth0: Data path switched from VF: enP30832s1 Jun 21 04:44:20.294294 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64 Jun 21 04:44:20.295167 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jun 21 04:44:20.295616 systemd[1]: systemd-udevd.service: Deactivated successfully. Jun 21 04:44:20.295744 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 21 04:44:20.299258 systemd[1]: network-cleanup.service: Deactivated successfully. Jun 21 04:44:20.299336 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jun 21 04:44:20.301973 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jun 21 04:44:20.302014 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jun 21 04:44:20.303650 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jun 21 04:44:20.303674 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jun 21 04:44:20.307433 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jun 21 04:44:20.307470 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jun 21 04:44:20.313687 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jun 21 04:44:20.313733 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. 
Jun 21 04:44:20.325181 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jun 21 04:44:20.325235 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 21 04:44:20.329368 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jun 21 04:44:20.331712 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jun 21 04:44:20.332916 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jun 21 04:44:20.333846 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jun 21 04:44:20.334919 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 21 04:44:20.339243 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jun 21 04:44:20.339277 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jun 21 04:44:20.345953 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Jun 21 04:44:20.345994 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jun 21 04:44:20.346024 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jun 21 04:44:20.351369 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jun 21 04:44:20.351431 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jun 21 04:44:20.692131 systemd[1]: sysroot-boot.service: Deactivated successfully. Jun 21 04:44:20.692278 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jun 21 04:44:20.696132 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jun 21 04:44:20.700232 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jun 21 04:44:20.700292 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. 
Jun 21 04:44:20.707210 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jun 21 04:44:20.722631 systemd[1]: Switching root. Jun 21 04:44:20.776998 systemd-journald[206]: Journal stopped Jun 21 04:44:24.172051 systemd-journald[206]: Received SIGTERM from PID 1 (systemd). Jun 21 04:44:24.172087 kernel: SELinux: policy capability network_peer_controls=1 Jun 21 04:44:24.172099 kernel: SELinux: policy capability open_perms=1 Jun 21 04:44:24.172107 kernel: SELinux: policy capability extended_socket_class=1 Jun 21 04:44:24.172115 kernel: SELinux: policy capability always_check_network=0 Jun 21 04:44:24.172123 kernel: SELinux: policy capability cgroup_seclabel=1 Jun 21 04:44:24.173163 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jun 21 04:44:24.173181 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jun 21 04:44:24.173189 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jun 21 04:44:24.173198 kernel: SELinux: policy capability userspace_initial_context=0 Jun 21 04:44:24.173206 kernel: audit: type=1403 audit(1750481061.598:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jun 21 04:44:24.173217 systemd[1]: Successfully loaded SELinux policy in 109.442ms. Jun 21 04:44:24.173227 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 5.856ms. Jun 21 04:44:24.173240 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jun 21 04:44:24.173251 systemd[1]: Detected virtualization microsoft. Jun 21 04:44:24.173260 systemd[1]: Detected architecture x86-64. Jun 21 04:44:24.173269 systemd[1]: Detected first boot. Jun 21 04:44:24.173278 systemd[1]: Hostname set to . 
Jun 21 04:44:24.173289 systemd[1]: Initializing machine ID from random generator. Jun 21 04:44:24.173298 zram_generator::config[1193]: No configuration found. Jun 21 04:44:24.173307 kernel: Guest personality initialized and is inactive Jun 21 04:44:24.173316 kernel: VMCI host device registered (name=vmci, major=10, minor=124) Jun 21 04:44:24.173323 kernel: Initialized host personality Jun 21 04:44:24.173332 kernel: NET: Registered PF_VSOCK protocol family Jun 21 04:44:24.173341 systemd[1]: Populated /etc with preset unit settings. Jun 21 04:44:24.173353 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Jun 21 04:44:24.173362 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jun 21 04:44:24.173371 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jun 21 04:44:24.173380 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jun 21 04:44:24.173389 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jun 21 04:44:24.173399 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jun 21 04:44:24.173408 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jun 21 04:44:24.173419 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jun 21 04:44:24.173428 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jun 21 04:44:24.173438 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jun 21 04:44:24.173447 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jun 21 04:44:24.173456 systemd[1]: Created slice user.slice - User and Session Slice. Jun 21 04:44:24.173466 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Jun 21 04:44:24.173474 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 21 04:44:24.173483 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jun 21 04:44:24.173495 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jun 21 04:44:24.173506 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jun 21 04:44:24.173517 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jun 21 04:44:24.173526 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jun 21 04:44:24.173536 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jun 21 04:44:24.173545 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jun 21 04:44:24.173554 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jun 21 04:44:24.173563 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jun 21 04:44:24.173574 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jun 21 04:44:24.173584 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jun 21 04:44:24.173593 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 21 04:44:24.173603 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jun 21 04:44:24.173613 systemd[1]: Reached target slices.target - Slice Units. Jun 21 04:44:24.173623 systemd[1]: Reached target swap.target - Swaps. Jun 21 04:44:24.173632 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jun 21 04:44:24.173641 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jun 21 04:44:24.173653 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. 
Jun 21 04:44:24.173663 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jun 21 04:44:24.173672 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jun 21 04:44:24.173682 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jun 21 04:44:24.173692 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jun 21 04:44:24.173702 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jun 21 04:44:24.173712 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jun 21 04:44:24.173720 systemd[1]: Mounting media.mount - External Media Directory... Jun 21 04:44:24.173729 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 21 04:44:24.173741 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jun 21 04:44:24.173750 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jun 21 04:44:24.173760 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jun 21 04:44:24.173770 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jun 21 04:44:24.173781 systemd[1]: Reached target machines.target - Containers. Jun 21 04:44:24.173790 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jun 21 04:44:24.173800 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 21 04:44:24.173808 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jun 21 04:44:24.173818 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jun 21 04:44:24.173827 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... 
Jun 21 04:44:24.173837 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jun 21 04:44:24.173847 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 21 04:44:24.173858 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jun 21 04:44:24.173867 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 21 04:44:24.173877 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jun 21 04:44:24.173886 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jun 21 04:44:24.173896 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jun 21 04:44:24.173906 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jun 21 04:44:24.173916 systemd[1]: Stopped systemd-fsck-usr.service. Jun 21 04:44:24.173926 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jun 21 04:44:24.173937 systemd[1]: Starting systemd-journald.service - Journal Service... Jun 21 04:44:24.173948 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jun 21 04:44:24.173957 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jun 21 04:44:24.173966 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jun 21 04:44:24.173976 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jun 21 04:44:24.173985 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jun 21 04:44:24.173995 systemd[1]: verity-setup.service: Deactivated successfully. Jun 21 04:44:24.174005 systemd[1]: Stopped verity-setup.service. 
Jun 21 04:44:24.174015 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 21 04:44:24.174025 kernel: loop: module loaded Jun 21 04:44:24.174035 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jun 21 04:44:24.174044 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jun 21 04:44:24.174054 systemd[1]: Mounted media.mount - External Media Directory. Jun 21 04:44:24.174063 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jun 21 04:44:24.174072 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jun 21 04:44:24.174081 kernel: fuse: init (API version 7.41) Jun 21 04:44:24.174090 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jun 21 04:44:24.174121 systemd-journald[1293]: Collecting audit messages is disabled. Jun 21 04:44:24.191001 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jun 21 04:44:24.191018 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jun 21 04:44:24.191030 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jun 21 04:44:24.191044 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jun 21 04:44:24.191054 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 21 04:44:24.191068 systemd-journald[1293]: Journal started Jun 21 04:44:24.191093 systemd-journald[1293]: Runtime Journal (/run/log/journal/d121c90aad8b43fe8ec90345da7a3da9) is 8M, max 159M, 151M free. Jun 21 04:44:23.741198 systemd[1]: Queued start job for default target multi-user.target. Jun 21 04:44:23.745620 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Jun 21 04:44:23.745942 systemd[1]: systemd-journald.service: Deactivated successfully. Jun 21 04:44:24.197151 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
Jun 21 04:44:24.202001 systemd[1]: Started systemd-journald.service - Journal Service. Jun 21 04:44:24.202860 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 21 04:44:24.203073 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 21 04:44:24.205981 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jun 21 04:44:24.206237 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jun 21 04:44:24.208544 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 21 04:44:24.208724 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 21 04:44:24.211206 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jun 21 04:44:24.215481 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jun 21 04:44:24.218434 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jun 21 04:44:24.222371 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jun 21 04:44:24.225372 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jun 21 04:44:24.234797 systemd[1]: Reached target network-pre.target - Preparation for Network. Jun 21 04:44:24.238249 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jun 21 04:44:24.242872 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jun 21 04:44:24.245553 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jun 21 04:44:24.245583 systemd[1]: Reached target local-fs.target - Local File Systems. Jun 21 04:44:24.248316 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jun 21 04:44:24.257243 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... 
Jun 21 04:44:24.259335 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 21 04:44:24.261223 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jun 21 04:44:24.264777 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jun 21 04:44:24.266620 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jun 21 04:44:24.269744 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jun 21 04:44:24.272514 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jun 21 04:44:24.273903 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jun 21 04:44:24.277206 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jun 21 04:44:24.280977 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jun 21 04:44:24.284544 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jun 21 04:44:24.289090 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jun 21 04:44:24.294161 kernel: ACPI: bus type drm_connector registered Jun 21 04:44:24.297900 systemd[1]: modprobe@drm.service: Deactivated successfully. Jun 21 04:44:24.298045 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jun 21 04:44:24.304320 systemd-journald[1293]: Time spent on flushing to /var/log/journal/d121c90aad8b43fe8ec90345da7a3da9 is 30.715ms for 985 entries. Jun 21 04:44:24.304320 systemd-journald[1293]: System Journal (/var/log/journal/d121c90aad8b43fe8ec90345da7a3da9) is 11.8M, max 2.6G, 2.6G free. Jun 21 04:44:24.364566 systemd-journald[1293]: Received client request to flush runtime journal. 
Jun 21 04:44:24.365333 systemd-journald[1293]: /var/log/journal/d121c90aad8b43fe8ec90345da7a3da9/system.journal: Realtime clock jumped backwards relative to last journal entry, rotating. Jun 21 04:44:24.365381 systemd-journald[1293]: Rotating system journal. Jun 21 04:44:24.365404 kernel: loop0: detected capacity change from 0 to 113872 Jun 21 04:44:24.309461 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jun 21 04:44:24.313556 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jun 21 04:44:24.317500 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jun 21 04:44:24.347430 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jun 21 04:44:24.365032 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jun 21 04:44:24.367277 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jun 21 04:44:24.488100 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jun 21 04:44:24.493159 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jun 21 04:44:24.621811 systemd-tmpfiles[1350]: ACLs are not supported, ignoring. Jun 21 04:44:24.621825 systemd-tmpfiles[1350]: ACLs are not supported, ignoring. Jun 21 04:44:24.638291 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 21 04:44:24.734154 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jun 21 04:44:24.747177 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jun 21 04:44:24.764158 kernel: loop1: detected capacity change from 0 to 28496 Jun 21 04:44:25.020833 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jun 21 04:44:25.023323 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Jun 21 04:44:25.049538 systemd-udevd[1356]: Using default interface naming scheme 'v255'. Jun 21 04:44:25.070155 kernel: loop2: detected capacity change from 0 to 224512 Jun 21 04:44:25.112147 kernel: loop3: detected capacity change from 0 to 146240 Jun 21 04:44:25.220535 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 21 04:44:25.224020 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jun 21 04:44:25.277882 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jun 21 04:44:25.284560 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jun 21 04:44:25.331168 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#221 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Jun 21 04:44:25.373063 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jun 21 04:44:25.386156 kernel: mousedev: PS/2 mouse device common for all mice Jun 21 04:44:25.407157 kernel: hv_vmbus: registering driver hv_balloon Jun 21 04:44:25.410203 kernel: hv_vmbus: registering driver hyperv_fb Jun 21 04:44:25.410248 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Jun 21 04:44:25.414155 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Jun 21 04:44:25.416158 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Jun 21 04:44:25.417269 kernel: Console: switching to colour dummy device 80x25 Jun 21 04:44:25.422481 kernel: Console: switching to colour frame buffer device 128x48 Jun 21 04:44:25.474160 kernel: loop4: detected capacity change from 0 to 113872 Jun 21 04:44:25.498169 kernel: loop5: detected capacity change from 0 to 28496 Jun 21 04:44:25.506169 systemd-networkd[1362]: lo: Link UP Jun 21 04:44:25.506181 systemd-networkd[1362]: lo: Gained carrier Jun 21 04:44:25.510095 systemd-networkd[1362]: Enumeration completed Jun 21 04:44:25.510206 systemd[1]: Started systemd-networkd.service - Network Configuration. 
Jun 21 04:44:25.514684 systemd-networkd[1362]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 21 04:44:25.514692 systemd-networkd[1362]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jun 21 04:44:25.515009 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jun 21 04:44:25.519911 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jun 21 04:44:25.523212 kernel: mana 7870:00:00.0 enP30832s1: Configured vPort 0 PD 18 DB 16 Jun 21 04:44:25.525161 kernel: loop6: detected capacity change from 0 to 224512 Jun 21 04:44:25.530170 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64 Jun 21 04:44:25.530963 kernel: hv_netvsc f8615163-0000-1000-2000-7ced8d4aa50e eth0: Data path switched to VF: enP30832s1 Jun 21 04:44:25.534542 systemd-networkd[1362]: enP30832s1: Link UP Jun 21 04:44:25.534617 systemd-networkd[1362]: eth0: Link UP Jun 21 04:44:25.534625 systemd-networkd[1362]: eth0: Gained carrier Jun 21 04:44:25.534638 systemd-networkd[1362]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 21 04:44:25.538386 systemd-networkd[1362]: enP30832s1: Gained carrier Jun 21 04:44:25.543456 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 21 04:44:25.546802 systemd-networkd[1362]: eth0: DHCPv4 address 10.200.8.43/24, gateway 10.200.8.1 acquired from 168.63.129.16 Jun 21 04:44:25.552163 kernel: loop7: detected capacity change from 0 to 146240 Jun 21 04:44:25.557321 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jun 21 04:44:25.557632 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jun 21 04:44:25.561343 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Jun 21 04:44:25.577627 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jun 21 04:44:25.580568 (sd-merge)[1428]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. Jun 21 04:44:25.582901 (sd-merge)[1428]: Merged extensions into '/usr'. Jun 21 04:44:25.613277 systemd[1]: Reload requested from client PID 1334 ('systemd-sysext') (unit systemd-sysext.service)... Jun 21 04:44:25.613383 systemd[1]: Reloading... Jun 21 04:44:25.724170 zram_generator::config[1477]: No configuration found. Jun 21 04:44:25.800152 kernel: kvm_intel: Using Hyper-V Enlightened VMCS Jun 21 04:44:25.825059 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 21 04:44:25.914629 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - MSFT NVMe Accelerator v1.0 OEM. Jun 21 04:44:25.916492 systemd[1]: Reloading finished in 302 ms. Jun 21 04:44:25.941684 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jun 21 04:44:25.944434 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jun 21 04:44:25.967886 systemd[1]: Starting ensure-sysext.service... Jun 21 04:44:25.970951 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jun 21 04:44:25.974984 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jun 21 04:44:25.978249 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jun 21 04:44:25.979779 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jun 21 04:44:25.982021 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... 
Jun 21 04:44:25.985818 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 21 04:44:25.996937 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jun 21 04:44:25.997865 systemd-tmpfiles[1543]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jun 21 04:44:25.998555 systemd-tmpfiles[1543]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jun 21 04:44:25.998790 systemd-tmpfiles[1543]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jun 21 04:44:25.999001 systemd-tmpfiles[1543]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jun 21 04:44:25.999654 systemd-tmpfiles[1543]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jun 21 04:44:25.999849 systemd-tmpfiles[1543]: ACLs are not supported, ignoring. Jun 21 04:44:25.999887 systemd-tmpfiles[1543]: ACLs are not supported, ignoring. Jun 21 04:44:26.000537 systemd[1]: Reload requested from client PID 1541 ('systemctl') (unit ensure-sysext.service)... Jun 21 04:44:26.000548 systemd[1]: Reloading... Jun 21 04:44:26.005847 systemd-tmpfiles[1543]: Detected autofs mount point /boot during canonicalization of boot. Jun 21 04:44:26.005857 systemd-tmpfiles[1543]: Skipping /boot Jun 21 04:44:26.016283 systemd-tmpfiles[1543]: Detected autofs mount point /boot during canonicalization of boot. Jun 21 04:44:26.018165 systemd-tmpfiles[1543]: Skipping /boot Jun 21 04:44:26.059167 zram_generator::config[1578]: No configuration found. Jun 21 04:44:26.137034 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 21 04:44:26.225303 systemd[1]: Reloading finished in 224 ms. 
Jun 21 04:44:26.244049 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jun 21 04:44:26.248462 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jun 21 04:44:26.252438 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jun 21 04:44:26.259120 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jun 21 04:44:26.266537 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jun 21 04:44:26.270657 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jun 21 04:44:26.274518 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jun 21 04:44:26.281110 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jun 21 04:44:26.285609 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 21 04:44:26.285754 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 21 04:44:26.291741 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 21 04:44:26.297571 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 21 04:44:26.302232 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 21 04:44:26.305273 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 21 04:44:26.305391 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). 
Jun 21 04:44:26.305480 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 21 04:44:26.309732 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 21 04:44:26.312591 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 21 04:44:26.320414 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 21 04:44:26.320554 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 21 04:44:26.324874 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 21 04:44:26.325150 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 21 04:44:26.335374 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 21 04:44:26.335590 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 21 04:44:26.337569 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 21 04:44:26.342417 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 21 04:44:26.346413 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 21 04:44:26.350264 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 21 04:44:26.350431 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jun 21 04:44:26.350814 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Jun 21 04:44:26.352625 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jun 21 04:44:26.357170 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jun 21 04:44:26.367185 systemd[1]: Finished ensure-sysext.service. Jun 21 04:44:26.372410 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 21 04:44:26.372573 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 21 04:44:26.375435 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 21 04:44:26.375566 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 21 04:44:26.378449 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 21 04:44:26.378574 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 21 04:44:26.382096 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 21 04:44:26.382618 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 21 04:44:26.383472 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jun 21 04:44:26.386765 augenrules[1684]: No rules Jun 21 04:44:26.387324 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 21 04:44:26.387446 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jun 21 04:44:26.387480 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Jun 21 04:44:26.387515 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jun 21 04:44:26.387545 systemd[1]: Reached target time-set.target - System Time Set. Jun 21 04:44:26.389204 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 21 04:44:26.389457 systemd[1]: audit-rules.service: Deactivated successfully. Jun 21 04:44:26.393284 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jun 21 04:44:26.396442 systemd[1]: modprobe@drm.service: Deactivated successfully. Jun 21 04:44:26.396558 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jun 21 04:44:26.401383 systemd-resolved[1647]: Positive Trust Anchors: Jun 21 04:44:26.401391 systemd-resolved[1647]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jun 21 04:44:26.401419 systemd-resolved[1647]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jun 21 04:44:26.404735 systemd-resolved[1647]: Using system hostname 'ci-4372.0.0-a-c1262e9e80'. Jun 21 04:44:26.405729 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jun 21 04:44:26.407332 systemd[1]: Reached target network.target - Network. Jun 21 04:44:26.410218 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. 
Jun 21 04:44:26.715633 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jun 21 04:44:26.718331 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jun 21 04:44:27.327267 systemd-networkd[1362]: enP30832s1: Gained IPv6LL Jun 21 04:44:27.519239 systemd-networkd[1362]: eth0: Gained IPv6LL Jun 21 04:44:27.520839 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jun 21 04:44:27.524365 systemd[1]: Reached target network-online.target - Network is Online. Jun 21 04:44:28.771730 ldconfig[1329]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jun 21 04:44:28.782726 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jun 21 04:44:28.786300 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jun 21 04:44:28.802734 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jun 21 04:44:28.805331 systemd[1]: Reached target sysinit.target - System Initialization. Jun 21 04:44:28.806805 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jun 21 04:44:28.809234 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jun 21 04:44:28.810547 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Jun 21 04:44:28.813269 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jun 21 04:44:28.814573 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jun 21 04:44:28.817171 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. 
Jun 21 04:44:28.820180 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jun 21 04:44:28.820208 systemd[1]: Reached target paths.target - Path Units. Jun 21 04:44:28.822175 systemd[1]: Reached target timers.target - Timer Units. Jun 21 04:44:28.824683 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jun 21 04:44:28.828030 systemd[1]: Starting docker.socket - Docker Socket for the API... Jun 21 04:44:28.832014 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jun 21 04:44:28.833432 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jun 21 04:44:28.834714 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jun 21 04:44:28.838413 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jun 21 04:44:28.842452 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jun 21 04:44:28.845689 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jun 21 04:44:28.847361 systemd[1]: Reached target sockets.target - Socket Units. Jun 21 04:44:28.850197 systemd[1]: Reached target basic.target - Basic System. Jun 21 04:44:28.852208 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jun 21 04:44:28.852231 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jun 21 04:44:28.854006 systemd[1]: Starting chronyd.service - NTP client/server... Jun 21 04:44:28.855865 systemd[1]: Starting containerd.service - containerd container runtime... Jun 21 04:44:28.868255 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jun 21 04:44:28.871654 systemd[1]: Starting dbus.service - D-Bus System Message Bus... 
Jun 21 04:44:28.876243 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jun 21 04:44:28.879265 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jun 21 04:44:28.882254 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jun 21 04:44:28.884189 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jun 21 04:44:28.889269 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Jun 21 04:44:28.891221 systemd[1]: hv_fcopy_uio_daemon.service - Hyper-V FCOPY UIO daemon was skipped because of an unmet condition check (ConditionPathExists=/sys/bus/vmbus/devices/eb765408-105f-49b6-b4aa-c123b64d17d4/uio). Jun 21 04:44:28.892052 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Jun 21 04:44:28.893962 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Jun 21 04:44:28.900109 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 21 04:44:28.905406 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jun 21 04:44:28.909273 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jun 21 04:44:28.910995 KVP[1709]: KVP starting; pid is:1709 Jun 21 04:44:28.911788 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jun 21 04:44:28.914222 jq[1706]: false Jun 21 04:44:28.919114 KVP[1709]: KVP LIC Version: 3.1 Jun 21 04:44:28.919199 kernel: hv_utils: KVP IC version 4.0 Jun 21 04:44:28.921059 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jun 21 04:44:28.924748 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jun 21 04:44:28.933646 systemd[1]: Starting systemd-logind.service - User Login Management... 
Jun 21 04:44:28.936306 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jun 21 04:44:28.936531 oslogin_cache_refresh[1708]: Refreshing passwd entry cache Jun 21 04:44:28.937214 google_oslogin_nss_cache[1708]: oslogin_cache_refresh[1708]: Refreshing passwd entry cache Jun 21 04:44:28.936665 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jun 21 04:44:28.937567 systemd[1]: Starting update-engine.service - Update Engine... Jun 21 04:44:28.948528 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jun 21 04:44:28.955646 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jun 21 04:44:28.957640 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jun 21 04:44:28.957796 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jun 21 04:44:28.961782 extend-filesystems[1707]: Found /dev/nvme0n1p6 Jun 21 04:44:28.966490 extend-filesystems[1707]: Found /dev/nvme0n1p9 Jun 21 04:44:28.965803 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jun 21 04:44:28.981483 extend-filesystems[1707]: Checking size of /dev/nvme0n1p9 Jun 21 04:44:28.976524 oslogin_cache_refresh[1708]: Failure getting users, quitting Jun 21 04:44:28.984377 google_oslogin_nss_cache[1708]: oslogin_cache_refresh[1708]: Failure getting users, quitting Jun 21 04:44:28.984377 google_oslogin_nss_cache[1708]: oslogin_cache_refresh[1708]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jun 21 04:44:28.984377 google_oslogin_nss_cache[1708]: oslogin_cache_refresh[1708]: Refreshing group entry cache Jun 21 04:44:28.966245 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Jun 21 04:44:28.984485 jq[1723]: true Jun 21 04:44:28.976539 oslogin_cache_refresh[1708]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jun 21 04:44:28.976574 oslogin_cache_refresh[1708]: Refreshing group entry cache Jun 21 04:44:28.989177 google_oslogin_nss_cache[1708]: oslogin_cache_refresh[1708]: Failure getting groups, quitting Jun 21 04:44:28.989177 google_oslogin_nss_cache[1708]: oslogin_cache_refresh[1708]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jun 21 04:44:28.985239 oslogin_cache_refresh[1708]: Failure getting groups, quitting Jun 21 04:44:28.985248 oslogin_cache_refresh[1708]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jun 21 04:44:28.989710 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jun 21 04:44:28.989893 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Jun 21 04:44:28.995094 (chronyd)[1698]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS Jun 21 04:44:29.004454 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jun 21 04:44:29.009884 update_engine[1720]: I20250621 04:44:29.005024 1720 main.cc:92] Flatcar Update Engine starting Jun 21 04:44:29.007676 (ntainerd)[1744]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jun 21 04:44:29.010454 chronyd[1756]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG) Jun 21 04:44:29.014796 jq[1745]: true Jun 21 04:44:29.018024 chronyd[1756]: Timezone right/UTC failed leap second check, ignoring Jun 21 04:44:29.018819 chronyd[1756]: Loaded seccomp filter (level 2) Jun 21 04:44:29.021606 systemd[1]: Started chronyd.service - NTP client/server. Jun 21 04:44:29.025225 systemd[1]: motdgen.service: Deactivated successfully. 
Jun 21 04:44:29.027261 extend-filesystems[1707]: Old size kept for /dev/nvme0n1p9 Jun 21 04:44:29.028870 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jun 21 04:44:29.033414 systemd[1]: extend-filesystems.service: Deactivated successfully. Jun 21 04:44:29.033595 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jun 21 04:44:29.072107 tar[1730]: linux-amd64/LICENSE Jun 21 04:44:29.073345 tar[1730]: linux-amd64/helm Jun 21 04:44:29.078694 dbus-daemon[1701]: [system] SELinux support is enabled Jun 21 04:44:29.080615 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jun 21 04:44:29.086042 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jun 21 04:44:29.086240 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jun 21 04:44:29.089184 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jun 21 04:44:29.089204 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jun 21 04:44:29.093768 update_engine[1720]: I20250621 04:44:29.093739 1720 update_check_scheduler.cc:74] Next update check in 5m11s Jun 21 04:44:29.095501 systemd[1]: Started update-engine.service - Update Engine. Jun 21 04:44:29.098987 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jun 21 04:44:29.120882 systemd-logind[1719]: New seat seat0. Jun 21 04:44:29.124009 systemd-logind[1719]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jun 21 04:44:29.124114 systemd[1]: Started systemd-logind.service - User Login Management. 
Jun 21 04:44:29.155201 bash[1784]: Updated "/home/core/.ssh/authorized_keys" Jun 21 04:44:29.160191 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jun 21 04:44:29.165641 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jun 21 04:44:29.166423 coreos-metadata[1700]: Jun 21 04:44:29.166 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jun 21 04:44:29.170207 coreos-metadata[1700]: Jun 21 04:44:29.169 INFO Fetch successful Jun 21 04:44:29.170207 coreos-metadata[1700]: Jun 21 04:44:29.170 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Jun 21 04:44:29.174588 coreos-metadata[1700]: Jun 21 04:44:29.174 INFO Fetch successful Jun 21 04:44:29.174705 coreos-metadata[1700]: Jun 21 04:44:29.174 INFO Fetching http://168.63.129.16/machine/02fbb446-e5b6-4f81-8455-51fc613e349e/c2d5d3f3%2Dc106%2D44d9%2Da532%2D135c42f95330.%5Fci%2D4372.0.0%2Da%2Dc1262e9e80?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Jun 21 04:44:29.176852 coreos-metadata[1700]: Jun 21 04:44:29.176 INFO Fetch successful Jun 21 04:44:29.177128 coreos-metadata[1700]: Jun 21 04:44:29.177 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Jun 21 04:44:29.189224 coreos-metadata[1700]: Jun 21 04:44:29.188 INFO Fetch successful Jun 21 04:44:29.238325 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jun 21 04:44:29.240827 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
Jun 21 04:44:29.449257 locksmithd[1785]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jun 21 04:44:29.807093 containerd[1744]: time="2025-06-21T04:44:29Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jun 21 04:44:29.809604 containerd[1744]: time="2025-06-21T04:44:29.809569715Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 Jun 21 04:44:29.840869 containerd[1744]: time="2025-06-21T04:44:29.840832446Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="8.515µs" Jun 21 04:44:29.840869 containerd[1744]: time="2025-06-21T04:44:29.840867330Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jun 21 04:44:29.840954 containerd[1744]: time="2025-06-21T04:44:29.840887122Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jun 21 04:44:29.841018 containerd[1744]: time="2025-06-21T04:44:29.841005595Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jun 21 04:44:29.841043 containerd[1744]: time="2025-06-21T04:44:29.841023901Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jun 21 04:44:29.844639 containerd[1744]: time="2025-06-21T04:44:29.844614840Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jun 21 04:44:29.845234 containerd[1744]: time="2025-06-21T04:44:29.845216437Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jun 21 04:44:29.845288 containerd[1744]: time="2025-06-21T04:44:29.845280954Z" 
level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jun 21 04:44:29.845517 containerd[1744]: time="2025-06-21T04:44:29.845504246Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jun 21 04:44:29.846157 containerd[1744]: time="2025-06-21T04:44:29.846147480Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jun 21 04:44:29.847756 containerd[1744]: time="2025-06-21T04:44:29.847157453Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jun 21 04:44:29.847756 containerd[1744]: time="2025-06-21T04:44:29.847170936Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Jun 21 04:44:29.847756 containerd[1744]: time="2025-06-21T04:44:29.847243212Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Jun 21 04:44:29.847756 containerd[1744]: time="2025-06-21T04:44:29.847389283Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jun 21 04:44:29.847756 containerd[1744]: time="2025-06-21T04:44:29.847409111Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jun 21 04:44:29.847756 containerd[1744]: time="2025-06-21T04:44:29.847424224Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Jun 21 04:44:29.847756 containerd[1744]: time="2025-06-21T04:44:29.847453556Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Jun 21 04:44:29.847756 containerd[1744]: time="2025-06-21T04:44:29.847644297Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Jun 21 04:44:29.847756 containerd[1744]: time="2025-06-21T04:44:29.847682759Z" level=info msg="metadata content store policy set" policy=shared
Jun 21 04:44:29.871094 containerd[1744]: time="2025-06-21T04:44:29.870122399Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Jun 21 04:44:29.871094 containerd[1744]: time="2025-06-21T04:44:29.870172980Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Jun 21 04:44:29.871094 containerd[1744]: time="2025-06-21T04:44:29.870198114Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Jun 21 04:44:29.871094 containerd[1744]: time="2025-06-21T04:44:29.870215861Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Jun 21 04:44:29.871094 containerd[1744]: time="2025-06-21T04:44:29.870229385Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Jun 21 04:44:29.871094 containerd[1744]: time="2025-06-21T04:44:29.870239850Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Jun 21 04:44:29.871094 containerd[1744]: time="2025-06-21T04:44:29.870252221Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Jun 21 04:44:29.871094 containerd[1744]: time="2025-06-21T04:44:29.870262483Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Jun 21 04:44:29.871094 containerd[1744]: time="2025-06-21T04:44:29.870271976Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Jun 21 04:44:29.871094 containerd[1744]: time="2025-06-21T04:44:29.870285804Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Jun 21 04:44:29.871094 containerd[1744]: time="2025-06-21T04:44:29.870296311Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Jun 21 04:44:29.871094 containerd[1744]: time="2025-06-21T04:44:29.870311853Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Jun 21 04:44:29.871094 containerd[1744]: time="2025-06-21T04:44:29.870405525Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Jun 21 04:44:29.871094 containerd[1744]: time="2025-06-21T04:44:29.870422245Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Jun 21 04:44:29.872397 containerd[1744]: time="2025-06-21T04:44:29.870436928Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Jun 21 04:44:29.872397 containerd[1744]: time="2025-06-21T04:44:29.870448552Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Jun 21 04:44:29.872397 containerd[1744]: time="2025-06-21T04:44:29.870459939Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Jun 21 04:44:29.872397 containerd[1744]: time="2025-06-21T04:44:29.870470655Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Jun 21 04:44:29.872397 containerd[1744]: time="2025-06-21T04:44:29.870481440Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Jun 21 04:44:29.872397 containerd[1744]: time="2025-06-21T04:44:29.870495540Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Jun 21 04:44:29.872397 containerd[1744]: time="2025-06-21T04:44:29.870507411Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Jun 21 04:44:29.872397 containerd[1744]: time="2025-06-21T04:44:29.870517211Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Jun 21 04:44:29.872397 containerd[1744]: time="2025-06-21T04:44:29.870527557Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Jun 21 04:44:29.872397 containerd[1744]: time="2025-06-21T04:44:29.870586361Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Jun 21 04:44:29.872397 containerd[1744]: time="2025-06-21T04:44:29.870599526Z" level=info msg="Start snapshots syncer"
Jun 21 04:44:29.872397 containerd[1744]: time="2025-06-21T04:44:29.870618841Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Jun 21 04:44:29.872607 containerd[1744]: time="2025-06-21T04:44:29.870831635Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Jun 21 04:44:29.872607 containerd[1744]: time="2025-06-21T04:44:29.870874089Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Jun 21 04:44:29.872728 containerd[1744]: time="2025-06-21T04:44:29.870955432Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Jun 21 04:44:29.872728 containerd[1744]: time="2025-06-21T04:44:29.871038010Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Jun 21 04:44:29.872728 containerd[1744]: time="2025-06-21T04:44:29.871055054Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Jun 21 04:44:29.872728 containerd[1744]: time="2025-06-21T04:44:29.871065052Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Jun 21 04:44:29.873169 containerd[1744]: time="2025-06-21T04:44:29.873151821Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Jun 21 04:44:29.873268 containerd[1744]: time="2025-06-21T04:44:29.873258432Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Jun 21 04:44:29.875150 containerd[1744]: time="2025-06-21T04:44:29.874737341Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Jun 21 04:44:29.875150 containerd[1744]: time="2025-06-21T04:44:29.874754995Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Jun 21 04:44:29.875150 containerd[1744]: time="2025-06-21T04:44:29.874779727Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Jun 21 04:44:29.875150 containerd[1744]: time="2025-06-21T04:44:29.874789740Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Jun 21 04:44:29.875150 containerd[1744]: time="2025-06-21T04:44:29.874803473Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Jun 21 04:44:29.875150 containerd[1744]: time="2025-06-21T04:44:29.874834501Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Jun 21 04:44:29.875150 containerd[1744]: time="2025-06-21T04:44:29.874848344Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Jun 21 04:44:29.875150 containerd[1744]: time="2025-06-21T04:44:29.874856355Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Jun 21 04:44:29.875150 containerd[1744]: time="2025-06-21T04:44:29.874865041Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Jun 21 04:44:29.875150 containerd[1744]: time="2025-06-21T04:44:29.874871477Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Jun 21 04:44:29.875150 containerd[1744]: time="2025-06-21T04:44:29.874879605Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Jun 21 04:44:29.875150 containerd[1744]: time="2025-06-21T04:44:29.874888714Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Jun 21 04:44:29.875150 containerd[1744]: time="2025-06-21T04:44:29.874902417Z" level=info msg="runtime interface created"
Jun 21 04:44:29.875150 containerd[1744]: time="2025-06-21T04:44:29.874906467Z" level=info msg="created NRI interface"
Jun 21 04:44:29.875150 containerd[1744]: time="2025-06-21T04:44:29.874912911Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Jun 21 04:44:29.875442 containerd[1744]: time="2025-06-21T04:44:29.874924497Z" level=info msg="Connect containerd service"
Jun 21 04:44:29.875442 containerd[1744]: time="2025-06-21T04:44:29.874947121Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jun 21 04:44:29.878841 containerd[1744]: time="2025-06-21T04:44:29.878819704Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jun 21 04:44:29.944975 tar[1730]: linux-amd64/README.md
Jun 21 04:44:29.956656 sshd_keygen[1752]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jun 21 04:44:29.965674 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jun 21 04:44:29.978717 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jun 21 04:44:29.982024 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jun 21 04:44:29.985300 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent...
Jun 21 04:44:30.003598 systemd[1]: issuegen.service: Deactivated successfully.
Jun 21 04:44:30.009317 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jun 21 04:44:30.012696 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jun 21 04:44:30.019515 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent.
Jun 21 04:44:30.030701 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jun 21 04:44:30.035507 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jun 21 04:44:30.040364 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Jun 21 04:44:30.042759 systemd[1]: Reached target getty.target - Login Prompts.
Jun 21 04:44:30.383282 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jun 21 04:44:30.386711 (kubelet)[1867]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jun 21 04:44:30.753561 containerd[1744]: time="2025-06-21T04:44:30.753498180Z" level=info msg="Start subscribing containerd event"
Jun 21 04:44:30.756107 containerd[1744]: time="2025-06-21T04:44:30.753648134Z" level=info msg="Start recovering state"
Jun 21 04:44:30.756107 containerd[1744]: time="2025-06-21T04:44:30.753756945Z" level=info msg="Start event monitor"
Jun 21 04:44:30.756107 containerd[1744]: time="2025-06-21T04:44:30.753772255Z" level=info msg="Start cni network conf syncer for default"
Jun 21 04:44:30.756107 containerd[1744]: time="2025-06-21T04:44:30.753781272Z" level=info msg="Start streaming server"
Jun 21 04:44:30.756107 containerd[1744]: time="2025-06-21T04:44:30.753792377Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Jun 21 04:44:30.756107 containerd[1744]: time="2025-06-21T04:44:30.753799733Z" level=info msg="runtime interface starting up..."
Jun 21 04:44:30.756107 containerd[1744]: time="2025-06-21T04:44:30.753805680Z" level=info msg="starting plugins..."
Jun 21 04:44:30.756107 containerd[1744]: time="2025-06-21T04:44:30.753817002Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Jun 21 04:44:30.756107 containerd[1744]: time="2025-06-21T04:44:30.753662661Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jun 21 04:44:30.756107 containerd[1744]: time="2025-06-21T04:44:30.753914376Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jun 21 04:44:30.756107 containerd[1744]: time="2025-06-21T04:44:30.754660825Z" level=info msg="containerd successfully booted in 0.947925s"
Jun 21 04:44:30.754011 systemd[1]: Started containerd.service - containerd container runtime.
Jun 21 04:44:30.756259 systemd[1]: Reached target multi-user.target - Multi-User System.
Jun 21 04:44:30.759108 systemd[1]: Startup finished in 2.935s (kernel) + 10.924s (initrd) + 9.269s (userspace) = 23.129s.
Jun 21 04:44:30.890317 kubelet[1867]: E0621 04:44:30.890268 1867 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jun 21 04:44:30.891702 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jun 21 04:44:30.891817 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jun 21 04:44:30.892131 systemd[1]: kubelet.service: Consumed 851ms CPU time, 265.4M memory peak.
Jun 21 04:44:30.980456 login[1855]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying
Jun 21 04:44:30.982095 login[1856]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Jun 21 04:44:30.991198 systemd-logind[1719]: New session 2 of user core.
Jun 21 04:44:30.991995 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jun 21 04:44:30.992995 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jun 21 04:44:31.012152 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jun 21 04:44:31.013877 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jun 21 04:44:31.028612 (systemd)[1885]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jun 21 04:44:31.030261 systemd-logind[1719]: New session c1 of user core.
Jun 21 04:44:31.153816 systemd[1885]: Queued start job for default target default.target.
Jun 21 04:44:31.159800 systemd[1885]: Created slice app.slice - User Application Slice.
Jun 21 04:44:31.159826 systemd[1885]: Reached target paths.target - Paths.
Jun 21 04:44:31.159854 systemd[1885]: Reached target timers.target - Timers.
Jun 21 04:44:31.161959 systemd[1885]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jun 21 04:44:31.167833 systemd[1885]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jun 21 04:44:31.167947 systemd[1885]: Reached target sockets.target - Sockets.
Jun 21 04:44:31.168023 systemd[1885]: Reached target basic.target - Basic System.
Jun 21 04:44:31.168094 systemd[1]: Started user@500.service - User Manager for UID 500.
Jun 21 04:44:31.168269 systemd[1885]: Reached target default.target - Main User Target.
Jun 21 04:44:31.168339 systemd[1885]: Startup finished in 133ms.
Jun 21 04:44:31.172279 systemd[1]: Started session-2.scope - Session 2 of User core.
Jun 21 04:44:31.336783 waagent[1853]: 2025-06-21T04:44:31.336689Z INFO Daemon Daemon Azure Linux Agent Version: 2.12.0.4
Jun 21 04:44:31.337915 waagent[1853]: 2025-06-21T04:44:31.337839Z INFO Daemon Daemon OS: flatcar 4372.0.0
Jun 21 04:44:31.338912 waagent[1853]: 2025-06-21T04:44:31.338885Z INFO Daemon Daemon Python: 3.11.12
Jun 21 04:44:31.339967 waagent[1853]: 2025-06-21T04:44:31.339925Z INFO Daemon Daemon Run daemon
Jun 21 04:44:31.341077 waagent[1853]: 2025-06-21T04:44:31.341039Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4372.0.0'
Jun 21 04:44:31.342718 waagent[1853]: 2025-06-21T04:44:31.341859Z INFO Daemon Daemon Using waagent for provisioning
Jun 21 04:44:31.344106 waagent[1853]: 2025-06-21T04:44:31.344078Z INFO Daemon Daemon Activate resource disk
Jun 21 04:44:31.345176 waagent[1853]: 2025-06-21T04:44:31.345091Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb
Jun 21 04:44:31.347760 waagent[1853]: 2025-06-21T04:44:31.347727Z INFO Daemon Daemon Found device: None
Jun 21 04:44:31.348677 waagent[1853]: 2025-06-21T04:44:31.348613Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology
Jun 21 04:44:31.350328 waagent[1853]: 2025-06-21T04:44:31.350262Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0
Jun 21 04:44:31.352601 waagent[1853]: 2025-06-21T04:44:31.352561Z INFO Daemon Daemon Clean protocol and wireserver endpoint
Jun 21 04:44:31.353812 waagent[1853]: 2025-06-21T04:44:31.353785Z INFO Daemon Daemon Running default provisioning handler
Jun 21 04:44:31.359193 waagent[1853]: 2025-06-21T04:44:31.358934Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4.
Jun 21 04:44:31.361880 waagent[1853]: 2025-06-21T04:44:31.361848Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service'
Jun 21 04:44:31.365813 waagent[1853]: 2025-06-21T04:44:31.361977Z INFO Daemon Daemon cloud-init is enabled: False
Jun 21 04:44:31.365813 waagent[1853]: 2025-06-21T04:44:31.362178Z INFO Daemon Daemon Copying ovf-env.xml
Jun 21 04:44:31.499547 waagent[1853]: 2025-06-21T04:44:31.497873Z INFO Daemon Daemon Successfully mounted dvd
Jun 21 04:44:31.507712 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully.
Jun 21 04:44:31.509369 waagent[1853]: 2025-06-21T04:44:31.509326Z INFO Daemon Daemon Detect protocol endpoint
Jun 21 04:44:31.510403 waagent[1853]: 2025-06-21T04:44:31.509889Z INFO Daemon Daemon Clean protocol and wireserver endpoint
Jun 21 04:44:31.511552 waagent[1853]: 2025-06-21T04:44:31.511490Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler
Jun 21 04:44:31.512114 waagent[1853]: 2025-06-21T04:44:31.512095Z INFO Daemon Daemon Test for route to 168.63.129.16
Jun 21 04:44:31.513941 waagent[1853]: 2025-06-21T04:44:31.513917Z INFO Daemon Daemon Route to 168.63.129.16 exists
Jun 21 04:44:31.514983 waagent[1853]: 2025-06-21T04:44:31.514921Z INFO Daemon Daemon Wire server endpoint:168.63.129.16
Jun 21 04:44:31.524588 waagent[1853]: 2025-06-21T04:44:31.524557Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05
Jun 21 04:44:31.525204 waagent[1853]: 2025-06-21T04:44:31.524812Z INFO Daemon Daemon Wire protocol version:2012-11-30
Jun 21 04:44:31.525204 waagent[1853]: 2025-06-21T04:44:31.524937Z INFO Daemon Daemon Server preferred version:2015-04-05
Jun 21 04:44:31.611070 waagent[1853]: 2025-06-21T04:44:31.610993Z INFO Daemon Daemon Initializing goal state during protocol detection
Jun 21 04:44:31.611294 waagent[1853]: 2025-06-21T04:44:31.611265Z INFO Daemon Daemon Forcing an update of the goal state.
Jun 21 04:44:31.619623 waagent[1853]: 2025-06-21T04:44:31.619591Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1]
Jun 21 04:44:31.637527 waagent[1853]: 2025-06-21T04:44:31.637493Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.175
Jun 21 04:44:31.639047 waagent[1853]: 2025-06-21T04:44:31.639013Z INFO Daemon
Jun 21 04:44:31.639749 waagent[1853]: 2025-06-21T04:44:31.639724Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: aeb4d4c1-9bf2-4273-ac76-4c9c9ed1c2a6 eTag: 3665145048045009117 source: Fabric]
Jun 21 04:44:31.642060 waagent[1853]: 2025-06-21T04:44:31.642033Z INFO Daemon The vmSettings originated via Fabric; will ignore them.
Jun 21 04:44:31.643485 waagent[1853]: 2025-06-21T04:44:31.643460Z INFO Daemon
Jun 21 04:44:31.644101 waagent[1853]: 2025-06-21T04:44:31.644044Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1]
Jun 21 04:44:31.650845 waagent[1853]: 2025-06-21T04:44:31.650815Z INFO Daemon Daemon Downloading artifacts profile blob
Jun 21 04:44:31.716701 waagent[1853]: 2025-06-21T04:44:31.716657Z INFO Daemon Downloaded certificate {'thumbprint': 'B70C9DE074B0AB08B0E1EB9A2848F0C65D52F716', 'hasPrivateKey': True}
Jun 21 04:44:31.717906 waagent[1853]: 2025-06-21T04:44:31.717078Z INFO Daemon Fetch goal state completed
Jun 21 04:44:31.722359 waagent[1853]: 2025-06-21T04:44:31.722330Z INFO Daemon Daemon Starting provisioning
Jun 21 04:44:31.724092 waagent[1853]: 2025-06-21T04:44:31.722483Z INFO Daemon Daemon Handle ovf-env.xml.
Jun 21 04:44:31.724092 waagent[1853]: 2025-06-21T04:44:31.722702Z INFO Daemon Daemon Set hostname [ci-4372.0.0-a-c1262e9e80]
Jun 21 04:44:31.738258 waagent[1853]: 2025-06-21T04:44:31.738216Z INFO Daemon Daemon Publish hostname [ci-4372.0.0-a-c1262e9e80]
Jun 21 04:44:31.739650 waagent[1853]: 2025-06-21T04:44:31.738475Z INFO Daemon Daemon Examine /proc/net/route for primary interface
Jun 21 04:44:31.739650 waagent[1853]: 2025-06-21T04:44:31.738928Z INFO Daemon Daemon Primary interface is [eth0]
Jun 21 04:44:31.745560 systemd-networkd[1362]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jun 21 04:44:31.745568 systemd-networkd[1362]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jun 21 04:44:31.745589 systemd-networkd[1362]: eth0: DHCP lease lost
Jun 21 04:44:31.746403 waagent[1853]: 2025-06-21T04:44:31.746359Z INFO Daemon Daemon Create user account if not exists
Jun 21 04:44:31.747033 waagent[1853]: 2025-06-21T04:44:31.747003Z INFO Daemon Daemon User core already exists, skip useradd
Jun 21 04:44:31.747114 waagent[1853]: 2025-06-21T04:44:31.747093Z INFO Daemon Daemon Configure sudoer
Jun 21 04:44:31.754951 waagent[1853]: 2025-06-21T04:44:31.754910Z INFO Daemon Daemon Configure sshd
Jun 21 04:44:31.758521 waagent[1853]: 2025-06-21T04:44:31.758484Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive.
Jun 21 04:44:31.760184 systemd-networkd[1362]: eth0: DHCPv4 address 10.200.8.43/24, gateway 10.200.8.1 acquired from 168.63.129.16
Jun 21 04:44:31.762428 waagent[1853]: 2025-06-21T04:44:31.762383Z INFO Daemon Daemon Deploy ssh public key.
Jun 21 04:44:31.980805 login[1855]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Jun 21 04:44:31.985631 systemd-logind[1719]: New session 1 of user core.
Jun 21 04:44:31.990249 systemd[1]: Started session-1.scope - Session 1 of User core.
Jun 21 04:44:32.844426 waagent[1853]: 2025-06-21T04:44:32.844382Z INFO Daemon Daemon Provisioning complete
Jun 21 04:44:32.859035 waagent[1853]: 2025-06-21T04:44:32.859003Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping
Jun 21 04:44:32.863637 waagent[1853]: 2025-06-21T04:44:32.859191Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions.
Jun 21 04:44:32.863637 waagent[1853]: 2025-06-21T04:44:32.859597Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.12.0.4 is the most current agent
Jun 21 04:44:32.951726 waagent[1934]: 2025-06-21T04:44:32.951666Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.12.0.4)
Jun 21 04:44:32.951939 waagent[1934]: 2025-06-21T04:44:32.951755Z INFO ExtHandler ExtHandler OS: flatcar 4372.0.0
Jun 21 04:44:32.951939 waagent[1934]: 2025-06-21T04:44:32.951797Z INFO ExtHandler ExtHandler Python: 3.11.12
Jun 21 04:44:32.951939 waagent[1934]: 2025-06-21T04:44:32.951837Z INFO ExtHandler ExtHandler CPU Arch: x86_64
Jun 21 04:44:32.971999 waagent[1934]: 2025-06-21T04:44:32.971960Z INFO ExtHandler ExtHandler Distro: flatcar-4372.0.0; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.12; Arch: x86_64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.22.0;
Jun 21 04:44:32.972104 waagent[1934]: 2025-06-21T04:44:32.972084Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Jun 21 04:44:32.972159 waagent[1934]: 2025-06-21T04:44:32.972127Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16
Jun 21 04:44:32.978182 waagent[1934]: 2025-06-21T04:44:32.978115Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1]
Jun 21 04:44:32.984174 waagent[1934]: 2025-06-21T04:44:32.984144Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.175
Jun 21 04:44:32.984466 waagent[1934]: 2025-06-21T04:44:32.984442Z INFO ExtHandler
Jun 21 04:44:32.984501 waagent[1934]: 2025-06-21T04:44:32.984490Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 8a574047-9086-46f5-a8e7-1a9a75bb1719 eTag: 3665145048045009117 source: Fabric]
Jun 21 04:44:32.984676 waagent[1934]: 2025-06-21T04:44:32.984655Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them.
Jun 21 04:44:32.984968 waagent[1934]: 2025-06-21T04:44:32.984947Z INFO ExtHandler
Jun 21 04:44:32.985008 waagent[1934]: 2025-06-21T04:44:32.984983Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1]
Jun 21 04:44:32.988248 waagent[1934]: 2025-06-21T04:44:32.988220Z INFO ExtHandler ExtHandler Downloading artifacts profile blob
Jun 21 04:44:33.058677 waagent[1934]: 2025-06-21T04:44:33.058634Z INFO ExtHandler Downloaded certificate {'thumbprint': 'B70C9DE074B0AB08B0E1EB9A2848F0C65D52F716', 'hasPrivateKey': True}
Jun 21 04:44:33.058952 waagent[1934]: 2025-06-21T04:44:33.058928Z INFO ExtHandler Fetch goal state completed
Jun 21 04:44:33.071738 waagent[1934]: 2025-06-21T04:44:33.071697Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.3.3 11 Feb 2025 (Library: OpenSSL 3.3.3 11 Feb 2025)
Jun 21 04:44:33.075813 waagent[1934]: 2025-06-21T04:44:33.075767Z INFO ExtHandler ExtHandler WALinuxAgent-2.12.0.4 running as process 1934
Jun 21 04:44:33.075902 waagent[1934]: 2025-06-21T04:44:33.075873Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ********
Jun 21 04:44:33.076112 waagent[1934]: 2025-06-21T04:44:33.076092Z INFO ExtHandler ExtHandler ******** AutoUpdate.UpdateToLatestVersion is set to False, not processing the operation ********
Jun 21 04:44:33.076974 waagent[1934]: 2025-06-21T04:44:33.076940Z INFO ExtHandler ExtHandler [CGI] Cgroup monitoring is not supported on ['flatcar', '4372.0.0', '', 'Flatcar Container Linux by Kinvolk']
Jun 21 04:44:33.077262 waagent[1934]: 2025-06-21T04:44:33.077238Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case distro: ['flatcar', '4372.0.0', '', 'Flatcar Container Linux by Kinvolk'] went from supported to unsupported
Jun 21 04:44:33.077361 waagent[1934]: 2025-06-21T04:44:33.077343Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False
Jun 21 04:44:33.077711 waagent[1934]: 2025-06-21T04:44:33.077689Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules
Jun 21 04:44:33.107321 waagent[1934]: 2025-06-21T04:44:33.107270Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service
Jun 21 04:44:33.107410 waagent[1934]: 2025-06-21T04:44:33.107389Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup
Jun 21 04:44:33.112017 waagent[1934]: 2025-06-21T04:44:33.111872Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now
Jun 21 04:44:33.116337 systemd[1]: Reload requested from client PID 1949 ('systemctl') (unit waagent.service)...
Jun 21 04:44:33.116347 systemd[1]: Reloading...
Jun 21 04:44:33.180166 zram_generator::config[1983]: No configuration found.
Jun 21 04:44:33.267058 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jun 21 04:44:33.354151 systemd[1]: Reloading finished in 237 ms.
Jun 21 04:44:33.367418 waagent[1934]: 2025-06-21T04:44:33.365245Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service
Jun 21 04:44:33.367418 waagent[1934]: 2025-06-21T04:44:33.365327Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully
Jun 21 04:44:33.544858 waagent[1934]: 2025-06-21T04:44:33.544815Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up.
Jun 21 04:44:33.545072 waagent[1934]: 2025-06-21T04:44:33.545049Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True]
Jun 21 04:44:33.545659 waagent[1934]: 2025-06-21T04:44:33.545629Z INFO ExtHandler ExtHandler Starting env monitor service.
Jun 21 04:44:33.546156 waagent[1934]: 2025-06-21T04:44:33.546101Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service.
Jun 21 04:44:33.546229 waagent[1934]: 2025-06-21T04:44:33.546182Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Jun 21 04:44:33.546373 waagent[1934]: 2025-06-21T04:44:33.546340Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Jun 21 04:44:33.546442 waagent[1934]: 2025-06-21T04:44:33.546424Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16
Jun 21 04:44:33.546604 waagent[1934]: 2025-06-21T04:44:33.546582Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled.
Jun 21 04:44:33.546705 waagent[1934]: 2025-06-21T04:44:33.546666Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread
Jun 21 04:44:33.546773 waagent[1934]: 2025-06-21T04:44:33.546746Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16
Jun 21 04:44:33.546810 waagent[1934]: 2025-06-21T04:44:33.546787Z INFO ExtHandler ExtHandler Start Extension Telemetry service.
Jun 21 04:44:33.546934 waagent[1934]: 2025-06-21T04:44:33.546912Z INFO EnvHandler ExtHandler Configure routes
Jun 21 04:44:33.546982 waagent[1934]: 2025-06-21T04:44:33.546962Z INFO EnvHandler ExtHandler Gateway:None
Jun 21 04:44:33.547201 waagent[1934]: 2025-06-21T04:44:33.547179Z INFO EnvHandler ExtHandler Routes:None
Jun 21 04:44:33.547413 waagent[1934]: 2025-06-21T04:44:33.547373Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route:
Jun 21 04:44:33.547413 waagent[1934]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT
Jun 21 04:44:33.547413 waagent[1934]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0
Jun 21 04:44:33.547413 waagent[1934]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0
Jun 21 04:44:33.547413 waagent[1934]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0
Jun 21 04:44:33.547413 waagent[1934]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Jun 21 04:44:33.547413 waagent[1934]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Jun 21 04:44:33.547830 waagent[1934]: 2025-06-21T04:44:33.547797Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True
Jun 21 04:44:33.547902 waagent[1934]: 2025-06-21T04:44:33.547881Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status.
Jun 21 04:44:33.548593 waagent[1934]: 2025-06-21T04:44:33.548571Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread
Jun 21 04:44:33.553746 waagent[1934]: 2025-06-21T04:44:33.552882Z INFO ExtHandler ExtHandler
Jun 21 04:44:33.553746 waagent[1934]: 2025-06-21T04:44:33.552919Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 7c18872a-703f-4ade-a33b-575fde10f66e correlation d06bb95d-3d72-45b8-a6ca-aa758b76982a created: 2025-06-21T04:43:32.893616Z]
Jun 21 04:44:33.553746 waagent[1934]: 2025-06-21T04:44:33.553086Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything.
Jun 21 04:44:33.553746 waagent[1934]: 2025-06-21T04:44:33.553396Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 0 ms]
Jun 21 04:44:33.581483 waagent[1934]: 2025-06-21T04:44:33.581448Z INFO MonitorHandler ExtHandler Network interfaces:
Jun 21 04:44:33.581483 waagent[1934]: Executing ['ip', '-a', '-o', 'link']:
Jun 21 04:44:33.581483 waagent[1934]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
Jun 21 04:44:33.581483 waagent[1934]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 7c:ed:8d:4a:a5:0e brd ff:ff:ff:ff:ff:ff\ alias Network Device
Jun 21 04:44:33.581483 waagent[1934]: 3: enP30832s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 7c:ed:8d:4a:a5:0e brd ff:ff:ff:ff:ff:ff\ altname enP30832p0s0
Jun 21 04:44:33.581483 waagent[1934]: Executing ['ip', '-4', '-a', '-o', 'address']:
Jun 21 04:44:33.581483 waagent[1934]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever
Jun 21 04:44:33.581483 waagent[1934]: 2: eth0 inet 10.200.8.43/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever
Jun 21 04:44:33.581483 waagent[1934]: Executing ['ip', '-6', '-a', '-o', 'address']:
Jun 21 04:44:33.581483 waagent[1934]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever
Jun 21 04:44:33.581483 waagent[1934]: 2: eth0 inet6 fe80::7eed:8dff:fe4a:a50e/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever
Jun 21 04:44:33.581483 waagent[1934]: 3: enP30832s1 inet6 fe80::7eed:8dff:fe4a:a50e/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever
Jun 21 04:44:33.598329 waagent[1934]: 2025-06-21T04:44:33.598291Z WARNING ExtHandler ExtHandler Failed to get firewall packets: 'iptables -w -t security -L OUTPUT --zero OUTPUT -nxv' failed: 2 (iptables v1.8.11 (nf_tables): Illegal option `--numeric' with this command
Jun 21 04:44:33.598329 waagent[1934]: Try `iptables -h' or 'iptables --help' for more information.)
Jun 21 04:44:33.598623 waagent[1934]: 2025-06-21T04:44:33.598599Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.12.0.4 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 29A2D0E4-23F3-4633-9E2C-FC4AA269150E;DroppedPackets: -1;UpdateGSErrors: 0;AutoUpdate: 0;UpdateMode: SelfUpdate;]
Jun 21 04:44:33.626215 waagent[1934]: 2025-06-21T04:44:33.626124Z INFO EnvHandler ExtHandler Created firewall rules for the Azure Fabric:
Jun 21 04:44:33.626215 waagent[1934]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Jun 21 04:44:33.626215 waagent[1934]: pkts bytes target prot opt in out source destination
Jun 21 04:44:33.626215 waagent[1934]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Jun 21 04:44:33.626215 waagent[1934]: pkts bytes target prot opt in out source destination
Jun 21 04:44:33.626215 waagent[1934]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
Jun 21 04:44:33.626215 waagent[1934]: pkts bytes target prot opt in out source destination
Jun 21 04:44:33.626215 waagent[1934]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
Jun 21 04:44:33.626215 waagent[1934]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
Jun 21 04:44:33.626215 waagent[1934]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
Jun 21 04:44:33.628439 waagent[1934]: 2025-06-21T04:44:33.628400Z INFO EnvHandler ExtHandler Current Firewall rules:
Jun 21 04:44:33.628439 waagent[1934]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Jun 21 04:44:33.628439 waagent[1934]: pkts bytes target prot opt in out source destination
Jun 21 04:44:33.628439 waagent[1934]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Jun 21 04:44:33.628439 waagent[1934]: pkts bytes target prot opt in out source destination
Jun 21 04:44:33.628439 waagent[1934]: Chain OUTPUT (policy ACCEPT 2 packets, 289 bytes)
Jun 21 04:44:33.628439 waagent[1934]: pkts bytes target prot opt in out source destination
Jun 21 04:44:33.628439 waagent[1934]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
Jun 21 04:44:33.628439 waagent[1934]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
Jun 21 04:44:33.628439 waagent[1934]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
Jun 21 04:44:40.894216 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jun 21 04:44:40.895889 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jun 21 04:44:41.385437 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jun 21 04:44:41.394389 (kubelet)[2085]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 21 04:44:41.432223 kubelet[2085]: E0621 04:44:41.432189 2085 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 21 04:44:41.434759 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 21 04:44:41.434900 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 21 04:44:41.435244 systemd[1]: kubelet.service: Consumed 128ms CPU time, 108.8M memory peak. Jun 21 04:44:49.681908 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jun 21 04:44:49.682852 systemd[1]: Started sshd@0-10.200.8.43:22-10.200.16.10:59606.service - OpenSSH per-connection server daemon (10.200.16.10:59606). Jun 21 04:44:50.411752 sshd[2093]: Accepted publickey for core from 10.200.16.10 port 59606 ssh2: RSA SHA256:4oKQ9IZ/Yu3eC3caPZbT837fBtOzsHYOJO+UUGIDRpc Jun 21 04:44:50.413017 sshd-session[2093]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 04:44:50.417398 systemd-logind[1719]: New session 3 of user core. Jun 21 04:44:50.423284 systemd[1]: Started session-3.scope - Session 3 of User core. Jun 21 04:44:50.967082 systemd[1]: Started sshd@1-10.200.8.43:22-10.200.16.10:59608.service - OpenSSH per-connection server daemon (10.200.16.10:59608). Jun 21 04:44:51.484818 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jun 21 04:44:51.486109 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jun 21 04:44:51.592646 sshd[2098]: Accepted publickey for core from 10.200.16.10 port 59608 ssh2: RSA SHA256:4oKQ9IZ/Yu3eC3caPZbT837fBtOzsHYOJO+UUGIDRpc Jun 21 04:44:51.593669 sshd-session[2098]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 04:44:51.597779 systemd-logind[1719]: New session 4 of user core. Jun 21 04:44:51.601258 systemd[1]: Started session-4.scope - Session 4 of User core. Jun 21 04:44:51.885950 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 21 04:44:51.894311 (kubelet)[2109]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 21 04:44:51.925417 kubelet[2109]: E0621 04:44:51.925384 2109 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 21 04:44:51.926819 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 21 04:44:51.926937 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 21 04:44:51.927236 systemd[1]: kubelet.service: Consumed 114ms CPU time, 108.5M memory peak. Jun 21 04:44:52.035712 sshd[2103]: Connection closed by 10.200.16.10 port 59608 Jun 21 04:44:52.036199 sshd-session[2098]: pam_unix(sshd:session): session closed for user core Jun 21 04:44:52.038902 systemd[1]: sshd@1-10.200.8.43:22-10.200.16.10:59608.service: Deactivated successfully. Jun 21 04:44:52.040063 systemd[1]: session-4.scope: Deactivated successfully. Jun 21 04:44:52.040667 systemd-logind[1719]: Session 4 logged out. Waiting for processes to exit. Jun 21 04:44:52.041631 systemd-logind[1719]: Removed session 4. 
Jun 21 04:44:52.150955 systemd[1]: Started sshd@2-10.200.8.43:22-10.200.16.10:59612.service - OpenSSH per-connection server daemon (10.200.16.10:59612). Jun 21 04:44:52.778462 sshd[2122]: Accepted publickey for core from 10.200.16.10 port 59612 ssh2: RSA SHA256:4oKQ9IZ/Yu3eC3caPZbT837fBtOzsHYOJO+UUGIDRpc Jun 21 04:44:52.779647 sshd-session[2122]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 04:44:52.784008 systemd-logind[1719]: New session 5 of user core. Jun 21 04:44:52.790262 systemd[1]: Started session-5.scope - Session 5 of User core. Jun 21 04:44:52.803499 chronyd[1756]: Selected source PHC0 Jun 21 04:44:53.215270 sshd[2124]: Connection closed by 10.200.16.10 port 59612 Jun 21 04:44:53.215786 sshd-session[2122]: pam_unix(sshd:session): session closed for user core Jun 21 04:44:53.218899 systemd[1]: sshd@2-10.200.8.43:22-10.200.16.10:59612.service: Deactivated successfully. Jun 21 04:44:53.220574 systemd[1]: session-5.scope: Deactivated successfully. Jun 21 04:44:53.221322 systemd-logind[1719]: Session 5 logged out. Waiting for processes to exit. Jun 21 04:44:53.222278 systemd-logind[1719]: Removed session 5. Jun 21 04:44:53.335066 systemd[1]: Started sshd@3-10.200.8.43:22-10.200.16.10:59626.service - OpenSSH per-connection server daemon (10.200.16.10:59626). Jun 21 04:44:53.961045 sshd[2130]: Accepted publickey for core from 10.200.16.10 port 59626 ssh2: RSA SHA256:4oKQ9IZ/Yu3eC3caPZbT837fBtOzsHYOJO+UUGIDRpc Jun 21 04:44:53.962249 sshd-session[2130]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 04:44:53.966260 systemd-logind[1719]: New session 6 of user core. Jun 21 04:44:53.975265 systemd[1]: Started session-6.scope - Session 6 of User core. 
Jun 21 04:44:54.404659 sshd[2132]: Connection closed by 10.200.16.10 port 59626 Jun 21 04:44:54.405185 sshd-session[2130]: pam_unix(sshd:session): session closed for user core Jun 21 04:44:54.407850 systemd[1]: sshd@3-10.200.8.43:22-10.200.16.10:59626.service: Deactivated successfully. Jun 21 04:44:54.409265 systemd[1]: session-6.scope: Deactivated successfully. Jun 21 04:44:54.410654 systemd-logind[1719]: Session 6 logged out. Waiting for processes to exit. Jun 21 04:44:54.411350 systemd-logind[1719]: Removed session 6. Jun 21 04:44:54.515238 systemd[1]: Started sshd@4-10.200.8.43:22-10.200.16.10:59632.service - OpenSSH per-connection server daemon (10.200.16.10:59632). Jun 21 04:44:55.148928 sshd[2138]: Accepted publickey for core from 10.200.16.10 port 59632 ssh2: RSA SHA256:4oKQ9IZ/Yu3eC3caPZbT837fBtOzsHYOJO+UUGIDRpc Jun 21 04:44:55.150067 sshd-session[2138]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 04:44:55.154007 systemd-logind[1719]: New session 7 of user core. Jun 21 04:44:55.161271 systemd[1]: Started session-7.scope - Session 7 of User core. Jun 21 04:44:55.570282 sudo[2141]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jun 21 04:44:55.570482 sudo[2141]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jun 21 04:44:55.594044 sudo[2141]: pam_unix(sudo:session): session closed for user root Jun 21 04:44:55.693481 sshd[2140]: Connection closed by 10.200.16.10 port 59632 Jun 21 04:44:55.694105 sshd-session[2138]: pam_unix(sshd:session): session closed for user core Jun 21 04:44:55.697097 systemd[1]: sshd@4-10.200.8.43:22-10.200.16.10:59632.service: Deactivated successfully. Jun 21 04:44:55.698420 systemd[1]: session-7.scope: Deactivated successfully. Jun 21 04:44:55.699691 systemd-logind[1719]: Session 7 logged out. Waiting for processes to exit. Jun 21 04:44:55.700607 systemd-logind[1719]: Removed session 7. 
Jun 21 04:44:55.802959 systemd[1]: Started sshd@5-10.200.8.43:22-10.200.16.10:59646.service - OpenSSH per-connection server daemon (10.200.16.10:59646). Jun 21 04:44:56.432284 sshd[2147]: Accepted publickey for core from 10.200.16.10 port 59646 ssh2: RSA SHA256:4oKQ9IZ/Yu3eC3caPZbT837fBtOzsHYOJO+UUGIDRpc Jun 21 04:44:56.433452 sshd-session[2147]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 04:44:56.437753 systemd-logind[1719]: New session 8 of user core. Jun 21 04:44:56.447266 systemd[1]: Started session-8.scope - Session 8 of User core. Jun 21 04:44:56.775034 sudo[2151]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jun 21 04:44:56.775390 sudo[2151]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jun 21 04:44:56.780384 sudo[2151]: pam_unix(sudo:session): session closed for user root Jun 21 04:44:56.784088 sudo[2150]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jun 21 04:44:56.784314 sudo[2150]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jun 21 04:44:56.791074 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jun 21 04:44:56.822091 augenrules[2173]: No rules Jun 21 04:44:56.822873 systemd[1]: audit-rules.service: Deactivated successfully. Jun 21 04:44:56.823045 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jun 21 04:44:56.823937 sudo[2150]: pam_unix(sudo:session): session closed for user root Jun 21 04:44:56.925467 sshd[2149]: Connection closed by 10.200.16.10 port 59646 Jun 21 04:44:56.925887 sshd-session[2147]: pam_unix(sshd:session): session closed for user core Jun 21 04:44:56.928312 systemd[1]: sshd@5-10.200.8.43:22-10.200.16.10:59646.service: Deactivated successfully. Jun 21 04:44:56.929915 systemd-logind[1719]: Session 8 logged out. Waiting for processes to exit. 
Jun 21 04:44:56.930017 systemd[1]: session-8.scope: Deactivated successfully. Jun 21 04:44:56.931289 systemd-logind[1719]: Removed session 8. Jun 21 04:44:57.039962 systemd[1]: Started sshd@6-10.200.8.43:22-10.200.16.10:59662.service - OpenSSH per-connection server daemon (10.200.16.10:59662). Jun 21 04:44:57.698714 sshd[2182]: Accepted publickey for core from 10.200.16.10 port 59662 ssh2: RSA SHA256:4oKQ9IZ/Yu3eC3caPZbT837fBtOzsHYOJO+UUGIDRpc Jun 21 04:44:57.699842 sshd-session[2182]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 04:44:57.703805 systemd-logind[1719]: New session 9 of user core. Jun 21 04:44:57.711273 systemd[1]: Started session-9.scope - Session 9 of User core. Jun 21 04:44:58.049428 sudo[2185]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jun 21 04:44:58.049644 sudo[2185]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jun 21 04:44:59.208621 systemd[1]: Starting docker.service - Docker Application Container Engine... Jun 21 04:44:59.220412 (dockerd)[2204]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jun 21 04:45:00.045676 dockerd[2204]: time="2025-06-21T04:45:00.045630075Z" level=info msg="Starting up" Jun 21 04:45:00.046644 dockerd[2204]: time="2025-06-21T04:45:00.046604238Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jun 21 04:45:00.129686 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3607218971-merged.mount: Deactivated successfully. Jun 21 04:45:00.390811 dockerd[2204]: time="2025-06-21T04:45:00.390726328Z" level=info msg="Loading containers: start." 
Jun 21 04:45:00.415153 kernel: Initializing XFRM netlink socket Jun 21 04:45:00.927990 systemd-networkd[1362]: docker0: Link UP Jun 21 04:45:00.940294 dockerd[2204]: time="2025-06-21T04:45:00.940268271Z" level=info msg="Loading containers: done." Jun 21 04:45:00.965534 dockerd[2204]: time="2025-06-21T04:45:00.965506345Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jun 21 04:45:00.965633 dockerd[2204]: time="2025-06-21T04:45:00.965567140Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 Jun 21 04:45:00.965665 dockerd[2204]: time="2025-06-21T04:45:00.965644468Z" level=info msg="Initializing buildkit" Jun 21 04:45:01.009529 dockerd[2204]: time="2025-06-21T04:45:01.009494217Z" level=info msg="Completed buildkit initialization" Jun 21 04:45:01.014929 dockerd[2204]: time="2025-06-21T04:45:01.014883627Z" level=info msg="Daemon has completed initialization" Jun 21 04:45:01.015214 dockerd[2204]: time="2025-06-21T04:45:01.014945935Z" level=info msg="API listen on /run/docker.sock" Jun 21 04:45:01.015061 systemd[1]: Started docker.service - Docker Application Container Engine. Jun 21 04:45:02.078579 containerd[1744]: time="2025-06-21T04:45:02.078520055Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\"" Jun 21 04:45:02.143778 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jun 21 04:45:02.145309 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 21 04:45:02.580909 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jun 21 04:45:02.590363 (kubelet)[2411]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 21 04:45:02.620196 kubelet[2411]: E0621 04:45:02.620163 2411 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 21 04:45:02.621584 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 21 04:45:02.621677 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 21 04:45:02.621944 systemd[1]: kubelet.service: Consumed 117ms CPU time, 108.5M memory peak. Jun 21 04:45:02.988244 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4096647775.mount: Deactivated successfully. Jun 21 04:45:03.957167 containerd[1744]: time="2025-06-21T04:45:03.957117728Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 04:45:03.961785 containerd[1744]: time="2025-06-21T04:45:03.961750758Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.6: active requests=0, bytes read=28799053" Jun 21 04:45:03.962567 containerd[1744]: time="2025-06-21T04:45:03.962530382Z" level=info msg="ImageCreate event name:\"sha256:8c5b95b1b5cb4a908fcbbbe81697c57019f9e9d89bfb5e0355235d440b7a6aa9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 04:45:03.965972 containerd[1744]: time="2025-06-21T04:45:03.965936988Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 04:45:03.966758 containerd[1744]: time="2025-06-21T04:45:03.966406156Z" 
level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.6\" with image id \"sha256:8c5b95b1b5cb4a908fcbbbe81697c57019f9e9d89bfb5e0355235d440b7a6aa9\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\", size \"28795845\" in 1.887846274s" Jun 21 04:45:03.966758 containerd[1744]: time="2025-06-21T04:45:03.966437150Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\" returns image reference \"sha256:8c5b95b1b5cb4a908fcbbbe81697c57019f9e9d89bfb5e0355235d440b7a6aa9\"" Jun 21 04:45:03.966974 containerd[1744]: time="2025-06-21T04:45:03.966957330Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\"" Jun 21 04:45:05.092598 containerd[1744]: time="2025-06-21T04:45:05.092560409Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 04:45:05.095420 containerd[1744]: time="2025-06-21T04:45:05.095386593Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.6: active requests=0, bytes read=24783920" Jun 21 04:45:05.097869 containerd[1744]: time="2025-06-21T04:45:05.097837315Z" level=info msg="ImageCreate event name:\"sha256:77d0e7de0c6b41e2331c3997698c3f917527cf7bbe462f5c813f514e788436de\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 04:45:05.101396 containerd[1744]: time="2025-06-21T04:45:05.101359769Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 04:45:05.101978 containerd[1744]: time="2025-06-21T04:45:05.101875459Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.6\" with image id 
\"sha256:77d0e7de0c6b41e2331c3997698c3f917527cf7bbe462f5c813f514e788436de\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\", size \"26385746\" in 1.134894543s" Jun 21 04:45:05.101978 containerd[1744]: time="2025-06-21T04:45:05.101901136Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\" returns image reference \"sha256:77d0e7de0c6b41e2331c3997698c3f917527cf7bbe462f5c813f514e788436de\"" Jun 21 04:45:05.102438 containerd[1744]: time="2025-06-21T04:45:05.102420377Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\"" Jun 21 04:45:06.139341 containerd[1744]: time="2025-06-21T04:45:06.139306102Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 04:45:06.141472 containerd[1744]: time="2025-06-21T04:45:06.141442608Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.6: active requests=0, bytes read=19176924" Jun 21 04:45:06.144621 containerd[1744]: time="2025-06-21T04:45:06.144586562Z" level=info msg="ImageCreate event name:\"sha256:b34d1cd163151c2491919f315274d85bff904721213f2b19341b403a28a39ae2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 04:45:06.147997 containerd[1744]: time="2025-06-21T04:45:06.147961498Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 04:45:06.148718 containerd[1744]: time="2025-06-21T04:45:06.148596404Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.6\" with image id \"sha256:b34d1cd163151c2491919f315274d85bff904721213f2b19341b403a28a39ae2\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.6\", repo digest 
\"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\", size \"20778768\" in 1.046151027s" Jun 21 04:45:06.148718 containerd[1744]: time="2025-06-21T04:45:06.148626413Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\" returns image reference \"sha256:b34d1cd163151c2491919f315274d85bff904721213f2b19341b403a28a39ae2\"" Jun 21 04:45:06.149278 containerd[1744]: time="2025-06-21T04:45:06.149202244Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\"" Jun 21 04:45:07.038845 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount134017736.mount: Deactivated successfully. Jun 21 04:45:07.370348 containerd[1744]: time="2025-06-21T04:45:07.370266573Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 04:45:07.373516 containerd[1744]: time="2025-06-21T04:45:07.373483027Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.6: active requests=0, bytes read=30895371" Jun 21 04:45:07.375787 containerd[1744]: time="2025-06-21T04:45:07.375749613Z" level=info msg="ImageCreate event name:\"sha256:63f0cbe3b7339c5d006efc9964228e48271bae73039320037c451b5e8f763e02\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 04:45:07.378377 containerd[1744]: time="2025-06-21T04:45:07.378341536Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 04:45:07.378726 containerd[1744]: time="2025-06-21T04:45:07.378586453Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.6\" with image id \"sha256:63f0cbe3b7339c5d006efc9964228e48271bae73039320037c451b5e8f763e02\", repo tag \"registry.k8s.io/kube-proxy:v1.32.6\", repo digest 
\"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\", size \"30894382\" in 1.229361347s" Jun 21 04:45:07.378726 containerd[1744]: time="2025-06-21T04:45:07.378612471Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\" returns image reference \"sha256:63f0cbe3b7339c5d006efc9964228e48271bae73039320037c451b5e8f763e02\"" Jun 21 04:45:07.379008 containerd[1744]: time="2025-06-21T04:45:07.378978600Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jun 21 04:45:07.939962 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount674666226.mount: Deactivated successfully. Jun 21 04:45:08.709947 containerd[1744]: time="2025-06-21T04:45:08.709911764Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 04:45:08.712082 containerd[1744]: time="2025-06-21T04:45:08.712048124Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565249" Jun 21 04:45:08.714555 containerd[1744]: time="2025-06-21T04:45:08.714519359Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 04:45:08.719075 containerd[1744]: time="2025-06-21T04:45:08.719037456Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 04:45:08.719710 containerd[1744]: time="2025-06-21T04:45:08.719594113Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest 
\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.34058888s" Jun 21 04:45:08.719710 containerd[1744]: time="2025-06-21T04:45:08.719623029Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jun 21 04:45:08.720163 containerd[1744]: time="2025-06-21T04:45:08.720046292Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jun 21 04:45:09.211853 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount749675549.mount: Deactivated successfully. Jun 21 04:45:09.231401 containerd[1744]: time="2025-06-21T04:45:09.231372405Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 21 04:45:09.233496 containerd[1744]: time="2025-06-21T04:45:09.233466950Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146" Jun 21 04:45:09.236412 containerd[1744]: time="2025-06-21T04:45:09.236380557Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 21 04:45:09.239686 containerd[1744]: time="2025-06-21T04:45:09.239651858Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 21 04:45:09.240148 containerd[1744]: time="2025-06-21T04:45:09.240012707Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag 
\"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 519.945528ms" Jun 21 04:45:09.240148 containerd[1744]: time="2025-06-21T04:45:09.240035677Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jun 21 04:45:09.240511 containerd[1744]: time="2025-06-21T04:45:09.240497551Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jun 21 04:45:09.846348 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1186583296.mount: Deactivated successfully. Jun 21 04:45:11.477902 containerd[1744]: time="2025-06-21T04:45:11.477859632Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 04:45:11.479980 containerd[1744]: time="2025-06-21T04:45:11.479950733Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551368" Jun 21 04:45:11.482825 containerd[1744]: time="2025-06-21T04:45:11.482788109Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 04:45:11.486280 containerd[1744]: time="2025-06-21T04:45:11.486241555Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 04:45:11.487164 containerd[1744]: time="2025-06-21T04:45:11.486901313Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size 
\"57680541\" in 2.246375685s" Jun 21 04:45:11.487164 containerd[1744]: time="2025-06-21T04:45:11.486929441Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Jun 21 04:45:12.643895 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jun 21 04:45:12.645028 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 21 04:45:13.049595 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 21 04:45:13.059327 (kubelet)[2627]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 21 04:45:13.094111 kubelet[2627]: E0621 04:45:13.094082 2627 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 21 04:45:13.095869 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 21 04:45:13.095976 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 21 04:45:13.096378 systemd[1]: kubelet.service: Consumed 132ms CPU time, 110.3M memory peak. Jun 21 04:45:13.521149 kernel: hv_balloon: Max. dynamic memory size: 8192 MB Jun 21 04:45:13.698251 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 21 04:45:13.698407 systemd[1]: kubelet.service: Consumed 132ms CPU time, 110.3M memory peak. Jun 21 04:45:13.700115 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 21 04:45:13.721548 systemd[1]: Reload requested from client PID 2642 ('systemctl') (unit session-9.scope)... Jun 21 04:45:13.721562 systemd[1]: Reloading... 
Jun 21 04:45:13.807162 zram_generator::config[2687]: No configuration found. Jun 21 04:45:13.948824 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 21 04:45:14.037374 systemd[1]: Reloading finished in 315 ms. Jun 21 04:45:14.067238 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jun 21 04:45:14.067307 systemd[1]: kubelet.service: Failed with result 'signal'. Jun 21 04:45:14.067544 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 21 04:45:14.067583 systemd[1]: kubelet.service: Consumed 76ms CPU time, 84.6M memory peak. Jun 21 04:45:14.069210 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 21 04:45:14.605070 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 21 04:45:14.609092 (kubelet)[2755]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jun 21 04:45:14.641110 update_engine[1720]: I20250621 04:45:14.640432 1720 update_attempter.cc:509] Updating boot flags... Jun 21 04:45:14.641340 kubelet[2755]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 21 04:45:14.641340 kubelet[2755]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jun 21 04:45:14.641340 kubelet[2755]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jun 21 04:45:14.641523 kubelet[2755]: I0621 04:45:14.641401 2755 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jun 21 04:45:14.985375 kubelet[2755]: I0621 04:45:14.985353 2755 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jun 21 04:45:14.985467 kubelet[2755]: I0621 04:45:14.985461 2755 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jun 21 04:45:14.985884 kubelet[2755]: I0621 04:45:14.985875 2755 server.go:954] "Client rotation is on, will bootstrap in background" Jun 21 04:45:15.016927 kubelet[2755]: E0621 04:45:15.016907 2755 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.8.43:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.8.43:6443: connect: connection refused" logger="UnhandledError" Jun 21 04:45:15.019088 kubelet[2755]: I0621 04:45:15.019066 2755 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 21 04:45:15.025474 kubelet[2755]: I0621 04:45:15.025460 2755 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jun 21 04:45:15.027356 kubelet[2755]: I0621 04:45:15.027334 2755 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jun 21 04:45:15.027511 kubelet[2755]: I0621 04:45:15.027487 2755 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jun 21 04:45:15.027634 kubelet[2755]: I0621 04:45:15.027511 2755 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4372.0.0-a-c1262e9e80","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jun 21 04:45:15.028267 kubelet[2755]: I0621 04:45:15.028249 2755 topology_manager.go:138] "Creating topology manager 
with none policy" Jun 21 04:45:15.028267 kubelet[2755]: I0621 04:45:15.028265 2755 container_manager_linux.go:304] "Creating device plugin manager" Jun 21 04:45:15.028362 kubelet[2755]: I0621 04:45:15.028351 2755 state_mem.go:36] "Initialized new in-memory state store" Jun 21 04:45:15.032209 kubelet[2755]: I0621 04:45:15.032195 2755 kubelet.go:446] "Attempting to sync node with API server" Jun 21 04:45:15.032266 kubelet[2755]: I0621 04:45:15.032218 2755 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jun 21 04:45:15.032266 kubelet[2755]: I0621 04:45:15.032240 2755 kubelet.go:352] "Adding apiserver pod source" Jun 21 04:45:15.032266 kubelet[2755]: I0621 04:45:15.032250 2755 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jun 21 04:45:15.037919 kubelet[2755]: W0621 04:45:15.037746 2755 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.8.43:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.8.43:6443: connect: connection refused Jun 21 04:45:15.038758 kubelet[2755]: E0621 04:45:15.038013 2755 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.8.43:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.8.43:6443: connect: connection refused" logger="UnhandledError" Jun 21 04:45:15.038758 kubelet[2755]: I0621 04:45:15.038092 2755 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jun 21 04:45:15.038927 kubelet[2755]: I0621 04:45:15.038908 2755 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jun 21 04:45:15.038979 kubelet[2755]: W0621 04:45:15.038972 2755 probe.go:272] Flexvolume plugin directory at 
/opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jun 21 04:45:15.042108 kubelet[2755]: W0621 04:45:15.042068 2755 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.8.43:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4372.0.0-a-c1262e9e80&limit=500&resourceVersion=0": dial tcp 10.200.8.43:6443: connect: connection refused Jun 21 04:45:15.042195 kubelet[2755]: E0621 04:45:15.042110 2755 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.8.43:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4372.0.0-a-c1262e9e80&limit=500&resourceVersion=0\": dial tcp 10.200.8.43:6443: connect: connection refused" logger="UnhandledError" Jun 21 04:45:15.044157 kubelet[2755]: I0621 04:45:15.043370 2755 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jun 21 04:45:15.044157 kubelet[2755]: I0621 04:45:15.043406 2755 server.go:1287] "Started kubelet" Jun 21 04:45:15.048738 kubelet[2755]: I0621 04:45:15.048712 2755 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jun 21 04:45:15.049622 kubelet[2755]: E0621 04:45:15.048358 2755 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.8.43:6443/api/v1/namespaces/default/events\": dial tcp 10.200.8.43:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4372.0.0-a-c1262e9e80.184af54cb830d681 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4372.0.0-a-c1262e9e80,UID:ci-4372.0.0-a-c1262e9e80,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4372.0.0-a-c1262e9e80,},FirstTimestamp:2025-06-21 04:45:15.043387009 +0000 UTC m=+0.431039165,LastTimestamp:2025-06-21 04:45:15.043387009 +0000 UTC 
m=+0.431039165,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4372.0.0-a-c1262e9e80,}" Jun 21 04:45:15.052453 kubelet[2755]: I0621 04:45:15.052248 2755 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jun 21 04:45:15.052977 kubelet[2755]: I0621 04:45:15.052958 2755 server.go:479] "Adding debug handlers to kubelet server" Jun 21 04:45:15.053523 kubelet[2755]: I0621 04:45:15.053511 2755 volume_manager.go:297] "Starting Kubelet Volume Manager" Jun 21 04:45:15.053765 kubelet[2755]: E0621 04:45:15.053751 2755 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4372.0.0-a-c1262e9e80\" not found" Jun 21 04:45:15.053838 kubelet[2755]: I0621 04:45:15.053806 2755 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jun 21 04:45:15.053990 kubelet[2755]: I0621 04:45:15.053960 2755 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jun 21 04:45:15.054115 kubelet[2755]: I0621 04:45:15.054102 2755 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jun 21 04:45:15.055173 kubelet[2755]: E0621 04:45:15.054420 2755 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4372.0.0-a-c1262e9e80?timeout=10s\": dial tcp 10.200.8.43:6443: connect: connection refused" interval="200ms" Jun 21 04:45:15.055173 kubelet[2755]: I0621 04:45:15.054658 2755 reconciler.go:26] "Reconciler: start to sync state" Jun 21 04:45:15.055173 kubelet[2755]: I0621 04:45:15.054684 2755 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jun 21 04:45:15.055173 kubelet[2755]: W0621 04:45:15.054943 2755 reflector.go:569] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.8.43:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.43:6443: connect: connection refused Jun 21 04:45:15.055173 kubelet[2755]: E0621 04:45:15.054985 2755 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.8.43:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.8.43:6443: connect: connection refused" logger="UnhandledError" Jun 21 04:45:15.055846 kubelet[2755]: I0621 04:45:15.055660 2755 factory.go:221] Registration of the systemd container factory successfully Jun 21 04:45:15.055846 kubelet[2755]: I0621 04:45:15.055739 2755 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jun 21 04:45:15.056527 kubelet[2755]: E0621 04:45:15.056511 2755 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jun 21 04:45:15.056918 kubelet[2755]: I0621 04:45:15.056908 2755 factory.go:221] Registration of the containerd container factory successfully Jun 21 04:45:15.073004 kubelet[2755]: I0621 04:45:15.072990 2755 cpu_manager.go:221] "Starting CPU manager" policy="none" Jun 21 04:45:15.073004 kubelet[2755]: I0621 04:45:15.073003 2755 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jun 21 04:45:15.073094 kubelet[2755]: I0621 04:45:15.073016 2755 state_mem.go:36] "Initialized new in-memory state store" Jun 21 04:45:15.079814 kubelet[2755]: I0621 04:45:15.079804 2755 policy_none.go:49] "None policy: Start" Jun 21 04:45:15.079866 kubelet[2755]: I0621 04:45:15.079862 2755 memory_manager.go:186] "Starting memorymanager" policy="None" Jun 21 04:45:15.079891 kubelet[2755]: I0621 04:45:15.079888 2755 state_mem.go:35] "Initializing new in-memory state store" Jun 21 04:45:15.082525 kubelet[2755]: I0621 04:45:15.082494 2755 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jun 21 04:45:15.083513 kubelet[2755]: I0621 04:45:15.083417 2755 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jun 21 04:45:15.083513 kubelet[2755]: I0621 04:45:15.083435 2755 status_manager.go:227] "Starting to sync pod status with apiserver" Jun 21 04:45:15.083513 kubelet[2755]: I0621 04:45:15.083452 2755 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jun 21 04:45:15.083513 kubelet[2755]: I0621 04:45:15.083458 2755 kubelet.go:2382] "Starting kubelet main sync loop" Jun 21 04:45:15.083513 kubelet[2755]: E0621 04:45:15.083492 2755 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jun 21 04:45:15.085789 kubelet[2755]: W0621 04:45:15.085635 2755 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.8.43:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.43:6443: connect: connection refused Jun 21 04:45:15.085789 kubelet[2755]: E0621 04:45:15.085674 2755 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.8.43:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.8.43:6443: connect: connection refused" logger="UnhandledError" Jun 21 04:45:15.091726 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jun 21 04:45:15.103659 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jun 21 04:45:15.105840 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jun 21 04:45:15.115571 kubelet[2755]: I0621 04:45:15.115556 2755 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jun 21 04:45:15.115692 kubelet[2755]: I0621 04:45:15.115683 2755 eviction_manager.go:189] "Eviction manager: starting control loop" Jun 21 04:45:15.115728 kubelet[2755]: I0621 04:45:15.115694 2755 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jun 21 04:45:15.116050 kubelet[2755]: I0621 04:45:15.115990 2755 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jun 21 04:45:15.116606 kubelet[2755]: E0621 04:45:15.116587 2755 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jun 21 04:45:15.116654 kubelet[2755]: E0621 04:45:15.116632 2755 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4372.0.0-a-c1262e9e80\" not found" Jun 21 04:45:15.190887 systemd[1]: Created slice kubepods-burstable-pod74c8c97e9a3948a22a34280cc11d2406.slice - libcontainer container kubepods-burstable-pod74c8c97e9a3948a22a34280cc11d2406.slice. Jun 21 04:45:15.196740 kubelet[2755]: E0621 04:45:15.196709 2755 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372.0.0-a-c1262e9e80\" not found" node="ci-4372.0.0-a-c1262e9e80" Jun 21 04:45:15.198925 systemd[1]: Created slice kubepods-burstable-podd3af57be931ce4c1c4e797371575addb.slice - libcontainer container kubepods-burstable-podd3af57be931ce4c1c4e797371575addb.slice. 
Jun 21 04:45:15.200473 kubelet[2755]: E0621 04:45:15.200444 2755 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372.0.0-a-c1262e9e80\" not found" node="ci-4372.0.0-a-c1262e9e80" Jun 21 04:45:15.209441 systemd[1]: Created slice kubepods-burstable-pod9825c18ab9809cf781f659f4aa04cc1e.slice - libcontainer container kubepods-burstable-pod9825c18ab9809cf781f659f4aa04cc1e.slice. Jun 21 04:45:15.210824 kubelet[2755]: E0621 04:45:15.210802 2755 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372.0.0-a-c1262e9e80\" not found" node="ci-4372.0.0-a-c1262e9e80" Jun 21 04:45:15.216855 kubelet[2755]: I0621 04:45:15.216820 2755 kubelet_node_status.go:75] "Attempting to register node" node="ci-4372.0.0-a-c1262e9e80" Jun 21 04:45:15.217109 kubelet[2755]: E0621 04:45:15.217076 2755 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.43:6443/api/v1/nodes\": dial tcp 10.200.8.43:6443: connect: connection refused" node="ci-4372.0.0-a-c1262e9e80" Jun 21 04:45:15.255576 kubelet[2755]: I0621 04:45:15.255388 2755 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d3af57be931ce4c1c4e797371575addb-ca-certs\") pod \"kube-controller-manager-ci-4372.0.0-a-c1262e9e80\" (UID: \"d3af57be931ce4c1c4e797371575addb\") " pod="kube-system/kube-controller-manager-ci-4372.0.0-a-c1262e9e80" Jun 21 04:45:15.255576 kubelet[2755]: I0621 04:45:15.255414 2755 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d3af57be931ce4c1c4e797371575addb-kubeconfig\") pod \"kube-controller-manager-ci-4372.0.0-a-c1262e9e80\" (UID: \"d3af57be931ce4c1c4e797371575addb\") " pod="kube-system/kube-controller-manager-ci-4372.0.0-a-c1262e9e80" Jun 21 04:45:15.255576 
kubelet[2755]: E0621 04:45:15.255421 2755 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4372.0.0-a-c1262e9e80?timeout=10s\": dial tcp 10.200.8.43:6443: connect: connection refused" interval="400ms" Jun 21 04:45:15.255576 kubelet[2755]: I0621 04:45:15.255432 2755 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d3af57be931ce4c1c4e797371575addb-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4372.0.0-a-c1262e9e80\" (UID: \"d3af57be931ce4c1c4e797371575addb\") " pod="kube-system/kube-controller-manager-ci-4372.0.0-a-c1262e9e80" Jun 21 04:45:15.255576 kubelet[2755]: I0621 04:45:15.255449 2755 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9825c18ab9809cf781f659f4aa04cc1e-kubeconfig\") pod \"kube-scheduler-ci-4372.0.0-a-c1262e9e80\" (UID: \"9825c18ab9809cf781f659f4aa04cc1e\") " pod="kube-system/kube-scheduler-ci-4372.0.0-a-c1262e9e80" Jun 21 04:45:15.255707 kubelet[2755]: I0621 04:45:15.255467 2755 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/74c8c97e9a3948a22a34280cc11d2406-k8s-certs\") pod \"kube-apiserver-ci-4372.0.0-a-c1262e9e80\" (UID: \"74c8c97e9a3948a22a34280cc11d2406\") " pod="kube-system/kube-apiserver-ci-4372.0.0-a-c1262e9e80" Jun 21 04:45:15.255707 kubelet[2755]: I0621 04:45:15.255480 2755 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/74c8c97e9a3948a22a34280cc11d2406-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4372.0.0-a-c1262e9e80\" (UID: \"74c8c97e9a3948a22a34280cc11d2406\") " 
pod="kube-system/kube-apiserver-ci-4372.0.0-a-c1262e9e80" Jun 21 04:45:15.255707 kubelet[2755]: I0621 04:45:15.255505 2755 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d3af57be931ce4c1c4e797371575addb-k8s-certs\") pod \"kube-controller-manager-ci-4372.0.0-a-c1262e9e80\" (UID: \"d3af57be931ce4c1c4e797371575addb\") " pod="kube-system/kube-controller-manager-ci-4372.0.0-a-c1262e9e80" Jun 21 04:45:15.255707 kubelet[2755]: I0621 04:45:15.255518 2755 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/74c8c97e9a3948a22a34280cc11d2406-ca-certs\") pod \"kube-apiserver-ci-4372.0.0-a-c1262e9e80\" (UID: \"74c8c97e9a3948a22a34280cc11d2406\") " pod="kube-system/kube-apiserver-ci-4372.0.0-a-c1262e9e80" Jun 21 04:45:15.255707 kubelet[2755]: I0621 04:45:15.255530 2755 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d3af57be931ce4c1c4e797371575addb-flexvolume-dir\") pod \"kube-controller-manager-ci-4372.0.0-a-c1262e9e80\" (UID: \"d3af57be931ce4c1c4e797371575addb\") " pod="kube-system/kube-controller-manager-ci-4372.0.0-a-c1262e9e80" Jun 21 04:45:15.419411 kubelet[2755]: I0621 04:45:15.419358 2755 kubelet_node_status.go:75] "Attempting to register node" node="ci-4372.0.0-a-c1262e9e80" Jun 21 04:45:15.419715 kubelet[2755]: E0621 04:45:15.419685 2755 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.43:6443/api/v1/nodes\": dial tcp 10.200.8.43:6443: connect: connection refused" node="ci-4372.0.0-a-c1262e9e80" Jun 21 04:45:15.497857 containerd[1744]: time="2025-06-21T04:45:15.497816826Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-ci-4372.0.0-a-c1262e9e80,Uid:74c8c97e9a3948a22a34280cc11d2406,Namespace:kube-system,Attempt:0,}" Jun 21 04:45:15.501343 containerd[1744]: time="2025-06-21T04:45:15.501305602Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4372.0.0-a-c1262e9e80,Uid:d3af57be931ce4c1c4e797371575addb,Namespace:kube-system,Attempt:0,}" Jun 21 04:45:15.511988 containerd[1744]: time="2025-06-21T04:45:15.511921548Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4372.0.0-a-c1262e9e80,Uid:9825c18ab9809cf781f659f4aa04cc1e,Namespace:kube-system,Attempt:0,}" Jun 21 04:45:15.573713 containerd[1744]: time="2025-06-21T04:45:15.573651837Z" level=info msg="connecting to shim cd1db859bcfed8c8451dec22e29b048486cc756f7ed387a24305844e26ec870e" address="unix:///run/containerd/s/3e4efaead22a6dffd41b549e9ab7115bf659a6ba9c53b51b090470e1a4b7bc1e" namespace=k8s.io protocol=ttrpc version=3 Jun 21 04:45:15.581742 containerd[1744]: time="2025-06-21T04:45:15.581696044Z" level=info msg="connecting to shim 7d281a1951698a1e3cd6c207a2416ed19a8df37151f19f9f78dd14c6e8f3fd33" address="unix:///run/containerd/s/570eadc1afb249d21924a19f3437f7c4bee159acb5531d42ab92198880414591" namespace=k8s.io protocol=ttrpc version=3 Jun 21 04:45:15.604057 containerd[1744]: time="2025-06-21T04:45:15.603659316Z" level=info msg="connecting to shim 967a848da88f07e99814b75c844232adf82b2b3212397bd90ae53547211bac48" address="unix:///run/containerd/s/ce8dd62bcce9cacfb567a7d0100d5c2a3a287272b4b9e975906ef1223694059c" namespace=k8s.io protocol=ttrpc version=3 Jun 21 04:45:15.607500 systemd[1]: Started cri-containerd-cd1db859bcfed8c8451dec22e29b048486cc756f7ed387a24305844e26ec870e.scope - libcontainer container cd1db859bcfed8c8451dec22e29b048486cc756f7ed387a24305844e26ec870e. 
Jun 21 04:45:15.611445 systemd[1]: Started cri-containerd-7d281a1951698a1e3cd6c207a2416ed19a8df37151f19f9f78dd14c6e8f3fd33.scope - libcontainer container 7d281a1951698a1e3cd6c207a2416ed19a8df37151f19f9f78dd14c6e8f3fd33. Jun 21 04:45:15.629384 systemd[1]: Started cri-containerd-967a848da88f07e99814b75c844232adf82b2b3212397bd90ae53547211bac48.scope - libcontainer container 967a848da88f07e99814b75c844232adf82b2b3212397bd90ae53547211bac48. Jun 21 04:45:15.657147 kubelet[2755]: E0621 04:45:15.656519 2755 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4372.0.0-a-c1262e9e80?timeout=10s\": dial tcp 10.200.8.43:6443: connect: connection refused" interval="800ms" Jun 21 04:45:15.665185 containerd[1744]: time="2025-06-21T04:45:15.664492015Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4372.0.0-a-c1262e9e80,Uid:74c8c97e9a3948a22a34280cc11d2406,Namespace:kube-system,Attempt:0,} returns sandbox id \"cd1db859bcfed8c8451dec22e29b048486cc756f7ed387a24305844e26ec870e\"" Jun 21 04:45:15.668256 containerd[1744]: time="2025-06-21T04:45:15.668234482Z" level=info msg="CreateContainer within sandbox \"cd1db859bcfed8c8451dec22e29b048486cc756f7ed387a24305844e26ec870e\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jun 21 04:45:15.689006 containerd[1744]: time="2025-06-21T04:45:15.688988599Z" level=info msg="Container d2362ca5d6b5174f73c06c66831c9224c45f0b5f844bdc4b21d006b56376bf55: CDI devices from CRI Config.CDIDevices: []" Jun 21 04:45:15.692399 containerd[1744]: time="2025-06-21T04:45:15.692383499Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4372.0.0-a-c1262e9e80,Uid:d3af57be931ce4c1c4e797371575addb,Namespace:kube-system,Attempt:0,} returns sandbox id \"7d281a1951698a1e3cd6c207a2416ed19a8df37151f19f9f78dd14c6e8f3fd33\"" Jun 21 04:45:15.695022 containerd[1744]: 
time="2025-06-21T04:45:15.695002035Z" level=info msg="CreateContainer within sandbox \"7d281a1951698a1e3cd6c207a2416ed19a8df37151f19f9f78dd14c6e8f3fd33\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jun 21 04:45:15.696946 containerd[1744]: time="2025-06-21T04:45:15.696930942Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4372.0.0-a-c1262e9e80,Uid:9825c18ab9809cf781f659f4aa04cc1e,Namespace:kube-system,Attempt:0,} returns sandbox id \"967a848da88f07e99814b75c844232adf82b2b3212397bd90ae53547211bac48\"" Jun 21 04:45:15.698449 containerd[1744]: time="2025-06-21T04:45:15.698415432Z" level=info msg="CreateContainer within sandbox \"967a848da88f07e99814b75c844232adf82b2b3212397bd90ae53547211bac48\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jun 21 04:45:15.705775 containerd[1744]: time="2025-06-21T04:45:15.705688390Z" level=info msg="CreateContainer within sandbox \"cd1db859bcfed8c8451dec22e29b048486cc756f7ed387a24305844e26ec870e\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"d2362ca5d6b5174f73c06c66831c9224c45f0b5f844bdc4b21d006b56376bf55\"" Jun 21 04:45:15.706122 containerd[1744]: time="2025-06-21T04:45:15.706069856Z" level=info msg="StartContainer for \"d2362ca5d6b5174f73c06c66831c9224c45f0b5f844bdc4b21d006b56376bf55\"" Jun 21 04:45:15.706742 containerd[1744]: time="2025-06-21T04:45:15.706720761Z" level=info msg="connecting to shim d2362ca5d6b5174f73c06c66831c9224c45f0b5f844bdc4b21d006b56376bf55" address="unix:///run/containerd/s/3e4efaead22a6dffd41b549e9ab7115bf659a6ba9c53b51b090470e1a4b7bc1e" protocol=ttrpc version=3 Jun 21 04:45:15.717608 containerd[1744]: time="2025-06-21T04:45:15.717588696Z" level=info msg="Container 74f246a8f7a840f7db584fab1441fb1c5ef4e4a29235933b58ef9c55357af3ea: CDI devices from CRI Config.CDIDevices: []" Jun 21 04:45:15.719252 systemd[1]: Started 
cri-containerd-d2362ca5d6b5174f73c06c66831c9224c45f0b5f844bdc4b21d006b56376bf55.scope - libcontainer container d2362ca5d6b5174f73c06c66831c9224c45f0b5f844bdc4b21d006b56376bf55. Jun 21 04:45:15.722709 containerd[1744]: time="2025-06-21T04:45:15.722668098Z" level=info msg="Container 23a48b46d98013ebb7e15a5e5d13a7319244ba184cfd6a159682bee0832ea270: CDI devices from CRI Config.CDIDevices: []" Jun 21 04:45:15.731206 containerd[1744]: time="2025-06-21T04:45:15.731178686Z" level=info msg="CreateContainer within sandbox \"7d281a1951698a1e3cd6c207a2416ed19a8df37151f19f9f78dd14c6e8f3fd33\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"74f246a8f7a840f7db584fab1441fb1c5ef4e4a29235933b58ef9c55357af3ea\"" Jun 21 04:45:15.731708 containerd[1744]: time="2025-06-21T04:45:15.731589992Z" level=info msg="StartContainer for \"74f246a8f7a840f7db584fab1441fb1c5ef4e4a29235933b58ef9c55357af3ea\"" Jun 21 04:45:15.732582 containerd[1744]: time="2025-06-21T04:45:15.732552626Z" level=info msg="connecting to shim 74f246a8f7a840f7db584fab1441fb1c5ef4e4a29235933b58ef9c55357af3ea" address="unix:///run/containerd/s/570eadc1afb249d21924a19f3437f7c4bee159acb5531d42ab92198880414591" protocol=ttrpc version=3 Jun 21 04:45:15.744208 containerd[1744]: time="2025-06-21T04:45:15.744188136Z" level=info msg="CreateContainer within sandbox \"967a848da88f07e99814b75c844232adf82b2b3212397bd90ae53547211bac48\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"23a48b46d98013ebb7e15a5e5d13a7319244ba184cfd6a159682bee0832ea270\"" Jun 21 04:45:15.744685 containerd[1744]: time="2025-06-21T04:45:15.744612454Z" level=info msg="StartContainer for \"23a48b46d98013ebb7e15a5e5d13a7319244ba184cfd6a159682bee0832ea270\"" Jun 21 04:45:15.745425 containerd[1744]: time="2025-06-21T04:45:15.745405973Z" level=info msg="connecting to shim 23a48b46d98013ebb7e15a5e5d13a7319244ba184cfd6a159682bee0832ea270" 
address="unix:///run/containerd/s/ce8dd62bcce9cacfb567a7d0100d5c2a3a287272b4b9e975906ef1223694059c" protocol=ttrpc version=3
Jun 21 04:45:15.746384 systemd[1]: Started cri-containerd-74f246a8f7a840f7db584fab1441fb1c5ef4e4a29235933b58ef9c55357af3ea.scope - libcontainer container 74f246a8f7a840f7db584fab1441fb1c5ef4e4a29235933b58ef9c55357af3ea.
Jun 21 04:45:15.765410 systemd[1]: Started cri-containerd-23a48b46d98013ebb7e15a5e5d13a7319244ba184cfd6a159682bee0832ea270.scope - libcontainer container 23a48b46d98013ebb7e15a5e5d13a7319244ba184cfd6a159682bee0832ea270.
Jun 21 04:45:15.777354 containerd[1744]: time="2025-06-21T04:45:15.777329527Z" level=info msg="StartContainer for \"d2362ca5d6b5174f73c06c66831c9224c45f0b5f844bdc4b21d006b56376bf55\" returns successfully"
Jun 21 04:45:15.821195 kubelet[2755]: I0621 04:45:15.821180 2755 kubelet_node_status.go:75] "Attempting to register node" node="ci-4372.0.0-a-c1262e9e80"
Jun 21 04:45:15.821565 kubelet[2755]: E0621 04:45:15.821539 2755 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.43:6443/api/v1/nodes\": dial tcp 10.200.8.43:6443: connect: connection refused" node="ci-4372.0.0-a-c1262e9e80"
Jun 21 04:45:15.822891 containerd[1744]: time="2025-06-21T04:45:15.822872559Z" level=info msg="StartContainer for \"74f246a8f7a840f7db584fab1441fb1c5ef4e4a29235933b58ef9c55357af3ea\" returns successfully"
Jun 21 04:45:15.858535 containerd[1744]: time="2025-06-21T04:45:15.858479065Z" level=info msg="StartContainer for \"23a48b46d98013ebb7e15a5e5d13a7319244ba184cfd6a159682bee0832ea270\" returns successfully"
Jun 21 04:45:16.092765 kubelet[2755]: E0621 04:45:16.092716 2755 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372.0.0-a-c1262e9e80\" not found" node="ci-4372.0.0-a-c1262e9e80"
Jun 21 04:45:16.105346 kubelet[2755]: E0621 04:45:16.105236 2755 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372.0.0-a-c1262e9e80\" not found" node="ci-4372.0.0-a-c1262e9e80"
Jun 21 04:45:16.110470 kubelet[2755]: E0621 04:45:16.110348 2755 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372.0.0-a-c1262e9e80\" not found" node="ci-4372.0.0-a-c1262e9e80"
Jun 21 04:45:16.623829 kubelet[2755]: I0621 04:45:16.623391 2755 kubelet_node_status.go:75] "Attempting to register node" node="ci-4372.0.0-a-c1262e9e80"
Jun 21 04:45:17.104467 kubelet[2755]: E0621 04:45:17.104429 2755 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4372.0.0-a-c1262e9e80\" not found" node="ci-4372.0.0-a-c1262e9e80"
Jun 21 04:45:17.110395 kubelet[2755]: E0621 04:45:17.110347 2755 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372.0.0-a-c1262e9e80\" not found" node="ci-4372.0.0-a-c1262e9e80"
Jun 21 04:45:17.111516 kubelet[2755]: E0621 04:45:17.111500 2755 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372.0.0-a-c1262e9e80\" not found" node="ci-4372.0.0-a-c1262e9e80"
Jun 21 04:45:17.173604 kubelet[2755]: I0621 04:45:17.173546 2755 kubelet_node_status.go:78] "Successfully registered node" node="ci-4372.0.0-a-c1262e9e80"
Jun 21 04:45:17.173604 kubelet[2755]: E0621 04:45:17.173569 2755 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4372.0.0-a-c1262e9e80\": node \"ci-4372.0.0-a-c1262e9e80\" not found"
Jun 21 04:45:17.254844 kubelet[2755]: I0621 04:45:17.254817 2755 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4372.0.0-a-c1262e9e80"
Jun 21 04:45:17.267614 kubelet[2755]: E0621 04:45:17.267551 2755 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4372.0.0-a-c1262e9e80\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4372.0.0-a-c1262e9e80"
Jun 21 04:45:17.267614 kubelet[2755]: I0621 04:45:17.267572 2755 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4372.0.0-a-c1262e9e80"
Jun 21 04:45:17.270742 kubelet[2755]: E0621 04:45:17.269056 2755 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4372.0.0-a-c1262e9e80\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4372.0.0-a-c1262e9e80"
Jun 21 04:45:17.270742 kubelet[2755]: I0621 04:45:17.269078 2755 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4372.0.0-a-c1262e9e80"
Jun 21 04:45:17.270975 kubelet[2755]: E0621 04:45:17.270951 2755 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4372.0.0-a-c1262e9e80\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4372.0.0-a-c1262e9e80"
Jun 21 04:45:18.037892 kubelet[2755]: I0621 04:45:18.037847 2755 apiserver.go:52] "Watching apiserver"
Jun 21 04:45:18.055149 kubelet[2755]: I0621 04:45:18.055118 2755 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Jun 21 04:45:18.759494 kubelet[2755]: I0621 04:45:18.759469 2755 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4372.0.0-a-c1262e9e80"
Jun 21 04:45:18.765478 kubelet[2755]: W0621 04:45:18.765405 2755 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jun 21 04:45:19.260521 systemd[1]: Reload requested from client PID 3058 ('systemctl') (unit session-9.scope)...
Jun 21 04:45:19.260535 systemd[1]: Reloading...
Jun 21 04:45:19.328164 zram_generator::config[3103]: No configuration found.
Jun 21 04:45:20.174045 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jun 21 04:45:20.272067 systemd[1]: Reloading finished in 1011 ms.
Jun 21 04:45:20.291247 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jun 21 04:45:20.298904 systemd[1]: kubelet.service: Deactivated successfully.
Jun 21 04:45:20.299083 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jun 21 04:45:20.299124 systemd[1]: kubelet.service: Consumed 613ms CPU time, 128.4M memory peak.
Jun 21 04:45:20.300430 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jun 21 04:45:30.006334 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jun 21 04:45:30.014419 (kubelet)[3171]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jun 21 04:45:30.051769 kubelet[3171]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jun 21 04:45:30.051769 kubelet[3171]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jun 21 04:45:30.051769 kubelet[3171]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jun 21 04:45:30.052465 kubelet[3171]: I0621 04:45:30.052064 3171 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jun 21 04:45:30.061742 kubelet[3171]: I0621 04:45:30.061718 3171 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Jun 21 04:45:30.061742 kubelet[3171]: I0621 04:45:30.061737 3171 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jun 21 04:45:30.061944 kubelet[3171]: I0621 04:45:30.061932 3171 server.go:954] "Client rotation is on, will bootstrap in background"
Jun 21 04:45:30.064608 kubelet[3171]: I0621 04:45:30.064515 3171 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jun 21 04:45:30.066290 kubelet[3171]: I0621 04:45:30.066271 3171 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jun 21 04:45:30.068856 kubelet[3171]: I0621 04:45:30.068841 3171 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jun 21 04:45:30.070859 kubelet[3171]: I0621 04:45:30.070847 3171 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jun 21 04:45:30.071071 kubelet[3171]: I0621 04:45:30.071051 3171 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jun 21 04:45:30.071230 kubelet[3171]: I0621 04:45:30.071104 3171 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4372.0.0-a-c1262e9e80","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jun 21 04:45:30.071329 kubelet[3171]: I0621 04:45:30.071323 3171 topology_manager.go:138] "Creating topology manager with none policy"
Jun 21 04:45:30.071354 kubelet[3171]: I0621 04:45:30.071351 3171 container_manager_linux.go:304] "Creating device plugin manager"
Jun 21 04:45:30.071404 kubelet[3171]: I0621 04:45:30.071401 3171 state_mem.go:36] "Initialized new in-memory state store"
Jun 21 04:45:30.071521 kubelet[3171]: I0621 04:45:30.071516 3171 kubelet.go:446] "Attempting to sync node with API server"
Jun 21 04:45:30.071567 kubelet[3171]: I0621 04:45:30.071562 3171 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Jun 21 04:45:30.071609 kubelet[3171]: I0621 04:45:30.071606 3171 kubelet.go:352] "Adding apiserver pod source"
Jun 21 04:45:30.071643 kubelet[3171]: I0621 04:45:30.071639 3171 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jun 21 04:45:30.072751 kubelet[3171]: I0621 04:45:30.072732 3171 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1"
Jun 21 04:45:30.073100 kubelet[3171]: I0621 04:45:30.073088 3171 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jun 21 04:45:30.074461 kubelet[3171]: I0621 04:45:30.074445 3171 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Jun 21 04:45:30.074524 kubelet[3171]: I0621 04:45:30.074473 3171 server.go:1287] "Started kubelet"
Jun 21 04:45:30.083659 kubelet[3171]: I0621 04:45:30.083636 3171 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jun 21 04:45:30.098181 kubelet[3171]: I0621 04:45:30.096275 3171 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Jun 21 04:45:30.098282 kubelet[3171]: I0621 04:45:30.098267 3171 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jun 21 04:45:30.101204 kubelet[3171]: I0621 04:45:30.100700 3171 server.go:479] "Adding debug handlers to kubelet server"
Jun 21 04:45:30.101355 kubelet[3171]: I0621 04:45:30.101340 3171 volume_manager.go:297] "Starting Kubelet Volume Manager"
Jun 21 04:45:30.102152 kubelet[3171]: E0621 04:45:30.101571 3171 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4372.0.0-a-c1262e9e80\" not found"
Jun 21 04:45:30.103975 kubelet[3171]: I0621 04:45:30.103856 3171 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jun 21 04:45:30.104853 kubelet[3171]: I0621 04:45:30.104278 3171 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jun 21 04:45:30.107152 kubelet[3171]: I0621 04:45:30.106801 3171 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Jun 21 04:45:30.107152 kubelet[3171]: I0621 04:45:30.106890 3171 reconciler.go:26] "Reconciler: start to sync state"
Jun 21 04:45:30.111161 kubelet[3171]: I0621 04:45:30.110161 3171 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jun 21 04:45:30.112206 kubelet[3171]: I0621 04:45:30.112187 3171 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jun 21 04:45:30.112263 kubelet[3171]: I0621 04:45:30.112213 3171 status_manager.go:227] "Starting to sync pod status with apiserver"
Jun 21 04:45:30.112263 kubelet[3171]: I0621 04:45:30.112233 3171 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Jun 21 04:45:30.112263 kubelet[3171]: I0621 04:45:30.112239 3171 kubelet.go:2382] "Starting kubelet main sync loop"
Jun 21 04:45:30.112356 kubelet[3171]: E0621 04:45:30.112273 3171 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jun 21 04:45:30.120240 kubelet[3171]: E0621 04:45:30.120223 3171 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jun 21 04:45:30.120797 kubelet[3171]: I0621 04:45:30.120542 3171 factory.go:221] Registration of the containerd container factory successfully
Jun 21 04:45:30.121154 kubelet[3171]: I0621 04:45:30.120947 3171 factory.go:221] Registration of the systemd container factory successfully
Jun 21 04:45:30.121287 kubelet[3171]: I0621 04:45:30.121272 3171 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jun 21 04:45:30.187389 kubelet[3171]: I0621 04:45:30.187375 3171 cpu_manager.go:221] "Starting CPU manager" policy="none"
Jun 21 04:45:30.187389 kubelet[3171]: I0621 04:45:30.187388 3171 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Jun 21 04:45:30.187473 kubelet[3171]: I0621 04:45:30.187402 3171 state_mem.go:36] "Initialized new in-memory state store"
Jun 21 04:45:30.187535 kubelet[3171]: I0621 04:45:30.187524 3171 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Jun 21 04:45:30.187558 kubelet[3171]: I0621 04:45:30.187537 3171 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Jun 21 04:45:30.187558 kubelet[3171]: I0621 04:45:30.187552 3171 policy_none.go:49] "None policy: Start"
Jun 21 04:45:30.187596 kubelet[3171]: I0621 04:45:30.187562 3171 memory_manager.go:186] "Starting memorymanager" policy="None"
Jun 21 04:45:30.187596 kubelet[3171]: I0621 04:45:30.187570 3171 state_mem.go:35] "Initializing new in-memory state store"
Jun 21 04:45:30.187663 kubelet[3171]: I0621 04:45:30.187656 3171 state_mem.go:75] "Updated machine memory state"
Jun 21 04:45:30.199722 kubelet[3171]: I0621 04:45:30.199699 3171 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jun 21 04:45:30.199848 kubelet[3171]: I0621 04:45:30.199837 3171 eviction_manager.go:189] "Eviction manager: starting control loop"
Jun 21 04:45:30.199879 kubelet[3171]: I0621 04:45:30.199851 3171 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jun 21 04:45:30.201636 kubelet[3171]: I0621 04:45:30.201620 3171 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jun 21 04:45:30.212628 kubelet[3171]: E0621 04:45:30.212610 3171 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Jun 21 04:45:30.216276 kubelet[3171]: I0621 04:45:30.216250 3171 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4372.0.0-a-c1262e9e80"
Jun 21 04:45:30.216609 kubelet[3171]: I0621 04:45:30.216588 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d3af57be931ce4c1c4e797371575addb-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4372.0.0-a-c1262e9e80\" (UID: \"d3af57be931ce4c1c4e797371575addb\") " pod="kube-system/kube-controller-manager-ci-4372.0.0-a-c1262e9e80"
Jun 21 04:45:30.216663 kubelet[3171]: I0621 04:45:30.216618 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9825c18ab9809cf781f659f4aa04cc1e-kubeconfig\") pod \"kube-scheduler-ci-4372.0.0-a-c1262e9e80\" (UID: \"9825c18ab9809cf781f659f4aa04cc1e\") " pod="kube-system/kube-scheduler-ci-4372.0.0-a-c1262e9e80"
Jun 21 04:45:30.216663 kubelet[3171]: I0621 04:45:30.216637 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/74c8c97e9a3948a22a34280cc11d2406-ca-certs\") pod \"kube-apiserver-ci-4372.0.0-a-c1262e9e80\" (UID: \"74c8c97e9a3948a22a34280cc11d2406\") " pod="kube-system/kube-apiserver-ci-4372.0.0-a-c1262e9e80"
Jun 21 04:45:30.216663 kubelet[3171]: I0621 04:45:30.216653 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d3af57be931ce4c1c4e797371575addb-ca-certs\") pod \"kube-controller-manager-ci-4372.0.0-a-c1262e9e80\" (UID: \"d3af57be931ce4c1c4e797371575addb\") " pod="kube-system/kube-controller-manager-ci-4372.0.0-a-c1262e9e80"
Jun 21 04:45:30.216733 kubelet[3171]: I0621 04:45:30.216673 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d3af57be931ce4c1c4e797371575addb-k8s-certs\") pod \"kube-controller-manager-ci-4372.0.0-a-c1262e9e80\" (UID: \"d3af57be931ce4c1c4e797371575addb\") " pod="kube-system/kube-controller-manager-ci-4372.0.0-a-c1262e9e80"
Jun 21 04:45:30.216733 kubelet[3171]: I0621 04:45:30.216688 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d3af57be931ce4c1c4e797371575addb-kubeconfig\") pod \"kube-controller-manager-ci-4372.0.0-a-c1262e9e80\" (UID: \"d3af57be931ce4c1c4e797371575addb\") " pod="kube-system/kube-controller-manager-ci-4372.0.0-a-c1262e9e80"
Jun 21 04:45:30.216733 kubelet[3171]: I0621 04:45:30.216704 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/74c8c97e9a3948a22a34280cc11d2406-k8s-certs\") pod \"kube-apiserver-ci-4372.0.0-a-c1262e9e80\" (UID: \"74c8c97e9a3948a22a34280cc11d2406\") " pod="kube-system/kube-apiserver-ci-4372.0.0-a-c1262e9e80"
Jun 21 04:45:30.216733 kubelet[3171]: I0621 04:45:30.216721 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/74c8c97e9a3948a22a34280cc11d2406-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4372.0.0-a-c1262e9e80\" (UID: \"74c8c97e9a3948a22a34280cc11d2406\") " pod="kube-system/kube-apiserver-ci-4372.0.0-a-c1262e9e80"
Jun 21 04:45:30.216820 kubelet[3171]: I0621 04:45:30.216737 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d3af57be931ce4c1c4e797371575addb-flexvolume-dir\") pod \"kube-controller-manager-ci-4372.0.0-a-c1262e9e80\" (UID: \"d3af57be931ce4c1c4e797371575addb\") " pod="kube-system/kube-controller-manager-ci-4372.0.0-a-c1262e9e80"
Jun 21 04:45:30.216901 kubelet[3171]: I0621 04:45:30.216889 3171 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4372.0.0-a-c1262e9e80"
Jun 21 04:45:30.217130 kubelet[3171]: I0621 04:45:30.217114 3171 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4372.0.0-a-c1262e9e80"
Jun 21 04:45:30.230445 kubelet[3171]: W0621 04:45:30.229421 3171 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jun 21 04:45:30.232901 kubelet[3171]: W0621 04:45:30.232882 3171 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jun 21 04:45:30.240407 kubelet[3171]: W0621 04:45:30.240393 3171 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jun 21 04:45:30.240527 kubelet[3171]: E0621 04:45:30.240516 3171 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4372.0.0-a-c1262e9e80\" already exists" pod="kube-system/kube-apiserver-ci-4372.0.0-a-c1262e9e80"
Jun 21 04:45:30.302477 kubelet[3171]: I0621 04:45:30.302391 3171 kubelet_node_status.go:75] "Attempting to register node" node="ci-4372.0.0-a-c1262e9e80"
Jun 21 04:45:30.314993 kubelet[3171]: I0621 04:45:30.314976 3171 kubelet_node_status.go:124] "Node was previously registered" node="ci-4372.0.0-a-c1262e9e80"
Jun 21 04:45:30.315070 kubelet[3171]: I0621 04:45:30.315024 3171 kubelet_node_status.go:78] "Successfully registered node" node="ci-4372.0.0-a-c1262e9e80"
Jun 21 04:45:30.315070 kubelet[3171]: I0621 04:45:30.315044 3171 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Jun 21 04:45:30.315336 containerd[1744]: time="2025-06-21T04:45:30.315309150Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jun 21 04:45:30.315703 kubelet[3171]: I0621 04:45:30.315465 3171 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jun 21 04:45:31.077980 kubelet[3171]: I0621 04:45:31.077394 3171 apiserver.go:52] "Watching apiserver"
Jun 21 04:45:31.089547 systemd[1]: Created slice kubepods-besteffort-podc7f9716a_d53f_4fa9_af0b_2fb52cad1e2c.slice - libcontainer container kubepods-besteffort-podc7f9716a_d53f_4fa9_af0b_2fb52cad1e2c.slice.
Jun 21 04:45:31.098276 kubelet[3171]: I0621 04:45:31.098208 3171 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4372.0.0-a-c1262e9e80" podStartSLOduration=1.098196531 podStartE2EDuration="1.098196531s" podCreationTimestamp="2025-06-21 04:45:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-21 04:45:31.098114566 +0000 UTC m=+1.080989422" watchObservedRunningTime="2025-06-21 04:45:31.098196531 +0000 UTC m=+1.081071374"
Jun 21 04:45:31.107474 kubelet[3171]: I0621 04:45:31.107449 3171 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Jun 21 04:45:31.122246 kubelet[3171]: I0621 04:45:31.122063 3171 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4372.0.0-a-c1262e9e80" podStartSLOduration=13.122048607 podStartE2EDuration="13.122048607s" podCreationTimestamp="2025-06-21 04:45:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-21 04:45:31.111875892 +0000 UTC m=+1.094750743" watchObservedRunningTime="2025-06-21 04:45:31.122048607 +0000 UTC m=+1.104923480"
Jun 21 04:45:31.122246 kubelet[3171]: I0621 04:45:31.122152 3171 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4372.0.0-a-c1262e9e80" podStartSLOduration=1.122145282 podStartE2EDuration="1.122145282s" podCreationTimestamp="2025-06-21 04:45:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-21 04:45:31.121770325 +0000 UTC m=+1.104645179" watchObservedRunningTime="2025-06-21 04:45:31.122145282 +0000 UTC m=+1.105020138"
Jun 21 04:45:31.122722 kubelet[3171]: I0621 04:45:31.122698 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c7f9716a-d53f-4fa9-af0b-2fb52cad1e2c-xtables-lock\") pod \"kube-proxy-9s2qv\" (UID: \"c7f9716a-d53f-4fa9-af0b-2fb52cad1e2c\") " pod="kube-system/kube-proxy-9s2qv"
Jun 21 04:45:31.122778 kubelet[3171]: I0621 04:45:31.122731 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c7f9716a-d53f-4fa9-af0b-2fb52cad1e2c-lib-modules\") pod \"kube-proxy-9s2qv\" (UID: \"c7f9716a-d53f-4fa9-af0b-2fb52cad1e2c\") " pod="kube-system/kube-proxy-9s2qv"
Jun 21 04:45:31.122778 kubelet[3171]: I0621 04:45:31.122747 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c7f9716a-d53f-4fa9-af0b-2fb52cad1e2c-kube-proxy\") pod \"kube-proxy-9s2qv\" (UID: \"c7f9716a-d53f-4fa9-af0b-2fb52cad1e2c\") " pod="kube-system/kube-proxy-9s2qv"
Jun 21 04:45:31.122778 kubelet[3171]: I0621 04:45:31.122765 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhgzf\" (UniqueName: \"kubernetes.io/projected/c7f9716a-d53f-4fa9-af0b-2fb52cad1e2c-kube-api-access-nhgzf\") pod \"kube-proxy-9s2qv\" (UID: \"c7f9716a-d53f-4fa9-af0b-2fb52cad1e2c\") " pod="kube-system/kube-proxy-9s2qv"
Jun 21 04:45:31.396651 containerd[1744]: time="2025-06-21T04:45:31.396606465Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9s2qv,Uid:c7f9716a-d53f-4fa9-af0b-2fb52cad1e2c,Namespace:kube-system,Attempt:0,}"
Jun 21 04:45:31.468433 containerd[1744]: time="2025-06-21T04:45:31.468260513Z" level=info msg="connecting to shim b3ae73fac605c7a0d6276c523f16f5a420dd1eb34ac9ac5fcd86674bf7b740a6" address="unix:///run/containerd/s/5fafa998500880675477a926d2034e26f1ba54199940f735ecb5839421718b53" namespace=k8s.io protocol=ttrpc version=3
Jun 21 04:45:31.497426 systemd[1]: Started cri-containerd-b3ae73fac605c7a0d6276c523f16f5a420dd1eb34ac9ac5fcd86674bf7b740a6.scope - libcontainer container b3ae73fac605c7a0d6276c523f16f5a420dd1eb34ac9ac5fcd86674bf7b740a6.
Jun 21 04:45:31.524333 containerd[1744]: time="2025-06-21T04:45:31.524307135Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9s2qv,Uid:c7f9716a-d53f-4fa9-af0b-2fb52cad1e2c,Namespace:kube-system,Attempt:0,} returns sandbox id \"b3ae73fac605c7a0d6276c523f16f5a420dd1eb34ac9ac5fcd86674bf7b740a6\""
Jun 21 04:45:31.526954 containerd[1744]: time="2025-06-21T04:45:31.526923768Z" level=info msg="CreateContainer within sandbox \"b3ae73fac605c7a0d6276c523f16f5a420dd1eb34ac9ac5fcd86674bf7b740a6\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jun 21 04:45:31.671686 kubelet[3171]: W0621 04:45:31.671201 3171 reflector.go:569] object-"tigera-operator"/"kubernetes-services-endpoint": failed to list *v1.ConfigMap: configmaps "kubernetes-services-endpoint" is forbidden: User "system:node:ci-4372.0.0-a-c1262e9e80" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node 'ci-4372.0.0-a-c1262e9e80' and this object
Jun 21 04:45:31.671686 kubelet[3171]: E0621 04:45:31.671240 3171 reflector.go:166] "Unhandled Error" err="object-\"tigera-operator\"/\"kubernetes-services-endpoint\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kubernetes-services-endpoint\" is forbidden: User \"system:node:ci-4372.0.0-a-c1262e9e80\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"tigera-operator\": no relationship found between node 'ci-4372.0.0-a-c1262e9e80' and this object" logger="UnhandledError"
Jun 21 04:45:31.672048 systemd[1]: Created slice kubepods-besteffort-pod584e1ec4_aa37_49ff_9422_37342ffbd605.slice - libcontainer container kubepods-besteffort-pod584e1ec4_aa37_49ff_9422_37342ffbd605.slice.
Jun 21 04:45:31.672460 kubelet[3171]: I0621 04:45:31.672350 3171 status_manager.go:890] "Failed to get status for pod" podUID="584e1ec4-aa37-49ff-9422-37342ffbd605" pod="tigera-operator/tigera-operator-68f7c7984d-2d954" err="pods \"tigera-operator-68f7c7984d-2d954\" is forbidden: User \"system:node:ci-4372.0.0-a-c1262e9e80\" cannot get resource \"pods\" in API group \"\" in the namespace \"tigera-operator\": no relationship found between node 'ci-4372.0.0-a-c1262e9e80' and this object"
Jun 21 04:45:31.696710 containerd[1744]: time="2025-06-21T04:45:31.696684968Z" level=info msg="Container b0c970e0789b4b58a3e871dcc4d2fd618b301fddacbb94c1cfc734ee54d1108a: CDI devices from CRI Config.CDIDevices: []"
Jun 21 04:45:31.703594 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount111563045.mount: Deactivated successfully.
Jun 21 04:45:31.725806 kubelet[3171]: I0621 04:45:31.725782 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/584e1ec4-aa37-49ff-9422-37342ffbd605-var-lib-calico\") pod \"tigera-operator-68f7c7984d-2d954\" (UID: \"584e1ec4-aa37-49ff-9422-37342ffbd605\") " pod="tigera-operator/tigera-operator-68f7c7984d-2d954"
Jun 21 04:45:31.725877 kubelet[3171]: I0621 04:45:31.725812 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4db49\" (UniqueName: \"kubernetes.io/projected/584e1ec4-aa37-49ff-9422-37342ffbd605-kube-api-access-4db49\") pod \"tigera-operator-68f7c7984d-2d954\" (UID: \"584e1ec4-aa37-49ff-9422-37342ffbd605\") " pod="tigera-operator/tigera-operator-68f7c7984d-2d954"
Jun 21 04:45:31.849408 containerd[1744]: time="2025-06-21T04:45:31.849382347Z" level=info msg="CreateContainer within sandbox \"b3ae73fac605c7a0d6276c523f16f5a420dd1eb34ac9ac5fcd86674bf7b740a6\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"b0c970e0789b4b58a3e871dcc4d2fd618b301fddacbb94c1cfc734ee54d1108a\""
Jun 21 04:45:31.849836 containerd[1744]: time="2025-06-21T04:45:31.849811935Z" level=info msg="StartContainer for \"b0c970e0789b4b58a3e871dcc4d2fd618b301fddacbb94c1cfc734ee54d1108a\""
Jun 21 04:45:31.851361 containerd[1744]: time="2025-06-21T04:45:31.851325888Z" level=info msg="connecting to shim b0c970e0789b4b58a3e871dcc4d2fd618b301fddacbb94c1cfc734ee54d1108a" address="unix:///run/containerd/s/5fafa998500880675477a926d2034e26f1ba54199940f735ecb5839421718b53" protocol=ttrpc version=3
Jun 21 04:45:31.870299 systemd[1]: Started cri-containerd-b0c970e0789b4b58a3e871dcc4d2fd618b301fddacbb94c1cfc734ee54d1108a.scope - libcontainer container b0c970e0789b4b58a3e871dcc4d2fd618b301fddacbb94c1cfc734ee54d1108a.
Jun 21 04:45:31.897937 containerd[1744]: time="2025-06-21T04:45:31.897900526Z" level=info msg="StartContainer for \"b0c970e0789b4b58a3e871dcc4d2fd618b301fddacbb94c1cfc734ee54d1108a\" returns successfully"
Jun 21 04:45:31.976673 containerd[1744]: time="2025-06-21T04:45:31.976549287Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-68f7c7984d-2d954,Uid:584e1ec4-aa37-49ff-9422-37342ffbd605,Namespace:tigera-operator,Attempt:0,}"
Jun 21 04:45:32.917901 containerd[1744]: time="2025-06-21T04:45:32.917866665Z" level=info msg="connecting to shim 173591fc1fc14ea72e9882086c88ed5942218be2046966fbced4ec7d3a626ee6" address="unix:///run/containerd/s/be676f9070699e13cdfeeb9eb5c6747b594e899769912a9932b26be36924c2df" namespace=k8s.io protocol=ttrpc version=3
Jun 21 04:45:32.941278 systemd[1]: Started cri-containerd-173591fc1fc14ea72e9882086c88ed5942218be2046966fbced4ec7d3a626ee6.scope - libcontainer container 173591fc1fc14ea72e9882086c88ed5942218be2046966fbced4ec7d3a626ee6.
Jun 21 04:45:32.989079 containerd[1744]: time="2025-06-21T04:45:32.989032435Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-68f7c7984d-2d954,Uid:584e1ec4-aa37-49ff-9422-37342ffbd605,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"173591fc1fc14ea72e9882086c88ed5942218be2046966fbced4ec7d3a626ee6\""
Jun 21 04:45:32.991108 containerd[1744]: time="2025-06-21T04:45:32.991050354Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.1\""
Jun 21 04:45:33.172800 kubelet[3171]: I0621 04:45:33.172711 3171 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-9s2qv" podStartSLOduration=3.17269531 podStartE2EDuration="3.17269531s" podCreationTimestamp="2025-06-21 04:45:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-21 04:45:32.883358388 +0000 UTC m=+2.866233244" watchObservedRunningTime="2025-06-21 04:45:33.17269531 +0000 UTC m=+3.155570163"
Jun 21 04:45:34.287816 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2105900808.mount: Deactivated successfully.
Jun 21 04:45:34.665894 containerd[1744]: time="2025-06-21T04:45:34.665857485Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 04:45:34.668065 containerd[1744]: time="2025-06-21T04:45:34.668031193Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.1: active requests=0, bytes read=25059858" Jun 21 04:45:34.670180 containerd[1744]: time="2025-06-21T04:45:34.670157569Z" level=info msg="ImageCreate event name:\"sha256:9fe1a04a0e6c440395d63018f1a72bb1ed07d81ed81be41e9b8adcc35a64164c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 04:45:34.673617 containerd[1744]: time="2025-06-21T04:45:34.673578247Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:a2a468d1ac1b6a7049c1c2505cd933461fcadb127b5c3f98f03bd8e402bce456\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 04:45:34.674402 containerd[1744]: time="2025-06-21T04:45:34.674100207Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.1\" with image id \"sha256:9fe1a04a0e6c440395d63018f1a72bb1ed07d81ed81be41e9b8adcc35a64164c\", repo tag \"quay.io/tigera/operator:v1.38.1\", repo digest \"quay.io/tigera/operator@sha256:a2a468d1ac1b6a7049c1c2505cd933461fcadb127b5c3f98f03bd8e402bce456\", size \"25055853\" in 1.682769247s" Jun 21 04:45:34.674402 containerd[1744]: time="2025-06-21T04:45:34.674125804Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.1\" returns image reference \"sha256:9fe1a04a0e6c440395d63018f1a72bb1ed07d81ed81be41e9b8adcc35a64164c\"" Jun 21 04:45:34.675826 containerd[1744]: time="2025-06-21T04:45:34.675801894Z" level=info msg="CreateContainer within sandbox \"173591fc1fc14ea72e9882086c88ed5942218be2046966fbced4ec7d3a626ee6\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jun 21 04:45:34.693153 containerd[1744]: time="2025-06-21T04:45:34.691429244Z" level=info msg="Container 
393922021b987a2c0bc5b13e00053a5f2e2c44f8e528fd4b06892aa6ee7651c8: CDI devices from CRI Config.CDIDevices: []" Jun 21 04:45:34.703781 containerd[1744]: time="2025-06-21T04:45:34.703760703Z" level=info msg="CreateContainer within sandbox \"173591fc1fc14ea72e9882086c88ed5942218be2046966fbced4ec7d3a626ee6\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"393922021b987a2c0bc5b13e00053a5f2e2c44f8e528fd4b06892aa6ee7651c8\"" Jun 21 04:45:34.704066 containerd[1744]: time="2025-06-21T04:45:34.704025895Z" level=info msg="StartContainer for \"393922021b987a2c0bc5b13e00053a5f2e2c44f8e528fd4b06892aa6ee7651c8\"" Jun 21 04:45:34.704956 containerd[1744]: time="2025-06-21T04:45:34.704875310Z" level=info msg="connecting to shim 393922021b987a2c0bc5b13e00053a5f2e2c44f8e528fd4b06892aa6ee7651c8" address="unix:///run/containerd/s/be676f9070699e13cdfeeb9eb5c6747b594e899769912a9932b26be36924c2df" protocol=ttrpc version=3 Jun 21 04:45:34.720363 systemd[1]: Started cri-containerd-393922021b987a2c0bc5b13e00053a5f2e2c44f8e528fd4b06892aa6ee7651c8.scope - libcontainer container 393922021b987a2c0bc5b13e00053a5f2e2c44f8e528fd4b06892aa6ee7651c8. 
Jun 21 04:45:34.741633 containerd[1744]: time="2025-06-21T04:45:34.741586121Z" level=info msg="StartContainer for \"393922021b987a2c0bc5b13e00053a5f2e2c44f8e528fd4b06892aa6ee7651c8\" returns successfully" Jun 21 04:45:35.481286 kubelet[3171]: I0621 04:45:35.481239 3171 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-68f7c7984d-2d954" podStartSLOduration=2.796588272 podStartE2EDuration="4.481221694s" podCreationTimestamp="2025-06-21 04:45:31 +0000 UTC" firstStartedPulling="2025-06-21 04:45:32.990035487 +0000 UTC m=+2.972910343" lastFinishedPulling="2025-06-21 04:45:34.674668907 +0000 UTC m=+4.657543765" observedRunningTime="2025-06-21 04:45:35.181811 +0000 UTC m=+5.164685879" watchObservedRunningTime="2025-06-21 04:45:35.481221694 +0000 UTC m=+5.464096622" Jun 21 04:45:40.101446 sudo[2185]: pam_unix(sudo:session): session closed for user root Jun 21 04:45:40.200842 sshd[2184]: Connection closed by 10.200.16.10 port 59662 Jun 21 04:45:40.201307 sshd-session[2182]: pam_unix(sshd:session): session closed for user core Jun 21 04:45:40.206090 systemd[1]: sshd@6-10.200.8.43:22-10.200.16.10:59662.service: Deactivated successfully. Jun 21 04:45:40.207926 systemd[1]: session-9.scope: Deactivated successfully. Jun 21 04:45:40.209932 systemd[1]: session-9.scope: Consumed 3.194s CPU time, 223M memory peak. Jun 21 04:45:40.212653 systemd-logind[1719]: Session 9 logged out. Waiting for processes to exit. Jun 21 04:45:40.215775 systemd-logind[1719]: Removed session 9. Jun 21 04:45:42.970949 systemd[1]: Created slice kubepods-besteffort-pod15666720_19a5_419c_b83b_123f59ff7228.slice - libcontainer container kubepods-besteffort-pod15666720_19a5_419c_b83b_123f59ff7228.slice. 
Jun 21 04:45:42.991412 kubelet[3171]: I0621 04:45:42.991360 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/15666720-19a5-419c-b83b-123f59ff7228-typha-certs\") pod \"calico-typha-75cd7dd87b-s5ztc\" (UID: \"15666720-19a5-419c-b83b-123f59ff7228\") " pod="calico-system/calico-typha-75cd7dd87b-s5ztc" Jun 21 04:45:42.991877 kubelet[3171]: I0621 04:45:42.991611 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/15666720-19a5-419c-b83b-123f59ff7228-tigera-ca-bundle\") pod \"calico-typha-75cd7dd87b-s5ztc\" (UID: \"15666720-19a5-419c-b83b-123f59ff7228\") " pod="calico-system/calico-typha-75cd7dd87b-s5ztc" Jun 21 04:45:42.992195 kubelet[3171]: I0621 04:45:42.992088 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxhbm\" (UniqueName: \"kubernetes.io/projected/15666720-19a5-419c-b83b-123f59ff7228-kube-api-access-fxhbm\") pod \"calico-typha-75cd7dd87b-s5ztc\" (UID: \"15666720-19a5-419c-b83b-123f59ff7228\") " pod="calico-system/calico-typha-75cd7dd87b-s5ztc" Jun 21 04:45:43.256070 systemd[1]: Created slice kubepods-besteffort-pod81a56cce_38e3_4a40_84c1_a5c98865865d.slice - libcontainer container kubepods-besteffort-pod81a56cce_38e3_4a40_84c1_a5c98865865d.slice. 
Jun 21 04:45:43.289016 containerd[1744]: time="2025-06-21T04:45:43.288971739Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-75cd7dd87b-s5ztc,Uid:15666720-19a5-419c-b83b-123f59ff7228,Namespace:calico-system,Attempt:0,}" Jun 21 04:45:43.294122 kubelet[3171]: I0621 04:45:43.294098 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/81a56cce-38e3-4a40-84c1-a5c98865865d-cni-net-dir\") pod \"calico-node-h574b\" (UID: \"81a56cce-38e3-4a40-84c1-a5c98865865d\") " pod="calico-system/calico-node-h574b" Jun 21 04:45:43.294264 kubelet[3171]: I0621 04:45:43.294129 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/81a56cce-38e3-4a40-84c1-a5c98865865d-node-certs\") pod \"calico-node-h574b\" (UID: \"81a56cce-38e3-4a40-84c1-a5c98865865d\") " pod="calico-system/calico-node-h574b" Jun 21 04:45:43.294264 kubelet[3171]: I0621 04:45:43.294155 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/81a56cce-38e3-4a40-84c1-a5c98865865d-xtables-lock\") pod \"calico-node-h574b\" (UID: \"81a56cce-38e3-4a40-84c1-a5c98865865d\") " pod="calico-system/calico-node-h574b" Jun 21 04:45:43.294264 kubelet[3171]: I0621 04:45:43.294171 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/81a56cce-38e3-4a40-84c1-a5c98865865d-lib-modules\") pod \"calico-node-h574b\" (UID: \"81a56cce-38e3-4a40-84c1-a5c98865865d\") " pod="calico-system/calico-node-h574b" Jun 21 04:45:43.294264 kubelet[3171]: I0621 04:45:43.294185 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: 
\"kubernetes.io/host-path/81a56cce-38e3-4a40-84c1-a5c98865865d-var-run-calico\") pod \"calico-node-h574b\" (UID: \"81a56cce-38e3-4a40-84c1-a5c98865865d\") " pod="calico-system/calico-node-h574b" Jun 21 04:45:43.294264 kubelet[3171]: I0621 04:45:43.294201 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4ssh\" (UniqueName: \"kubernetes.io/projected/81a56cce-38e3-4a40-84c1-a5c98865865d-kube-api-access-d4ssh\") pod \"calico-node-h574b\" (UID: \"81a56cce-38e3-4a40-84c1-a5c98865865d\") " pod="calico-system/calico-node-h574b" Jun 21 04:45:43.294409 kubelet[3171]: I0621 04:45:43.294217 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/81a56cce-38e3-4a40-84c1-a5c98865865d-tigera-ca-bundle\") pod \"calico-node-h574b\" (UID: \"81a56cce-38e3-4a40-84c1-a5c98865865d\") " pod="calico-system/calico-node-h574b" Jun 21 04:45:43.294409 kubelet[3171]: I0621 04:45:43.294234 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/81a56cce-38e3-4a40-84c1-a5c98865865d-cni-log-dir\") pod \"calico-node-h574b\" (UID: \"81a56cce-38e3-4a40-84c1-a5c98865865d\") " pod="calico-system/calico-node-h574b" Jun 21 04:45:43.294409 kubelet[3171]: I0621 04:45:43.294266 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/81a56cce-38e3-4a40-84c1-a5c98865865d-cni-bin-dir\") pod \"calico-node-h574b\" (UID: \"81a56cce-38e3-4a40-84c1-a5c98865865d\") " pod="calico-system/calico-node-h574b" Jun 21 04:45:43.294409 kubelet[3171]: I0621 04:45:43.294283 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: 
\"kubernetes.io/host-path/81a56cce-38e3-4a40-84c1-a5c98865865d-policysync\") pod \"calico-node-h574b\" (UID: \"81a56cce-38e3-4a40-84c1-a5c98865865d\") " pod="calico-system/calico-node-h574b" Jun 21 04:45:43.294409 kubelet[3171]: I0621 04:45:43.294308 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/81a56cce-38e3-4a40-84c1-a5c98865865d-flexvol-driver-host\") pod \"calico-node-h574b\" (UID: \"81a56cce-38e3-4a40-84c1-a5c98865865d\") " pod="calico-system/calico-node-h574b" Jun 21 04:45:43.294478 kubelet[3171]: I0621 04:45:43.294331 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/81a56cce-38e3-4a40-84c1-a5c98865865d-var-lib-calico\") pod \"calico-node-h574b\" (UID: \"81a56cce-38e3-4a40-84c1-a5c98865865d\") " pod="calico-system/calico-node-h574b" Jun 21 04:45:43.328568 containerd[1744]: time="2025-06-21T04:45:43.328538109Z" level=info msg="connecting to shim 592cc408f22ff7aaed3bc72015c00bde245ab2dae18b971e57b0f79707d8b606" address="unix:///run/containerd/s/320bc6b93544d1c5c964260be5538098ce1829b117e981f8ec69b48a1fba376c" namespace=k8s.io protocol=ttrpc version=3 Jun 21 04:45:43.347265 systemd[1]: Started cri-containerd-592cc408f22ff7aaed3bc72015c00bde245ab2dae18b971e57b0f79707d8b606.scope - libcontainer container 592cc408f22ff7aaed3bc72015c00bde245ab2dae18b971e57b0f79707d8b606. 
Jun 21 04:45:43.380946 containerd[1744]: time="2025-06-21T04:45:43.380913175Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-75cd7dd87b-s5ztc,Uid:15666720-19a5-419c-b83b-123f59ff7228,Namespace:calico-system,Attempt:0,} returns sandbox id \"592cc408f22ff7aaed3bc72015c00bde245ab2dae18b971e57b0f79707d8b606\"" Jun 21 04:45:43.381976 containerd[1744]: time="2025-06-21T04:45:43.381906898Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.1\"" Jun 21 04:45:43.403831 kubelet[3171]: E0621 04:45:43.403793 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 04:45:43.403831 kubelet[3171]: W0621 04:45:43.403812 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 04:45:43.403921 kubelet[3171]: E0621 04:45:43.403886 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 04:45:43.410333 kubelet[3171]: E0621 04:45:43.410307 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 04:45:43.410333 kubelet[3171]: W0621 04:45:43.410333 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 04:45:43.410435 kubelet[3171]: E0621 04:45:43.410348 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 04:45:43.541950 kubelet[3171]: E0621 04:45:43.541344 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-575w9" podUID="f516b2d9-b3a6-47e2-926b-c9ca81ee80c8" Jun 21 04:45:43.560256 containerd[1744]: time="2025-06-21T04:45:43.560222538Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-h574b,Uid:81a56cce-38e3-4a40-84c1-a5c98865865d,Namespace:calico-system,Attempt:0,}" Jun 21 04:45:43.590928 kubelet[3171]: E0621 04:45:43.590914 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 04:45:43.590928 kubelet[3171]: W0621 04:45:43.590928 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 04:45:43.591050 kubelet[3171]: E0621 04:45:43.590941 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 04:45:43.591050 kubelet[3171]: E0621 04:45:43.591040 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 04:45:43.591050 kubelet[3171]: W0621 04:45:43.591045 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 04:45:43.591167 kubelet[3171]: E0621 04:45:43.591052 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 04:45:43.597317 kubelet[3171]: E0621 04:45:43.597301 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 04:45:43.597317 kubelet[3171]: W0621 04:45:43.597316 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 04:45:43.597497 kubelet[3171]: E0621 04:45:43.597329 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 04:45:43.597497 kubelet[3171]: I0621 04:45:43.597436 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/f516b2d9-b3a6-47e2-926b-c9ca81ee80c8-registration-dir\") pod \"csi-node-driver-575w9\" (UID: \"f516b2d9-b3a6-47e2-926b-c9ca81ee80c8\") " pod="calico-system/csi-node-driver-575w9" Jun 21 04:45:43.597867 kubelet[3171]: E0621 04:45:43.597711 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 04:45:43.597867 kubelet[3171]: W0621 04:45:43.597723 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 04:45:43.597867 kubelet[3171]: E0621 04:45:43.597807 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 04:45:43.598208 kubelet[3171]: E0621 04:45:43.598089 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 04:45:43.598208 kubelet[3171]: W0621 04:45:43.598100 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 04:45:43.598208 kubelet[3171]: E0621 04:45:43.598111 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 04:45:43.598208 kubelet[3171]: I0621 04:45:43.597827 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f516b2d9-b3a6-47e2-926b-c9ca81ee80c8-kubelet-dir\") pod \"csi-node-driver-575w9\" (UID: \"f516b2d9-b3a6-47e2-926b-c9ca81ee80c8\") " pod="calico-system/csi-node-driver-575w9" Jun 21 04:45:43.598896 kubelet[3171]: E0621 04:45:43.598691 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 04:45:43.598896 kubelet[3171]: W0621 04:45:43.598700 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 04:45:43.598896 kubelet[3171]: E0621 04:45:43.598711 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 04:45:43.602238 kubelet[3171]: I0621 04:45:43.602122 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/f516b2d9-b3a6-47e2-926b-c9ca81ee80c8-varrun\") pod \"csi-node-driver-575w9\" (UID: \"f516b2d9-b3a6-47e2-926b-c9ca81ee80c8\") " pod="calico-system/csi-node-driver-575w9" Jun 21 04:45:43.602893 kubelet[3171]: I0621 04:45:43.602573 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/f516b2d9-b3a6-47e2-926b-c9ca81ee80c8-socket-dir\") pod \"csi-node-driver-575w9\" (UID: \"f516b2d9-b3a6-47e2-926b-c9ca81ee80c8\") " pod="calico-system/csi-node-driver-575w9" Jun 21 04:45:43.603123 kubelet[3171]: I0621 04:45:43.602963 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhgkc\" (UniqueName: \"kubernetes.io/projected/f516b2d9-b3a6-47e2-926b-c9ca81ee80c8-kube-api-access-jhgkc\") pod \"csi-node-driver-575w9\" (UID: \"f516b2d9-b3a6-47e2-926b-c9ca81ee80c8\") " pod="calico-system/csi-node-driver-575w9" Jun 21 04:45:43.606085 containerd[1744]: time="2025-06-21T04:45:43.605900250Z" level=info msg="connecting to shim 17a192251650a3b9528aa30f7592edfe82b7cf27bfc5d32cc6ca36c9718245aa" address="unix:///run/containerd/s/eadf8e2dcce1925f1c2fbf386ee8867f6cb098c150be3bc348bf4e1e329b123e" namespace=k8s.io protocol=ttrpc version=3 Jun 21 04:45:43.627280 systemd[1]: Started cri-containerd-17a192251650a3b9528aa30f7592edfe82b7cf27bfc5d32cc6ca36c9718245aa.scope - libcontainer container 17a192251650a3b9528aa30f7592edfe82b7cf27bfc5d32cc6ca36c9718245aa.
Jun 21 04:45:43.674895 containerd[1744]: time="2025-06-21T04:45:43.674833933Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-h574b,Uid:81a56cce-38e3-4a40-84c1-a5c98865865d,Namespace:calico-system,Attempt:0,} returns sandbox id \"17a192251650a3b9528aa30f7592edfe82b7cf27bfc5d32cc6ca36c9718245aa\"" Jun 21 04:45:44.682003 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3378700430.mount: Deactivated successfully.
Jun 21 04:45:45.112715 kubelet[3171]: E0621 04:45:45.112610 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-575w9" podUID="f516b2d9-b3a6-47e2-926b-c9ca81ee80c8" Jun 21 04:45:45.526658 containerd[1744]: time="2025-06-21T04:45:45.526622665Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 04:45:45.528574 containerd[1744]: time="2025-06-21T04:45:45.528538347Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.1: active requests=0, bytes read=35227888" Jun 21 04:45:45.530887 containerd[1744]: time="2025-06-21T04:45:45.530848278Z" level=info msg="ImageCreate event name:\"sha256:11d920cd1d8c935bdf3cb40dd9e67f22c3624df627bdd58cf6d0e503230688d7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 04:45:45.533951 containerd[1744]: time="2025-06-21T04:45:45.533914817Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f1edaa4eaa6349a958c409e0dab2d6ee7d1234e5f0eeefc9f508d0b1c9d7d0d1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 04:45:45.534316 containerd[1744]: time="2025-06-21T04:45:45.534178843Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.1\" with image id \"sha256:11d920cd1d8c935bdf3cb40dd9e67f22c3624df627bdd58cf6d0e503230688d7\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f1edaa4eaa6349a958c409e0dab2d6ee7d1234e5f0eeefc9f508d0b1c9d7d0d1\", size \"35227742\" in 2.152203354s" Jun 21 04:45:45.534316 containerd[1744]: time="2025-06-21T04:45:45.534206154Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.1\" returns image reference 
\"sha256:11d920cd1d8c935bdf3cb40dd9e67f22c3624df627bdd58cf6d0e503230688d7\"" Jun 21 04:45:45.535626 containerd[1744]: time="2025-06-21T04:45:45.535284932Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.1\"" Jun 21 04:45:45.546400 containerd[1744]: time="2025-06-21T04:45:45.545131356Z" level=info msg="CreateContainer within sandbox \"592cc408f22ff7aaed3bc72015c00bde245ab2dae18b971e57b0f79707d8b606\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jun 21 04:45:45.569155 containerd[1744]: time="2025-06-21T04:45:45.568434469Z" level=info msg="Container ad4e1b2844c230cd908317661743185bc8a50c99afb4c4d01b099d9a272c840b: CDI devices from CRI Config.CDIDevices: []" Jun 21 04:45:45.587600 containerd[1744]: time="2025-06-21T04:45:45.587578022Z" level=info msg="CreateContainer within sandbox \"592cc408f22ff7aaed3bc72015c00bde245ab2dae18b971e57b0f79707d8b606\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"ad4e1b2844c230cd908317661743185bc8a50c99afb4c4d01b099d9a272c840b\"" Jun 21 04:45:45.587989 containerd[1744]: time="2025-06-21T04:45:45.587915949Z" level=info msg="StartContainer for \"ad4e1b2844c230cd908317661743185bc8a50c99afb4c4d01b099d9a272c840b\"" Jun 21 04:45:45.591155 containerd[1744]: time="2025-06-21T04:45:45.590487176Z" level=info msg="connecting to shim ad4e1b2844c230cd908317661743185bc8a50c99afb4c4d01b099d9a272c840b" address="unix:///run/containerd/s/320bc6b93544d1c5c964260be5538098ce1829b117e981f8ec69b48a1fba376c" protocol=ttrpc version=3 Jun 21 04:45:45.616278 systemd[1]: Started cri-containerd-ad4e1b2844c230cd908317661743185bc8a50c99afb4c4d01b099d9a272c840b.scope - libcontainer container ad4e1b2844c230cd908317661743185bc8a50c99afb4c4d01b099d9a272c840b. 
Jun 21 04:45:45.655572 containerd[1744]: time="2025-06-21T04:45:45.655547828Z" level=info msg="StartContainer for \"ad4e1b2844c230cd908317661743185bc8a50c99afb4c4d01b099d9a272c840b\" returns successfully" Jun 21 04:45:46.200424 kubelet[3171]: I0621 04:45:46.200374 3171 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-75cd7dd87b-s5ztc" podStartSLOduration=2.047201618 podStartE2EDuration="4.20035937s" podCreationTimestamp="2025-06-21 04:45:42 +0000 UTC" firstStartedPulling="2025-06-21 04:45:43.381690624 +0000 UTC m=+13.364565471" lastFinishedPulling="2025-06-21 04:45:45.534848383 +0000 UTC m=+15.517723223" observedRunningTime="2025-06-21 04:45:46.199781463 +0000 UTC m=+16.182656320" watchObservedRunningTime="2025-06-21 04:45:46.20035937 +0000 UTC m=+16.183234225" Jun 21 04:45:46.208177 kubelet[3171]: E0621 04:45:46.208157 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 04:45:46.208177 kubelet[3171]: W0621 04:45:46.208173 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 04:45:46.208301 kubelet[3171]: E0621 04:45:46.208190 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 04:45:46.208763 kubelet[3171]: E0621 04:45:46.208761 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 04:45:46.208799 kubelet[3171]: W0621 04:45:46.208766 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 04:45:46.208799 kubelet[3171]: E0621 04:45:46.208771 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 04:45:46.208845 kubelet[3171]: E0621 04:45:46.208840 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 04:45:46.208845 kubelet[3171]: W0621 04:45:46.208844 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 04:45:46.208904 kubelet[3171]: E0621 04:45:46.208849 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 04:45:46.208927 kubelet[3171]: E0621 04:45:46.208919 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 04:45:46.208927 kubelet[3171]: W0621 04:45:46.208923 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 04:45:46.208982 kubelet[3171]: E0621 04:45:46.208929 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 04:45:46.209025 kubelet[3171]: E0621 04:45:46.208993 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 04:45:46.209025 kubelet[3171]: W0621 04:45:46.208997 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 04:45:46.209025 kubelet[3171]: E0621 04:45:46.209002 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 04:45:46.209108 kubelet[3171]: E0621 04:45:46.209065 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 04:45:46.209108 kubelet[3171]: W0621 04:45:46.209069 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 04:45:46.209108 kubelet[3171]: E0621 04:45:46.209074 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 04:45:46.209197 kubelet[3171]: E0621 04:45:46.209145 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 04:45:46.209197 kubelet[3171]: W0621 04:45:46.209151 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 04:45:46.209197 kubelet[3171]: E0621 04:45:46.209156 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 04:45:46.209299 kubelet[3171]: E0621 04:45:46.209236 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 04:45:46.209299 kubelet[3171]: W0621 04:45:46.209241 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 04:45:46.209299 kubelet[3171]: E0621 04:45:46.209246 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 04:45:46.209385 kubelet[3171]: E0621 04:45:46.209313 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 04:45:46.209385 kubelet[3171]: W0621 04:45:46.209318 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 04:45:46.209385 kubelet[3171]: E0621 04:45:46.209323 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 04:45:46.209470 kubelet[3171]: E0621 04:45:46.209394 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 04:45:46.209470 kubelet[3171]: W0621 04:45:46.209398 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 04:45:46.209470 kubelet[3171]: E0621 04:45:46.209403 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 04:45:46.223824 kubelet[3171]: E0621 04:45:46.223807 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 04:45:46.223824 kubelet[3171]: W0621 04:45:46.223820 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 04:45:46.223923 kubelet[3171]: E0621 04:45:46.223834 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 04:45:46.223962 kubelet[3171]: E0621 04:45:46.223957 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 04:45:46.223989 kubelet[3171]: W0621 04:45:46.223963 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 04:45:46.223989 kubelet[3171]: E0621 04:45:46.223970 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 04:45:46.224126 kubelet[3171]: E0621 04:45:46.224096 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 04:45:46.224175 kubelet[3171]: W0621 04:45:46.224122 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 04:45:46.224175 kubelet[3171]: E0621 04:45:46.224156 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 04:45:46.224311 kubelet[3171]: E0621 04:45:46.224285 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 04:45:46.224311 kubelet[3171]: W0621 04:45:46.224310 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 04:45:46.224377 kubelet[3171]: E0621 04:45:46.224320 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 04:45:46.224404 kubelet[3171]: E0621 04:45:46.224401 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 04:45:46.224404 kubelet[3171]: W0621 04:45:46.224406 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 04:45:46.224404 kubelet[3171]: E0621 04:45:46.224413 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 04:45:46.224518 kubelet[3171]: E0621 04:45:46.224508 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 04:45:46.224518 kubelet[3171]: W0621 04:45:46.224514 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 04:45:46.224561 kubelet[3171]: E0621 04:45:46.224526 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 04:45:46.224629 kubelet[3171]: E0621 04:45:46.224615 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 04:45:46.224629 kubelet[3171]: W0621 04:45:46.224622 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 04:45:46.224680 kubelet[3171]: E0621 04:45:46.224633 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 04:45:46.224760 kubelet[3171]: E0621 04:45:46.224750 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 04:45:46.224760 kubelet[3171]: W0621 04:45:46.224758 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 04:45:46.224810 kubelet[3171]: E0621 04:45:46.224771 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 04:45:46.224946 kubelet[3171]: E0621 04:45:46.224924 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 04:45:46.224946 kubelet[3171]: W0621 04:45:46.224945 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 04:45:46.224987 kubelet[3171]: E0621 04:45:46.224959 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 04:45:46.225147 kubelet[3171]: E0621 04:45:46.225095 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 04:45:46.225147 kubelet[3171]: W0621 04:45:46.225101 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 04:45:46.225147 kubelet[3171]: E0621 04:45:46.225108 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 04:45:46.225222 kubelet[3171]: E0621 04:45:46.225201 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 04:45:46.225222 kubelet[3171]: W0621 04:45:46.225206 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 04:45:46.225222 kubelet[3171]: E0621 04:45:46.225212 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 04:45:46.225321 kubelet[3171]: E0621 04:45:46.225308 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 04:45:46.225321 kubelet[3171]: W0621 04:45:46.225317 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 04:45:46.225477 kubelet[3171]: E0621 04:45:46.225328 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 04:45:46.225550 kubelet[3171]: E0621 04:45:46.225542 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 04:45:46.225572 kubelet[3171]: W0621 04:45:46.225567 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 04:45:46.225594 kubelet[3171]: E0621 04:45:46.225587 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 04:45:46.225760 kubelet[3171]: E0621 04:45:46.225736 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 04:45:46.225760 kubelet[3171]: W0621 04:45:46.225758 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 04:45:46.225817 kubelet[3171]: E0621 04:45:46.225768 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 04:45:46.225887 kubelet[3171]: E0621 04:45:46.225864 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 04:45:46.225887 kubelet[3171]: W0621 04:45:46.225884 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 04:45:46.225938 kubelet[3171]: E0621 04:45:46.225896 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 04:45:46.226067 kubelet[3171]: E0621 04:45:46.226058 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 04:45:46.226067 kubelet[3171]: W0621 04:45:46.226064 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 04:45:46.226124 kubelet[3171]: E0621 04:45:46.226079 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 04:45:46.226263 kubelet[3171]: E0621 04:45:46.226224 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 04:45:46.226263 kubelet[3171]: W0621 04:45:46.226231 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 04:45:46.226263 kubelet[3171]: E0621 04:45:46.226238 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 04:45:46.226362 kubelet[3171]: E0621 04:45:46.226349 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 04:45:46.226362 kubelet[3171]: W0621 04:45:46.226358 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 04:45:46.226415 kubelet[3171]: E0621 04:45:46.226365 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 04:45:46.933235 containerd[1744]: time="2025-06-21T04:45:46.933202729Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 04:45:46.935174 containerd[1744]: time="2025-06-21T04:45:46.935148899Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.1: active requests=0, bytes read=4441627" Jun 21 04:45:46.937439 containerd[1744]: time="2025-06-21T04:45:46.937395103Z" level=info msg="ImageCreate event name:\"sha256:2eb0d46821080fd806e1b7f8ca42889800fcb3f0af912b6fbb09a13b21454d48\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 04:45:46.940577 containerd[1744]: time="2025-06-21T04:45:46.940542381Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:b9246fe925ee5b8a5c7dfe1d1c3c29063cbfd512663088b135a015828c20401e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 04:45:46.940908 containerd[1744]: time="2025-06-21T04:45:46.940802187Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.1\" with image id \"sha256:2eb0d46821080fd806e1b7f8ca42889800fcb3f0af912b6fbb09a13b21454d48\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:b9246fe925ee5b8a5c7dfe1d1c3c29063cbfd512663088b135a015828c20401e\", size \"5934290\" in 1.405491038s" Jun 21 04:45:46.940908 containerd[1744]: time="2025-06-21T04:45:46.940831538Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.1\" returns image reference \"sha256:2eb0d46821080fd806e1b7f8ca42889800fcb3f0af912b6fbb09a13b21454d48\"" Jun 21 04:45:46.942640 containerd[1744]: time="2025-06-21T04:45:46.942607420Z" level=info msg="CreateContainer within sandbox \"17a192251650a3b9528aa30f7592edfe82b7cf27bfc5d32cc6ca36c9718245aa\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jun 21 04:45:46.962326 containerd[1744]: time="2025-06-21T04:45:46.962303689Z" level=info msg="Container 8ac1355f2b5e38382dc3b4973007758d71c10bde7f2248e69ed44844ee26e46d: CDI devices from CRI Config.CDIDevices: []" Jun 21 04:45:46.978427 containerd[1744]: time="2025-06-21T04:45:46.978404813Z" level=info msg="CreateContainer within sandbox \"17a192251650a3b9528aa30f7592edfe82b7cf27bfc5d32cc6ca36c9718245aa\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"8ac1355f2b5e38382dc3b4973007758d71c10bde7f2248e69ed44844ee26e46d\"" Jun 21 04:45:46.979530 containerd[1744]: time="2025-06-21T04:45:46.978704819Z" level=info msg="StartContainer for \"8ac1355f2b5e38382dc3b4973007758d71c10bde7f2248e69ed44844ee26e46d\"" Jun 21 04:45:46.979866 containerd[1744]: time="2025-06-21T04:45:46.979845374Z" level=info msg="connecting to shim 8ac1355f2b5e38382dc3b4973007758d71c10bde7f2248e69ed44844ee26e46d" address="unix:///run/containerd/s/eadf8e2dcce1925f1c2fbf386ee8867f6cb098c150be3bc348bf4e1e329b123e" protocol=ttrpc version=3 Jun 21 04:45:46.998272 systemd[1]: Started cri-containerd-8ac1355f2b5e38382dc3b4973007758d71c10bde7f2248e69ed44844ee26e46d.scope - libcontainer container 8ac1355f2b5e38382dc3b4973007758d71c10bde7f2248e69ed44844ee26e46d. Jun 21 04:45:47.028389 containerd[1744]: time="2025-06-21T04:45:47.028355457Z" level=info msg="StartContainer for \"8ac1355f2b5e38382dc3b4973007758d71c10bde7f2248e69ed44844ee26e46d\" returns successfully" Jun 21 04:45:47.031017 systemd[1]: cri-containerd-8ac1355f2b5e38382dc3b4973007758d71c10bde7f2248e69ed44844ee26e46d.scope: Deactivated successfully. 
Jun 21 04:45:47.033514 containerd[1744]: time="2025-06-21T04:45:47.033106709Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8ac1355f2b5e38382dc3b4973007758d71c10bde7f2248e69ed44844ee26e46d\" id:\"8ac1355f2b5e38382dc3b4973007758d71c10bde7f2248e69ed44844ee26e46d\" pid:3835 exited_at:{seconds:1750481147 nanos:32471966}" Jun 21 04:45:47.033514 containerd[1744]: time="2025-06-21T04:45:47.033401945Z" level=info msg="received exit event container_id:\"8ac1355f2b5e38382dc3b4973007758d71c10bde7f2248e69ed44844ee26e46d\" id:\"8ac1355f2b5e38382dc3b4973007758d71c10bde7f2248e69ed44844ee26e46d\" pid:3835 exited_at:{seconds:1750481147 nanos:32471966}" Jun 21 04:45:47.047475 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8ac1355f2b5e38382dc3b4973007758d71c10bde7f2248e69ed44844ee26e46d-rootfs.mount: Deactivated successfully. Jun 21 04:45:47.113197 kubelet[3171]: E0621 04:45:47.113150 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-575w9" podUID="f516b2d9-b3a6-47e2-926b-c9ca81ee80c8" Jun 21 04:45:47.192513 kubelet[3171]: I0621 04:45:47.192385 3171 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 21 04:45:49.112973 kubelet[3171]: E0621 04:45:49.112929 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-575w9" podUID="f516b2d9-b3a6-47e2-926b-c9ca81ee80c8" Jun 21 04:45:49.197158 containerd[1744]: time="2025-06-21T04:45:49.197063205Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.1\"" Jun 21 04:45:51.113304 kubelet[3171]: E0621 04:45:51.113267 3171 pod_workers.go:1301] "Error 
syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-575w9" podUID="f516b2d9-b3a6-47e2-926b-c9ca81ee80c8" Jun 21 04:45:52.537340 containerd[1744]: time="2025-06-21T04:45:52.537301566Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 04:45:52.539521 containerd[1744]: time="2025-06-21T04:45:52.539486890Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.1: active requests=0, bytes read=70405879" Jun 21 04:45:52.541762 containerd[1744]: time="2025-06-21T04:45:52.541723220Z" level=info msg="ImageCreate event name:\"sha256:0d2cd976ff6ee711927e02b1c2ba0b532275ff85d5dc05fc413cc660d5bec68e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 04:45:52.544515 containerd[1744]: time="2025-06-21T04:45:52.544475933Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:930b33311eec7523e36d95977281681d74d33efff937302b26516b2bc03a5fe9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 04:45:52.545035 containerd[1744]: time="2025-06-21T04:45:52.544759546Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.1\" with image id \"sha256:0d2cd976ff6ee711927e02b1c2ba0b532275ff85d5dc05fc413cc660d5bec68e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:930b33311eec7523e36d95977281681d74d33efff937302b26516b2bc03a5fe9\", size \"71898582\" in 3.347625409s" Jun 21 04:45:52.545035 containerd[1744]: time="2025-06-21T04:45:52.544783801Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.1\" returns image reference \"sha256:0d2cd976ff6ee711927e02b1c2ba0b532275ff85d5dc05fc413cc660d5bec68e\"" Jun 21 04:45:52.546861 containerd[1744]: time="2025-06-21T04:45:52.546596875Z" 
level=info msg="CreateContainer within sandbox \"17a192251650a3b9528aa30f7592edfe82b7cf27bfc5d32cc6ca36c9718245aa\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jun 21 04:45:52.562269 containerd[1744]: time="2025-06-21T04:45:52.562240994Z" level=info msg="Container 3ed3507ac97a577ca4b5911d037d317d83a5f16e0f9fe29db224374a842e9f06: CDI devices from CRI Config.CDIDevices: []" Jun 21 04:45:52.577185 containerd[1744]: time="2025-06-21T04:45:52.577147168Z" level=info msg="CreateContainer within sandbox \"17a192251650a3b9528aa30f7592edfe82b7cf27bfc5d32cc6ca36c9718245aa\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"3ed3507ac97a577ca4b5911d037d317d83a5f16e0f9fe29db224374a842e9f06\"" Jun 21 04:45:52.577588 containerd[1744]: time="2025-06-21T04:45:52.577510631Z" level=info msg="StartContainer for \"3ed3507ac97a577ca4b5911d037d317d83a5f16e0f9fe29db224374a842e9f06\"" Jun 21 04:45:52.579057 containerd[1744]: time="2025-06-21T04:45:52.579020706Z" level=info msg="connecting to shim 3ed3507ac97a577ca4b5911d037d317d83a5f16e0f9fe29db224374a842e9f06" address="unix:///run/containerd/s/eadf8e2dcce1925f1c2fbf386ee8867f6cb098c150be3bc348bf4e1e329b123e" protocol=ttrpc version=3 Jun 21 04:45:52.599295 systemd[1]: Started cri-containerd-3ed3507ac97a577ca4b5911d037d317d83a5f16e0f9fe29db224374a842e9f06.scope - libcontainer container 3ed3507ac97a577ca4b5911d037d317d83a5f16e0f9fe29db224374a842e9f06. 
Jun 21 04:45:52.641625 containerd[1744]: time="2025-06-21T04:45:52.641598534Z" level=info msg="StartContainer for \"3ed3507ac97a577ca4b5911d037d317d83a5f16e0f9fe29db224374a842e9f06\" returns successfully" Jun 21 04:45:53.113315 kubelet[3171]: E0621 04:45:53.113277 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-575w9" podUID="f516b2d9-b3a6-47e2-926b-c9ca81ee80c8" Jun 21 04:45:53.717734 systemd[1]: cri-containerd-3ed3507ac97a577ca4b5911d037d317d83a5f16e0f9fe29db224374a842e9f06.scope: Deactivated successfully. Jun 21 04:45:53.718032 systemd[1]: cri-containerd-3ed3507ac97a577ca4b5911d037d317d83a5f16e0f9fe29db224374a842e9f06.scope: Consumed 375ms CPU time, 189.7M memory peak, 171.2M written to disk. Jun 21 04:45:53.719873 containerd[1744]: time="2025-06-21T04:45:53.719816672Z" level=info msg="received exit event container_id:\"3ed3507ac97a577ca4b5911d037d317d83a5f16e0f9fe29db224374a842e9f06\" id:\"3ed3507ac97a577ca4b5911d037d317d83a5f16e0f9fe29db224374a842e9f06\" pid:3892 exited_at:{seconds:1750481153 nanos:719597341}" Jun 21 04:45:53.719873 containerd[1744]: time="2025-06-21T04:45:53.719848538Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3ed3507ac97a577ca4b5911d037d317d83a5f16e0f9fe29db224374a842e9f06\" id:\"3ed3507ac97a577ca4b5911d037d317d83a5f16e0f9fe29db224374a842e9f06\" pid:3892 exited_at:{seconds:1750481153 nanos:719597341}" Jun 21 04:45:53.735241 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3ed3507ac97a577ca4b5911d037d317d83a5f16e0f9fe29db224374a842e9f06-rootfs.mount: Deactivated successfully. 
Jun 21 04:45:53.753072 kubelet[3171]: I0621 04:45:53.752861 3171 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jun 21 04:45:53.785665 systemd[1]: Created slice kubepods-besteffort-pod0202b89e_2f60_4b65_b532_8b4d6e514c14.slice - libcontainer container kubepods-besteffort-pod0202b89e_2f60_4b65_b532_8b4d6e514c14.slice. Jun 21 04:45:53.819428 systemd[1]: Created slice kubepods-burstable-pod3d565bf7_19f8_49d8_b320_84e77a8d7785.slice - libcontainer container kubepods-burstable-pod3d565bf7_19f8_49d8_b320_84e77a8d7785.slice. Jun 21 04:45:53.825886 systemd[1]: Created slice kubepods-besteffort-pod868e6168_f734_44ed_91f7_c6242b05f8db.slice - libcontainer container kubepods-besteffort-pod868e6168_f734_44ed_91f7_c6242b05f8db.slice. Jun 21 04:45:53.831858 systemd[1]: Created slice kubepods-besteffort-pod98ef1438_f351_4a64_a0d7_db96dac74994.slice - libcontainer container kubepods-besteffort-pod98ef1438_f351_4a64_a0d7_db96dac74994.slice. Jun 21 04:45:53.839022 systemd[1]: Created slice kubepods-besteffort-podc47ae10e_2333_42b7_8cf2_7115ceac8e9d.slice - libcontainer container kubepods-besteffort-podc47ae10e_2333_42b7_8cf2_7115ceac8e9d.slice. Jun 21 04:45:53.843253 systemd[1]: Created slice kubepods-besteffort-pod01f187bd_0f09_46c5_9589_33497fd2bdc3.slice - libcontainer container kubepods-besteffort-pod01f187bd_0f09_46c5_9589_33497fd2bdc3.slice. Jun 21 04:45:53.850077 systemd[1]: Created slice kubepods-burstable-pod315099f2_eb90_4232_abe6_6ea3b8311656.slice - libcontainer container kubepods-burstable-pod315099f2_eb90_4232_abe6_6ea3b8311656.slice. 
Jun 21 04:45:53.877342 kubelet[3171]: I0621 04:45:53.877320 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjkq6\" (UniqueName: \"kubernetes.io/projected/c47ae10e-2333-42b7-8cf2-7115ceac8e9d-kube-api-access-sjkq6\") pod \"whisker-667c9c7d7d-2ngd6\" (UID: \"c47ae10e-2333-42b7-8cf2-7115ceac8e9d\") " pod="calico-system/whisker-667c9c7d7d-2ngd6"
Jun 21 04:45:53.877428 kubelet[3171]: I0621 04:45:53.877411 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3d565bf7-19f8-49d8-b320-84e77a8d7785-config-volume\") pod \"coredns-668d6bf9bc-x5gdr\" (UID: \"3d565bf7-19f8-49d8-b320-84e77a8d7785\") " pod="kube-system/coredns-668d6bf9bc-x5gdr"
Jun 21 04:45:53.877457 kubelet[3171]: I0621 04:45:53.877439 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/98ef1438-f351-4a64-a0d7-db96dac74994-calico-apiserver-certs\") pod \"calico-apiserver-768675b895-x9fxb\" (UID: \"98ef1438-f351-4a64-a0d7-db96dac74994\") " pod="calico-apiserver/calico-apiserver-768675b895-x9fxb"
Jun 21 04:45:53.877479 kubelet[3171]: I0621 04:45:53.877466 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwhkk\" (UniqueName: \"kubernetes.io/projected/98ef1438-f351-4a64-a0d7-db96dac74994-kube-api-access-fwhkk\") pod \"calico-apiserver-768675b895-x9fxb\" (UID: \"98ef1438-f351-4a64-a0d7-db96dac74994\") " pod="calico-apiserver/calico-apiserver-768675b895-x9fxb"
Jun 21 04:45:53.877503 kubelet[3171]: I0621 04:45:53.877486 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/315099f2-eb90-4232-abe6-6ea3b8311656-config-volume\") pod \"coredns-668d6bf9bc-qg8wm\" (UID: \"315099f2-eb90-4232-abe6-6ea3b8311656\") " pod="kube-system/coredns-668d6bf9bc-qg8wm"
Jun 21 04:45:53.877528 kubelet[3171]: I0621 04:45:53.877505 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c47ae10e-2333-42b7-8cf2-7115ceac8e9d-whisker-backend-key-pair\") pod \"whisker-667c9c7d7d-2ngd6\" (UID: \"c47ae10e-2333-42b7-8cf2-7115ceac8e9d\") " pod="calico-system/whisker-667c9c7d7d-2ngd6"
Jun 21 04:45:53.877553 kubelet[3171]: I0621 04:45:53.877524 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7r7d\" (UniqueName: \"kubernetes.io/projected/01f187bd-0f09-46c5-9589-33497fd2bdc3-kube-api-access-r7r7d\") pod \"goldmane-5bd85449d4-fpgt8\" (UID: \"01f187bd-0f09-46c5-9589-33497fd2bdc3\") " pod="calico-system/goldmane-5bd85449d4-fpgt8"
Jun 21 04:45:53.877553 kubelet[3171]: I0621 04:45:53.877542 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c47ae10e-2333-42b7-8cf2-7115ceac8e9d-whisker-ca-bundle\") pod \"whisker-667c9c7d7d-2ngd6\" (UID: \"c47ae10e-2333-42b7-8cf2-7115ceac8e9d\") " pod="calico-system/whisker-667c9c7d7d-2ngd6"
Jun 21 04:45:53.877599 kubelet[3171]: I0621 04:45:53.877560 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nfxqn\" (UniqueName: \"kubernetes.io/projected/3d565bf7-19f8-49d8-b320-84e77a8d7785-kube-api-access-nfxqn\") pod \"coredns-668d6bf9bc-x5gdr\" (UID: \"3d565bf7-19f8-49d8-b320-84e77a8d7785\") " pod="kube-system/coredns-668d6bf9bc-x5gdr"
Jun 21 04:45:53.877599 kubelet[3171]: I0621 04:45:53.877580 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/01f187bd-0f09-46c5-9589-33497fd2bdc3-goldmane-ca-bundle\") pod \"goldmane-5bd85449d4-fpgt8\" (UID: \"01f187bd-0f09-46c5-9589-33497fd2bdc3\") " pod="calico-system/goldmane-5bd85449d4-fpgt8"
Jun 21 04:45:53.877642 kubelet[3171]: I0621 04:45:53.877596 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/01f187bd-0f09-46c5-9589-33497fd2bdc3-goldmane-key-pair\") pod \"goldmane-5bd85449d4-fpgt8\" (UID: \"01f187bd-0f09-46c5-9589-33497fd2bdc3\") " pod="calico-system/goldmane-5bd85449d4-fpgt8"
Jun 21 04:45:53.877642 kubelet[3171]: I0621 04:45:53.877612 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0202b89e-2f60-4b65-b532-8b4d6e514c14-tigera-ca-bundle\") pod \"calico-kube-controllers-567d4b869c-qhpx9\" (UID: \"0202b89e-2f60-4b65-b532-8b4d6e514c14\") " pod="calico-system/calico-kube-controllers-567d4b869c-qhpx9"
Jun 21 04:45:53.877688 kubelet[3171]: I0621 04:45:53.877647 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01f187bd-0f09-46c5-9589-33497fd2bdc3-config\") pod \"goldmane-5bd85449d4-fpgt8\" (UID: \"01f187bd-0f09-46c5-9589-33497fd2bdc3\") " pod="calico-system/goldmane-5bd85449d4-fpgt8"
Jun 21 04:45:53.877688 kubelet[3171]: I0621 04:45:53.877665 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2n9v\" (UniqueName: \"kubernetes.io/projected/315099f2-eb90-4232-abe6-6ea3b8311656-kube-api-access-b2n9v\") pod \"coredns-668d6bf9bc-qg8wm\" (UID: \"315099f2-eb90-4232-abe6-6ea3b8311656\") " pod="kube-system/coredns-668d6bf9bc-qg8wm"
Jun 21 04:45:53.877688 kubelet[3171]: I0621 04:45:53.877682 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jnzfn\" (UniqueName: \"kubernetes.io/projected/0202b89e-2f60-4b65-b532-8b4d6e514c14-kube-api-access-jnzfn\") pod \"calico-kube-controllers-567d4b869c-qhpx9\" (UID: \"0202b89e-2f60-4b65-b532-8b4d6e514c14\") " pod="calico-system/calico-kube-controllers-567d4b869c-qhpx9"
Jun 21 04:45:53.877755 kubelet[3171]: I0621 04:45:53.877700 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvglc\" (UniqueName: \"kubernetes.io/projected/868e6168-f734-44ed-91f7-c6242b05f8db-kube-api-access-mvglc\") pod \"calico-apiserver-768675b895-xb6qq\" (UID: \"868e6168-f734-44ed-91f7-c6242b05f8db\") " pod="calico-apiserver/calico-apiserver-768675b895-xb6qq"
Jun 21 04:45:53.877755 kubelet[3171]: I0621 04:45:53.877718 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/868e6168-f734-44ed-91f7-c6242b05f8db-calico-apiserver-certs\") pod \"calico-apiserver-768675b895-xb6qq\" (UID: \"868e6168-f734-44ed-91f7-c6242b05f8db\") " pod="calico-apiserver/calico-apiserver-768675b895-xb6qq"
Jun 21 04:45:54.397849 containerd[1744]: time="2025-06-21T04:45:54.397781681Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-567d4b869c-qhpx9,Uid:0202b89e-2f60-4b65-b532-8b4d6e514c14,Namespace:calico-system,Attempt:0,}"
Jun 21 04:45:54.423318 containerd[1744]: time="2025-06-21T04:45:54.423291307Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-x5gdr,Uid:3d565bf7-19f8-49d8-b320-84e77a8d7785,Namespace:kube-system,Attempt:0,}"
Jun 21 04:45:54.428899 containerd[1744]: time="2025-06-21T04:45:54.428872357Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-768675b895-xb6qq,Uid:868e6168-f734-44ed-91f7-c6242b05f8db,Namespace:calico-apiserver,Attempt:0,}"
Jun 21 04:45:54.436484 containerd[1744]: time="2025-06-21T04:45:54.436430557Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-768675b895-x9fxb,Uid:98ef1438-f351-4a64-a0d7-db96dac74994,Namespace:calico-apiserver,Attempt:0,}"
Jun 21 04:45:54.441916 containerd[1744]: time="2025-06-21T04:45:54.441894085Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-667c9c7d7d-2ngd6,Uid:c47ae10e-2333-42b7-8cf2-7115ceac8e9d,Namespace:calico-system,Attempt:0,}"
Jun 21 04:45:54.448377 containerd[1744]: time="2025-06-21T04:45:54.448358418Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5bd85449d4-fpgt8,Uid:01f187bd-0f09-46c5-9589-33497fd2bdc3,Namespace:calico-system,Attempt:0,}"
Jun 21 04:45:54.452824 containerd[1744]: time="2025-06-21T04:45:54.452795909Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qg8wm,Uid:315099f2-eb90-4232-abe6-6ea3b8311656,Namespace:kube-system,Attempt:0,}"
Jun 21 04:45:54.804083 containerd[1744]: time="2025-06-21T04:45:54.803159886Z" level=error msg="Failed to destroy network for sandbox \"c43e011a700d89cfdde1b405d2c1bd1ed0d3a1cbd8c3ce38d9222dccddd07dcb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jun 21 04:45:54.809589 containerd[1744]: time="2025-06-21T04:45:54.809023760Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-x5gdr,Uid:3d565bf7-19f8-49d8-b320-84e77a8d7785,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c43e011a700d89cfdde1b405d2c1bd1ed0d3a1cbd8c3ce38d9222dccddd07dcb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jun 21 04:45:54.809390 systemd[1]: run-netns-cni\x2dc679ace6\x2d56df\x2d8dac\x2d49cb\x2de61cfcd53e1a.mount: Deactivated successfully.
Jun 21 04:45:54.812079 kubelet[3171]: E0621 04:45:54.809203 3171 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c43e011a700d89cfdde1b405d2c1bd1ed0d3a1cbd8c3ce38d9222dccddd07dcb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jun 21 04:45:54.812079 kubelet[3171]: E0621 04:45:54.809293 3171 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c43e011a700d89cfdde1b405d2c1bd1ed0d3a1cbd8c3ce38d9222dccddd07dcb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-x5gdr"
Jun 21 04:45:54.812079 kubelet[3171]: E0621 04:45:54.809314 3171 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c43e011a700d89cfdde1b405d2c1bd1ed0d3a1cbd8c3ce38d9222dccddd07dcb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-x5gdr"
Jun 21 04:45:54.813665 kubelet[3171]: E0621 04:45:54.809353 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-x5gdr_kube-system(3d565bf7-19f8-49d8-b320-84e77a8d7785)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-x5gdr_kube-system(3d565bf7-19f8-49d8-b320-84e77a8d7785)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c43e011a700d89cfdde1b405d2c1bd1ed0d3a1cbd8c3ce38d9222dccddd07dcb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-x5gdr" podUID="3d565bf7-19f8-49d8-b320-84e77a8d7785"
Jun 21 04:45:54.822313 containerd[1744]: time="2025-06-21T04:45:54.822257983Z" level=error msg="Failed to destroy network for sandbox \"4308683e485372d44621c9ef49bd4f11c0882489aaa0c58b51d0c00cdb7cd355\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jun 21 04:45:54.825499 systemd[1]: run-netns-cni\x2d71305e54\x2deb88\x2d039c\x2df444\x2d18bec226df2c.mount: Deactivated successfully.
Jun 21 04:45:54.827436 containerd[1744]: time="2025-06-21T04:45:54.827286211Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-567d4b869c-qhpx9,Uid:0202b89e-2f60-4b65-b532-8b4d6e514c14,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4308683e485372d44621c9ef49bd4f11c0882489aaa0c58b51d0c00cdb7cd355\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jun 21 04:45:54.827529 kubelet[3171]: E0621 04:45:54.827430 3171 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4308683e485372d44621c9ef49bd4f11c0882489aaa0c58b51d0c00cdb7cd355\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jun 21 04:45:54.827529 kubelet[3171]: E0621 04:45:54.827471 3171 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4308683e485372d44621c9ef49bd4f11c0882489aaa0c58b51d0c00cdb7cd355\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-567d4b869c-qhpx9"
Jun 21 04:45:54.827529 kubelet[3171]: E0621 04:45:54.827489 3171 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4308683e485372d44621c9ef49bd4f11c0882489aaa0c58b51d0c00cdb7cd355\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-567d4b869c-qhpx9"
Jun 21 04:45:54.827606 kubelet[3171]: E0621 04:45:54.827520 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-567d4b869c-qhpx9_calico-system(0202b89e-2f60-4b65-b532-8b4d6e514c14)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-567d4b869c-qhpx9_calico-system(0202b89e-2f60-4b65-b532-8b4d6e514c14)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4308683e485372d44621c9ef49bd4f11c0882489aaa0c58b51d0c00cdb7cd355\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-567d4b869c-qhpx9" podUID="0202b89e-2f60-4b65-b532-8b4d6e514c14"
Jun 21 04:45:54.837875 containerd[1744]: time="2025-06-21T04:45:54.836235646Z" level=error msg="Failed to destroy network for sandbox \"ec42d555b528c59f4f38619430210174fd8310da42ad31b650f280a982bb3b78\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jun 21 04:45:54.838691 systemd[1]: run-netns-cni\x2d66b72f6f\x2d29d0\x2dbd65\x2d7068\x2df6857a40ec9a.mount: Deactivated successfully.
Jun 21 04:45:54.840891 containerd[1744]: time="2025-06-21T04:45:54.840804901Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-768675b895-xb6qq,Uid:868e6168-f734-44ed-91f7-c6242b05f8db,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec42d555b528c59f4f38619430210174fd8310da42ad31b650f280a982bb3b78\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jun 21 04:45:54.840983 kubelet[3171]: E0621 04:45:54.840950 3171 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec42d555b528c59f4f38619430210174fd8310da42ad31b650f280a982bb3b78\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jun 21 04:45:54.841019 kubelet[3171]: E0621 04:45:54.840990 3171 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec42d555b528c59f4f38619430210174fd8310da42ad31b650f280a982bb3b78\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-768675b895-xb6qq"
Jun 21 04:45:54.841019 kubelet[3171]: E0621 04:45:54.841013 3171 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec42d555b528c59f4f38619430210174fd8310da42ad31b650f280a982bb3b78\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-768675b895-xb6qq"
Jun 21 04:45:54.841064 kubelet[3171]: E0621 04:45:54.841045 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-768675b895-xb6qq_calico-apiserver(868e6168-f734-44ed-91f7-c6242b05f8db)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-768675b895-xb6qq_calico-apiserver(868e6168-f734-44ed-91f7-c6242b05f8db)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ec42d555b528c59f4f38619430210174fd8310da42ad31b650f280a982bb3b78\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-768675b895-xb6qq" podUID="868e6168-f734-44ed-91f7-c6242b05f8db"
Jun 21 04:45:54.844033 containerd[1744]: time="2025-06-21T04:45:54.843949711Z" level=error msg="Failed to destroy network for sandbox \"5d86a2be8d021a9a1eb1e7834f78d5a82fb5abc1438a5693a25b0a258f07dbc2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jun 21 04:45:54.846081 systemd[1]: run-netns-cni\x2d2b1f7b42\x2d2c60\x2d0bdb\x2d6f85\x2d5ad22efd9237.mount: Deactivated successfully.
Jun 21 04:45:54.850829 containerd[1744]: time="2025-06-21T04:45:54.850781891Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5bd85449d4-fpgt8,Uid:01f187bd-0f09-46c5-9589-33497fd2bdc3,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5d86a2be8d021a9a1eb1e7834f78d5a82fb5abc1438a5693a25b0a258f07dbc2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jun 21 04:45:54.851027 kubelet[3171]: E0621 04:45:54.850909 3171 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5d86a2be8d021a9a1eb1e7834f78d5a82fb5abc1438a5693a25b0a258f07dbc2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jun 21 04:45:54.851027 kubelet[3171]: E0621 04:45:54.850947 3171 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5d86a2be8d021a9a1eb1e7834f78d5a82fb5abc1438a5693a25b0a258f07dbc2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5bd85449d4-fpgt8"
Jun 21 04:45:54.851027 kubelet[3171]: E0621 04:45:54.850963 3171 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5d86a2be8d021a9a1eb1e7834f78d5a82fb5abc1438a5693a25b0a258f07dbc2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5bd85449d4-fpgt8"
Jun 21 04:45:54.851107 kubelet[3171]: E0621 04:45:54.850994 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-5bd85449d4-fpgt8_calico-system(01f187bd-0f09-46c5-9589-33497fd2bdc3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-5bd85449d4-fpgt8_calico-system(01f187bd-0f09-46c5-9589-33497fd2bdc3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5d86a2be8d021a9a1eb1e7834f78d5a82fb5abc1438a5693a25b0a258f07dbc2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-5bd85449d4-fpgt8" podUID="01f187bd-0f09-46c5-9589-33497fd2bdc3"
Jun 21 04:45:54.852744 containerd[1744]: time="2025-06-21T04:45:54.852681722Z" level=error msg="Failed to destroy network for sandbox \"7d22008973648f3ec7c674be4acab1ecaa896f71f256b81644600b6c509e6263\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jun 21 04:45:54.855461 containerd[1744]: time="2025-06-21T04:45:54.855433524Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-667c9c7d7d-2ngd6,Uid:c47ae10e-2333-42b7-8cf2-7115ceac8e9d,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d22008973648f3ec7c674be4acab1ecaa896f71f256b81644600b6c509e6263\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jun 21 04:45:54.855719 kubelet[3171]: E0621 04:45:54.855690 3171 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d22008973648f3ec7c674be4acab1ecaa896f71f256b81644600b6c509e6263\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jun 21 04:45:54.855766 kubelet[3171]: E0621 04:45:54.855731 3171 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d22008973648f3ec7c674be4acab1ecaa896f71f256b81644600b6c509e6263\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-667c9c7d7d-2ngd6"
Jun 21 04:45:54.855766 kubelet[3171]: E0621 04:45:54.855747 3171 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d22008973648f3ec7c674be4acab1ecaa896f71f256b81644600b6c509e6263\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-667c9c7d7d-2ngd6"
Jun 21 04:45:54.855814 kubelet[3171]: E0621 04:45:54.855775 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-667c9c7d7d-2ngd6_calico-system(c47ae10e-2333-42b7-8cf2-7115ceac8e9d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-667c9c7d7d-2ngd6_calico-system(c47ae10e-2333-42b7-8cf2-7115ceac8e9d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7d22008973648f3ec7c674be4acab1ecaa896f71f256b81644600b6c509e6263\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-667c9c7d7d-2ngd6" podUID="c47ae10e-2333-42b7-8cf2-7115ceac8e9d"
Jun 21 04:45:54.857535 containerd[1744]: time="2025-06-21T04:45:54.857436735Z" level=error msg="Failed to destroy network for sandbox \"9766e874928b905a9c7dd1f1cd8149cf68e3efd11fb27b17fae24ae0be763517\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jun 21 04:45:54.860321 containerd[1744]: time="2025-06-21T04:45:54.860292859Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-768675b895-x9fxb,Uid:98ef1438-f351-4a64-a0d7-db96dac74994,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9766e874928b905a9c7dd1f1cd8149cf68e3efd11fb27b17fae24ae0be763517\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jun 21 04:45:54.860934 kubelet[3171]: E0621 04:45:54.860532 3171 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9766e874928b905a9c7dd1f1cd8149cf68e3efd11fb27b17fae24ae0be763517\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jun 21 04:45:54.860934 kubelet[3171]: E0621 04:45:54.860565 3171 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9766e874928b905a9c7dd1f1cd8149cf68e3efd11fb27b17fae24ae0be763517\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-768675b895-x9fxb"
Jun 21 04:45:54.860934 kubelet[3171]: E0621 04:45:54.860581 3171 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9766e874928b905a9c7dd1f1cd8149cf68e3efd11fb27b17fae24ae0be763517\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-768675b895-x9fxb"
Jun 21 04:45:54.861044 kubelet[3171]: E0621 04:45:54.860618 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-768675b895-x9fxb_calico-apiserver(98ef1438-f351-4a64-a0d7-db96dac74994)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-768675b895-x9fxb_calico-apiserver(98ef1438-f351-4a64-a0d7-db96dac74994)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9766e874928b905a9c7dd1f1cd8149cf68e3efd11fb27b17fae24ae0be763517\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-768675b895-x9fxb" podUID="98ef1438-f351-4a64-a0d7-db96dac74994"
Jun 21 04:45:54.861099 containerd[1744]: time="2025-06-21T04:45:54.861062561Z" level=error msg="Failed to destroy network for sandbox \"c54302bb98bb76b66dfb5a68d9d62850a2c18cb23627adae72f811ac56b90fcc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jun 21 04:45:54.863494 containerd[1744]: time="2025-06-21T04:45:54.863467261Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qg8wm,Uid:315099f2-eb90-4232-abe6-6ea3b8311656,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c54302bb98bb76b66dfb5a68d9d62850a2c18cb23627adae72f811ac56b90fcc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jun 21 04:45:54.863607 kubelet[3171]: E0621 04:45:54.863589 3171 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c54302bb98bb76b66dfb5a68d9d62850a2c18cb23627adae72f811ac56b90fcc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jun 21 04:45:54.863641 kubelet[3171]: E0621 04:45:54.863622 3171 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c54302bb98bb76b66dfb5a68d9d62850a2c18cb23627adae72f811ac56b90fcc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-qg8wm"
Jun 21 04:45:54.863666 kubelet[3171]: E0621 04:45:54.863638 3171 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c54302bb98bb76b66dfb5a68d9d62850a2c18cb23627adae72f811ac56b90fcc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-qg8wm"
Jun 21 04:45:54.863687 kubelet[3171]: E0621 04:45:54.863667 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-qg8wm_kube-system(315099f2-eb90-4232-abe6-6ea3b8311656)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-qg8wm_kube-system(315099f2-eb90-4232-abe6-6ea3b8311656)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c54302bb98bb76b66dfb5a68d9d62850a2c18cb23627adae72f811ac56b90fcc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-qg8wm" podUID="315099f2-eb90-4232-abe6-6ea3b8311656"
Jun 21 04:45:55.116980 systemd[1]: Created slice kubepods-besteffort-podf516b2d9_b3a6_47e2_926b_c9ca81ee80c8.slice - libcontainer container kubepods-besteffort-podf516b2d9_b3a6_47e2_926b_c9ca81ee80c8.slice.
Jun 21 04:45:55.119380 containerd[1744]: time="2025-06-21T04:45:55.119358276Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-575w9,Uid:f516b2d9-b3a6-47e2-926b-c9ca81ee80c8,Namespace:calico-system,Attempt:0,}"
Jun 21 04:45:55.155735 containerd[1744]: time="2025-06-21T04:45:55.155704632Z" level=error msg="Failed to destroy network for sandbox \"a3a7cb35d9a59d4ed77423d5ed5accc4dc463bb32ceb134cda9929f51272cad7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jun 21 04:45:55.158379 containerd[1744]: time="2025-06-21T04:45:55.158351535Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-575w9,Uid:f516b2d9-b3a6-47e2-926b-c9ca81ee80c8,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a3a7cb35d9a59d4ed77423d5ed5accc4dc463bb32ceb134cda9929f51272cad7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jun 21 04:45:55.158552 kubelet[3171]: E0621 04:45:55.158509 3171 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a3a7cb35d9a59d4ed77423d5ed5accc4dc463bb32ceb134cda9929f51272cad7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jun 21 04:45:55.158611 kubelet[3171]: E0621 04:45:55.158564 3171 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a3a7cb35d9a59d4ed77423d5ed5accc4dc463bb32ceb134cda9929f51272cad7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-575w9"
Jun 21 04:45:55.158611 kubelet[3171]: E0621 04:45:55.158581 3171 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a3a7cb35d9a59d4ed77423d5ed5accc4dc463bb32ceb134cda9929f51272cad7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-575w9"
Jun 21 04:45:55.158656 kubelet[3171]: E0621 04:45:55.158616 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-575w9_calico-system(f516b2d9-b3a6-47e2-926b-c9ca81ee80c8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-575w9_calico-system(f516b2d9-b3a6-47e2-926b-c9ca81ee80c8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a3a7cb35d9a59d4ed77423d5ed5accc4dc463bb32ceb134cda9929f51272cad7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-575w9" podUID="f516b2d9-b3a6-47e2-926b-c9ca81ee80c8"
Jun 21 04:45:55.209659 containerd[1744]: time="2025-06-21T04:45:55.209615290Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.1\""
Jun 21 04:45:55.736024 systemd[1]: run-netns-cni\x2df0d9f834\x2dd49c\x2d5f0f\x2d9ba1\x2d911b198fd9e7.mount: Deactivated successfully.
Jun 21 04:45:55.736161 systemd[1]: run-netns-cni\x2da88fb278\x2df189\x2d32c3\x2d0098\x2d8253632f4acb.mount: Deactivated successfully.
Jun 21 04:45:55.736234 systemd[1]: run-netns-cni\x2da4a70bdd\x2dd66b\x2d6c37\x2da3c6\x2dd78adf2aadee.mount: Deactivated successfully.
Jun 21 04:46:01.651787 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4213733569.mount: Deactivated successfully.
Jun 21 04:46:01.677551 containerd[1744]: time="2025-06-21T04:46:01.677511374Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 04:46:01.679635 containerd[1744]: time="2025-06-21T04:46:01.679607794Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.1: active requests=0, bytes read=156518913" Jun 21 04:46:01.681880 containerd[1744]: time="2025-06-21T04:46:01.681830078Z" level=info msg="ImageCreate event name:\"sha256:9ac26af2ca9c35e475f921a9bcf40c7c0ce106819208883b006e64c489251722\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 04:46:01.684658 containerd[1744]: time="2025-06-21T04:46:01.684616438Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:8da6d025e5cf2ff5080c801ac8611bedb513e5922500fcc8161d8164e4679597\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 04:46:01.684885 containerd[1744]: time="2025-06-21T04:46:01.684865978Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.1\" with image id \"sha256:9ac26af2ca9c35e475f921a9bcf40c7c0ce106819208883b006e64c489251722\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:8da6d025e5cf2ff5080c801ac8611bedb513e5922500fcc8161d8164e4679597\", size \"156518775\" in 6.475221258s" Jun 21 04:46:01.684948 containerd[1744]: time="2025-06-21T04:46:01.684938344Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.1\" returns image reference \"sha256:9ac26af2ca9c35e475f921a9bcf40c7c0ce106819208883b006e64c489251722\"" Jun 21 04:46:01.698376 containerd[1744]: time="2025-06-21T04:46:01.698344590Z" level=info msg="CreateContainer within sandbox \"17a192251650a3b9528aa30f7592edfe82b7cf27bfc5d32cc6ca36c9718245aa\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jun 21 04:46:01.729511 containerd[1744]: time="2025-06-21T04:46:01.729174956Z" level=info msg="Container 
bb7c9acbe39c806af6ab388b649e7dbafcf7acdd43752747fec23af472893f7e: CDI devices from CRI Config.CDIDevices: []" Jun 21 04:46:01.744201 containerd[1744]: time="2025-06-21T04:46:01.744177519Z" level=info msg="CreateContainer within sandbox \"17a192251650a3b9528aa30f7592edfe82b7cf27bfc5d32cc6ca36c9718245aa\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"bb7c9acbe39c806af6ab388b649e7dbafcf7acdd43752747fec23af472893f7e\"" Jun 21 04:46:01.744731 containerd[1744]: time="2025-06-21T04:46:01.744504283Z" level=info msg="StartContainer for \"bb7c9acbe39c806af6ab388b649e7dbafcf7acdd43752747fec23af472893f7e\"" Jun 21 04:46:01.745827 containerd[1744]: time="2025-06-21T04:46:01.745798561Z" level=info msg="connecting to shim bb7c9acbe39c806af6ab388b649e7dbafcf7acdd43752747fec23af472893f7e" address="unix:///run/containerd/s/eadf8e2dcce1925f1c2fbf386ee8867f6cb098c150be3bc348bf4e1e329b123e" protocol=ttrpc version=3 Jun 21 04:46:01.765261 systemd[1]: Started cri-containerd-bb7c9acbe39c806af6ab388b649e7dbafcf7acdd43752747fec23af472893f7e.scope - libcontainer container bb7c9acbe39c806af6ab388b649e7dbafcf7acdd43752747fec23af472893f7e. Jun 21 04:46:01.800770 containerd[1744]: time="2025-06-21T04:46:01.800746806Z" level=info msg="StartContainer for \"bb7c9acbe39c806af6ab388b649e7dbafcf7acdd43752747fec23af472893f7e\" returns successfully" Jun 21 04:46:02.029157 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jun 21 04:46:02.029223 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Jun 21 04:46:02.223357 kubelet[3171]: I0621 04:46:02.222375 3171 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c47ae10e-2333-42b7-8cf2-7115ceac8e9d-whisker-backend-key-pair\") pod \"c47ae10e-2333-42b7-8cf2-7115ceac8e9d\" (UID: \"c47ae10e-2333-42b7-8cf2-7115ceac8e9d\") " Jun 21 04:46:02.223357 kubelet[3171]: I0621 04:46:02.222412 3171 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sjkq6\" (UniqueName: \"kubernetes.io/projected/c47ae10e-2333-42b7-8cf2-7115ceac8e9d-kube-api-access-sjkq6\") pod \"c47ae10e-2333-42b7-8cf2-7115ceac8e9d\" (UID: \"c47ae10e-2333-42b7-8cf2-7115ceac8e9d\") " Jun 21 04:46:02.223357 kubelet[3171]: I0621 04:46:02.222432 3171 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c47ae10e-2333-42b7-8cf2-7115ceac8e9d-whisker-ca-bundle\") pod \"c47ae10e-2333-42b7-8cf2-7115ceac8e9d\" (UID: \"c47ae10e-2333-42b7-8cf2-7115ceac8e9d\") " Jun 21 04:46:02.223357 kubelet[3171]: I0621 04:46:02.222714 3171 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c47ae10e-2333-42b7-8cf2-7115ceac8e9d-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "c47ae10e-2333-42b7-8cf2-7115ceac8e9d" (UID: "c47ae10e-2333-42b7-8cf2-7115ceac8e9d"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jun 21 04:46:02.227504 kubelet[3171]: I0621 04:46:02.227473 3171 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c47ae10e-2333-42b7-8cf2-7115ceac8e9d-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "c47ae10e-2333-42b7-8cf2-7115ceac8e9d" (UID: "c47ae10e-2333-42b7-8cf2-7115ceac8e9d"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jun 21 04:46:02.227770 kubelet[3171]: I0621 04:46:02.227748 3171 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c47ae10e-2333-42b7-8cf2-7115ceac8e9d-kube-api-access-sjkq6" (OuterVolumeSpecName: "kube-api-access-sjkq6") pod "c47ae10e-2333-42b7-8cf2-7115ceac8e9d" (UID: "c47ae10e-2333-42b7-8cf2-7115ceac8e9d"). InnerVolumeSpecName "kube-api-access-sjkq6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jun 21 04:46:02.253162 kubelet[3171]: I0621 04:46:02.253042 3171 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-h574b" podStartSLOduration=1.243141606 podStartE2EDuration="19.253027469s" podCreationTimestamp="2025-06-21 04:45:43 +0000 UTC" firstStartedPulling="2025-06-21 04:45:43.675592561 +0000 UTC m=+13.658467413" lastFinishedPulling="2025-06-21 04:46:01.685478424 +0000 UTC m=+31.668353276" observedRunningTime="2025-06-21 04:46:02.252501629 +0000 UTC m=+32.235376483" watchObservedRunningTime="2025-06-21 04:46:02.253027469 +0000 UTC m=+32.235902322" Jun 21 04:46:02.323693 kubelet[3171]: I0621 04:46:02.323514 3171 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-sjkq6\" (UniqueName: \"kubernetes.io/projected/c47ae10e-2333-42b7-8cf2-7115ceac8e9d-kube-api-access-sjkq6\") on node \"ci-4372.0.0-a-c1262e9e80\" DevicePath \"\"" Jun 21 04:46:02.323693 kubelet[3171]: I0621 04:46:02.323539 3171 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c47ae10e-2333-42b7-8cf2-7115ceac8e9d-whisker-ca-bundle\") on node \"ci-4372.0.0-a-c1262e9e80\" DevicePath \"\"" Jun 21 04:46:02.323693 kubelet[3171]: I0621 04:46:02.323548 3171 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c47ae10e-2333-42b7-8cf2-7115ceac8e9d-whisker-backend-key-pair\") on node \"ci-4372.0.0-a-c1262e9e80\" 
DevicePath \"\"" Jun 21 04:46:02.529806 systemd[1]: Removed slice kubepods-besteffort-podc47ae10e_2333_42b7_8cf2_7115ceac8e9d.slice - libcontainer container kubepods-besteffort-podc47ae10e_2333_42b7_8cf2_7115ceac8e9d.slice. Jun 21 04:46:02.584935 systemd[1]: Created slice kubepods-besteffort-podcf80cf72_4775_4fd4_a74e_e4bae54a36dd.slice - libcontainer container kubepods-besteffort-podcf80cf72_4775_4fd4_a74e_e4bae54a36dd.slice. Jun 21 04:46:02.625356 kubelet[3171]: I0621 04:46:02.625331 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwr2p\" (UniqueName: \"kubernetes.io/projected/cf80cf72-4775-4fd4-a74e-e4bae54a36dd-kube-api-access-rwr2p\") pod \"whisker-776d4dccf9-xzldh\" (UID: \"cf80cf72-4775-4fd4-a74e-e4bae54a36dd\") " pod="calico-system/whisker-776d4dccf9-xzldh" Jun 21 04:46:02.625428 kubelet[3171]: I0621 04:46:02.625381 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/cf80cf72-4775-4fd4-a74e-e4bae54a36dd-whisker-backend-key-pair\") pod \"whisker-776d4dccf9-xzldh\" (UID: \"cf80cf72-4775-4fd4-a74e-e4bae54a36dd\") " pod="calico-system/whisker-776d4dccf9-xzldh" Jun 21 04:46:02.625428 kubelet[3171]: I0621 04:46:02.625404 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cf80cf72-4775-4fd4-a74e-e4bae54a36dd-whisker-ca-bundle\") pod \"whisker-776d4dccf9-xzldh\" (UID: \"cf80cf72-4775-4fd4-a74e-e4bae54a36dd\") " pod="calico-system/whisker-776d4dccf9-xzldh" Jun 21 04:46:02.651709 systemd[1]: var-lib-kubelet-pods-c47ae10e\x2d2333\x2d42b7\x2d8cf2\x2d7115ceac8e9d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dsjkq6.mount: Deactivated successfully. 
Jun 21 04:46:02.651796 systemd[1]: var-lib-kubelet-pods-c47ae10e\x2d2333\x2d42b7\x2d8cf2\x2d7115ceac8e9d-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jun 21 04:46:02.888802 containerd[1744]: time="2025-06-21T04:46:02.888720413Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-776d4dccf9-xzldh,Uid:cf80cf72-4775-4fd4-a74e-e4bae54a36dd,Namespace:calico-system,Attempt:0,}" Jun 21 04:46:02.987256 systemd-networkd[1362]: cali2f7123eed1a: Link UP Jun 21 04:46:02.987424 systemd-networkd[1362]: cali2f7123eed1a: Gained carrier Jun 21 04:46:03.000901 containerd[1744]: 2025-06-21 04:46:02.910 [INFO][4224] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jun 21 04:46:03.000901 containerd[1744]: 2025-06-21 04:46:02.918 [INFO][4224] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372.0.0--a--c1262e9e80-k8s-whisker--776d4dccf9--xzldh-eth0 whisker-776d4dccf9- calico-system cf80cf72-4775-4fd4-a74e-e4bae54a36dd 902 0 2025-06-21 04:46:02 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:776d4dccf9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4372.0.0-a-c1262e9e80 whisker-776d4dccf9-xzldh eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali2f7123eed1a [] [] }} ContainerID="a0d0ae9bb589b967fc1754350f14e14b8237ea65ffa1621c5a16f63cbdfd0659" Namespace="calico-system" Pod="whisker-776d4dccf9-xzldh" WorkloadEndpoint="ci--4372.0.0--a--c1262e9e80-k8s-whisker--776d4dccf9--xzldh-" Jun 21 04:46:03.000901 containerd[1744]: 2025-06-21 04:46:02.918 [INFO][4224] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a0d0ae9bb589b967fc1754350f14e14b8237ea65ffa1621c5a16f63cbdfd0659" Namespace="calico-system" Pod="whisker-776d4dccf9-xzldh" 
WorkloadEndpoint="ci--4372.0.0--a--c1262e9e80-k8s-whisker--776d4dccf9--xzldh-eth0" Jun 21 04:46:03.000901 containerd[1744]: 2025-06-21 04:46:02.937 [INFO][4237] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a0d0ae9bb589b967fc1754350f14e14b8237ea65ffa1621c5a16f63cbdfd0659" HandleID="k8s-pod-network.a0d0ae9bb589b967fc1754350f14e14b8237ea65ffa1621c5a16f63cbdfd0659" Workload="ci--4372.0.0--a--c1262e9e80-k8s-whisker--776d4dccf9--xzldh-eth0" Jun 21 04:46:03.001120 containerd[1744]: 2025-06-21 04:46:02.937 [INFO][4237] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a0d0ae9bb589b967fc1754350f14e14b8237ea65ffa1621c5a16f63cbdfd0659" HandleID="k8s-pod-network.a0d0ae9bb589b967fc1754350f14e14b8237ea65ffa1621c5a16f63cbdfd0659" Workload="ci--4372.0.0--a--c1262e9e80-k8s-whisker--776d4dccf9--xzldh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024eff0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4372.0.0-a-c1262e9e80", "pod":"whisker-776d4dccf9-xzldh", "timestamp":"2025-06-21 04:46:02.937670646 +0000 UTC"}, Hostname:"ci-4372.0.0-a-c1262e9e80", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 21 04:46:03.001120 containerd[1744]: 2025-06-21 04:46:02.937 [INFO][4237] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 21 04:46:03.001120 containerd[1744]: 2025-06-21 04:46:02.937 [INFO][4237] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jun 21 04:46:03.001120 containerd[1744]: 2025-06-21 04:46:02.937 [INFO][4237] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372.0.0-a-c1262e9e80' Jun 21 04:46:03.001120 containerd[1744]: 2025-06-21 04:46:02.942 [INFO][4237] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a0d0ae9bb589b967fc1754350f14e14b8237ea65ffa1621c5a16f63cbdfd0659" host="ci-4372.0.0-a-c1262e9e80" Jun 21 04:46:03.001120 containerd[1744]: 2025-06-21 04:46:02.945 [INFO][4237] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372.0.0-a-c1262e9e80" Jun 21 04:46:03.001120 containerd[1744]: 2025-06-21 04:46:02.949 [INFO][4237] ipam/ipam.go 511: Trying affinity for 192.168.82.64/26 host="ci-4372.0.0-a-c1262e9e80" Jun 21 04:46:03.001120 containerd[1744]: 2025-06-21 04:46:02.950 [INFO][4237] ipam/ipam.go 158: Attempting to load block cidr=192.168.82.64/26 host="ci-4372.0.0-a-c1262e9e80" Jun 21 04:46:03.001120 containerd[1744]: 2025-06-21 04:46:02.952 [INFO][4237] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.82.64/26 host="ci-4372.0.0-a-c1262e9e80" Jun 21 04:46:03.001391 containerd[1744]: 2025-06-21 04:46:02.952 [INFO][4237] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.82.64/26 handle="k8s-pod-network.a0d0ae9bb589b967fc1754350f14e14b8237ea65ffa1621c5a16f63cbdfd0659" host="ci-4372.0.0-a-c1262e9e80" Jun 21 04:46:03.001391 containerd[1744]: 2025-06-21 04:46:02.953 [INFO][4237] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.a0d0ae9bb589b967fc1754350f14e14b8237ea65ffa1621c5a16f63cbdfd0659 Jun 21 04:46:03.001391 containerd[1744]: 2025-06-21 04:46:02.956 [INFO][4237] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.82.64/26 handle="k8s-pod-network.a0d0ae9bb589b967fc1754350f14e14b8237ea65ffa1621c5a16f63cbdfd0659" host="ci-4372.0.0-a-c1262e9e80" Jun 21 04:46:03.001391 containerd[1744]: 2025-06-21 04:46:02.962 [INFO][4237] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.82.65/26] block=192.168.82.64/26 handle="k8s-pod-network.a0d0ae9bb589b967fc1754350f14e14b8237ea65ffa1621c5a16f63cbdfd0659" host="ci-4372.0.0-a-c1262e9e80" Jun 21 04:46:03.001391 containerd[1744]: 2025-06-21 04:46:02.962 [INFO][4237] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.82.65/26] handle="k8s-pod-network.a0d0ae9bb589b967fc1754350f14e14b8237ea65ffa1621c5a16f63cbdfd0659" host="ci-4372.0.0-a-c1262e9e80" Jun 21 04:46:03.001391 containerd[1744]: 2025-06-21 04:46:02.962 [INFO][4237] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jun 21 04:46:03.001391 containerd[1744]: 2025-06-21 04:46:02.962 [INFO][4237] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.82.65/26] IPv6=[] ContainerID="a0d0ae9bb589b967fc1754350f14e14b8237ea65ffa1621c5a16f63cbdfd0659" HandleID="k8s-pod-network.a0d0ae9bb589b967fc1754350f14e14b8237ea65ffa1621c5a16f63cbdfd0659" Workload="ci--4372.0.0--a--c1262e9e80-k8s-whisker--776d4dccf9--xzldh-eth0" Jun 21 04:46:03.001552 containerd[1744]: 2025-06-21 04:46:02.964 [INFO][4224] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a0d0ae9bb589b967fc1754350f14e14b8237ea65ffa1621c5a16f63cbdfd0659" Namespace="calico-system" Pod="whisker-776d4dccf9-xzldh" WorkloadEndpoint="ci--4372.0.0--a--c1262e9e80-k8s-whisker--776d4dccf9--xzldh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.0--a--c1262e9e80-k8s-whisker--776d4dccf9--xzldh-eth0", GenerateName:"whisker-776d4dccf9-", Namespace:"calico-system", SelfLink:"", UID:"cf80cf72-4775-4fd4-a74e-e4bae54a36dd", ResourceVersion:"902", Generation:0, CreationTimestamp:time.Date(2025, time.June, 21, 4, 46, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"776d4dccf9", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.0-a-c1262e9e80", ContainerID:"", Pod:"whisker-776d4dccf9-xzldh", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.82.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali2f7123eed1a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 21 04:46:03.001552 containerd[1744]: 2025-06-21 04:46:02.965 [INFO][4224] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.82.65/32] ContainerID="a0d0ae9bb589b967fc1754350f14e14b8237ea65ffa1621c5a16f63cbdfd0659" Namespace="calico-system" Pod="whisker-776d4dccf9-xzldh" WorkloadEndpoint="ci--4372.0.0--a--c1262e9e80-k8s-whisker--776d4dccf9--xzldh-eth0" Jun 21 04:46:03.001653 containerd[1744]: 2025-06-21 04:46:02.965 [INFO][4224] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2f7123eed1a ContainerID="a0d0ae9bb589b967fc1754350f14e14b8237ea65ffa1621c5a16f63cbdfd0659" Namespace="calico-system" Pod="whisker-776d4dccf9-xzldh" WorkloadEndpoint="ci--4372.0.0--a--c1262e9e80-k8s-whisker--776d4dccf9--xzldh-eth0" Jun 21 04:46:03.001653 containerd[1744]: 2025-06-21 04:46:02.987 [INFO][4224] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a0d0ae9bb589b967fc1754350f14e14b8237ea65ffa1621c5a16f63cbdfd0659" Namespace="calico-system" Pod="whisker-776d4dccf9-xzldh" WorkloadEndpoint="ci--4372.0.0--a--c1262e9e80-k8s-whisker--776d4dccf9--xzldh-eth0" Jun 21 04:46:03.001701 containerd[1744]: 2025-06-21 04:46:02.987 [INFO][4224] cni-plugin/k8s.go 446: 
Added Mac, interface name, and active container ID to endpoint ContainerID="a0d0ae9bb589b967fc1754350f14e14b8237ea65ffa1621c5a16f63cbdfd0659" Namespace="calico-system" Pod="whisker-776d4dccf9-xzldh" WorkloadEndpoint="ci--4372.0.0--a--c1262e9e80-k8s-whisker--776d4dccf9--xzldh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.0--a--c1262e9e80-k8s-whisker--776d4dccf9--xzldh-eth0", GenerateName:"whisker-776d4dccf9-", Namespace:"calico-system", SelfLink:"", UID:"cf80cf72-4775-4fd4-a74e-e4bae54a36dd", ResourceVersion:"902", Generation:0, CreationTimestamp:time.Date(2025, time.June, 21, 4, 46, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"776d4dccf9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.0-a-c1262e9e80", ContainerID:"a0d0ae9bb589b967fc1754350f14e14b8237ea65ffa1621c5a16f63cbdfd0659", Pod:"whisker-776d4dccf9-xzldh", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.82.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali2f7123eed1a", MAC:"a6:ab:f7:e7:89:03", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 21 04:46:03.001755 containerd[1744]: 2025-06-21 04:46:02.999 [INFO][4224] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a0d0ae9bb589b967fc1754350f14e14b8237ea65ffa1621c5a16f63cbdfd0659" 
Namespace="calico-system" Pod="whisker-776d4dccf9-xzldh" WorkloadEndpoint="ci--4372.0.0--a--c1262e9e80-k8s-whisker--776d4dccf9--xzldh-eth0" Jun 21 04:46:03.032493 containerd[1744]: time="2025-06-21T04:46:03.032455288Z" level=info msg="connecting to shim a0d0ae9bb589b967fc1754350f14e14b8237ea65ffa1621c5a16f63cbdfd0659" address="unix:///run/containerd/s/9291197f2cabc5b8453c10888844e385880d2c649155aef6a741e985ea41a43b" namespace=k8s.io protocol=ttrpc version=3 Jun 21 04:46:03.050277 systemd[1]: Started cri-containerd-a0d0ae9bb589b967fc1754350f14e14b8237ea65ffa1621c5a16f63cbdfd0659.scope - libcontainer container a0d0ae9bb589b967fc1754350f14e14b8237ea65ffa1621c5a16f63cbdfd0659. Jun 21 04:46:03.083898 containerd[1744]: time="2025-06-21T04:46:03.083874024Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-776d4dccf9-xzldh,Uid:cf80cf72-4775-4fd4-a74e-e4bae54a36dd,Namespace:calico-system,Attempt:0,} returns sandbox id \"a0d0ae9bb589b967fc1754350f14e14b8237ea65ffa1621c5a16f63cbdfd0659\"" Jun 21 04:46:03.085172 containerd[1744]: time="2025-06-21T04:46:03.085147939Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.1\"" Jun 21 04:46:04.114557 kubelet[3171]: I0621 04:46:04.114521 3171 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c47ae10e-2333-42b7-8cf2-7115ceac8e9d" path="/var/lib/kubelet/pods/c47ae10e-2333-42b7-8cf2-7115ceac8e9d/volumes" Jun 21 04:46:04.410148 containerd[1744]: time="2025-06-21T04:46:04.410101148Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 04:46:04.412043 containerd[1744]: time="2025-06-21T04:46:04.412008912Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.1: active requests=0, bytes read=4661202" Jun 21 04:46:04.415362 containerd[1744]: time="2025-06-21T04:46:04.415338676Z" level=info msg="ImageCreate event 
name:\"sha256:f9c2addb6553484a4cf8cf5e38959c95aff70d213991bb2626aab9eb9b0ce51c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 04:46:04.419980 containerd[1744]: time="2025-06-21T04:46:04.419944214Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:7f323954f2f741238d256690a674536bf562d4b4bd7cd6bab3c21a0a1327e1fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 04:46:04.420375 containerd[1744]: time="2025-06-21T04:46:04.420353835Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.1\" with image id \"sha256:f9c2addb6553484a4cf8cf5e38959c95aff70d213991bb2626aab9eb9b0ce51c\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:7f323954f2f741238d256690a674536bf562d4b4bd7cd6bab3c21a0a1327e1fc\", size \"6153897\" in 1.335177422s" Jun 21 04:46:04.420422 containerd[1744]: time="2025-06-21T04:46:04.420382292Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.1\" returns image reference \"sha256:f9c2addb6553484a4cf8cf5e38959c95aff70d213991bb2626aab9eb9b0ce51c\"" Jun 21 04:46:04.422479 containerd[1744]: time="2025-06-21T04:46:04.422236261Z" level=info msg="CreateContainer within sandbox \"a0d0ae9bb589b967fc1754350f14e14b8237ea65ffa1621c5a16f63cbdfd0659\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Jun 21 04:46:04.439355 containerd[1744]: time="2025-06-21T04:46:04.439333852Z" level=info msg="Container 4cc55c5cc67be31968eb05c356f0831d77c9eae911d7ec1f8f258e507adabcdc: CDI devices from CRI Config.CDIDevices: []" Jun 21 04:46:04.457702 containerd[1744]: time="2025-06-21T04:46:04.457678519Z" level=info msg="CreateContainer within sandbox \"a0d0ae9bb589b967fc1754350f14e14b8237ea65ffa1621c5a16f63cbdfd0659\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"4cc55c5cc67be31968eb05c356f0831d77c9eae911d7ec1f8f258e507adabcdc\"" Jun 21 04:46:04.458350 containerd[1744]: 
time="2025-06-21T04:46:04.458317356Z" level=info msg="StartContainer for \"4cc55c5cc67be31968eb05c356f0831d77c9eae911d7ec1f8f258e507adabcdc\"" Jun 21 04:46:04.460452 containerd[1744]: time="2025-06-21T04:46:04.460333339Z" level=info msg="connecting to shim 4cc55c5cc67be31968eb05c356f0831d77c9eae911d7ec1f8f258e507adabcdc" address="unix:///run/containerd/s/9291197f2cabc5b8453c10888844e385880d2c649155aef6a741e985ea41a43b" protocol=ttrpc version=3 Jun 21 04:46:04.492537 systemd[1]: Started cri-containerd-4cc55c5cc67be31968eb05c356f0831d77c9eae911d7ec1f8f258e507adabcdc.scope - libcontainer container 4cc55c5cc67be31968eb05c356f0831d77c9eae911d7ec1f8f258e507adabcdc. Jun 21 04:46:04.580805 containerd[1744]: time="2025-06-21T04:46:04.580784123Z" level=info msg="StartContainer for \"4cc55c5cc67be31968eb05c356f0831d77c9eae911d7ec1f8f258e507adabcdc\" returns successfully" Jun 21 04:46:04.582506 containerd[1744]: time="2025-06-21T04:46:04.582482441Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.1\"" Jun 21 04:46:04.735301 systemd-networkd[1362]: cali2f7123eed1a: Gained IPv6LL Jun 21 04:46:06.115163 containerd[1744]: time="2025-06-21T04:46:06.114508999Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5bd85449d4-fpgt8,Uid:01f187bd-0f09-46c5-9589-33497fd2bdc3,Namespace:calico-system,Attempt:0,}" Jun 21 04:46:06.115163 containerd[1744]: time="2025-06-21T04:46:06.114824329Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-768675b895-xb6qq,Uid:868e6168-f734-44ed-91f7-c6242b05f8db,Namespace:calico-apiserver,Attempt:0,}" Jun 21 04:46:06.221046 systemd-networkd[1362]: cali1d5ac89c845: Link UP Jun 21 04:46:06.222127 systemd-networkd[1362]: cali1d5ac89c845: Gained carrier Jun 21 04:46:06.236532 containerd[1744]: 2025-06-21 04:46:06.150 [INFO][4466] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jun 21 04:46:06.236532 containerd[1744]: 2025-06-21 04:46:06.160 [INFO][4466] cni-plugin/plugin.go 
340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372.0.0--a--c1262e9e80-k8s-goldmane--5bd85449d4--fpgt8-eth0 goldmane-5bd85449d4- calico-system 01f187bd-0f09-46c5-9589-33497fd2bdc3 834 0 2025-06-21 04:45:43 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:5bd85449d4 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4372.0.0-a-c1262e9e80 goldmane-5bd85449d4-fpgt8 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali1d5ac89c845 [] [] }} ContainerID="6fdf09219732d007187cc053929851bf6f6befc7f34ca98607929eb7a34c1aad" Namespace="calico-system" Pod="goldmane-5bd85449d4-fpgt8" WorkloadEndpoint="ci--4372.0.0--a--c1262e9e80-k8s-goldmane--5bd85449d4--fpgt8-" Jun 21 04:46:06.236532 containerd[1744]: 2025-06-21 04:46:06.160 [INFO][4466] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6fdf09219732d007187cc053929851bf6f6befc7f34ca98607929eb7a34c1aad" Namespace="calico-system" Pod="goldmane-5bd85449d4-fpgt8" WorkloadEndpoint="ci--4372.0.0--a--c1262e9e80-k8s-goldmane--5bd85449d4--fpgt8-eth0" Jun 21 04:46:06.236532 containerd[1744]: 2025-06-21 04:46:06.189 [INFO][4490] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6fdf09219732d007187cc053929851bf6f6befc7f34ca98607929eb7a34c1aad" HandleID="k8s-pod-network.6fdf09219732d007187cc053929851bf6f6befc7f34ca98607929eb7a34c1aad" Workload="ci--4372.0.0--a--c1262e9e80-k8s-goldmane--5bd85449d4--fpgt8-eth0" Jun 21 04:46:06.236798 containerd[1744]: 2025-06-21 04:46:06.190 [INFO][4490] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6fdf09219732d007187cc053929851bf6f6befc7f34ca98607929eb7a34c1aad" HandleID="k8s-pod-network.6fdf09219732d007187cc053929851bf6f6befc7f34ca98607929eb7a34c1aad" Workload="ci--4372.0.0--a--c1262e9e80-k8s-goldmane--5bd85449d4--fpgt8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, 
Num6:0, HandleID:(*string)(0xc00024eff0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4372.0.0-a-c1262e9e80", "pod":"goldmane-5bd85449d4-fpgt8", "timestamp":"2025-06-21 04:46:06.189931996 +0000 UTC"}, Hostname:"ci-4372.0.0-a-c1262e9e80", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 21 04:46:06.236798 containerd[1744]: 2025-06-21 04:46:06.190 [INFO][4490] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 21 04:46:06.236798 containerd[1744]: 2025-06-21 04:46:06.190 [INFO][4490] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jun 21 04:46:06.236798 containerd[1744]: 2025-06-21 04:46:06.190 [INFO][4490] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372.0.0-a-c1262e9e80' Jun 21 04:46:06.236798 containerd[1744]: 2025-06-21 04:46:06.195 [INFO][4490] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6fdf09219732d007187cc053929851bf6f6befc7f34ca98607929eb7a34c1aad" host="ci-4372.0.0-a-c1262e9e80" Jun 21 04:46:06.236798 containerd[1744]: 2025-06-21 04:46:06.199 [INFO][4490] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372.0.0-a-c1262e9e80" Jun 21 04:46:06.236798 containerd[1744]: 2025-06-21 04:46:06.203 [INFO][4490] ipam/ipam.go 511: Trying affinity for 192.168.82.64/26 host="ci-4372.0.0-a-c1262e9e80" Jun 21 04:46:06.236798 containerd[1744]: 2025-06-21 04:46:06.205 [INFO][4490] ipam/ipam.go 158: Attempting to load block cidr=192.168.82.64/26 host="ci-4372.0.0-a-c1262e9e80" Jun 21 04:46:06.236798 containerd[1744]: 2025-06-21 04:46:06.207 [INFO][4490] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.82.64/26 host="ci-4372.0.0-a-c1262e9e80" Jun 21 04:46:06.237038 containerd[1744]: 2025-06-21 04:46:06.207 [INFO][4490] ipam/ipam.go 1220: Attempting to assign 1 addresses from 
block block=192.168.82.64/26 handle="k8s-pod-network.6fdf09219732d007187cc053929851bf6f6befc7f34ca98607929eb7a34c1aad" host="ci-4372.0.0-a-c1262e9e80" Jun 21 04:46:06.237038 containerd[1744]: 2025-06-21 04:46:06.208 [INFO][4490] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.6fdf09219732d007187cc053929851bf6f6befc7f34ca98607929eb7a34c1aad Jun 21 04:46:06.237038 containerd[1744]: 2025-06-21 04:46:06.211 [INFO][4490] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.82.64/26 handle="k8s-pod-network.6fdf09219732d007187cc053929851bf6f6befc7f34ca98607929eb7a34c1aad" host="ci-4372.0.0-a-c1262e9e80" Jun 21 04:46:06.237038 containerd[1744]: 2025-06-21 04:46:06.215 [INFO][4490] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.82.66/26] block=192.168.82.64/26 handle="k8s-pod-network.6fdf09219732d007187cc053929851bf6f6befc7f34ca98607929eb7a34c1aad" host="ci-4372.0.0-a-c1262e9e80" Jun 21 04:46:06.237038 containerd[1744]: 2025-06-21 04:46:06.215 [INFO][4490] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.82.66/26] handle="k8s-pod-network.6fdf09219732d007187cc053929851bf6f6befc7f34ca98607929eb7a34c1aad" host="ci-4372.0.0-a-c1262e9e80" Jun 21 04:46:06.237038 containerd[1744]: 2025-06-21 04:46:06.215 [INFO][4490] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jun 21 04:46:06.237038 containerd[1744]: 2025-06-21 04:46:06.215 [INFO][4490] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.82.66/26] IPv6=[] ContainerID="6fdf09219732d007187cc053929851bf6f6befc7f34ca98607929eb7a34c1aad" HandleID="k8s-pod-network.6fdf09219732d007187cc053929851bf6f6befc7f34ca98607929eb7a34c1aad" Workload="ci--4372.0.0--a--c1262e9e80-k8s-goldmane--5bd85449d4--fpgt8-eth0" Jun 21 04:46:06.237202 containerd[1744]: 2025-06-21 04:46:06.217 [INFO][4466] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6fdf09219732d007187cc053929851bf6f6befc7f34ca98607929eb7a34c1aad" Namespace="calico-system" Pod="goldmane-5bd85449d4-fpgt8" WorkloadEndpoint="ci--4372.0.0--a--c1262e9e80-k8s-goldmane--5bd85449d4--fpgt8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.0--a--c1262e9e80-k8s-goldmane--5bd85449d4--fpgt8-eth0", GenerateName:"goldmane-5bd85449d4-", Namespace:"calico-system", SelfLink:"", UID:"01f187bd-0f09-46c5-9589-33497fd2bdc3", ResourceVersion:"834", Generation:0, CreationTimestamp:time.Date(2025, time.June, 21, 4, 45, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5bd85449d4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.0-a-c1262e9e80", ContainerID:"", Pod:"goldmane-5bd85449d4-fpgt8", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.82.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.goldmane"}, InterfaceName:"cali1d5ac89c845", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 21 04:46:06.237202 containerd[1744]: 2025-06-21 04:46:06.217 [INFO][4466] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.82.66/32] ContainerID="6fdf09219732d007187cc053929851bf6f6befc7f34ca98607929eb7a34c1aad" Namespace="calico-system" Pod="goldmane-5bd85449d4-fpgt8" WorkloadEndpoint="ci--4372.0.0--a--c1262e9e80-k8s-goldmane--5bd85449d4--fpgt8-eth0" Jun 21 04:46:06.237304 containerd[1744]: 2025-06-21 04:46:06.217 [INFO][4466] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1d5ac89c845 ContainerID="6fdf09219732d007187cc053929851bf6f6befc7f34ca98607929eb7a34c1aad" Namespace="calico-system" Pod="goldmane-5bd85449d4-fpgt8" WorkloadEndpoint="ci--4372.0.0--a--c1262e9e80-k8s-goldmane--5bd85449d4--fpgt8-eth0" Jun 21 04:46:06.237304 containerd[1744]: 2025-06-21 04:46:06.222 [INFO][4466] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6fdf09219732d007187cc053929851bf6f6befc7f34ca98607929eb7a34c1aad" Namespace="calico-system" Pod="goldmane-5bd85449d4-fpgt8" WorkloadEndpoint="ci--4372.0.0--a--c1262e9e80-k8s-goldmane--5bd85449d4--fpgt8-eth0" Jun 21 04:46:06.237350 containerd[1744]: 2025-06-21 04:46:06.222 [INFO][4466] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6fdf09219732d007187cc053929851bf6f6befc7f34ca98607929eb7a34c1aad" Namespace="calico-system" Pod="goldmane-5bd85449d4-fpgt8" WorkloadEndpoint="ci--4372.0.0--a--c1262e9e80-k8s-goldmane--5bd85449d4--fpgt8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.0--a--c1262e9e80-k8s-goldmane--5bd85449d4--fpgt8-eth0", GenerateName:"goldmane-5bd85449d4-", Namespace:"calico-system", SelfLink:"", 
UID:"01f187bd-0f09-46c5-9589-33497fd2bdc3", ResourceVersion:"834", Generation:0, CreationTimestamp:time.Date(2025, time.June, 21, 4, 45, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5bd85449d4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.0-a-c1262e9e80", ContainerID:"6fdf09219732d007187cc053929851bf6f6befc7f34ca98607929eb7a34c1aad", Pod:"goldmane-5bd85449d4-fpgt8", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.82.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali1d5ac89c845", MAC:"4e:a9:b4:db:e7:fe", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 21 04:46:06.237405 containerd[1744]: 2025-06-21 04:46:06.235 [INFO][4466] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6fdf09219732d007187cc053929851bf6f6befc7f34ca98607929eb7a34c1aad" Namespace="calico-system" Pod="goldmane-5bd85449d4-fpgt8" WorkloadEndpoint="ci--4372.0.0--a--c1262e9e80-k8s-goldmane--5bd85449d4--fpgt8-eth0" Jun 21 04:46:06.274676 containerd[1744]: time="2025-06-21T04:46:06.274623205Z" level=info msg="connecting to shim 6fdf09219732d007187cc053929851bf6f6befc7f34ca98607929eb7a34c1aad" address="unix:///run/containerd/s/f5cc5e696310a6289793e690a6ed299e604b2a060413b40aa8b1c1110ef6ba13" namespace=k8s.io protocol=ttrpc version=3 Jun 21 04:46:06.298366 systemd[1]: Started 
cri-containerd-6fdf09219732d007187cc053929851bf6f6befc7f34ca98607929eb7a34c1aad.scope - libcontainer container 6fdf09219732d007187cc053929851bf6f6befc7f34ca98607929eb7a34c1aad. Jun 21 04:46:06.303927 kubelet[3171]: I0621 04:46:06.303906 3171 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 21 04:46:06.342520 systemd-networkd[1362]: calif967df4d8d4: Link UP Jun 21 04:46:06.344490 systemd-networkd[1362]: calif967df4d8d4: Gained carrier Jun 21 04:46:06.363836 containerd[1744]: 2025-06-21 04:46:06.158 [INFO][4476] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jun 21 04:46:06.363836 containerd[1744]: 2025-06-21 04:46:06.169 [INFO][4476] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372.0.0--a--c1262e9e80-k8s-calico--apiserver--768675b895--xb6qq-eth0 calico-apiserver-768675b895- calico-apiserver 868e6168-f734-44ed-91f7-c6242b05f8db 833 0 2025-06-21 04:45:40 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:768675b895 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4372.0.0-a-c1262e9e80 calico-apiserver-768675b895-xb6qq eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calif967df4d8d4 [] [] }} ContainerID="3365be22d494e993d73e5057046cb1f4d4bda43d55c63b18bc29ee0e6223d143" Namespace="calico-apiserver" Pod="calico-apiserver-768675b895-xb6qq" WorkloadEndpoint="ci--4372.0.0--a--c1262e9e80-k8s-calico--apiserver--768675b895--xb6qq-" Jun 21 04:46:06.363836 containerd[1744]: 2025-06-21 04:46:06.169 [INFO][4476] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3365be22d494e993d73e5057046cb1f4d4bda43d55c63b18bc29ee0e6223d143" Namespace="calico-apiserver" Pod="calico-apiserver-768675b895-xb6qq" 
WorkloadEndpoint="ci--4372.0.0--a--c1262e9e80-k8s-calico--apiserver--768675b895--xb6qq-eth0" Jun 21 04:46:06.363836 containerd[1744]: 2025-06-21 04:46:06.207 [INFO][4495] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3365be22d494e993d73e5057046cb1f4d4bda43d55c63b18bc29ee0e6223d143" HandleID="k8s-pod-network.3365be22d494e993d73e5057046cb1f4d4bda43d55c63b18bc29ee0e6223d143" Workload="ci--4372.0.0--a--c1262e9e80-k8s-calico--apiserver--768675b895--xb6qq-eth0" Jun 21 04:46:06.364025 containerd[1744]: 2025-06-21 04:46:06.207 [INFO][4495] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3365be22d494e993d73e5057046cb1f4d4bda43d55c63b18bc29ee0e6223d143" HandleID="k8s-pod-network.3365be22d494e993d73e5057046cb1f4d4bda43d55c63b18bc29ee0e6223d143" Workload="ci--4372.0.0--a--c1262e9e80-k8s-calico--apiserver--768675b895--xb6qq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c7020), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4372.0.0-a-c1262e9e80", "pod":"calico-apiserver-768675b895-xb6qq", "timestamp":"2025-06-21 04:46:06.207065565 +0000 UTC"}, Hostname:"ci-4372.0.0-a-c1262e9e80", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 21 04:46:06.364025 containerd[1744]: 2025-06-21 04:46:06.207 [INFO][4495] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 21 04:46:06.364025 containerd[1744]: 2025-06-21 04:46:06.215 [INFO][4495] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jun 21 04:46:06.364025 containerd[1744]: 2025-06-21 04:46:06.216 [INFO][4495] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372.0.0-a-c1262e9e80' Jun 21 04:46:06.364025 containerd[1744]: 2025-06-21 04:46:06.296 [INFO][4495] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3365be22d494e993d73e5057046cb1f4d4bda43d55c63b18bc29ee0e6223d143" host="ci-4372.0.0-a-c1262e9e80" Jun 21 04:46:06.364025 containerd[1744]: 2025-06-21 04:46:06.299 [INFO][4495] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372.0.0-a-c1262e9e80" Jun 21 04:46:06.364025 containerd[1744]: 2025-06-21 04:46:06.305 [INFO][4495] ipam/ipam.go 511: Trying affinity for 192.168.82.64/26 host="ci-4372.0.0-a-c1262e9e80" Jun 21 04:46:06.364025 containerd[1744]: 2025-06-21 04:46:06.308 [INFO][4495] ipam/ipam.go 158: Attempting to load block cidr=192.168.82.64/26 host="ci-4372.0.0-a-c1262e9e80" Jun 21 04:46:06.364025 containerd[1744]: 2025-06-21 04:46:06.311 [INFO][4495] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.82.64/26 host="ci-4372.0.0-a-c1262e9e80" Jun 21 04:46:06.364531 containerd[1744]: 2025-06-21 04:46:06.311 [INFO][4495] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.82.64/26 handle="k8s-pod-network.3365be22d494e993d73e5057046cb1f4d4bda43d55c63b18bc29ee0e6223d143" host="ci-4372.0.0-a-c1262e9e80" Jun 21 04:46:06.364531 containerd[1744]: 2025-06-21 04:46:06.313 [INFO][4495] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.3365be22d494e993d73e5057046cb1f4d4bda43d55c63b18bc29ee0e6223d143 Jun 21 04:46:06.364531 containerd[1744]: 2025-06-21 04:46:06.317 [INFO][4495] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.82.64/26 handle="k8s-pod-network.3365be22d494e993d73e5057046cb1f4d4bda43d55c63b18bc29ee0e6223d143" host="ci-4372.0.0-a-c1262e9e80" Jun 21 04:46:06.364531 containerd[1744]: 2025-06-21 04:46:06.335 [INFO][4495] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.82.67/26] block=192.168.82.64/26 handle="k8s-pod-network.3365be22d494e993d73e5057046cb1f4d4bda43d55c63b18bc29ee0e6223d143" host="ci-4372.0.0-a-c1262e9e80" Jun 21 04:46:06.364531 containerd[1744]: 2025-06-21 04:46:06.335 [INFO][4495] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.82.67/26] handle="k8s-pod-network.3365be22d494e993d73e5057046cb1f4d4bda43d55c63b18bc29ee0e6223d143" host="ci-4372.0.0-a-c1262e9e80" Jun 21 04:46:06.364531 containerd[1744]: 2025-06-21 04:46:06.335 [INFO][4495] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jun 21 04:46:06.364531 containerd[1744]: 2025-06-21 04:46:06.335 [INFO][4495] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.82.67/26] IPv6=[] ContainerID="3365be22d494e993d73e5057046cb1f4d4bda43d55c63b18bc29ee0e6223d143" HandleID="k8s-pod-network.3365be22d494e993d73e5057046cb1f4d4bda43d55c63b18bc29ee0e6223d143" Workload="ci--4372.0.0--a--c1262e9e80-k8s-calico--apiserver--768675b895--xb6qq-eth0" Jun 21 04:46:06.364669 containerd[1744]: 2025-06-21 04:46:06.339 [INFO][4476] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3365be22d494e993d73e5057046cb1f4d4bda43d55c63b18bc29ee0e6223d143" Namespace="calico-apiserver" Pod="calico-apiserver-768675b895-xb6qq" WorkloadEndpoint="ci--4372.0.0--a--c1262e9e80-k8s-calico--apiserver--768675b895--xb6qq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.0--a--c1262e9e80-k8s-calico--apiserver--768675b895--xb6qq-eth0", GenerateName:"calico-apiserver-768675b895-", Namespace:"calico-apiserver", SelfLink:"", UID:"868e6168-f734-44ed-91f7-c6242b05f8db", ResourceVersion:"833", Generation:0, CreationTimestamp:time.Date(2025, time.June, 21, 4, 45, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"768675b895", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.0-a-c1262e9e80", ContainerID:"", Pod:"calico-apiserver-768675b895-xb6qq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.82.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif967df4d8d4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 21 04:46:06.364735 containerd[1744]: 2025-06-21 04:46:06.339 [INFO][4476] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.82.67/32] ContainerID="3365be22d494e993d73e5057046cb1f4d4bda43d55c63b18bc29ee0e6223d143" Namespace="calico-apiserver" Pod="calico-apiserver-768675b895-xb6qq" WorkloadEndpoint="ci--4372.0.0--a--c1262e9e80-k8s-calico--apiserver--768675b895--xb6qq-eth0" Jun 21 04:46:06.364735 containerd[1744]: 2025-06-21 04:46:06.339 [INFO][4476] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif967df4d8d4 ContainerID="3365be22d494e993d73e5057046cb1f4d4bda43d55c63b18bc29ee0e6223d143" Namespace="calico-apiserver" Pod="calico-apiserver-768675b895-xb6qq" WorkloadEndpoint="ci--4372.0.0--a--c1262e9e80-k8s-calico--apiserver--768675b895--xb6qq-eth0" Jun 21 04:46:06.364735 containerd[1744]: 2025-06-21 04:46:06.341 [INFO][4476] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3365be22d494e993d73e5057046cb1f4d4bda43d55c63b18bc29ee0e6223d143" Namespace="calico-apiserver" Pod="calico-apiserver-768675b895-xb6qq" 
WorkloadEndpoint="ci--4372.0.0--a--c1262e9e80-k8s-calico--apiserver--768675b895--xb6qq-eth0" Jun 21 04:46:06.364801 containerd[1744]: 2025-06-21 04:46:06.341 [INFO][4476] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3365be22d494e993d73e5057046cb1f4d4bda43d55c63b18bc29ee0e6223d143" Namespace="calico-apiserver" Pod="calico-apiserver-768675b895-xb6qq" WorkloadEndpoint="ci--4372.0.0--a--c1262e9e80-k8s-calico--apiserver--768675b895--xb6qq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.0--a--c1262e9e80-k8s-calico--apiserver--768675b895--xb6qq-eth0", GenerateName:"calico-apiserver-768675b895-", Namespace:"calico-apiserver", SelfLink:"", UID:"868e6168-f734-44ed-91f7-c6242b05f8db", ResourceVersion:"833", Generation:0, CreationTimestamp:time.Date(2025, time.June, 21, 4, 45, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"768675b895", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.0-a-c1262e9e80", ContainerID:"3365be22d494e993d73e5057046cb1f4d4bda43d55c63b18bc29ee0e6223d143", Pod:"calico-apiserver-768675b895-xb6qq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.82.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif967df4d8d4", MAC:"be:21:20:7e:c4:39", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 21 04:46:06.364858 containerd[1744]: 2025-06-21 04:46:06.361 [INFO][4476] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3365be22d494e993d73e5057046cb1f4d4bda43d55c63b18bc29ee0e6223d143" Namespace="calico-apiserver" Pod="calico-apiserver-768675b895-xb6qq" WorkloadEndpoint="ci--4372.0.0--a--c1262e9e80-k8s-calico--apiserver--768675b895--xb6qq-eth0" Jun 21 04:46:06.372403 containerd[1744]: time="2025-06-21T04:46:06.372336912Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5bd85449d4-fpgt8,Uid:01f187bd-0f09-46c5-9589-33497fd2bdc3,Namespace:calico-system,Attempt:0,} returns sandbox id \"6fdf09219732d007187cc053929851bf6f6befc7f34ca98607929eb7a34c1aad\"" Jun 21 04:46:06.409355 containerd[1744]: time="2025-06-21T04:46:06.409283695Z" level=info msg="connecting to shim 3365be22d494e993d73e5057046cb1f4d4bda43d55c63b18bc29ee0e6223d143" address="unix:///run/containerd/s/5f784bf1b0b8a01a2d41918ea54842aa53fb588d5305b8f13f755ad9fd384410" namespace=k8s.io protocol=ttrpc version=3 Jun 21 04:46:06.431371 systemd[1]: Started cri-containerd-3365be22d494e993d73e5057046cb1f4d4bda43d55c63b18bc29ee0e6223d143.scope - libcontainer container 3365be22d494e993d73e5057046cb1f4d4bda43d55c63b18bc29ee0e6223d143. 
Jun 21 04:46:06.478363 containerd[1744]: time="2025-06-21T04:46:06.478341774Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-768675b895-xb6qq,Uid:868e6168-f734-44ed-91f7-c6242b05f8db,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"3365be22d494e993d73e5057046cb1f4d4bda43d55c63b18bc29ee0e6223d143\"" Jun 21 04:46:06.901311 containerd[1744]: time="2025-06-21T04:46:06.901277748Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 04:46:06.904181 containerd[1744]: time="2025-06-21T04:46:06.904156497Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.1: active requests=0, bytes read=33086345" Jun 21 04:46:06.907574 containerd[1744]: time="2025-06-21T04:46:06.907548041Z" level=info msg="ImageCreate event name:\"sha256:a8d73c8fd22b3a7a28e9baab63169fb459bc504d71d871f96225c4f2d5e660a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 04:46:06.912295 containerd[1744]: time="2025-06-21T04:46:06.912261364Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:4b8bcb8b4fc05026ba811bf0b25b736086c1b8b26a83a9039a84dd3a06b06bd4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 04:46:06.912942 containerd[1744]: time="2025-06-21T04:46:06.912917419Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.1\" with image id \"sha256:a8d73c8fd22b3a7a28e9baab63169fb459bc504d71d871f96225c4f2d5e660a5\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:4b8bcb8b4fc05026ba811bf0b25b736086c1b8b26a83a9039a84dd3a06b06bd4\", size \"33086175\" in 2.330399515s" Jun 21 04:46:06.913004 containerd[1744]: time="2025-06-21T04:46:06.912947892Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.1\" returns image reference 
\"sha256:a8d73c8fd22b3a7a28e9baab63169fb459bc504d71d871f96225c4f2d5e660a5\"" Jun 21 04:46:06.914581 containerd[1744]: time="2025-06-21T04:46:06.914249418Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.1\"" Jun 21 04:46:06.916629 containerd[1744]: time="2025-06-21T04:46:06.916605827Z" level=info msg="CreateContainer within sandbox \"a0d0ae9bb589b967fc1754350f14e14b8237ea65ffa1621c5a16f63cbdfd0659\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Jun 21 04:46:06.932491 containerd[1744]: time="2025-06-21T04:46:06.932460229Z" level=info msg="Container cd09ada3de53f033c13003cd7ba7d638ae97bf77819112995caa995c9ce5b923: CDI devices from CRI Config.CDIDevices: []" Jun 21 04:46:06.947327 containerd[1744]: time="2025-06-21T04:46:06.947282635Z" level=info msg="CreateContainer within sandbox \"a0d0ae9bb589b967fc1754350f14e14b8237ea65ffa1621c5a16f63cbdfd0659\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"cd09ada3de53f033c13003cd7ba7d638ae97bf77819112995caa995c9ce5b923\"" Jun 21 04:46:06.947820 containerd[1744]: time="2025-06-21T04:46:06.947748728Z" level=info msg="StartContainer for \"cd09ada3de53f033c13003cd7ba7d638ae97bf77819112995caa995c9ce5b923\"" Jun 21 04:46:06.948842 containerd[1744]: time="2025-06-21T04:46:06.948817472Z" level=info msg="connecting to shim cd09ada3de53f033c13003cd7ba7d638ae97bf77819112995caa995c9ce5b923" address="unix:///run/containerd/s/9291197f2cabc5b8453c10888844e385880d2c649155aef6a741e985ea41a43b" protocol=ttrpc version=3 Jun 21 04:46:06.974285 systemd[1]: Started cri-containerd-cd09ada3de53f033c13003cd7ba7d638ae97bf77819112995caa995c9ce5b923.scope - libcontainer container cd09ada3de53f033c13003cd7ba7d638ae97bf77819112995caa995c9ce5b923. 
Jun 21 04:46:07.026663 containerd[1744]: time="2025-06-21T04:46:07.026568317Z" level=info msg="StartContainer for \"cd09ada3de53f033c13003cd7ba7d638ae97bf77819112995caa995c9ce5b923\" returns successfully" Jun 21 04:46:07.114303 containerd[1744]: time="2025-06-21T04:46:07.113711427Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-567d4b869c-qhpx9,Uid:0202b89e-2f60-4b65-b532-8b4d6e514c14,Namespace:calico-system,Attempt:0,}" Jun 21 04:46:07.114303 containerd[1744]: time="2025-06-21T04:46:07.114077493Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-768675b895-x9fxb,Uid:98ef1438-f351-4a64-a0d7-db96dac74994,Namespace:calico-apiserver,Attempt:0,}" Jun 21 04:46:07.114303 containerd[1744]: time="2025-06-21T04:46:07.114206353Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qg8wm,Uid:315099f2-eb90-4232-abe6-6ea3b8311656,Namespace:kube-system,Attempt:0,}" Jun 21 04:46:07.128045 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1066936984.mount: Deactivated successfully. 
Jun 21 04:46:07.174272 systemd-networkd[1362]: vxlan.calico: Link UP Jun 21 04:46:07.174276 systemd-networkd[1362]: vxlan.calico: Gained carrier Jun 21 04:46:07.270277 kubelet[3171]: I0621 04:46:07.270231 3171 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-776d4dccf9-xzldh" podStartSLOduration=1.441041986 podStartE2EDuration="5.27021351s" podCreationTimestamp="2025-06-21 04:46:02 +0000 UTC" firstStartedPulling="2025-06-21 04:46:03.084850082 +0000 UTC m=+33.067724938" lastFinishedPulling="2025-06-21 04:46:06.914021609 +0000 UTC m=+36.896896462" observedRunningTime="2025-06-21 04:46:07.269698499 +0000 UTC m=+37.252573358" watchObservedRunningTime="2025-06-21 04:46:07.27021351 +0000 UTC m=+37.253088367" Jun 21 04:46:07.339635 systemd-networkd[1362]: cali541de2b2023: Link UP Jun 21 04:46:07.340578 systemd-networkd[1362]: cali541de2b2023: Gained carrier Jun 21 04:46:07.359258 containerd[1744]: 2025-06-21 04:46:07.201 [INFO][4716] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372.0.0--a--c1262e9e80-k8s-calico--kube--controllers--567d4b869c--qhpx9-eth0 calico-kube-controllers-567d4b869c- calico-system 0202b89e-2f60-4b65-b532-8b4d6e514c14 826 0 2025-06-21 04:45:43 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:567d4b869c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4372.0.0-a-c1262e9e80 calico-kube-controllers-567d4b869c-qhpx9 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali541de2b2023 [] [] }} ContainerID="e5f2eec2b02831bb3aecade8e4c4ccec15fa195223cef9750b42e864df2253e6" Namespace="calico-system" Pod="calico-kube-controllers-567d4b869c-qhpx9" 
WorkloadEndpoint="ci--4372.0.0--a--c1262e9e80-k8s-calico--kube--controllers--567d4b869c--qhpx9-" Jun 21 04:46:07.359258 containerd[1744]: 2025-06-21 04:46:07.202 [INFO][4716] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e5f2eec2b02831bb3aecade8e4c4ccec15fa195223cef9750b42e864df2253e6" Namespace="calico-system" Pod="calico-kube-controllers-567d4b869c-qhpx9" WorkloadEndpoint="ci--4372.0.0--a--c1262e9e80-k8s-calico--kube--controllers--567d4b869c--qhpx9-eth0" Jun 21 04:46:07.359258 containerd[1744]: 2025-06-21 04:46:07.275 [INFO][4762] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e5f2eec2b02831bb3aecade8e4c4ccec15fa195223cef9750b42e864df2253e6" HandleID="k8s-pod-network.e5f2eec2b02831bb3aecade8e4c4ccec15fa195223cef9750b42e864df2253e6" Workload="ci--4372.0.0--a--c1262e9e80-k8s-calico--kube--controllers--567d4b869c--qhpx9-eth0" Jun 21 04:46:07.360100 containerd[1744]: 2025-06-21 04:46:07.276 [INFO][4762] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e5f2eec2b02831bb3aecade8e4c4ccec15fa195223cef9750b42e864df2253e6" HandleID="k8s-pod-network.e5f2eec2b02831bb3aecade8e4c4ccec15fa195223cef9750b42e864df2253e6" Workload="ci--4372.0.0--a--c1262e9e80-k8s-calico--kube--controllers--567d4b869c--qhpx9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003ad330), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4372.0.0-a-c1262e9e80", "pod":"calico-kube-controllers-567d4b869c-qhpx9", "timestamp":"2025-06-21 04:46:07.275918584 +0000 UTC"}, Hostname:"ci-4372.0.0-a-c1262e9e80", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 21 04:46:07.360100 containerd[1744]: 2025-06-21 04:46:07.276 [INFO][4762] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jun 21 04:46:07.360100 containerd[1744]: 2025-06-21 04:46:07.276 [INFO][4762] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jun 21 04:46:07.360100 containerd[1744]: 2025-06-21 04:46:07.276 [INFO][4762] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372.0.0-a-c1262e9e80' Jun 21 04:46:07.360100 containerd[1744]: 2025-06-21 04:46:07.286 [INFO][4762] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e5f2eec2b02831bb3aecade8e4c4ccec15fa195223cef9750b42e864df2253e6" host="ci-4372.0.0-a-c1262e9e80" Jun 21 04:46:07.360100 containerd[1744]: 2025-06-21 04:46:07.296 [INFO][4762] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372.0.0-a-c1262e9e80" Jun 21 04:46:07.360100 containerd[1744]: 2025-06-21 04:46:07.301 [INFO][4762] ipam/ipam.go 511: Trying affinity for 192.168.82.64/26 host="ci-4372.0.0-a-c1262e9e80" Jun 21 04:46:07.360100 containerd[1744]: 2025-06-21 04:46:07.309 [INFO][4762] ipam/ipam.go 158: Attempting to load block cidr=192.168.82.64/26 host="ci-4372.0.0-a-c1262e9e80" Jun 21 04:46:07.360100 containerd[1744]: 2025-06-21 04:46:07.312 [INFO][4762] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.82.64/26 host="ci-4372.0.0-a-c1262e9e80" Jun 21 04:46:07.360539 containerd[1744]: 2025-06-21 04:46:07.312 [INFO][4762] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.82.64/26 handle="k8s-pod-network.e5f2eec2b02831bb3aecade8e4c4ccec15fa195223cef9750b42e864df2253e6" host="ci-4372.0.0-a-c1262e9e80" Jun 21 04:46:07.360539 containerd[1744]: 2025-06-21 04:46:07.314 [INFO][4762] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.e5f2eec2b02831bb3aecade8e4c4ccec15fa195223cef9750b42e864df2253e6 Jun 21 04:46:07.360539 containerd[1744]: 2025-06-21 04:46:07.320 [INFO][4762] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.82.64/26 handle="k8s-pod-network.e5f2eec2b02831bb3aecade8e4c4ccec15fa195223cef9750b42e864df2253e6" 
host="ci-4372.0.0-a-c1262e9e80" Jun 21 04:46:07.360539 containerd[1744]: 2025-06-21 04:46:07.328 [INFO][4762] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.82.68/26] block=192.168.82.64/26 handle="k8s-pod-network.e5f2eec2b02831bb3aecade8e4c4ccec15fa195223cef9750b42e864df2253e6" host="ci-4372.0.0-a-c1262e9e80" Jun 21 04:46:07.360539 containerd[1744]: 2025-06-21 04:46:07.328 [INFO][4762] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.82.68/26] handle="k8s-pod-network.e5f2eec2b02831bb3aecade8e4c4ccec15fa195223cef9750b42e864df2253e6" host="ci-4372.0.0-a-c1262e9e80" Jun 21 04:46:07.360539 containerd[1744]: 2025-06-21 04:46:07.328 [INFO][4762] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jun 21 04:46:07.360539 containerd[1744]: 2025-06-21 04:46:07.328 [INFO][4762] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.82.68/26] IPv6=[] ContainerID="e5f2eec2b02831bb3aecade8e4c4ccec15fa195223cef9750b42e864df2253e6" HandleID="k8s-pod-network.e5f2eec2b02831bb3aecade8e4c4ccec15fa195223cef9750b42e864df2253e6" Workload="ci--4372.0.0--a--c1262e9e80-k8s-calico--kube--controllers--567d4b869c--qhpx9-eth0" Jun 21 04:46:07.360681 containerd[1744]: 2025-06-21 04:46:07.331 [INFO][4716] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e5f2eec2b02831bb3aecade8e4c4ccec15fa195223cef9750b42e864df2253e6" Namespace="calico-system" Pod="calico-kube-controllers-567d4b869c-qhpx9" WorkloadEndpoint="ci--4372.0.0--a--c1262e9e80-k8s-calico--kube--controllers--567d4b869c--qhpx9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.0--a--c1262e9e80-k8s-calico--kube--controllers--567d4b869c--qhpx9-eth0", GenerateName:"calico-kube-controllers-567d4b869c-", Namespace:"calico-system", SelfLink:"", UID:"0202b89e-2f60-4b65-b532-8b4d6e514c14", ResourceVersion:"826", Generation:0, CreationTimestamp:time.Date(2025, time.June, 21, 4, 45, 43, 0, 
time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"567d4b869c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.0-a-c1262e9e80", ContainerID:"", Pod:"calico-kube-controllers-567d4b869c-qhpx9", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.82.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali541de2b2023", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 21 04:46:07.360751 containerd[1744]: 2025-06-21 04:46:07.331 [INFO][4716] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.82.68/32] ContainerID="e5f2eec2b02831bb3aecade8e4c4ccec15fa195223cef9750b42e864df2253e6" Namespace="calico-system" Pod="calico-kube-controllers-567d4b869c-qhpx9" WorkloadEndpoint="ci--4372.0.0--a--c1262e9e80-k8s-calico--kube--controllers--567d4b869c--qhpx9-eth0" Jun 21 04:46:07.360751 containerd[1744]: 2025-06-21 04:46:07.331 [INFO][4716] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali541de2b2023 ContainerID="e5f2eec2b02831bb3aecade8e4c4ccec15fa195223cef9750b42e864df2253e6" Namespace="calico-system" Pod="calico-kube-controllers-567d4b869c-qhpx9" WorkloadEndpoint="ci--4372.0.0--a--c1262e9e80-k8s-calico--kube--controllers--567d4b869c--qhpx9-eth0" Jun 21 04:46:07.360751 containerd[1744]: 2025-06-21 04:46:07.341 [INFO][4716] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e5f2eec2b02831bb3aecade8e4c4ccec15fa195223cef9750b42e864df2253e6" Namespace="calico-system" Pod="calico-kube-controllers-567d4b869c-qhpx9" WorkloadEndpoint="ci--4372.0.0--a--c1262e9e80-k8s-calico--kube--controllers--567d4b869c--qhpx9-eth0" Jun 21 04:46:07.361404 containerd[1744]: 2025-06-21 04:46:07.341 [INFO][4716] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e5f2eec2b02831bb3aecade8e4c4ccec15fa195223cef9750b42e864df2253e6" Namespace="calico-system" Pod="calico-kube-controllers-567d4b869c-qhpx9" WorkloadEndpoint="ci--4372.0.0--a--c1262e9e80-k8s-calico--kube--controllers--567d4b869c--qhpx9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.0--a--c1262e9e80-k8s-calico--kube--controllers--567d4b869c--qhpx9-eth0", GenerateName:"calico-kube-controllers-567d4b869c-", Namespace:"calico-system", SelfLink:"", UID:"0202b89e-2f60-4b65-b532-8b4d6e514c14", ResourceVersion:"826", Generation:0, CreationTimestamp:time.Date(2025, time.June, 21, 4, 45, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"567d4b869c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.0-a-c1262e9e80", ContainerID:"e5f2eec2b02831bb3aecade8e4c4ccec15fa195223cef9750b42e864df2253e6", Pod:"calico-kube-controllers-567d4b869c-qhpx9", Endpoint:"eth0", 
ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.82.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali541de2b2023", MAC:"32:26:b1:07:a0:97", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 21 04:46:07.361485 containerd[1744]: 2025-06-21 04:46:07.357 [INFO][4716] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e5f2eec2b02831bb3aecade8e4c4ccec15fa195223cef9750b42e864df2253e6" Namespace="calico-system" Pod="calico-kube-controllers-567d4b869c-qhpx9" WorkloadEndpoint="ci--4372.0.0--a--c1262e9e80-k8s-calico--kube--controllers--567d4b869c--qhpx9-eth0" Jun 21 04:46:07.392725 containerd[1744]: time="2025-06-21T04:46:07.392693307Z" level=info msg="connecting to shim e5f2eec2b02831bb3aecade8e4c4ccec15fa195223cef9750b42e864df2253e6" address="unix:///run/containerd/s/886adba9407feef55b1d2a1c5d288c0834b2573caf9f803ed1c915af9a4a005c" namespace=k8s.io protocol=ttrpc version=3 Jun 21 04:46:07.424257 systemd[1]: Started cri-containerd-e5f2eec2b02831bb3aecade8e4c4ccec15fa195223cef9750b42e864df2253e6.scope - libcontainer container e5f2eec2b02831bb3aecade8e4c4ccec15fa195223cef9750b42e864df2253e6. 
Jun 21 04:46:07.437404 systemd-networkd[1362]: cali931755efcae: Link UP Jun 21 04:46:07.438356 systemd-networkd[1362]: cali931755efcae: Gained carrier Jun 21 04:46:07.454551 containerd[1744]: 2025-06-21 04:46:07.237 [INFO][4736] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372.0.0--a--c1262e9e80-k8s-calico--apiserver--768675b895--x9fxb-eth0 calico-apiserver-768675b895- calico-apiserver 98ef1438-f351-4a64-a0d7-db96dac74994 836 0 2025-06-21 04:45:40 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:768675b895 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4372.0.0-a-c1262e9e80 calico-apiserver-768675b895-x9fxb eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali931755efcae [] [] }} ContainerID="4b24de3fcafdd067b9d3662383b8327ca7dea951afaf2c4bd6c554549d050544" Namespace="calico-apiserver" Pod="calico-apiserver-768675b895-x9fxb" WorkloadEndpoint="ci--4372.0.0--a--c1262e9e80-k8s-calico--apiserver--768675b895--x9fxb-" Jun 21 04:46:07.454551 containerd[1744]: 2025-06-21 04:46:07.237 [INFO][4736] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4b24de3fcafdd067b9d3662383b8327ca7dea951afaf2c4bd6c554549d050544" Namespace="calico-apiserver" Pod="calico-apiserver-768675b895-x9fxb" WorkloadEndpoint="ci--4372.0.0--a--c1262e9e80-k8s-calico--apiserver--768675b895--x9fxb-eth0" Jun 21 04:46:07.454551 containerd[1744]: 2025-06-21 04:46:07.314 [INFO][4778] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4b24de3fcafdd067b9d3662383b8327ca7dea951afaf2c4bd6c554549d050544" HandleID="k8s-pod-network.4b24de3fcafdd067b9d3662383b8327ca7dea951afaf2c4bd6c554549d050544" Workload="ci--4372.0.0--a--c1262e9e80-k8s-calico--apiserver--768675b895--x9fxb-eth0" Jun 21 
04:46:07.454951 containerd[1744]: 2025-06-21 04:46:07.315 [INFO][4778] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4b24de3fcafdd067b9d3662383b8327ca7dea951afaf2c4bd6c554549d050544" HandleID="k8s-pod-network.4b24de3fcafdd067b9d3662383b8327ca7dea951afaf2c4bd6c554549d050544" Workload="ci--4372.0.0--a--c1262e9e80-k8s-calico--apiserver--768675b895--x9fxb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cb9f0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4372.0.0-a-c1262e9e80", "pod":"calico-apiserver-768675b895-x9fxb", "timestamp":"2025-06-21 04:46:07.314867524 +0000 UTC"}, Hostname:"ci-4372.0.0-a-c1262e9e80", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 21 04:46:07.454951 containerd[1744]: 2025-06-21 04:46:07.315 [INFO][4778] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 21 04:46:07.454951 containerd[1744]: 2025-06-21 04:46:07.328 [INFO][4778] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jun 21 04:46:07.454951 containerd[1744]: 2025-06-21 04:46:07.328 [INFO][4778] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372.0.0-a-c1262e9e80' Jun 21 04:46:07.454951 containerd[1744]: 2025-06-21 04:46:07.387 [INFO][4778] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4b24de3fcafdd067b9d3662383b8327ca7dea951afaf2c4bd6c554549d050544" host="ci-4372.0.0-a-c1262e9e80" Jun 21 04:46:07.454951 containerd[1744]: 2025-06-21 04:46:07.396 [INFO][4778] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372.0.0-a-c1262e9e80" Jun 21 04:46:07.454951 containerd[1744]: 2025-06-21 04:46:07.405 [INFO][4778] ipam/ipam.go 511: Trying affinity for 192.168.82.64/26 host="ci-4372.0.0-a-c1262e9e80" Jun 21 04:46:07.454951 containerd[1744]: 2025-06-21 04:46:07.407 [INFO][4778] ipam/ipam.go 158: Attempting to load block cidr=192.168.82.64/26 host="ci-4372.0.0-a-c1262e9e80" Jun 21 04:46:07.454951 containerd[1744]: 2025-06-21 04:46:07.408 [INFO][4778] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.82.64/26 host="ci-4372.0.0-a-c1262e9e80" Jun 21 04:46:07.455203 containerd[1744]: 2025-06-21 04:46:07.408 [INFO][4778] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.82.64/26 handle="k8s-pod-network.4b24de3fcafdd067b9d3662383b8327ca7dea951afaf2c4bd6c554549d050544" host="ci-4372.0.0-a-c1262e9e80" Jun 21 04:46:07.455203 containerd[1744]: 2025-06-21 04:46:07.416 [INFO][4778] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.4b24de3fcafdd067b9d3662383b8327ca7dea951afaf2c4bd6c554549d050544 Jun 21 04:46:07.455203 containerd[1744]: 2025-06-21 04:46:07.422 [INFO][4778] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.82.64/26 handle="k8s-pod-network.4b24de3fcafdd067b9d3662383b8327ca7dea951afaf2c4bd6c554549d050544" host="ci-4372.0.0-a-c1262e9e80" Jun 21 04:46:07.455203 containerd[1744]: 2025-06-21 04:46:07.431 [INFO][4778] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.82.69/26] block=192.168.82.64/26 handle="k8s-pod-network.4b24de3fcafdd067b9d3662383b8327ca7dea951afaf2c4bd6c554549d050544" host="ci-4372.0.0-a-c1262e9e80" Jun 21 04:46:07.455203 containerd[1744]: 2025-06-21 04:46:07.432 [INFO][4778] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.82.69/26] handle="k8s-pod-network.4b24de3fcafdd067b9d3662383b8327ca7dea951afaf2c4bd6c554549d050544" host="ci-4372.0.0-a-c1262e9e80" Jun 21 04:46:07.455203 containerd[1744]: 2025-06-21 04:46:07.432 [INFO][4778] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jun 21 04:46:07.455203 containerd[1744]: 2025-06-21 04:46:07.432 [INFO][4778] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.82.69/26] IPv6=[] ContainerID="4b24de3fcafdd067b9d3662383b8327ca7dea951afaf2c4bd6c554549d050544" HandleID="k8s-pod-network.4b24de3fcafdd067b9d3662383b8327ca7dea951afaf2c4bd6c554549d050544" Workload="ci--4372.0.0--a--c1262e9e80-k8s-calico--apiserver--768675b895--x9fxb-eth0" Jun 21 04:46:07.455371 containerd[1744]: 2025-06-21 04:46:07.435 [INFO][4736] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4b24de3fcafdd067b9d3662383b8327ca7dea951afaf2c4bd6c554549d050544" Namespace="calico-apiserver" Pod="calico-apiserver-768675b895-x9fxb" WorkloadEndpoint="ci--4372.0.0--a--c1262e9e80-k8s-calico--apiserver--768675b895--x9fxb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.0--a--c1262e9e80-k8s-calico--apiserver--768675b895--x9fxb-eth0", GenerateName:"calico-apiserver-768675b895-", Namespace:"calico-apiserver", SelfLink:"", UID:"98ef1438-f351-4a64-a0d7-db96dac74994", ResourceVersion:"836", Generation:0, CreationTimestamp:time.Date(2025, time.June, 21, 4, 45, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"768675b895", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.0-a-c1262e9e80", ContainerID:"", Pod:"calico-apiserver-768675b895-x9fxb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.82.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali931755efcae", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 21 04:46:07.455434 containerd[1744]: 2025-06-21 04:46:07.436 [INFO][4736] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.82.69/32] ContainerID="4b24de3fcafdd067b9d3662383b8327ca7dea951afaf2c4bd6c554549d050544" Namespace="calico-apiserver" Pod="calico-apiserver-768675b895-x9fxb" WorkloadEndpoint="ci--4372.0.0--a--c1262e9e80-k8s-calico--apiserver--768675b895--x9fxb-eth0" Jun 21 04:46:07.455434 containerd[1744]: 2025-06-21 04:46:07.436 [INFO][4736] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali931755efcae ContainerID="4b24de3fcafdd067b9d3662383b8327ca7dea951afaf2c4bd6c554549d050544" Namespace="calico-apiserver" Pod="calico-apiserver-768675b895-x9fxb" WorkloadEndpoint="ci--4372.0.0--a--c1262e9e80-k8s-calico--apiserver--768675b895--x9fxb-eth0" Jun 21 04:46:07.455434 containerd[1744]: 2025-06-21 04:46:07.438 [INFO][4736] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4b24de3fcafdd067b9d3662383b8327ca7dea951afaf2c4bd6c554549d050544" Namespace="calico-apiserver" Pod="calico-apiserver-768675b895-x9fxb" 
WorkloadEndpoint="ci--4372.0.0--a--c1262e9e80-k8s-calico--apiserver--768675b895--x9fxb-eth0" Jun 21 04:46:07.455508 containerd[1744]: 2025-06-21 04:46:07.439 [INFO][4736] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4b24de3fcafdd067b9d3662383b8327ca7dea951afaf2c4bd6c554549d050544" Namespace="calico-apiserver" Pod="calico-apiserver-768675b895-x9fxb" WorkloadEndpoint="ci--4372.0.0--a--c1262e9e80-k8s-calico--apiserver--768675b895--x9fxb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.0--a--c1262e9e80-k8s-calico--apiserver--768675b895--x9fxb-eth0", GenerateName:"calico-apiserver-768675b895-", Namespace:"calico-apiserver", SelfLink:"", UID:"98ef1438-f351-4a64-a0d7-db96dac74994", ResourceVersion:"836", Generation:0, CreationTimestamp:time.Date(2025, time.June, 21, 4, 45, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"768675b895", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.0-a-c1262e9e80", ContainerID:"4b24de3fcafdd067b9d3662383b8327ca7dea951afaf2c4bd6c554549d050544", Pod:"calico-apiserver-768675b895-x9fxb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.82.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali931755efcae", MAC:"26:08:2a:32:0e:b1", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 21 04:46:07.455565 containerd[1744]: 2025-06-21 04:46:07.450 [INFO][4736] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4b24de3fcafdd067b9d3662383b8327ca7dea951afaf2c4bd6c554549d050544" Namespace="calico-apiserver" Pod="calico-apiserver-768675b895-x9fxb" WorkloadEndpoint="ci--4372.0.0--a--c1262e9e80-k8s-calico--apiserver--768675b895--x9fxb-eth0" Jun 21 04:46:07.512421 containerd[1744]: time="2025-06-21T04:46:07.512388149Z" level=info msg="connecting to shim 4b24de3fcafdd067b9d3662383b8327ca7dea951afaf2c4bd6c554549d050544" address="unix:///run/containerd/s/24694f636c7710cdf811ee3259bc1882845adb2160531c410812a1ed9f57e6c7" namespace=k8s.io protocol=ttrpc version=3 Jun 21 04:46:07.526691 containerd[1744]: time="2025-06-21T04:46:07.526665403Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-567d4b869c-qhpx9,Uid:0202b89e-2f60-4b65-b532-8b4d6e514c14,Namespace:calico-system,Attempt:0,} returns sandbox id \"e5f2eec2b02831bb3aecade8e4c4ccec15fa195223cef9750b42e864df2253e6\"" Jun 21 04:46:07.538296 systemd[1]: Started cri-containerd-4b24de3fcafdd067b9d3662383b8327ca7dea951afaf2c4bd6c554549d050544.scope - libcontainer container 4b24de3fcafdd067b9d3662383b8327ca7dea951afaf2c4bd6c554549d050544. 
Jun 21 04:46:07.555059 systemd-networkd[1362]: cali2552db9b082: Link UP Jun 21 04:46:07.555905 systemd-networkd[1362]: cali2552db9b082: Gained carrier Jun 21 04:46:07.574688 containerd[1744]: 2025-06-21 04:46:07.242 [INFO][4726] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372.0.0--a--c1262e9e80-k8s-coredns--668d6bf9bc--qg8wm-eth0 coredns-668d6bf9bc- kube-system 315099f2-eb90-4232-abe6-6ea3b8311656 835 0 2025-06-21 04:45:30 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4372.0.0-a-c1262e9e80 coredns-668d6bf9bc-qg8wm eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali2552db9b082 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="3681c228ec24171a8d649293ae542c9cf5bb52bf9281ae9f070924fa0868c2ce" Namespace="kube-system" Pod="coredns-668d6bf9bc-qg8wm" WorkloadEndpoint="ci--4372.0.0--a--c1262e9e80-k8s-coredns--668d6bf9bc--qg8wm-" Jun 21 04:46:07.574688 containerd[1744]: 2025-06-21 04:46:07.242 [INFO][4726] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3681c228ec24171a8d649293ae542c9cf5bb52bf9281ae9f070924fa0868c2ce" Namespace="kube-system" Pod="coredns-668d6bf9bc-qg8wm" WorkloadEndpoint="ci--4372.0.0--a--c1262e9e80-k8s-coredns--668d6bf9bc--qg8wm-eth0" Jun 21 04:46:07.574688 containerd[1744]: 2025-06-21 04:46:07.316 [INFO][4780] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3681c228ec24171a8d649293ae542c9cf5bb52bf9281ae9f070924fa0868c2ce" HandleID="k8s-pod-network.3681c228ec24171a8d649293ae542c9cf5bb52bf9281ae9f070924fa0868c2ce" Workload="ci--4372.0.0--a--c1262e9e80-k8s-coredns--668d6bf9bc--qg8wm-eth0" Jun 21 04:46:07.574834 containerd[1744]: 2025-06-21 04:46:07.316 [INFO][4780] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="3681c228ec24171a8d649293ae542c9cf5bb52bf9281ae9f070924fa0868c2ce" HandleID="k8s-pod-network.3681c228ec24171a8d649293ae542c9cf5bb52bf9281ae9f070924fa0868c2ce" Workload="ci--4372.0.0--a--c1262e9e80-k8s-coredns--668d6bf9bc--qg8wm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004fd70), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4372.0.0-a-c1262e9e80", "pod":"coredns-668d6bf9bc-qg8wm", "timestamp":"2025-06-21 04:46:07.316244995 +0000 UTC"}, Hostname:"ci-4372.0.0-a-c1262e9e80", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 21 04:46:07.574834 containerd[1744]: 2025-06-21 04:46:07.316 [INFO][4780] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 21 04:46:07.574834 containerd[1744]: 2025-06-21 04:46:07.432 [INFO][4780] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jun 21 04:46:07.574834 containerd[1744]: 2025-06-21 04:46:07.432 [INFO][4780] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372.0.0-a-c1262e9e80' Jun 21 04:46:07.574834 containerd[1744]: 2025-06-21 04:46:07.486 [INFO][4780] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3681c228ec24171a8d649293ae542c9cf5bb52bf9281ae9f070924fa0868c2ce" host="ci-4372.0.0-a-c1262e9e80" Jun 21 04:46:07.574834 containerd[1744]: 2025-06-21 04:46:07.497 [INFO][4780] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372.0.0-a-c1262e9e80" Jun 21 04:46:07.574834 containerd[1744]: 2025-06-21 04:46:07.505 [INFO][4780] ipam/ipam.go 511: Trying affinity for 192.168.82.64/26 host="ci-4372.0.0-a-c1262e9e80" Jun 21 04:46:07.574834 containerd[1744]: 2025-06-21 04:46:07.508 [INFO][4780] ipam/ipam.go 158: Attempting to load block cidr=192.168.82.64/26 host="ci-4372.0.0-a-c1262e9e80" Jun 21 04:46:07.574834 containerd[1744]: 2025-06-21 04:46:07.511 [INFO][4780] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.82.64/26 host="ci-4372.0.0-a-c1262e9e80" Jun 21 04:46:07.575032 containerd[1744]: 2025-06-21 04:46:07.511 [INFO][4780] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.82.64/26 handle="k8s-pod-network.3681c228ec24171a8d649293ae542c9cf5bb52bf9281ae9f070924fa0868c2ce" host="ci-4372.0.0-a-c1262e9e80" Jun 21 04:46:07.575032 containerd[1744]: 2025-06-21 04:46:07.513 [INFO][4780] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.3681c228ec24171a8d649293ae542c9cf5bb52bf9281ae9f070924fa0868c2ce Jun 21 04:46:07.575032 containerd[1744]: 2025-06-21 04:46:07.525 [INFO][4780] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.82.64/26 handle="k8s-pod-network.3681c228ec24171a8d649293ae542c9cf5bb52bf9281ae9f070924fa0868c2ce" host="ci-4372.0.0-a-c1262e9e80" Jun 21 04:46:07.575032 containerd[1744]: 2025-06-21 04:46:07.547 [INFO][4780] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.82.70/26] block=192.168.82.64/26 handle="k8s-pod-network.3681c228ec24171a8d649293ae542c9cf5bb52bf9281ae9f070924fa0868c2ce" host="ci-4372.0.0-a-c1262e9e80" Jun 21 04:46:07.575032 containerd[1744]: 2025-06-21 04:46:07.547 [INFO][4780] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.82.70/26] handle="k8s-pod-network.3681c228ec24171a8d649293ae542c9cf5bb52bf9281ae9f070924fa0868c2ce" host="ci-4372.0.0-a-c1262e9e80" Jun 21 04:46:07.575032 containerd[1744]: 2025-06-21 04:46:07.547 [INFO][4780] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jun 21 04:46:07.575032 containerd[1744]: 2025-06-21 04:46:07.547 [INFO][4780] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.82.70/26] IPv6=[] ContainerID="3681c228ec24171a8d649293ae542c9cf5bb52bf9281ae9f070924fa0868c2ce" HandleID="k8s-pod-network.3681c228ec24171a8d649293ae542c9cf5bb52bf9281ae9f070924fa0868c2ce" Workload="ci--4372.0.0--a--c1262e9e80-k8s-coredns--668d6bf9bc--qg8wm-eth0" Jun 21 04:46:07.575182 containerd[1744]: 2025-06-21 04:46:07.551 [INFO][4726] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3681c228ec24171a8d649293ae542c9cf5bb52bf9281ae9f070924fa0868c2ce" Namespace="kube-system" Pod="coredns-668d6bf9bc-qg8wm" WorkloadEndpoint="ci--4372.0.0--a--c1262e9e80-k8s-coredns--668d6bf9bc--qg8wm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.0--a--c1262e9e80-k8s-coredns--668d6bf9bc--qg8wm-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"315099f2-eb90-4232-abe6-6ea3b8311656", ResourceVersion:"835", Generation:0, CreationTimestamp:time.Date(2025, time.June, 21, 4, 45, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.0-a-c1262e9e80", ContainerID:"", Pod:"coredns-668d6bf9bc-qg8wm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.82.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2552db9b082", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 21 04:46:07.575182 containerd[1744]: 2025-06-21 04:46:07.551 [INFO][4726] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.82.70/32] ContainerID="3681c228ec24171a8d649293ae542c9cf5bb52bf9281ae9f070924fa0868c2ce" Namespace="kube-system" Pod="coredns-668d6bf9bc-qg8wm" WorkloadEndpoint="ci--4372.0.0--a--c1262e9e80-k8s-coredns--668d6bf9bc--qg8wm-eth0" Jun 21 04:46:07.575182 containerd[1744]: 2025-06-21 04:46:07.552 [INFO][4726] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2552db9b082 ContainerID="3681c228ec24171a8d649293ae542c9cf5bb52bf9281ae9f070924fa0868c2ce" Namespace="kube-system" Pod="coredns-668d6bf9bc-qg8wm" WorkloadEndpoint="ci--4372.0.0--a--c1262e9e80-k8s-coredns--668d6bf9bc--qg8wm-eth0" Jun 21 04:46:07.575182 containerd[1744]: 2025-06-21 04:46:07.558 [INFO][4726] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3681c228ec24171a8d649293ae542c9cf5bb52bf9281ae9f070924fa0868c2ce" Namespace="kube-system" Pod="coredns-668d6bf9bc-qg8wm" WorkloadEndpoint="ci--4372.0.0--a--c1262e9e80-k8s-coredns--668d6bf9bc--qg8wm-eth0" Jun 21 04:46:07.575182 containerd[1744]: 2025-06-21 04:46:07.559 [INFO][4726] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3681c228ec24171a8d649293ae542c9cf5bb52bf9281ae9f070924fa0868c2ce" Namespace="kube-system" Pod="coredns-668d6bf9bc-qg8wm" WorkloadEndpoint="ci--4372.0.0--a--c1262e9e80-k8s-coredns--668d6bf9bc--qg8wm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.0--a--c1262e9e80-k8s-coredns--668d6bf9bc--qg8wm-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"315099f2-eb90-4232-abe6-6ea3b8311656", ResourceVersion:"835", Generation:0, CreationTimestamp:time.Date(2025, time.June, 21, 4, 45, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.0-a-c1262e9e80", ContainerID:"3681c228ec24171a8d649293ae542c9cf5bb52bf9281ae9f070924fa0868c2ce", Pod:"coredns-668d6bf9bc-qg8wm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.82.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2552db9b082", MAC:"52:48:57:0d:98:fb", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 21 04:46:07.575182 containerd[1744]: 2025-06-21 04:46:07.571 [INFO][4726] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3681c228ec24171a8d649293ae542c9cf5bb52bf9281ae9f070924fa0868c2ce" Namespace="kube-system" Pod="coredns-668d6bf9bc-qg8wm" WorkloadEndpoint="ci--4372.0.0--a--c1262e9e80-k8s-coredns--668d6bf9bc--qg8wm-eth0" Jun 21 04:46:07.606064 containerd[1744]: time="2025-06-21T04:46:07.605943493Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-768675b895-x9fxb,Uid:98ef1438-f351-4a64-a0d7-db96dac74994,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"4b24de3fcafdd067b9d3662383b8327ca7dea951afaf2c4bd6c554549d050544\"" Jun 21 04:46:07.607976 containerd[1744]: time="2025-06-21T04:46:07.607935562Z" level=info msg="connecting to shim 3681c228ec24171a8d649293ae542c9cf5bb52bf9281ae9f070924fa0868c2ce" address="unix:///run/containerd/s/aefb2b17b6cfc55355533e8ad5664dc1c47c24e95998f181d224e77a79d9e811" namespace=k8s.io protocol=ttrpc version=3 Jun 21 04:46:07.628274 systemd[1]: Started cri-containerd-3681c228ec24171a8d649293ae542c9cf5bb52bf9281ae9f070924fa0868c2ce.scope - libcontainer container 3681c228ec24171a8d649293ae542c9cf5bb52bf9281ae9f070924fa0868c2ce. 
Jun 21 04:46:07.675681 containerd[1744]: time="2025-06-21T04:46:07.675657385Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qg8wm,Uid:315099f2-eb90-4232-abe6-6ea3b8311656,Namespace:kube-system,Attempt:0,} returns sandbox id \"3681c228ec24171a8d649293ae542c9cf5bb52bf9281ae9f070924fa0868c2ce\"" Jun 21 04:46:07.677712 containerd[1744]: time="2025-06-21T04:46:07.677589124Z" level=info msg="CreateContainer within sandbox \"3681c228ec24171a8d649293ae542c9cf5bb52bf9281ae9f070924fa0868c2ce\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 21 04:46:07.690732 containerd[1744]: time="2025-06-21T04:46:07.690607839Z" level=info msg="Container 1c7d556c44ae20c54c18cfd743f7b6cb3534d81a1c1b83acd00bd151cf2b73f7: CDI devices from CRI Config.CDIDevices: []" Jun 21 04:46:07.701584 containerd[1744]: time="2025-06-21T04:46:07.700931960Z" level=info msg="CreateContainer within sandbox \"3681c228ec24171a8d649293ae542c9cf5bb52bf9281ae9f070924fa0868c2ce\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1c7d556c44ae20c54c18cfd743f7b6cb3534d81a1c1b83acd00bd151cf2b73f7\"" Jun 21 04:46:07.702278 containerd[1744]: time="2025-06-21T04:46:07.702083151Z" level=info msg="StartContainer for \"1c7d556c44ae20c54c18cfd743f7b6cb3534d81a1c1b83acd00bd151cf2b73f7\"" Jun 21 04:46:07.703638 containerd[1744]: time="2025-06-21T04:46:07.703615881Z" level=info msg="connecting to shim 1c7d556c44ae20c54c18cfd743f7b6cb3534d81a1c1b83acd00bd151cf2b73f7" address="unix:///run/containerd/s/aefb2b17b6cfc55355533e8ad5664dc1c47c24e95998f181d224e77a79d9e811" protocol=ttrpc version=3 Jun 21 04:46:07.722300 systemd[1]: Started cri-containerd-1c7d556c44ae20c54c18cfd743f7b6cb3534d81a1c1b83acd00bd151cf2b73f7.scope - libcontainer container 1c7d556c44ae20c54c18cfd743f7b6cb3534d81a1c1b83acd00bd151cf2b73f7. 
Jun 21 04:46:07.749731 containerd[1744]: time="2025-06-21T04:46:07.749709922Z" level=info msg="StartContainer for \"1c7d556c44ae20c54c18cfd743f7b6cb3534d81a1c1b83acd00bd151cf2b73f7\" returns successfully" Jun 21 04:46:07.807257 systemd-networkd[1362]: calif967df4d8d4: Gained IPv6LL Jun 21 04:46:07.871235 systemd-networkd[1362]: cali1d5ac89c845: Gained IPv6LL Jun 21 04:46:08.113584 containerd[1744]: time="2025-06-21T04:46:08.113286983Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-575w9,Uid:f516b2d9-b3a6-47e2-926b-c9ca81ee80c8,Namespace:calico-system,Attempt:0,}" Jun 21 04:46:08.219632 systemd-networkd[1362]: cali08e43e26412: Link UP Jun 21 04:46:08.220119 systemd-networkd[1362]: cali08e43e26412: Gained carrier Jun 21 04:46:08.234836 containerd[1744]: 2025-06-21 04:46:08.153 [INFO][5028] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372.0.0--a--c1262e9e80-k8s-csi--node--driver--575w9-eth0 csi-node-driver- calico-system f516b2d9-b3a6-47e2-926b-c9ca81ee80c8 720 0 2025-06-21 04:45:43 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:85b8c9d4df k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4372.0.0-a-c1262e9e80 csi-node-driver-575w9 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali08e43e26412 [] [] }} ContainerID="6a8f93d94efeaa0649cbd84d51d0433c29a98d3fb81ff4ac80e8d50f67e97172" Namespace="calico-system" Pod="csi-node-driver-575w9" WorkloadEndpoint="ci--4372.0.0--a--c1262e9e80-k8s-csi--node--driver--575w9-" Jun 21 04:46:08.234836 containerd[1744]: 2025-06-21 04:46:08.153 [INFO][5028] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6a8f93d94efeaa0649cbd84d51d0433c29a98d3fb81ff4ac80e8d50f67e97172" 
Namespace="calico-system" Pod="csi-node-driver-575w9" WorkloadEndpoint="ci--4372.0.0--a--c1262e9e80-k8s-csi--node--driver--575w9-eth0" Jun 21 04:46:08.234836 containerd[1744]: 2025-06-21 04:46:08.184 [INFO][5040] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6a8f93d94efeaa0649cbd84d51d0433c29a98d3fb81ff4ac80e8d50f67e97172" HandleID="k8s-pod-network.6a8f93d94efeaa0649cbd84d51d0433c29a98d3fb81ff4ac80e8d50f67e97172" Workload="ci--4372.0.0--a--c1262e9e80-k8s-csi--node--driver--575w9-eth0" Jun 21 04:46:08.234836 containerd[1744]: 2025-06-21 04:46:08.184 [INFO][5040] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6a8f93d94efeaa0649cbd84d51d0433c29a98d3fb81ff4ac80e8d50f67e97172" HandleID="k8s-pod-network.6a8f93d94efeaa0649cbd84d51d0433c29a98d3fb81ff4ac80e8d50f67e97172" Workload="ci--4372.0.0--a--c1262e9e80-k8s-csi--node--driver--575w9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5970), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4372.0.0-a-c1262e9e80", "pod":"csi-node-driver-575w9", "timestamp":"2025-06-21 04:46:08.184014737 +0000 UTC"}, Hostname:"ci-4372.0.0-a-c1262e9e80", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 21 04:46:08.234836 containerd[1744]: 2025-06-21 04:46:08.184 [INFO][5040] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 21 04:46:08.234836 containerd[1744]: 2025-06-21 04:46:08.184 [INFO][5040] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jun 21 04:46:08.234836 containerd[1744]: 2025-06-21 04:46:08.184 [INFO][5040] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372.0.0-a-c1262e9e80' Jun 21 04:46:08.234836 containerd[1744]: 2025-06-21 04:46:08.190 [INFO][5040] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6a8f93d94efeaa0649cbd84d51d0433c29a98d3fb81ff4ac80e8d50f67e97172" host="ci-4372.0.0-a-c1262e9e80" Jun 21 04:46:08.234836 containerd[1744]: 2025-06-21 04:46:08.195 [INFO][5040] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372.0.0-a-c1262e9e80" Jun 21 04:46:08.234836 containerd[1744]: 2025-06-21 04:46:08.198 [INFO][5040] ipam/ipam.go 511: Trying affinity for 192.168.82.64/26 host="ci-4372.0.0-a-c1262e9e80" Jun 21 04:46:08.234836 containerd[1744]: 2025-06-21 04:46:08.200 [INFO][5040] ipam/ipam.go 158: Attempting to load block cidr=192.168.82.64/26 host="ci-4372.0.0-a-c1262e9e80" Jun 21 04:46:08.234836 containerd[1744]: 2025-06-21 04:46:08.202 [INFO][5040] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.82.64/26 host="ci-4372.0.0-a-c1262e9e80" Jun 21 04:46:08.234836 containerd[1744]: 2025-06-21 04:46:08.202 [INFO][5040] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.82.64/26 handle="k8s-pod-network.6a8f93d94efeaa0649cbd84d51d0433c29a98d3fb81ff4ac80e8d50f67e97172" host="ci-4372.0.0-a-c1262e9e80" Jun 21 04:46:08.234836 containerd[1744]: 2025-06-21 04:46:08.203 [INFO][5040] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.6a8f93d94efeaa0649cbd84d51d0433c29a98d3fb81ff4ac80e8d50f67e97172 Jun 21 04:46:08.234836 containerd[1744]: 2025-06-21 04:46:08.207 [INFO][5040] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.82.64/26 handle="k8s-pod-network.6a8f93d94efeaa0649cbd84d51d0433c29a98d3fb81ff4ac80e8d50f67e97172" host="ci-4372.0.0-a-c1262e9e80" Jun 21 04:46:08.234836 containerd[1744]: 2025-06-21 04:46:08.216 [INFO][5040] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.82.71/26] block=192.168.82.64/26 handle="k8s-pod-network.6a8f93d94efeaa0649cbd84d51d0433c29a98d3fb81ff4ac80e8d50f67e97172" host="ci-4372.0.0-a-c1262e9e80" Jun 21 04:46:08.234836 containerd[1744]: 2025-06-21 04:46:08.216 [INFO][5040] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.82.71/26] handle="k8s-pod-network.6a8f93d94efeaa0649cbd84d51d0433c29a98d3fb81ff4ac80e8d50f67e97172" host="ci-4372.0.0-a-c1262e9e80" Jun 21 04:46:08.234836 containerd[1744]: 2025-06-21 04:46:08.216 [INFO][5040] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jun 21 04:46:08.234836 containerd[1744]: 2025-06-21 04:46:08.216 [INFO][5040] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.82.71/26] IPv6=[] ContainerID="6a8f93d94efeaa0649cbd84d51d0433c29a98d3fb81ff4ac80e8d50f67e97172" HandleID="k8s-pod-network.6a8f93d94efeaa0649cbd84d51d0433c29a98d3fb81ff4ac80e8d50f67e97172" Workload="ci--4372.0.0--a--c1262e9e80-k8s-csi--node--driver--575w9-eth0" Jun 21 04:46:08.235897 containerd[1744]: 2025-06-21 04:46:08.217 [INFO][5028] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6a8f93d94efeaa0649cbd84d51d0433c29a98d3fb81ff4ac80e8d50f67e97172" Namespace="calico-system" Pod="csi-node-driver-575w9" WorkloadEndpoint="ci--4372.0.0--a--c1262e9e80-k8s-csi--node--driver--575w9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.0--a--c1262e9e80-k8s-csi--node--driver--575w9-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f516b2d9-b3a6-47e2-926b-c9ca81ee80c8", ResourceVersion:"720", Generation:0, CreationTimestamp:time.Date(2025, time.June, 21, 4, 45, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"85b8c9d4df", "k8s-app":"csi-node-driver", 
"name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.0-a-c1262e9e80", ContainerID:"", Pod:"csi-node-driver-575w9", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.82.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali08e43e26412", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 21 04:46:08.235897 containerd[1744]: 2025-06-21 04:46:08.217 [INFO][5028] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.82.71/32] ContainerID="6a8f93d94efeaa0649cbd84d51d0433c29a98d3fb81ff4ac80e8d50f67e97172" Namespace="calico-system" Pod="csi-node-driver-575w9" WorkloadEndpoint="ci--4372.0.0--a--c1262e9e80-k8s-csi--node--driver--575w9-eth0" Jun 21 04:46:08.235897 containerd[1744]: 2025-06-21 04:46:08.217 [INFO][5028] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali08e43e26412 ContainerID="6a8f93d94efeaa0649cbd84d51d0433c29a98d3fb81ff4ac80e8d50f67e97172" Namespace="calico-system" Pod="csi-node-driver-575w9" WorkloadEndpoint="ci--4372.0.0--a--c1262e9e80-k8s-csi--node--driver--575w9-eth0" Jun 21 04:46:08.235897 containerd[1744]: 2025-06-21 04:46:08.220 [INFO][5028] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6a8f93d94efeaa0649cbd84d51d0433c29a98d3fb81ff4ac80e8d50f67e97172" Namespace="calico-system" Pod="csi-node-driver-575w9" WorkloadEndpoint="ci--4372.0.0--a--c1262e9e80-k8s-csi--node--driver--575w9-eth0" Jun 21 04:46:08.235897 
containerd[1744]: 2025-06-21 04:46:08.220 [INFO][5028] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6a8f93d94efeaa0649cbd84d51d0433c29a98d3fb81ff4ac80e8d50f67e97172" Namespace="calico-system" Pod="csi-node-driver-575w9" WorkloadEndpoint="ci--4372.0.0--a--c1262e9e80-k8s-csi--node--driver--575w9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.0--a--c1262e9e80-k8s-csi--node--driver--575w9-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f516b2d9-b3a6-47e2-926b-c9ca81ee80c8", ResourceVersion:"720", Generation:0, CreationTimestamp:time.Date(2025, time.June, 21, 4, 45, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"85b8c9d4df", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.0-a-c1262e9e80", ContainerID:"6a8f93d94efeaa0649cbd84d51d0433c29a98d3fb81ff4ac80e8d50f67e97172", Pod:"csi-node-driver-575w9", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.82.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali08e43e26412", MAC:"96:03:59:0a:e2:02", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 21 04:46:08.235897 containerd[1744]: 
2025-06-21 04:46:08.231 [INFO][5028] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6a8f93d94efeaa0649cbd84d51d0433c29a98d3fb81ff4ac80e8d50f67e97172" Namespace="calico-system" Pod="csi-node-driver-575w9" WorkloadEndpoint="ci--4372.0.0--a--c1262e9e80-k8s-csi--node--driver--575w9-eth0" Jun 21 04:46:08.281778 containerd[1744]: time="2025-06-21T04:46:08.281453807Z" level=info msg="connecting to shim 6a8f93d94efeaa0649cbd84d51d0433c29a98d3fb81ff4ac80e8d50f67e97172" address="unix:///run/containerd/s/c14c3c4709f0240361f1234c1073819b9ef601879ee0f4c602159bd102f9a62f" namespace=k8s.io protocol=ttrpc version=3 Jun 21 04:46:08.294613 kubelet[3171]: I0621 04:46:08.294560 3171 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-qg8wm" podStartSLOduration=38.294543081 podStartE2EDuration="38.294543081s" podCreationTimestamp="2025-06-21 04:45:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-21 04:46:08.294225808 +0000 UTC m=+38.277100664" watchObservedRunningTime="2025-06-21 04:46:08.294543081 +0000 UTC m=+38.277417956" Jun 21 04:46:08.304414 systemd[1]: Started cri-containerd-6a8f93d94efeaa0649cbd84d51d0433c29a98d3fb81ff4ac80e8d50f67e97172.scope - libcontainer container 6a8f93d94efeaa0649cbd84d51d0433c29a98d3fb81ff4ac80e8d50f67e97172. 
Jun 21 04:46:08.336065 containerd[1744]: time="2025-06-21T04:46:08.336005764Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-575w9,Uid:f516b2d9-b3a6-47e2-926b-c9ca81ee80c8,Namespace:calico-system,Attempt:0,} returns sandbox id \"6a8f93d94efeaa0649cbd84d51d0433c29a98d3fb81ff4ac80e8d50f67e97172\"" Jun 21 04:46:08.831302 systemd-networkd[1362]: cali541de2b2023: Gained IPv6LL Jun 21 04:46:08.959393 systemd-networkd[1362]: vxlan.calico: Gained IPv6LL Jun 21 04:46:09.088218 systemd-networkd[1362]: cali931755efcae: Gained IPv6LL Jun 21 04:46:09.113651 containerd[1744]: time="2025-06-21T04:46:09.113626436Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-x5gdr,Uid:3d565bf7-19f8-49d8-b320-84e77a8d7785,Namespace:kube-system,Attempt:0,}" Jun 21 04:46:09.145168 kubelet[3171]: I0621 04:46:09.144639 3171 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 21 04:46:09.256228 containerd[1744]: time="2025-06-21T04:46:09.255898174Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bb7c9acbe39c806af6ab388b649e7dbafcf7acdd43752747fec23af472893f7e\" id:\"9c6a8ccc75bd419dbc2acec6ce08c831c9b5dd37d4f6d06f0a1cdd3635a7fa56\" pid:5133 exited_at:{seconds:1750481169 nanos:255163934}" Jun 21 04:46:09.279223 systemd-networkd[1362]: caliba33c2b1b2c: Link UP Jun 21 04:46:09.283446 systemd-networkd[1362]: caliba33c2b1b2c: Gained carrier Jun 21 04:46:09.320035 containerd[1744]: 2025-06-21 04:46:09.158 [INFO][5107] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372.0.0--a--c1262e9e80-k8s-coredns--668d6bf9bc--x5gdr-eth0 coredns-668d6bf9bc- kube-system 3d565bf7-19f8-49d8-b320-84e77a8d7785 830 0 2025-06-21 04:45:30 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4372.0.0-a-c1262e9e80 
coredns-668d6bf9bc-x5gdr eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] caliba33c2b1b2c [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="6dbb522d79367d897bce022ab6556556cf7860a922caed346a7dc62ed60f6139" Namespace="kube-system" Pod="coredns-668d6bf9bc-x5gdr" WorkloadEndpoint="ci--4372.0.0--a--c1262e9e80-k8s-coredns--668d6bf9bc--x5gdr-" Jun 21 04:46:09.320035 containerd[1744]: 2025-06-21 04:46:09.158 [INFO][5107] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6dbb522d79367d897bce022ab6556556cf7860a922caed346a7dc62ed60f6139" Namespace="kube-system" Pod="coredns-668d6bf9bc-x5gdr" WorkloadEndpoint="ci--4372.0.0--a--c1262e9e80-k8s-coredns--668d6bf9bc--x5gdr-eth0" Jun 21 04:46:09.320035 containerd[1744]: 2025-06-21 04:46:09.210 [INFO][5140] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6dbb522d79367d897bce022ab6556556cf7860a922caed346a7dc62ed60f6139" HandleID="k8s-pod-network.6dbb522d79367d897bce022ab6556556cf7860a922caed346a7dc62ed60f6139" Workload="ci--4372.0.0--a--c1262e9e80-k8s-coredns--668d6bf9bc--x5gdr-eth0" Jun 21 04:46:09.320035 containerd[1744]: 2025-06-21 04:46:09.211 [INFO][5140] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6dbb522d79367d897bce022ab6556556cf7860a922caed346a7dc62ed60f6139" HandleID="k8s-pod-network.6dbb522d79367d897bce022ab6556556cf7860a922caed346a7dc62ed60f6139" Workload="ci--4372.0.0--a--c1262e9e80-k8s-coredns--668d6bf9bc--x5gdr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c7230), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4372.0.0-a-c1262e9e80", "pod":"coredns-668d6bf9bc-x5gdr", "timestamp":"2025-06-21 04:46:09.210046285 +0000 UTC"}, Hostname:"ci-4372.0.0-a-c1262e9e80", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 21 
04:46:09.320035 containerd[1744]: 2025-06-21 04:46:09.211 [INFO][5140] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 21 04:46:09.320035 containerd[1744]: 2025-06-21 04:46:09.212 [INFO][5140] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jun 21 04:46:09.320035 containerd[1744]: 2025-06-21 04:46:09.212 [INFO][5140] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372.0.0-a-c1262e9e80' Jun 21 04:46:09.320035 containerd[1744]: 2025-06-21 04:46:09.222 [INFO][5140] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6dbb522d79367d897bce022ab6556556cf7860a922caed346a7dc62ed60f6139" host="ci-4372.0.0-a-c1262e9e80" Jun 21 04:46:09.320035 containerd[1744]: 2025-06-21 04:46:09.227 [INFO][5140] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372.0.0-a-c1262e9e80" Jun 21 04:46:09.320035 containerd[1744]: 2025-06-21 04:46:09.231 [INFO][5140] ipam/ipam.go 511: Trying affinity for 192.168.82.64/26 host="ci-4372.0.0-a-c1262e9e80" Jun 21 04:46:09.320035 containerd[1744]: 2025-06-21 04:46:09.233 [INFO][5140] ipam/ipam.go 158: Attempting to load block cidr=192.168.82.64/26 host="ci-4372.0.0-a-c1262e9e80" Jun 21 04:46:09.320035 containerd[1744]: 2025-06-21 04:46:09.235 [INFO][5140] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.82.64/26 host="ci-4372.0.0-a-c1262e9e80" Jun 21 04:46:09.320035 containerd[1744]: 2025-06-21 04:46:09.236 [INFO][5140] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.82.64/26 handle="k8s-pod-network.6dbb522d79367d897bce022ab6556556cf7860a922caed346a7dc62ed60f6139" host="ci-4372.0.0-a-c1262e9e80" Jun 21 04:46:09.320035 containerd[1744]: 2025-06-21 04:46:09.238 [INFO][5140] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.6dbb522d79367d897bce022ab6556556cf7860a922caed346a7dc62ed60f6139 Jun 21 04:46:09.320035 containerd[1744]: 2025-06-21 04:46:09.248 [INFO][5140] ipam/ipam.go 1243: Writing block in 
order to claim IPs block=192.168.82.64/26 handle="k8s-pod-network.6dbb522d79367d897bce022ab6556556cf7860a922caed346a7dc62ed60f6139" host="ci-4372.0.0-a-c1262e9e80" Jun 21 04:46:09.320035 containerd[1744]: 2025-06-21 04:46:09.258 [INFO][5140] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.82.72/26] block=192.168.82.64/26 handle="k8s-pod-network.6dbb522d79367d897bce022ab6556556cf7860a922caed346a7dc62ed60f6139" host="ci-4372.0.0-a-c1262e9e80" Jun 21 04:46:09.320035 containerd[1744]: 2025-06-21 04:46:09.259 [INFO][5140] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.82.72/26] handle="k8s-pod-network.6dbb522d79367d897bce022ab6556556cf7860a922caed346a7dc62ed60f6139" host="ci-4372.0.0-a-c1262e9e80" Jun 21 04:46:09.320035 containerd[1744]: 2025-06-21 04:46:09.259 [INFO][5140] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jun 21 04:46:09.320035 containerd[1744]: 2025-06-21 04:46:09.259 [INFO][5140] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.82.72/26] IPv6=[] ContainerID="6dbb522d79367d897bce022ab6556556cf7860a922caed346a7dc62ed60f6139" HandleID="k8s-pod-network.6dbb522d79367d897bce022ab6556556cf7860a922caed346a7dc62ed60f6139" Workload="ci--4372.0.0--a--c1262e9e80-k8s-coredns--668d6bf9bc--x5gdr-eth0" Jun 21 04:46:09.320598 containerd[1744]: 2025-06-21 04:46:09.263 [INFO][5107] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6dbb522d79367d897bce022ab6556556cf7860a922caed346a7dc62ed60f6139" Namespace="kube-system" Pod="coredns-668d6bf9bc-x5gdr" WorkloadEndpoint="ci--4372.0.0--a--c1262e9e80-k8s-coredns--668d6bf9bc--x5gdr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.0--a--c1262e9e80-k8s-coredns--668d6bf9bc--x5gdr-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"3d565bf7-19f8-49d8-b320-84e77a8d7785", ResourceVersion:"830", Generation:0, 
CreationTimestamp:time.Date(2025, time.June, 21, 4, 45, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.0-a-c1262e9e80", ContainerID:"", Pod:"coredns-668d6bf9bc-x5gdr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.82.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliba33c2b1b2c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 21 04:46:09.320598 containerd[1744]: 2025-06-21 04:46:09.263 [INFO][5107] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.82.72/32] ContainerID="6dbb522d79367d897bce022ab6556556cf7860a922caed346a7dc62ed60f6139" Namespace="kube-system" Pod="coredns-668d6bf9bc-x5gdr" WorkloadEndpoint="ci--4372.0.0--a--c1262e9e80-k8s-coredns--668d6bf9bc--x5gdr-eth0" Jun 21 04:46:09.320598 containerd[1744]: 2025-06-21 04:46:09.263 [INFO][5107] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliba33c2b1b2c 
ContainerID="6dbb522d79367d897bce022ab6556556cf7860a922caed346a7dc62ed60f6139" Namespace="kube-system" Pod="coredns-668d6bf9bc-x5gdr" WorkloadEndpoint="ci--4372.0.0--a--c1262e9e80-k8s-coredns--668d6bf9bc--x5gdr-eth0" Jun 21 04:46:09.320598 containerd[1744]: 2025-06-21 04:46:09.290 [INFO][5107] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6dbb522d79367d897bce022ab6556556cf7860a922caed346a7dc62ed60f6139" Namespace="kube-system" Pod="coredns-668d6bf9bc-x5gdr" WorkloadEndpoint="ci--4372.0.0--a--c1262e9e80-k8s-coredns--668d6bf9bc--x5gdr-eth0" Jun 21 04:46:09.320598 containerd[1744]: 2025-06-21 04:46:09.292 [INFO][5107] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6dbb522d79367d897bce022ab6556556cf7860a922caed346a7dc62ed60f6139" Namespace="kube-system" Pod="coredns-668d6bf9bc-x5gdr" WorkloadEndpoint="ci--4372.0.0--a--c1262e9e80-k8s-coredns--668d6bf9bc--x5gdr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.0--a--c1262e9e80-k8s-coredns--668d6bf9bc--x5gdr-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"3d565bf7-19f8-49d8-b320-84e77a8d7785", ResourceVersion:"830", Generation:0, CreationTimestamp:time.Date(2025, time.June, 21, 4, 45, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.0-a-c1262e9e80", ContainerID:"6dbb522d79367d897bce022ab6556556cf7860a922caed346a7dc62ed60f6139", 
Pod:"coredns-668d6bf9bc-x5gdr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.82.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliba33c2b1b2c", MAC:"12:2c:0f:90:b3:ad", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 21 04:46:09.320598 containerd[1744]: 2025-06-21 04:46:09.318 [INFO][5107] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6dbb522d79367d897bce022ab6556556cf7860a922caed346a7dc62ed60f6139" Namespace="kube-system" Pod="coredns-668d6bf9bc-x5gdr" WorkloadEndpoint="ci--4372.0.0--a--c1262e9e80-k8s-coredns--668d6bf9bc--x5gdr-eth0" Jun 21 04:46:09.382029 containerd[1744]: time="2025-06-21T04:46:09.381961648Z" level=info msg="connecting to shim 6dbb522d79367d897bce022ab6556556cf7860a922caed346a7dc62ed60f6139" address="unix:///run/containerd/s/60fa740bc64385727ea603766626f4e8b951e13b66126b78195998029b64b8d5" namespace=k8s.io protocol=ttrpc version=3 Jun 21 04:46:09.407226 systemd-networkd[1362]: cali08e43e26412: Gained IPv6LL Jun 21 04:46:09.426255 systemd[1]: Started cri-containerd-6dbb522d79367d897bce022ab6556556cf7860a922caed346a7dc62ed60f6139.scope - libcontainer container 6dbb522d79367d897bce022ab6556556cf7860a922caed346a7dc62ed60f6139. Jun 21 04:46:09.456913 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1836233317.mount: Deactivated successfully. 
Jun 21 04:46:09.458980 containerd[1744]: time="2025-06-21T04:46:09.458957763Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bb7c9acbe39c806af6ab388b649e7dbafcf7acdd43752747fec23af472893f7e\" id:\"ec2a112b155ab6a1706146e12906f3e88970aea03bd23ac4fdc3b18cd9c3aa01\" pid:5167 exited_at:{seconds:1750481169 nanos:458381144}" Jun 21 04:46:09.477176 containerd[1744]: time="2025-06-21T04:46:09.477152497Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-x5gdr,Uid:3d565bf7-19f8-49d8-b320-84e77a8d7785,Namespace:kube-system,Attempt:0,} returns sandbox id \"6dbb522d79367d897bce022ab6556556cf7860a922caed346a7dc62ed60f6139\"" Jun 21 04:46:09.479437 containerd[1744]: time="2025-06-21T04:46:09.479415863Z" level=info msg="CreateContainer within sandbox \"6dbb522d79367d897bce022ab6556556cf7860a922caed346a7dc62ed60f6139\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 21 04:46:09.503614 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount69924584.mount: Deactivated successfully. 
Jun 21 04:46:09.504742 containerd[1744]: time="2025-06-21T04:46:09.503988433Z" level=info msg="Container 68c4c07554e25ab526aeffa59194a0d307326dc3dfc738c0c21f46181727154e: CDI devices from CRI Config.CDIDevices: []" Jun 21 04:46:09.515377 containerd[1744]: time="2025-06-21T04:46:09.515260420Z" level=info msg="CreateContainer within sandbox \"6dbb522d79367d897bce022ab6556556cf7860a922caed346a7dc62ed60f6139\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"68c4c07554e25ab526aeffa59194a0d307326dc3dfc738c0c21f46181727154e\"" Jun 21 04:46:09.516073 containerd[1744]: time="2025-06-21T04:46:09.516055395Z" level=info msg="StartContainer for \"68c4c07554e25ab526aeffa59194a0d307326dc3dfc738c0c21f46181727154e\"" Jun 21 04:46:09.517005 containerd[1744]: time="2025-06-21T04:46:09.516983346Z" level=info msg="connecting to shim 68c4c07554e25ab526aeffa59194a0d307326dc3dfc738c0c21f46181727154e" address="unix:///run/containerd/s/60fa740bc64385727ea603766626f4e8b951e13b66126b78195998029b64b8d5" protocol=ttrpc version=3 Jun 21 04:46:09.531268 systemd[1]: Started cri-containerd-68c4c07554e25ab526aeffa59194a0d307326dc3dfc738c0c21f46181727154e.scope - libcontainer container 68c4c07554e25ab526aeffa59194a0d307326dc3dfc738c0c21f46181727154e. 
Jun 21 04:46:09.561704 containerd[1744]: time="2025-06-21T04:46:09.561687107Z" level=info msg="StartContainer for \"68c4c07554e25ab526aeffa59194a0d307326dc3dfc738c0c21f46181727154e\" returns successfully" Jun 21 04:46:09.599232 systemd-networkd[1362]: cali2552db9b082: Gained IPv6LL Jun 21 04:46:09.866908 containerd[1744]: time="2025-06-21T04:46:09.866882481Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 04:46:09.868908 containerd[1744]: time="2025-06-21T04:46:09.868879387Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.1: active requests=0, bytes read=66352249" Jun 21 04:46:09.871459 containerd[1744]: time="2025-06-21T04:46:09.871110525Z" level=info msg="ImageCreate event name:\"sha256:7ded2fef2b18e2077114599de13fa300df0e1437753deab5c59843a86d2dad82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 04:46:09.874930 containerd[1744]: time="2025-06-21T04:46:09.874903948Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:173a10ef7a65a843f99fc366c7c860fa4068a8f52fda1b30ee589bc4ca43f45a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 04:46:09.875335 containerd[1744]: time="2025-06-21T04:46:09.875313779Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.1\" with image id \"sha256:7ded2fef2b18e2077114599de13fa300df0e1437753deab5c59843a86d2dad82\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:173a10ef7a65a843f99fc366c7c860fa4068a8f52fda1b30ee589bc4ca43f45a\", size \"66352095\" in 2.96103424s" Jun 21 04:46:09.875378 containerd[1744]: time="2025-06-21T04:46:09.875339648Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.1\" returns image reference \"sha256:7ded2fef2b18e2077114599de13fa300df0e1437753deab5c59843a86d2dad82\"" Jun 21 04:46:09.876298 containerd[1744]: 
time="2025-06-21T04:46:09.876278170Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.1\"" Jun 21 04:46:09.877270 containerd[1744]: time="2025-06-21T04:46:09.877250899Z" level=info msg="CreateContainer within sandbox \"6fdf09219732d007187cc053929851bf6f6befc7f34ca98607929eb7a34c1aad\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Jun 21 04:46:09.895464 containerd[1744]: time="2025-06-21T04:46:09.895440479Z" level=info msg="Container b8b336c59baccbda0d5556f7352178d5afb3abd0e17cc6530944368d58eebbfd: CDI devices from CRI Config.CDIDevices: []" Jun 21 04:46:09.907411 containerd[1744]: time="2025-06-21T04:46:09.907388337Z" level=info msg="CreateContainer within sandbox \"6fdf09219732d007187cc053929851bf6f6befc7f34ca98607929eb7a34c1aad\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"b8b336c59baccbda0d5556f7352178d5afb3abd0e17cc6530944368d58eebbfd\"" Jun 21 04:46:09.907716 containerd[1744]: time="2025-06-21T04:46:09.907689226Z" level=info msg="StartContainer for \"b8b336c59baccbda0d5556f7352178d5afb3abd0e17cc6530944368d58eebbfd\"" Jun 21 04:46:09.908663 containerd[1744]: time="2025-06-21T04:46:09.908630824Z" level=info msg="connecting to shim b8b336c59baccbda0d5556f7352178d5afb3abd0e17cc6530944368d58eebbfd" address="unix:///run/containerd/s/f5cc5e696310a6289793e690a6ed299e604b2a060413b40aa8b1c1110ef6ba13" protocol=ttrpc version=3 Jun 21 04:46:09.923271 systemd[1]: Started cri-containerd-b8b336c59baccbda0d5556f7352178d5afb3abd0e17cc6530944368d58eebbfd.scope - libcontainer container b8b336c59baccbda0d5556f7352178d5afb3abd0e17cc6530944368d58eebbfd. 
Jun 21 04:46:09.964488 containerd[1744]: time="2025-06-21T04:46:09.964469206Z" level=info msg="StartContainer for \"b8b336c59baccbda0d5556f7352178d5afb3abd0e17cc6530944368d58eebbfd\" returns successfully"
Jun 21 04:46:10.300375 kubelet[3171]: I0621 04:46:10.299911 3171 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-5bd85449d4-fpgt8" podStartSLOduration=23.801568713 podStartE2EDuration="27.299897722s" podCreationTimestamp="2025-06-21 04:45:43 +0000 UTC" firstStartedPulling="2025-06-21 04:46:06.37777902 +0000 UTC m=+36.360653873" lastFinishedPulling="2025-06-21 04:46:09.876108025 +0000 UTC m=+39.858982882" observedRunningTime="2025-06-21 04:46:10.299407653 +0000 UTC m=+40.282282518" watchObservedRunningTime="2025-06-21 04:46:10.299897722 +0000 UTC m=+40.282772655"
Jun 21 04:46:11.287158 kubelet[3171]: I0621 04:46:11.287076 3171 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jun 21 04:46:11.327276 systemd-networkd[1362]: caliba33c2b1b2c: Gained IPv6LL
Jun 21 04:46:13.023349 containerd[1744]: time="2025-06-21T04:46:13.023308849Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 21 04:46:13.025474 containerd[1744]: time="2025-06-21T04:46:13.025438737Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.1: active requests=0, bytes read=47305653"
Jun 21 04:46:13.027906 containerd[1744]: time="2025-06-21T04:46:13.027867418Z" level=info msg="ImageCreate event name:\"sha256:5d29e6e796e41d7383da7c5b73fc136f7e486d40c52f79a04098396b7f85106c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 21 04:46:13.031650 containerd[1744]: time="2025-06-21T04:46:13.031594396Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:f6439af8b6022a48d2c6c75d92ec31fe177e7b6a90c58c78ca3964db2b94e21b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 21 04:46:13.032306 containerd[1744]: time="2025-06-21T04:46:13.031990188Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.1\" with image id \"sha256:5d29e6e796e41d7383da7c5b73fc136f7e486d40c52f79a04098396b7f85106c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:f6439af8b6022a48d2c6c75d92ec31fe177e7b6a90c58c78ca3964db2b94e21b\", size \"48798372\" in 3.155684308s"
Jun 21 04:46:13.032306 containerd[1744]: time="2025-06-21T04:46:13.032017475Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.1\" returns image reference \"sha256:5d29e6e796e41d7383da7c5b73fc136f7e486d40c52f79a04098396b7f85106c\""
Jun 21 04:46:13.033125 containerd[1744]: time="2025-06-21T04:46:13.033100687Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.1\""
Jun 21 04:46:13.034049 containerd[1744]: time="2025-06-21T04:46:13.034013217Z" level=info msg="CreateContainer within sandbox \"3365be22d494e993d73e5057046cb1f4d4bda43d55c63b18bc29ee0e6223d143\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Jun 21 04:46:13.051193 containerd[1744]: time="2025-06-21T04:46:13.051165063Z" level=info msg="Container 9aa405fc4f40b525313b74d2709ef0849b7360af8c45656a4f18ea0514e97d48: CDI devices from CRI Config.CDIDevices: []"
Jun 21 04:46:13.056450 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3418630153.mount: Deactivated successfully.
Jun 21 04:46:13.084586 containerd[1744]: time="2025-06-21T04:46:13.084560743Z" level=info msg="CreateContainer within sandbox \"3365be22d494e993d73e5057046cb1f4d4bda43d55c63b18bc29ee0e6223d143\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"9aa405fc4f40b525313b74d2709ef0849b7360af8c45656a4f18ea0514e97d48\""
Jun 21 04:46:13.085363 containerd[1744]: time="2025-06-21T04:46:13.084959568Z" level=info msg="StartContainer for \"9aa405fc4f40b525313b74d2709ef0849b7360af8c45656a4f18ea0514e97d48\""
Jun 21 04:46:13.086103 containerd[1744]: time="2025-06-21T04:46:13.086071805Z" level=info msg="connecting to shim 9aa405fc4f40b525313b74d2709ef0849b7360af8c45656a4f18ea0514e97d48" address="unix:///run/containerd/s/5f784bf1b0b8a01a2d41918ea54842aa53fb588d5305b8f13f755ad9fd384410" protocol=ttrpc version=3
Jun 21 04:46:13.107264 systemd[1]: Started cri-containerd-9aa405fc4f40b525313b74d2709ef0849b7360af8c45656a4f18ea0514e97d48.scope - libcontainer container 9aa405fc4f40b525313b74d2709ef0849b7360af8c45656a4f18ea0514e97d48.
Jun 21 04:46:13.147285 containerd[1744]: time="2025-06-21T04:46:13.147224260Z" level=info msg="StartContainer for \"9aa405fc4f40b525313b74d2709ef0849b7360af8c45656a4f18ea0514e97d48\" returns successfully"
Jun 21 04:46:13.304351 kubelet[3171]: I0621 04:46:13.304264 3171 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-x5gdr" podStartSLOduration=43.304247156 podStartE2EDuration="43.304247156s" podCreationTimestamp="2025-06-21 04:45:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-21 04:46:10.31557553 +0000 UTC m=+40.298450384" watchObservedRunningTime="2025-06-21 04:46:13.304247156 +0000 UTC m=+43.287122016"
Jun 21 04:46:14.294326 kubelet[3171]: I0621 04:46:14.294289 3171 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jun 21 04:46:16.851606 containerd[1744]: time="2025-06-21T04:46:16.851545517Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 21 04:46:16.854153 containerd[1744]: time="2025-06-21T04:46:16.853667535Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.1: active requests=0, bytes read=51246233"
Jun 21 04:46:16.858154 containerd[1744]: time="2025-06-21T04:46:16.856770976Z" level=info msg="ImageCreate event name:\"sha256:6df5d7da55b19142ea456ddaa7f49909709419c92a39991e84b0f6708f953d73\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 21 04:46:16.862720 containerd[1744]: time="2025-06-21T04:46:16.862687500Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5a988b0c09389a083a7f37e3f14e361659f0bcf538c01d50e9f785671a7d9b20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 21 04:46:16.863644 containerd[1744]: time="2025-06-21T04:46:16.863611289Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.1\" with image id \"sha256:6df5d7da55b19142ea456ddaa7f49909709419c92a39991e84b0f6708f953d73\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5a988b0c09389a083a7f37e3f14e361659f0bcf538c01d50e9f785671a7d9b20\", size \"52738904\" in 3.830478205s"
Jun 21 04:46:16.863717 containerd[1744]: time="2025-06-21T04:46:16.863652910Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.1\" returns image reference \"sha256:6df5d7da55b19142ea456ddaa7f49909709419c92a39991e84b0f6708f953d73\""
Jun 21 04:46:16.867375 containerd[1744]: time="2025-06-21T04:46:16.867346958Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.1\""
Jun 21 04:46:16.881163 containerd[1744]: time="2025-06-21T04:46:16.879754534Z" level=info msg="CreateContainer within sandbox \"e5f2eec2b02831bb3aecade8e4c4ccec15fa195223cef9750b42e864df2253e6\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}"
Jun 21 04:46:16.910093 containerd[1744]: time="2025-06-21T04:46:16.909250101Z" level=info msg="Container 05dff787c53cb709bbb3035f53cb82d3cdd300f4801547f15141b08ee084d684: CDI devices from CRI Config.CDIDevices: []"
Jun 21 04:46:16.925584 containerd[1744]: time="2025-06-21T04:46:16.925557811Z" level=info msg="CreateContainer within sandbox \"e5f2eec2b02831bb3aecade8e4c4ccec15fa195223cef9750b42e864df2253e6\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"05dff787c53cb709bbb3035f53cb82d3cdd300f4801547f15141b08ee084d684\""
Jun 21 04:46:16.926639 containerd[1744]: time="2025-06-21T04:46:16.926605099Z" level=info msg="StartContainer for \"05dff787c53cb709bbb3035f53cb82d3cdd300f4801547f15141b08ee084d684\""
Jun 21 04:46:16.927870 containerd[1744]: time="2025-06-21T04:46:16.927845341Z" level=info msg="connecting to shim 05dff787c53cb709bbb3035f53cb82d3cdd300f4801547f15141b08ee084d684" address="unix:///run/containerd/s/886adba9407feef55b1d2a1c5d288c0834b2573caf9f803ed1c915af9a4a005c" protocol=ttrpc version=3
Jun 21 04:46:16.955815 systemd[1]: Started cri-containerd-05dff787c53cb709bbb3035f53cb82d3cdd300f4801547f15141b08ee084d684.scope - libcontainer container 05dff787c53cb709bbb3035f53cb82d3cdd300f4801547f15141b08ee084d684.
Jun 21 04:46:17.095609 containerd[1744]: time="2025-06-21T04:46:17.095573776Z" level=info msg="StartContainer for \"05dff787c53cb709bbb3035f53cb82d3cdd300f4801547f15141b08ee084d684\" returns successfully"
Jun 21 04:46:17.186068 containerd[1744]: time="2025-06-21T04:46:17.185522185Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 21 04:46:17.193178 containerd[1744]: time="2025-06-21T04:46:17.191544295Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.1: active requests=0, bytes read=77"
Jun 21 04:46:17.193652 containerd[1744]: time="2025-06-21T04:46:17.193628313Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.1\" with image id \"sha256:5d29e6e796e41d7383da7c5b73fc136f7e486d40c52f79a04098396b7f85106c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:f6439af8b6022a48d2c6c75d92ec31fe177e7b6a90c58c78ca3964db2b94e21b\", size \"48798372\" in 326.247038ms"
Jun 21 04:46:17.193696 containerd[1744]: time="2025-06-21T04:46:17.193660241Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.1\" returns image reference \"sha256:5d29e6e796e41d7383da7c5b73fc136f7e486d40c52f79a04098396b7f85106c\""
Jun 21 04:46:17.195081 containerd[1744]: time="2025-06-21T04:46:17.195055500Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.1\""
Jun 21 04:46:17.195877 containerd[1744]: time="2025-06-21T04:46:17.195848480Z" level=info msg="CreateContainer within sandbox \"4b24de3fcafdd067b9d3662383b8327ca7dea951afaf2c4bd6c554549d050544\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Jun 21 04:46:17.216570 containerd[1744]: time="2025-06-21T04:46:17.216547145Z" level=info msg="Container b74de56591aa9eed381b8cc6a76eb635206f2fd08e3a08a5ea94c445e01cb3c0: CDI devices from CRI Config.CDIDevices: []"
Jun 21 04:46:17.233809 containerd[1744]: time="2025-06-21T04:46:17.233781696Z" level=info msg="CreateContainer within sandbox \"4b24de3fcafdd067b9d3662383b8327ca7dea951afaf2c4bd6c554549d050544\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"b74de56591aa9eed381b8cc6a76eb635206f2fd08e3a08a5ea94c445e01cb3c0\""
Jun 21 04:46:17.234295 containerd[1744]: time="2025-06-21T04:46:17.234265409Z" level=info msg="StartContainer for \"b74de56591aa9eed381b8cc6a76eb635206f2fd08e3a08a5ea94c445e01cb3c0\""
Jun 21 04:46:17.235317 containerd[1744]: time="2025-06-21T04:46:17.235289834Z" level=info msg="connecting to shim b74de56591aa9eed381b8cc6a76eb635206f2fd08e3a08a5ea94c445e01cb3c0" address="unix:///run/containerd/s/24694f636c7710cdf811ee3259bc1882845adb2160531c410812a1ed9f57e6c7" protocol=ttrpc version=3
Jun 21 04:46:17.253301 systemd[1]: Started cri-containerd-b74de56591aa9eed381b8cc6a76eb635206f2fd08e3a08a5ea94c445e01cb3c0.scope - libcontainer container b74de56591aa9eed381b8cc6a76eb635206f2fd08e3a08a5ea94c445e01cb3c0.
Jun 21 04:46:17.303529 containerd[1744]: time="2025-06-21T04:46:17.303498340Z" level=info msg="StartContainer for \"b74de56591aa9eed381b8cc6a76eb635206f2fd08e3a08a5ea94c445e01cb3c0\" returns successfully"
Jun 21 04:46:17.327993 kubelet[3171]: I0621 04:46:17.327270 3171 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-768675b895-xb6qq" podStartSLOduration=30.773929218 podStartE2EDuration="37.327254307s" podCreationTimestamp="2025-06-21 04:45:40 +0000 UTC" firstStartedPulling="2025-06-21 04:46:06.479422197 +0000 UTC m=+36.462297049" lastFinishedPulling="2025-06-21 04:46:13.032747288 +0000 UTC m=+43.015622138" observedRunningTime="2025-06-21 04:46:13.305991943 +0000 UTC m=+43.288866800" watchObservedRunningTime="2025-06-21 04:46:17.327254307 +0000 UTC m=+47.310129161"
Jun 21 04:46:17.346681 kubelet[3171]: I0621 04:46:17.346337 3171 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-567d4b869c-qhpx9" podStartSLOduration=25.011357533 podStartE2EDuration="34.346323066s" podCreationTimestamp="2025-06-21 04:45:43 +0000 UTC" firstStartedPulling="2025-06-21 04:46:07.530660361 +0000 UTC m=+37.513535213" lastFinishedPulling="2025-06-21 04:46:16.865625893 +0000 UTC m=+46.848500746" observedRunningTime="2025-06-21 04:46:17.329468678 +0000 UTC m=+47.312343522" watchObservedRunningTime="2025-06-21 04:46:17.346323066 +0000 UTC m=+47.329197923"
Jun 21 04:46:17.349296 kubelet[3171]: I0621 04:46:17.349253 3171 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-768675b895-x9fxb" podStartSLOduration=27.762049575 podStartE2EDuration="37.349240027s" podCreationTimestamp="2025-06-21 04:45:40 +0000 UTC" firstStartedPulling="2025-06-21 04:46:07.607092513 +0000 UTC m=+37.589967368" lastFinishedPulling="2025-06-21 04:46:17.194282965 +0000 UTC m=+47.177157820" observedRunningTime="2025-06-21 04:46:17.347373163 +0000 UTC m=+47.330248009" watchObservedRunningTime="2025-06-21 04:46:17.349240027 +0000 UTC m=+47.332114882"
Jun 21 04:46:17.396916 containerd[1744]: time="2025-06-21T04:46:17.396888806Z" level=info msg="TaskExit event in podsandbox handler container_id:\"05dff787c53cb709bbb3035f53cb82d3cdd300f4801547f15141b08ee084d684\" id:\"d5df1137081e188534f59fd160a2ad3aa174d165a19be2dbf0f78451146fab2a\" pid:5466 exited_at:{seconds:1750481177 nanos:396274286}"
Jun 21 04:46:18.328073 kubelet[3171]: I0621 04:46:18.328049 3171 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jun 21 04:46:18.609378 containerd[1744]: time="2025-06-21T04:46:18.609283778Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 21 04:46:18.612110 containerd[1744]: time="2025-06-21T04:46:18.612074252Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.1: active requests=0, bytes read=8758389"
Jun 21 04:46:18.614220 containerd[1744]: time="2025-06-21T04:46:18.614187612Z" level=info msg="ImageCreate event name:\"sha256:8a733c30ec1a8c9f3f51e2da387b425052ed4a9ca631da57c6b185183243e8e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 21 04:46:18.621721 containerd[1744]: time="2025-06-21T04:46:18.621688701Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:b2a5699992dd6c84cfab94ef60536b9aaf19ad8de648e8e0b92d3733f5f52d23\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 21 04:46:18.625536 containerd[1744]: time="2025-06-21T04:46:18.625506228Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.1\" with image id \"sha256:8a733c30ec1a8c9f3f51e2da387b425052ed4a9ca631da57c6b185183243e8e9\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:b2a5699992dd6c84cfab94ef60536b9aaf19ad8de648e8e0b92d3733f5f52d23\", size \"10251092\" in 1.430413034s"
Jun 21 04:46:18.625611 containerd[1744]: time="2025-06-21T04:46:18.625540043Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.1\" returns image reference \"sha256:8a733c30ec1a8c9f3f51e2da387b425052ed4a9ca631da57c6b185183243e8e9\""
Jun 21 04:46:18.632383 containerd[1744]: time="2025-06-21T04:46:18.632355226Z" level=info msg="CreateContainer within sandbox \"6a8f93d94efeaa0649cbd84d51d0433c29a98d3fb81ff4ac80e8d50f67e97172\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}"
Jun 21 04:46:18.648237 containerd[1744]: time="2025-06-21T04:46:18.648209462Z" level=info msg="Container 42976f5fb03fe8a174f97f1688a74b1da42d2873b6ba35482add686946b0f16d: CDI devices from CRI Config.CDIDevices: []"
Jun 21 04:46:18.671399 containerd[1744]: time="2025-06-21T04:46:18.671372659Z" level=info msg="CreateContainer within sandbox \"6a8f93d94efeaa0649cbd84d51d0433c29a98d3fb81ff4ac80e8d50f67e97172\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"42976f5fb03fe8a174f97f1688a74b1da42d2873b6ba35482add686946b0f16d\""
Jun 21 04:46:18.672503 containerd[1744]: time="2025-06-21T04:46:18.672481345Z" level=info msg="StartContainer for \"42976f5fb03fe8a174f97f1688a74b1da42d2873b6ba35482add686946b0f16d\""
Jun 21 04:46:18.674467 containerd[1744]: time="2025-06-21T04:46:18.674428009Z" level=info msg="connecting to shim 42976f5fb03fe8a174f97f1688a74b1da42d2873b6ba35482add686946b0f16d" address="unix:///run/containerd/s/c14c3c4709f0240361f1234c1073819b9ef601879ee0f4c602159bd102f9a62f" protocol=ttrpc version=3
Jun 21 04:46:18.701287 systemd[1]: Started cri-containerd-42976f5fb03fe8a174f97f1688a74b1da42d2873b6ba35482add686946b0f16d.scope - libcontainer container 42976f5fb03fe8a174f97f1688a74b1da42d2873b6ba35482add686946b0f16d.
Jun 21 04:46:18.778536 containerd[1744]: time="2025-06-21T04:46:18.778512570Z" level=info msg="StartContainer for \"42976f5fb03fe8a174f97f1688a74b1da42d2873b6ba35482add686946b0f16d\" returns successfully"
Jun 21 04:46:18.779532 containerd[1744]: time="2025-06-21T04:46:18.779514281Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.1\""
Jun 21 04:46:20.394163 containerd[1744]: time="2025-06-21T04:46:20.393983299Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 21 04:46:20.396495 containerd[1744]: time="2025-06-21T04:46:20.396423223Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.1: active requests=0, bytes read=14705633"
Jun 21 04:46:20.398655 containerd[1744]: time="2025-06-21T04:46:20.398609904Z" level=info msg="ImageCreate event name:\"sha256:dfc00385e8755bddd1053a2a482a3559ad6c93bd8b882371b9ed8b5c3dfe22b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 21 04:46:20.402928 containerd[1744]: time="2025-06-21T04:46:20.402903041Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:1a882b6866dd22d783a39f1e041b87a154666ea4dd8b669fe98d0b0fac58d225\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 21 04:46:20.404180 containerd[1744]: time="2025-06-21T04:46:20.404077783Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.1\" with image id \"sha256:dfc00385e8755bddd1053a2a482a3559ad6c93bd8b882371b9ed8b5c3dfe22b5\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:1a882b6866dd22d783a39f1e041b87a154666ea4dd8b669fe98d0b0fac58d225\", size \"16198288\" in 1.624538052s"
Jun 21 04:46:20.404180 containerd[1744]: time="2025-06-21T04:46:20.404107575Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.1\" returns image reference \"sha256:dfc00385e8755bddd1053a2a482a3559ad6c93bd8b882371b9ed8b5c3dfe22b5\""
Jun 21 04:46:20.407165 containerd[1744]: time="2025-06-21T04:46:20.406157197Z" level=info msg="CreateContainer within sandbox \"6a8f93d94efeaa0649cbd84d51d0433c29a98d3fb81ff4ac80e8d50f67e97172\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Jun 21 04:46:20.426268 containerd[1744]: time="2025-06-21T04:46:20.426221855Z" level=info msg="Container b21739441f39ce87958d7382f91530e1eb87cfe2424646b97e5d9dfacc564f27: CDI devices from CRI Config.CDIDevices: []"
Jun 21 04:46:20.433876 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2791344640.mount: Deactivated successfully.
Jun 21 04:46:20.444376 containerd[1744]: time="2025-06-21T04:46:20.444351932Z" level=info msg="CreateContainer within sandbox \"6a8f93d94efeaa0649cbd84d51d0433c29a98d3fb81ff4ac80e8d50f67e97172\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"b21739441f39ce87958d7382f91530e1eb87cfe2424646b97e5d9dfacc564f27\""
Jun 21 04:46:20.444844 containerd[1744]: time="2025-06-21T04:46:20.444826164Z" level=info msg="StartContainer for \"b21739441f39ce87958d7382f91530e1eb87cfe2424646b97e5d9dfacc564f27\""
Jun 21 04:46:20.446426 containerd[1744]: time="2025-06-21T04:46:20.446346872Z" level=info msg="connecting to shim b21739441f39ce87958d7382f91530e1eb87cfe2424646b97e5d9dfacc564f27" address="unix:///run/containerd/s/c14c3c4709f0240361f1234c1073819b9ef601879ee0f4c602159bd102f9a62f" protocol=ttrpc version=3
Jun 21 04:46:20.470275 systemd[1]: Started cri-containerd-b21739441f39ce87958d7382f91530e1eb87cfe2424646b97e5d9dfacc564f27.scope - libcontainer container b21739441f39ce87958d7382f91530e1eb87cfe2424646b97e5d9dfacc564f27.
Jun 21 04:46:20.620847 containerd[1744]: time="2025-06-21T04:46:20.619653255Z" level=info msg="StartContainer for \"b21739441f39ce87958d7382f91530e1eb87cfe2424646b97e5d9dfacc564f27\" returns successfully"
Jun 21 04:46:21.232965 kubelet[3171]: I0621 04:46:21.232945 3171 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Jun 21 04:46:21.233342 kubelet[3171]: I0621 04:46:21.232973 3171 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Jun 21 04:46:21.355152 kubelet[3171]: I0621 04:46:21.354091 3171 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-575w9" podStartSLOduration=26.28724214 podStartE2EDuration="38.354078514s" podCreationTimestamp="2025-06-21 04:45:43 +0000 UTC" firstStartedPulling="2025-06-21 04:46:08.337885721 +0000 UTC m=+38.320760565" lastFinishedPulling="2025-06-21 04:46:20.404722086 +0000 UTC m=+50.387596939" observedRunningTime="2025-06-21 04:46:21.353885292 +0000 UTC m=+51.336760145" watchObservedRunningTime="2025-06-21 04:46:21.354078514 +0000 UTC m=+51.336953370"
Jun 21 04:46:21.550051 kubelet[3171]: I0621 04:46:21.549740 3171 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jun 21 04:46:21.650146 containerd[1744]: time="2025-06-21T04:46:21.650102234Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b8b336c59baccbda0d5556f7352178d5afb3abd0e17cc6530944368d58eebbfd\" id:\"9ac4fb8235171bbd665da55f3ce4477b22ffe8f29d01bc5f00e9ce32f025bff2\" pid:5575 exited_at:{seconds:1750481181 nanos:649784960}"
Jun 21 04:46:21.761400 containerd[1744]: time="2025-06-21T04:46:21.761367375Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b8b336c59baccbda0d5556f7352178d5afb3abd0e17cc6530944368d58eebbfd\" id:\"e7a95a1e0341902f0b5c5b6cda6da411d94674e7bcb175e9862dc22229b44de4\" pid:5596 exited_at:{seconds:1750481181 nanos:761093590}"
Jun 21 04:46:30.906037 kubelet[3171]: I0621 04:46:30.905770 3171 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jun 21 04:46:35.813242 kubelet[3171]: I0621 04:46:35.813113 3171 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jun 21 04:46:39.337611 containerd[1744]: time="2025-06-21T04:46:39.337568471Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bb7c9acbe39c806af6ab388b649e7dbafcf7acdd43752747fec23af472893f7e\" id:\"3eea55fec6bfd3e282b606061fb5dfad646ccc14e9ccaa3b306a873cd54d8959\" pid:5640 exited_at:{seconds:1750481199 nanos:337341665}"
Jun 21 04:46:42.791305 containerd[1744]: time="2025-06-21T04:46:42.791159143Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b8b336c59baccbda0d5556f7352178d5afb3abd0e17cc6530944368d58eebbfd\" id:\"3d653999f678ba2589e45e4abd30501487e9b377637f0fee55761ac2063141bd\" pid:5664 exited_at:{seconds:1750481202 nanos:789937215}"
Jun 21 04:46:47.361158 containerd[1744]: time="2025-06-21T04:46:47.360445875Z" level=info msg="TaskExit event in podsandbox handler container_id:\"05dff787c53cb709bbb3035f53cb82d3cdd300f4801547f15141b08ee084d684\" id:\"8bab5d9208f1a285b35d6b6deecbed3534e9d9de111c3c0e96775c64223ed8ea\" pid:5687 exited_at:{seconds:1750481207 nanos:359792538}"
Jun 21 04:46:51.717860 containerd[1744]: time="2025-06-21T04:46:51.717760568Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b8b336c59baccbda0d5556f7352178d5afb3abd0e17cc6530944368d58eebbfd\" id:\"b5c213937bba695b26d1e2e15ec6f539f376b45ea9da5ffb34d32dbfefbeeaf9\" pid:5716 exited_at:{seconds:1750481211 nanos:717505120}"
Jun 21 04:46:58.372153 containerd[1744]: time="2025-06-21T04:46:58.372095862Z" level=info msg="TaskExit event in podsandbox handler container_id:\"05dff787c53cb709bbb3035f53cb82d3cdd300f4801547f15141b08ee084d684\" id:\"6ebb6c7a226ba784d9e5e141699ca6549d7635b164dabf26c6b5fc9492a9626f\" pid:5740 exited_at:{seconds:1750481218 nanos:371909872}"
Jun 21 04:47:09.315836 containerd[1744]: time="2025-06-21T04:47:09.315774666Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bb7c9acbe39c806af6ab388b649e7dbafcf7acdd43752747fec23af472893f7e\" id:\"75f82eb6210b202c3fbf277f51d6b97229f8d78723bce7f0744ac57cb8722fa9\" pid:5763 exited_at:{seconds:1750481229 nanos:315569834}"
Jun 21 04:47:17.121533 systemd[1]: Started sshd@7-10.200.8.43:22-10.200.16.10:45746.service - OpenSSH per-connection server daemon (10.200.16.10:45746).
Jun 21 04:47:17.340096 containerd[1744]: time="2025-06-21T04:47:17.340063872Z" level=info msg="TaskExit event in podsandbox handler container_id:\"05dff787c53cb709bbb3035f53cb82d3cdd300f4801547f15141b08ee084d684\" id:\"878e6170ceec3671a141bc74db5f9daeb68576933f4d7654795b8aecd041f4df\" pid:5793 exited_at:{seconds:1750481237 nanos:339861441}"
Jun 21 04:47:17.756364 sshd[5779]: Accepted publickey for core from 10.200.16.10 port 45746 ssh2: RSA SHA256:4oKQ9IZ/Yu3eC3caPZbT837fBtOzsHYOJO+UUGIDRpc
Jun 21 04:47:17.757302 sshd-session[5779]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 21 04:47:17.761235 systemd-logind[1719]: New session 10 of user core.
Jun 21 04:47:17.765292 systemd[1]: Started session-10.scope - Session 10 of User core.
Jun 21 04:47:18.251540 sshd[5803]: Connection closed by 10.200.16.10 port 45746
Jun 21 04:47:18.251924 sshd-session[5779]: pam_unix(sshd:session): session closed for user core
Jun 21 04:47:18.254118 systemd[1]: sshd@7-10.200.8.43:22-10.200.16.10:45746.service: Deactivated successfully.
Jun 21 04:47:18.255544 systemd[1]: session-10.scope: Deactivated successfully.
Jun 21 04:47:18.256184 systemd-logind[1719]: Session 10 logged out. Waiting for processes to exit.
Jun 21 04:47:18.257537 systemd-logind[1719]: Removed session 10.
Jun 21 04:47:21.728112 containerd[1744]: time="2025-06-21T04:47:21.728071022Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b8b336c59baccbda0d5556f7352178d5afb3abd0e17cc6530944368d58eebbfd\" id:\"90d7f81f8e7ded306b162cbe8c51b1a0fe4ff63b1fa7ccf9ebc2275f434a12af\" pid:5829 exited_at:{seconds:1750481241 nanos:727804382}"
Jun 21 04:47:23.361744 systemd[1]: Started sshd@8-10.200.8.43:22-10.200.16.10:42136.service - OpenSSH per-connection server daemon (10.200.16.10:42136).
Jun 21 04:47:23.998199 sshd[5840]: Accepted publickey for core from 10.200.16.10 port 42136 ssh2: RSA SHA256:4oKQ9IZ/Yu3eC3caPZbT837fBtOzsHYOJO+UUGIDRpc
Jun 21 04:47:23.999460 sshd-session[5840]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 21 04:47:24.004207 systemd-logind[1719]: New session 11 of user core.
Jun 21 04:47:24.008283 systemd[1]: Started session-11.scope - Session 11 of User core.
Jun 21 04:47:24.486884 sshd[5842]: Connection closed by 10.200.16.10 port 42136
Jun 21 04:47:24.487278 sshd-session[5840]: pam_unix(sshd:session): session closed for user core
Jun 21 04:47:24.489605 systemd[1]: sshd@8-10.200.8.43:22-10.200.16.10:42136.service: Deactivated successfully.
Jun 21 04:47:24.491263 systemd[1]: session-11.scope: Deactivated successfully.
Jun 21 04:47:24.491993 systemd-logind[1719]: Session 11 logged out. Waiting for processes to exit.
Jun 21 04:47:24.493552 systemd-logind[1719]: Removed session 11.
Jun 21 04:47:29.601307 systemd[1]: Started sshd@9-10.200.8.43:22-10.200.16.10:50890.service - OpenSSH per-connection server daemon (10.200.16.10:50890).
Jun 21 04:47:30.226903 sshd[5863]: Accepted publickey for core from 10.200.16.10 port 50890 ssh2: RSA SHA256:4oKQ9IZ/Yu3eC3caPZbT837fBtOzsHYOJO+UUGIDRpc
Jun 21 04:47:30.228368 sshd-session[5863]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 21 04:47:30.234674 systemd-logind[1719]: New session 12 of user core.
Jun 21 04:47:30.239289 systemd[1]: Started session-12.scope - Session 12 of User core.
Jun 21 04:47:30.712742 sshd[5867]: Connection closed by 10.200.16.10 port 50890
Jun 21 04:47:30.713173 sshd-session[5863]: pam_unix(sshd:session): session closed for user core
Jun 21 04:47:30.715794 systemd[1]: sshd@9-10.200.8.43:22-10.200.16.10:50890.service: Deactivated successfully.
Jun 21 04:47:30.717388 systemd[1]: session-12.scope: Deactivated successfully.
Jun 21 04:47:30.717977 systemd-logind[1719]: Session 12 logged out. Waiting for processes to exit.
Jun 21 04:47:30.719030 systemd-logind[1719]: Removed session 12.
Jun 21 04:47:30.836574 systemd[1]: Started sshd@10-10.200.8.43:22-10.200.16.10:50902.service - OpenSSH per-connection server daemon (10.200.16.10:50902).
Jun 21 04:47:31.460590 sshd[5880]: Accepted publickey for core from 10.200.16.10 port 50902 ssh2: RSA SHA256:4oKQ9IZ/Yu3eC3caPZbT837fBtOzsHYOJO+UUGIDRpc
Jun 21 04:47:31.461519 sshd-session[5880]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 21 04:47:31.465257 systemd-logind[1719]: New session 13 of user core.
Jun 21 04:47:31.471278 systemd[1]: Started session-13.scope - Session 13 of User core.
Jun 21 04:47:31.967740 sshd[5882]: Connection closed by 10.200.16.10 port 50902
Jun 21 04:47:31.968077 sshd-session[5880]: pam_unix(sshd:session): session closed for user core
Jun 21 04:47:31.970534 systemd[1]: sshd@10-10.200.8.43:22-10.200.16.10:50902.service: Deactivated successfully.
Jun 21 04:47:31.972023 systemd[1]: session-13.scope: Deactivated successfully.
Jun 21 04:47:31.972724 systemd-logind[1719]: Session 13 logged out. Waiting for processes to exit.
Jun 21 04:47:31.973791 systemd-logind[1719]: Removed session 13.
Jun 21 04:47:32.078642 systemd[1]: Started sshd@11-10.200.8.43:22-10.200.16.10:50912.service - OpenSSH per-connection server daemon (10.200.16.10:50912).
Jun 21 04:47:32.706287 sshd[5893]: Accepted publickey for core from 10.200.16.10 port 50912 ssh2: RSA SHA256:4oKQ9IZ/Yu3eC3caPZbT837fBtOzsHYOJO+UUGIDRpc
Jun 21 04:47:32.708040 sshd-session[5893]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 21 04:47:32.713081 systemd-logind[1719]: New session 14 of user core.
Jun 21 04:47:32.718288 systemd[1]: Started session-14.scope - Session 14 of User core.
Jun 21 04:47:33.189922 sshd[5900]: Connection closed by 10.200.16.10 port 50912
Jun 21 04:47:33.190329 sshd-session[5893]: pam_unix(sshd:session): session closed for user core
Jun 21 04:47:33.192977 systemd[1]: sshd@11-10.200.8.43:22-10.200.16.10:50912.service: Deactivated successfully.
Jun 21 04:47:33.194882 systemd[1]: session-14.scope: Deactivated successfully.
Jun 21 04:47:33.195574 systemd-logind[1719]: Session 14 logged out. Waiting for processes to exit.
Jun 21 04:47:33.196703 systemd-logind[1719]: Removed session 14.
Jun 21 04:47:38.301311 systemd[1]: Started sshd@12-10.200.8.43:22-10.200.16.10:50920.service - OpenSSH per-connection server daemon (10.200.16.10:50920).
Jun 21 04:47:38.926323 sshd[5913]: Accepted publickey for core from 10.200.16.10 port 50920 ssh2: RSA SHA256:4oKQ9IZ/Yu3eC3caPZbT837fBtOzsHYOJO+UUGIDRpc
Jun 21 04:47:38.927356 sshd-session[5913]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 21 04:47:38.931663 systemd-logind[1719]: New session 15 of user core.
Jun 21 04:47:38.934276 systemd[1]: Started session-15.scope - Session 15 of User core.
Jun 21 04:47:39.335923 containerd[1744]: time="2025-06-21T04:47:39.335744224Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bb7c9acbe39c806af6ab388b649e7dbafcf7acdd43752747fec23af472893f7e\" id:\"68a94a436ef9646d3122687941e55b04fe905ce845e237b45d14d4656af671a0\" pid:5935 exited_at:{seconds:1750481259 nanos:335086851}"
Jun 21 04:47:39.434677 sshd[5915]: Connection closed by 10.200.16.10 port 50920
Jun 21 04:47:39.435034 sshd-session[5913]: pam_unix(sshd:session): session closed for user core
Jun 21 04:47:39.437485 systemd[1]: sshd@12-10.200.8.43:22-10.200.16.10:50920.service: Deactivated successfully.
Jun 21 04:47:39.439112 systemd[1]: session-15.scope: Deactivated successfully.
Jun 21 04:47:39.439855 systemd-logind[1719]: Session 15 logged out. Waiting for processes to exit.
Jun 21 04:47:39.441338 systemd-logind[1719]: Removed session 15.
Jun 21 04:47:42.640694 containerd[1744]: time="2025-06-21T04:47:42.640574901Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b8b336c59baccbda0d5556f7352178d5afb3abd0e17cc6530944368d58eebbfd\" id:\"4859c2648982b72f17059cf6f2cfce99cb6662124fdbd11fecb686b32383506b\" pid:5962 exited_at:{seconds:1750481262 nanos:640328289}"
Jun 21 04:47:44.548085 systemd[1]: Started sshd@13-10.200.8.43:22-10.200.16.10:37594.service - OpenSSH per-connection server daemon (10.200.16.10:37594).
Jun 21 04:47:45.179827 sshd[5982]: Accepted publickey for core from 10.200.16.10 port 37594 ssh2: RSA SHA256:4oKQ9IZ/Yu3eC3caPZbT837fBtOzsHYOJO+UUGIDRpc
Jun 21 04:47:45.181154 sshd-session[5982]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 21 04:47:45.189607 systemd-logind[1719]: New session 16 of user core.
Jun 21 04:47:45.193493 systemd[1]: Started session-16.scope - Session 16 of User core.
Jun 21 04:47:45.707613 sshd[5984]: Connection closed by 10.200.16.10 port 37594
Jun 21 04:47:45.708307 sshd-session[5982]: pam_unix(sshd:session): session closed for user core
Jun 21 04:47:45.714164 systemd[1]: sshd@13-10.200.8.43:22-10.200.16.10:37594.service: Deactivated successfully.
Jun 21 04:47:45.716759 systemd[1]: session-16.scope: Deactivated successfully.
Jun 21 04:47:45.719336 systemd-logind[1719]: Session 16 logged out. Waiting for processes to exit.
Jun 21 04:47:45.722053 systemd-logind[1719]: Removed session 16.
Jun 21 04:47:47.344458 containerd[1744]: time="2025-06-21T04:47:47.344410472Z" level=info msg="TaskExit event in podsandbox handler container_id:\"05dff787c53cb709bbb3035f53cb82d3cdd300f4801547f15141b08ee084d684\" id:\"a26af4f4a69084c4a0f255cce96c991cae56b0f325c3fdac43c0d671e0cb34de\" pid:6022 exited_at:{seconds:1750481267 nanos:344207109}"
Jun 21 04:47:50.823287 systemd[1]: Started sshd@14-10.200.8.43:22-10.200.16.10:60166.service - OpenSSH per-connection server daemon (10.200.16.10:60166).
Jun 21 04:47:51.445467 sshd[6032]: Accepted publickey for core from 10.200.16.10 port 60166 ssh2: RSA SHA256:4oKQ9IZ/Yu3eC3caPZbT837fBtOzsHYOJO+UUGIDRpc
Jun 21 04:47:51.446443 sshd-session[6032]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 21 04:47:51.450489 systemd-logind[1719]: New session 17 of user core.
Jun 21 04:47:51.455271 systemd[1]: Started session-17.scope - Session 17 of User core.
Jun 21 04:47:51.735932 containerd[1744]: time="2025-06-21T04:47:51.735841370Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b8b336c59baccbda0d5556f7352178d5afb3abd0e17cc6530944368d58eebbfd\" id:\"4221ed166e87ebe1ec7117531fe9eff7605c2dfd54211ed9c8f769960d26d036\" pid:6047 exited_at:{seconds:1750481271 nanos:735642218}"
Jun 21 04:47:51.949315 sshd[6034]: Connection closed by 10.200.16.10 port 60166
Jun 21 04:47:51.950393 sshd-session[6032]: pam_unix(sshd:session): session closed for user core
Jun 21 04:47:51.954998 systemd[1]: sshd@14-10.200.8.43:22-10.200.16.10:60166.service: Deactivated successfully.
Jun 21 04:47:51.957923 systemd[1]: session-17.scope: Deactivated successfully.
Jun 21 04:47:51.959725 systemd-logind[1719]: Session 17 logged out. Waiting for processes to exit.
Jun 21 04:47:51.961938 systemd-logind[1719]: Removed session 17.
Jun 21 04:47:52.061196 systemd[1]: Started sshd@15-10.200.8.43:22-10.200.16.10:60178.service - OpenSSH per-connection server daemon (10.200.16.10:60178).
Jun 21 04:47:52.706762 sshd[6068]: Accepted publickey for core from 10.200.16.10 port 60178 ssh2: RSA SHA256:4oKQ9IZ/Yu3eC3caPZbT837fBtOzsHYOJO+UUGIDRpc
Jun 21 04:47:52.707787 sshd-session[6068]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 21 04:47:52.712504 systemd-logind[1719]: New session 18 of user core.
Jun 21 04:47:52.718527 systemd[1]: Started session-18.scope - Session 18 of User core.
Jun 21 04:47:53.297015 sshd[6070]: Connection closed by 10.200.16.10 port 60178
Jun 21 04:47:53.298440 sshd-session[6068]: pam_unix(sshd:session): session closed for user core
Jun 21 04:47:53.301309 systemd[1]: sshd@15-10.200.8.43:22-10.200.16.10:60178.service: Deactivated successfully.
Jun 21 04:47:53.303906 systemd[1]: session-18.scope: Deactivated successfully.
Jun 21 04:47:53.306933 systemd-logind[1719]: Session 18 logged out. Waiting for processes to exit.
Jun 21 04:47:53.309033 systemd-logind[1719]: Removed session 18.
Jun 21 04:47:53.408879 systemd[1]: Started sshd@16-10.200.8.43:22-10.200.16.10:60186.service - OpenSSH per-connection server daemon (10.200.16.10:60186).
Jun 21 04:47:54.040170 sshd[6080]: Accepted publickey for core from 10.200.16.10 port 60186 ssh2: RSA SHA256:4oKQ9IZ/Yu3eC3caPZbT837fBtOzsHYOJO+UUGIDRpc
Jun 21 04:47:54.041438 sshd-session[6080]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 21 04:47:54.045381 systemd-logind[1719]: New session 19 of user core.
Jun 21 04:47:54.053255 systemd[1]: Started session-19.scope - Session 19 of User core.
Jun 21 04:47:55.285851 sshd[6082]: Connection closed by 10.200.16.10 port 60186
Jun 21 04:47:55.286385 sshd-session[6080]: pam_unix(sshd:session): session closed for user core
Jun 21 04:47:55.289419 systemd[1]: sshd@16-10.200.8.43:22-10.200.16.10:60186.service: Deactivated successfully.
Jun 21 04:47:55.291011 systemd[1]: session-19.scope: Deactivated successfully.
Jun 21 04:47:55.291870 systemd-logind[1719]: Session 19 logged out. Waiting for processes to exit.
Jun 21 04:47:55.292872 systemd-logind[1719]: Removed session 19.
Jun 21 04:47:55.399659 systemd[1]: Started sshd@17-10.200.8.43:22-10.200.16.10:60192.service - OpenSSH per-connection server daemon (10.200.16.10:60192).
Jun 21 04:47:56.023663 sshd[6099]: Accepted publickey for core from 10.200.16.10 port 60192 ssh2: RSA SHA256:4oKQ9IZ/Yu3eC3caPZbT837fBtOzsHYOJO+UUGIDRpc
Jun 21 04:47:56.024670 sshd-session[6099]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 21 04:47:56.028609 systemd-logind[1719]: New session 20 of user core.
Jun 21 04:47:56.033405 systemd[1]: Started session-20.scope - Session 20 of User core.
Jun 21 04:47:56.589042 sshd[6101]: Connection closed by 10.200.16.10 port 60192
Jun 21 04:47:56.589501 sshd-session[6099]: pam_unix(sshd:session): session closed for user core
Jun 21 04:47:56.591747 systemd[1]: sshd@17-10.200.8.43:22-10.200.16.10:60192.service: Deactivated successfully.
Jun 21 04:47:56.593457 systemd[1]: session-20.scope: Deactivated successfully.
Jun 21 04:47:56.594804 systemd-logind[1719]: Session 20 logged out. Waiting for processes to exit.
Jun 21 04:47:56.596022 systemd-logind[1719]: Removed session 20.
Jun 21 04:47:56.698376 systemd[1]: Started sshd@18-10.200.8.43:22-10.200.16.10:60204.service - OpenSSH per-connection server daemon (10.200.16.10:60204).
Jun 21 04:47:57.325456 sshd[6111]: Accepted publickey for core from 10.200.16.10 port 60204 ssh2: RSA SHA256:4oKQ9IZ/Yu3eC3caPZbT837fBtOzsHYOJO+UUGIDRpc
Jun 21 04:47:57.326355 sshd-session[6111]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 21 04:47:57.330156 systemd-logind[1719]: New session 21 of user core.
Jun 21 04:47:57.335275 systemd[1]: Started session-21.scope - Session 21 of User core.
Jun 21 04:47:57.809045 sshd[6113]: Connection closed by 10.200.16.10 port 60204
Jun 21 04:47:57.809461 sshd-session[6111]: pam_unix(sshd:session): session closed for user core
Jun 21 04:47:57.812010 systemd[1]: sshd@18-10.200.8.43:22-10.200.16.10:60204.service: Deactivated successfully.
Jun 21 04:47:57.813774 systemd[1]: session-21.scope: Deactivated successfully.
Jun 21 04:47:57.814477 systemd-logind[1719]: Session 21 logged out. Waiting for processes to exit.
Jun 21 04:47:57.815872 systemd-logind[1719]: Removed session 21.
Jun 21 04:47:58.379812 containerd[1744]: time="2025-06-21T04:47:58.379775666Z" level=info msg="TaskExit event in podsandbox handler container_id:\"05dff787c53cb709bbb3035f53cb82d3cdd300f4801547f15141b08ee084d684\" id:\"d7394014e72b26d5286d8c129ac1433cbcbe79a16c86944a8211bd86f09f813f\" pid:6136 exited_at:{seconds:1750481278 nanos:379557016}"
Jun 21 04:48:02.985907 systemd[1]: Started sshd@19-10.200.8.43:22-10.200.16.10:55116.service - OpenSSH per-connection server daemon (10.200.16.10:55116).
Jun 21 04:48:03.644637 sshd[6151]: Accepted publickey for core from 10.200.16.10 port 55116 ssh2: RSA SHA256:4oKQ9IZ/Yu3eC3caPZbT837fBtOzsHYOJO+UUGIDRpc
Jun 21 04:48:03.645620 sshd-session[6151]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 21 04:48:03.649603 systemd-logind[1719]: New session 22 of user core.
Jun 21 04:48:03.652270 systemd[1]: Started session-22.scope - Session 22 of User core.
Jun 21 04:48:04.149534 sshd[6153]: Connection closed by 10.200.16.10 port 55116
Jun 21 04:48:04.149972 sshd-session[6151]: pam_unix(sshd:session): session closed for user core
Jun 21 04:48:04.152591 systemd[1]: sshd@19-10.200.8.43:22-10.200.16.10:55116.service: Deactivated successfully.
Jun 21 04:48:04.154236 systemd[1]: session-22.scope: Deactivated successfully.
Jun 21 04:48:04.154932 systemd-logind[1719]: Session 22 logged out. Waiting for processes to exit.
Jun 21 04:48:04.156570 systemd-logind[1719]: Removed session 22.
Jun 21 04:48:09.264417 systemd[1]: Started sshd@20-10.200.8.43:22-10.200.16.10:34810.service - OpenSSH per-connection server daemon (10.200.16.10:34810).
Jun 21 04:48:09.325463 containerd[1744]: time="2025-06-21T04:48:09.325432775Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bb7c9acbe39c806af6ab388b649e7dbafcf7acdd43752747fec23af472893f7e\" id:\"09d123bc510681ad21b20c448163d71d990755ced9019dfbc53d60c54a6a66b1\" pid:6178 exited_at:{seconds:1750481289 nanos:325201372}"
Jun 21 04:48:09.899757 sshd[6165]: Accepted publickey for core from 10.200.16.10 port 34810 ssh2: RSA SHA256:4oKQ9IZ/Yu3eC3caPZbT837fBtOzsHYOJO+UUGIDRpc
Jun 21 04:48:09.900725 sshd-session[6165]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 21 04:48:09.904079 systemd-logind[1719]: New session 23 of user core.
Jun 21 04:48:09.909297 systemd[1]: Started session-23.scope - Session 23 of User core.
Jun 21 04:48:10.386946 sshd[6191]: Connection closed by 10.200.16.10 port 34810
Jun 21 04:48:10.387600 sshd-session[6165]: pam_unix(sshd:session): session closed for user core
Jun 21 04:48:10.390512 systemd[1]: sshd@20-10.200.8.43:22-10.200.16.10:34810.service: Deactivated successfully.
Jun 21 04:48:10.392513 systemd[1]: session-23.scope: Deactivated successfully.
Jun 21 04:48:10.393180 systemd-logind[1719]: Session 23 logged out. Waiting for processes to exit.
Jun 21 04:48:10.394233 systemd-logind[1719]: Removed session 23.
Jun 21 04:48:15.502025 systemd[1]: Started sshd@21-10.200.8.43:22-10.200.16.10:34820.service - OpenSSH per-connection server daemon (10.200.16.10:34820).
Jun 21 04:48:16.132876 sshd[6203]: Accepted publickey for core from 10.200.16.10 port 34820 ssh2: RSA SHA256:4oKQ9IZ/Yu3eC3caPZbT837fBtOzsHYOJO+UUGIDRpc
Jun 21 04:48:16.133841 sshd-session[6203]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 21 04:48:16.137837 systemd-logind[1719]: New session 24 of user core.
Jun 21 04:48:16.142280 systemd[1]: Started session-24.scope - Session 24 of User core.
Jun 21 04:48:16.620144 sshd[6206]: Connection closed by 10.200.16.10 port 34820
Jun 21 04:48:16.620582 sshd-session[6203]: pam_unix(sshd:session): session closed for user core
Jun 21 04:48:16.623233 systemd[1]: sshd@21-10.200.8.43:22-10.200.16.10:34820.service: Deactivated successfully.
Jun 21 04:48:16.624953 systemd[1]: session-24.scope: Deactivated successfully.
Jun 21 04:48:16.625660 systemd-logind[1719]: Session 24 logged out. Waiting for processes to exit.
Jun 21 04:48:16.626696 systemd-logind[1719]: Removed session 24.
Jun 21 04:48:17.340387 containerd[1744]: time="2025-06-21T04:48:17.340327409Z" level=info msg="TaskExit event in podsandbox handler container_id:\"05dff787c53cb709bbb3035f53cb82d3cdd300f4801547f15141b08ee084d684\" id:\"4358b35c55bf2e81916bd6afe527fd4ae8c1f74c992be512a6231f5e1360fedb\" pid:6229 exited_at:{seconds:1750481297 nanos:339857469}"
Jun 21 04:48:21.715104 containerd[1744]: time="2025-06-21T04:48:21.715061802Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b8b336c59baccbda0d5556f7352178d5afb3abd0e17cc6530944368d58eebbfd\" id:\"a0e3c915e256d5b09eee008f79495d0909e5b0fb0afa09b3a87bb561a63ec5d8\" pid:6252 exited_at:{seconds:1750481301 nanos:714848469}"
Jun 21 04:48:21.730702 systemd[1]: Started sshd@22-10.200.8.43:22-10.200.16.10:40724.service - OpenSSH per-connection server daemon (10.200.16.10:40724).
Jun 21 04:48:22.358346 sshd[6264]: Accepted publickey for core from 10.200.16.10 port 40724 ssh2: RSA SHA256:4oKQ9IZ/Yu3eC3caPZbT837fBtOzsHYOJO+UUGIDRpc
Jun 21 04:48:22.359322 sshd-session[6264]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 21 04:48:22.363585 systemd-logind[1719]: New session 25 of user core.
Jun 21 04:48:22.372290 systemd[1]: Started session-25.scope - Session 25 of User core.
Jun 21 04:48:22.843593 sshd[6266]: Connection closed by 10.200.16.10 port 40724
Jun 21 04:48:22.844000 sshd-session[6264]: pam_unix(sshd:session): session closed for user core
Jun 21 04:48:22.846262 systemd[1]: sshd@22-10.200.8.43:22-10.200.16.10:40724.service: Deactivated successfully.
Jun 21 04:48:22.847869 systemd[1]: session-25.scope: Deactivated successfully.
Jun 21 04:48:22.849032 systemd-logind[1719]: Session 25 logged out. Waiting for processes to exit.
Jun 21 04:48:22.850200 systemd-logind[1719]: Removed session 25.
Jun 21 04:48:27.956893 systemd[1]: Started sshd@23-10.200.8.43:22-10.200.16.10:40726.service - OpenSSH per-connection server daemon (10.200.16.10:40726).
Jun 21 04:48:28.584945 sshd[6278]: Accepted publickey for core from 10.200.16.10 port 40726 ssh2: RSA SHA256:4oKQ9IZ/Yu3eC3caPZbT837fBtOzsHYOJO+UUGIDRpc
Jun 21 04:48:28.585937 sshd-session[6278]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 21 04:48:28.590237 systemd-logind[1719]: New session 26 of user core.
Jun 21 04:48:28.594305 systemd[1]: Started session-26.scope - Session 26 of User core.
Jun 21 04:48:29.067730 sshd[6280]: Connection closed by 10.200.16.10 port 40726
Jun 21 04:48:29.068172 sshd-session[6278]: pam_unix(sshd:session): session closed for user core
Jun 21 04:48:29.070795 systemd[1]: sshd@23-10.200.8.43:22-10.200.16.10:40726.service: Deactivated successfully.
Jun 21 04:48:29.072328 systemd[1]: session-26.scope: Deactivated successfully.
Jun 21 04:48:29.072982 systemd-logind[1719]: Session 26 logged out. Waiting for processes to exit.
Jun 21 04:48:29.074014 systemd-logind[1719]: Removed session 26.
Jun 21 04:48:34.185027 systemd[1]: Started sshd@24-10.200.8.43:22-10.200.16.10:47530.service - OpenSSH per-connection server daemon (10.200.16.10:47530).
Jun 21 04:48:34.818352 sshd[6296]: Accepted publickey for core from 10.200.16.10 port 47530 ssh2: RSA SHA256:4oKQ9IZ/Yu3eC3caPZbT837fBtOzsHYOJO+UUGIDRpc
Jun 21 04:48:34.819542 sshd-session[6296]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 21 04:48:34.823662 systemd-logind[1719]: New session 27 of user core.
Jun 21 04:48:34.825280 systemd[1]: Started session-27.scope - Session 27 of User core.
Jun 21 04:48:35.307439 sshd[6298]: Connection closed by 10.200.16.10 port 47530
Jun 21 04:48:35.307885 sshd-session[6296]: pam_unix(sshd:session): session closed for user core
Jun 21 04:48:35.310453 systemd[1]: sshd@24-10.200.8.43:22-10.200.16.10:47530.service: Deactivated successfully.
Jun 21 04:48:35.312253 systemd[1]: session-27.scope: Deactivated successfully.
Jun 21 04:48:35.313470 systemd-logind[1719]: Session 27 logged out. Waiting for processes to exit.
Jun 21 04:48:35.314939 systemd-logind[1719]: Removed session 27.
Jun 21 04:48:39.316983 containerd[1744]: time="2025-06-21T04:48:39.316892064Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bb7c9acbe39c806af6ab388b649e7dbafcf7acdd43752747fec23af472893f7e\" id:\"0f98517345a66fed27da9bb741fade3f66b5fe9ee66ad46bd408c95e5c58b44e\" pid:6323 exited_at:{seconds:1750481319 nanos:316598761}"