Jun 20 19:14:11.049848 kernel: Linux version 6.12.34-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Fri Jun 20 17:06:39 -00 2025
Jun 20 19:14:11.049881 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=b7bb3b1ced9c5d47870a8b74c6c30075189c27e25d75251cfa7215e4bbff75ea
Jun 20 19:14:11.049892 kernel: BIOS-provided physical RAM map:
Jun 20 19:14:11.049900 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Jun 20 19:14:11.049907 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved
Jun 20 19:14:11.049914 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000044fdfff] usable
Jun 20 19:14:11.049923 kernel: BIOS-e820: [mem 0x00000000044fe000-0x00000000048fdfff] reserved
Jun 20 19:14:11.049932 kernel: BIOS-e820: [mem 0x00000000048fe000-0x000000003ff1efff] usable
Jun 20 19:14:11.049939 kernel: BIOS-e820: [mem 0x000000003ff1f000-0x000000003ffc8fff] reserved
Jun 20 19:14:11.049946 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data
Jun 20 19:14:11.049954 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS
Jun 20 19:14:11.049961 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable
Jun 20 19:14:11.049968 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable
Jun 20 19:14:11.049976 kernel: printk: legacy bootconsole [earlyser0] enabled
Jun 20 19:14:11.049987 kernel: NX (Execute Disable) protection: active
Jun 20 19:14:11.049995 kernel: APIC: Static calls initialized
Jun 20 19:14:11.050003 kernel: efi: EFI v2.7 by Microsoft
Jun 20 19:14:11.050011 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff88000 SMBIOS 3.0=0x3ff86000 MEMATTR=0x3ead5518 RNG=0x3ffd2018
Jun 20 19:14:11.050019 kernel: random: crng init done
Jun 20 19:14:11.050027 kernel: secureboot: Secure boot disabled
Jun 20 19:14:11.050034 kernel: SMBIOS 3.1.0 present.
Jun 20 19:14:11.050043 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 01/28/2025
Jun 20 19:14:11.050050 kernel: DMI: Memory slots populated: 2/2
Jun 20 19:14:11.050060 kernel: Hypervisor detected: Microsoft Hyper-V
Jun 20 19:14:11.050067 kernel: Hyper-V: privilege flags low 0xae7f, high 0x3b8030, hints 0x9e4e24, misc 0xe0bed7b2
Jun 20 19:14:11.050074 kernel: Hyper-V: Nested features: 0x3e0101
Jun 20 19:14:11.050082 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40
Jun 20 19:14:11.050089 kernel: Hyper-V: Using hypercall for remote TLB flush
Jun 20 19:14:11.050097 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Jun 20 19:14:11.050104 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Jun 20 19:14:11.050111 kernel: tsc: Detected 2300.001 MHz processor
Jun 20 19:14:11.050119 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jun 20 19:14:11.050127 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jun 20 19:14:11.050136 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x10000000000
Jun 20 19:14:11.050144 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Jun 20 19:14:11.050151 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jun 20 19:14:11.050159 kernel: e820: update [mem 0x48000000-0xffffffff] usable ==> reserved
Jun 20 19:14:11.050167 kernel: last_pfn = 0x40000 max_arch_pfn = 0x10000000000
Jun 20 19:14:11.050174 kernel: Using GB pages for direct mapping
Jun 20 19:14:11.050182 kernel: ACPI: Early table checksum verification disabled
Jun 20 19:14:11.050193 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL)
Jun 20 19:14:11.050203 kernel: ACPI: XSDT 0x000000003FFF90E8 00005C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jun 20 19:14:11.050211 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jun 20 19:14:11.050218 kernel: ACPI: DSDT 0x000000003FFD6000 01E27A (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Jun 20 19:14:11.050226 kernel: ACPI: FACS 0x000000003FFFE000 000040
Jun 20 19:14:11.050234 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jun 20 19:14:11.050242 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jun 20 19:14:11.050252 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jun 20 19:14:11.050260 kernel: ACPI: APIC 0x000000003FFD5000 000052 (v05 HVLITE HVLITETB 00000000 MSHV 00000000)
Jun 20 19:14:11.050268 kernel: ACPI: SRAT 0x000000003FFD4000 0000A0 (v03 HVLITE HVLITETB 00000000 MSHV 00000000)
Jun 20 19:14:11.050276 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jun 20 19:14:11.050284 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113]
Jun 20 19:14:11.050292 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4279]
Jun 20 19:14:11.050300 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f]
Jun 20 19:14:11.050308 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063]
Jun 20 19:14:11.050316 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f]
Jun 20 19:14:11.050325 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027]
Jun 20 19:14:11.050333 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5051]
Jun 20 19:14:11.050340 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd409f]
Jun 20 19:14:11.050348 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037]
Jun 20 19:14:11.050356 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff]
Jun 20 19:14:11.050364 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff]
Jun 20 19:14:11.050372 kernel: NUMA: Node 0 [mem 0x00001000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00001000-0x2bfffffff]
Jun 20 19:14:11.050380 kernel: NODE_DATA(0) allocated [mem 0x2bfff8dc0-0x2bfffffff]
Jun 20 19:14:11.050388 kernel: Zone ranges:
Jun 20 19:14:11.050398 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jun 20 19:14:11.050406 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Jun 20 19:14:11.050414 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff]
Jun 20 19:14:11.050422 kernel: Device empty
Jun 20 19:14:11.050430 kernel: Movable zone start for each node
Jun 20 19:14:11.050438 kernel: Early memory node ranges
Jun 20 19:14:11.050446 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Jun 20 19:14:11.050454 kernel: node 0: [mem 0x0000000000100000-0x00000000044fdfff]
Jun 20 19:14:11.050461 kernel: node 0: [mem 0x00000000048fe000-0x000000003ff1efff]
Jun 20 19:14:11.050470 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff]
Jun 20 19:14:11.050478 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff]
Jun 20 19:14:11.050486 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff]
Jun 20 19:14:11.050493 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jun 20 19:14:11.050500 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Jun 20 19:14:11.050574 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges
Jun 20 19:14:11.050585 kernel: On node 0, zone DMA32: 224 pages in unavailable ranges
Jun 20 19:14:11.050593 kernel: ACPI: PM-Timer IO Port: 0x408
Jun 20 19:14:11.050602 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jun 20 19:14:11.050613 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jun 20 19:14:11.050621 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jun 20 19:14:11.050630 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200
Jun 20 19:14:11.050638 kernel: TSC deadline timer available
Jun 20 19:14:11.050646 kernel: CPU topo: Max. logical packages: 1
Jun 20 19:14:11.050655 kernel: CPU topo: Max. logical dies: 1
Jun 20 19:14:11.050663 kernel: CPU topo: Max. dies per package: 1
Jun 20 19:14:11.050671 kernel: CPU topo: Max. threads per core: 2
Jun 20 19:14:11.050679 kernel: CPU topo: Num. cores per package: 1
Jun 20 19:14:11.050688 kernel: CPU topo: Num. threads per package: 2
Jun 20 19:14:11.050694 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
Jun 20 19:14:11.050701 kernel: [mem 0x40000000-0xffffffff] available for PCI devices
Jun 20 19:14:11.050710 kernel: Booting paravirtualized kernel on Hyper-V
Jun 20 19:14:11.050718 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jun 20 19:14:11.050729 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jun 20 19:14:11.050743 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
Jun 20 19:14:11.050751 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
Jun 20 19:14:11.050759 kernel: pcpu-alloc: [0] 0 1
Jun 20 19:14:11.050769 kernel: Hyper-V: PV spinlocks enabled
Jun 20 19:14:11.050777 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jun 20 19:14:11.050785 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=b7bb3b1ced9c5d47870a8b74c6c30075189c27e25d75251cfa7215e4bbff75ea
Jun 20 19:14:11.050793 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jun 20 19:14:11.050801 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Jun 20 19:14:11.050809 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jun 20 19:14:11.050822 kernel: Fallback order for Node 0: 0
Jun 20 19:14:11.050829 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2095807
Jun 20 19:14:11.050839 kernel: Policy zone: Normal
Jun 20 19:14:11.050848 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jun 20 19:14:11.050856 kernel: software IO TLB: area num 2.
Jun 20 19:14:11.050864 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jun 20 19:14:11.050877 kernel: ftrace: allocating 40093 entries in 157 pages
Jun 20 19:14:11.050885 kernel: ftrace: allocated 157 pages with 5 groups
Jun 20 19:14:11.050892 kernel: Dynamic Preempt: voluntary
Jun 20 19:14:11.050900 kernel: rcu: Preemptible hierarchical RCU implementation.
Jun 20 19:14:11.050909 kernel: rcu: RCU event tracing is enabled.
Jun 20 19:14:11.050926 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jun 20 19:14:11.050936 kernel: Trampoline variant of Tasks RCU enabled.
Jun 20 19:14:11.050946 kernel: Rude variant of Tasks RCU enabled.
Jun 20 19:14:11.050957 kernel: Tracing variant of Tasks RCU enabled.
Jun 20 19:14:11.050967 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jun 20 19:14:11.050976 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jun 20 19:14:11.050985 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jun 20 19:14:11.050995 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jun 20 19:14:11.051004 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jun 20 19:14:11.051014 kernel: Using NULL legacy PIC
Jun 20 19:14:11.051025 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0
Jun 20 19:14:11.051035 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jun 20 19:14:11.051044 kernel: Console: colour dummy device 80x25
Jun 20 19:14:11.051053 kernel: printk: legacy console [tty1] enabled
Jun 20 19:14:11.051063 kernel: printk: legacy console [ttyS0] enabled
Jun 20 19:14:11.051072 kernel: printk: legacy bootconsole [earlyser0] disabled
Jun 20 19:14:11.051081 kernel: ACPI: Core revision 20240827
Jun 20 19:14:11.051092 kernel: Failed to register legacy timer interrupt
Jun 20 19:14:11.051101 kernel: APIC: Switch to symmetric I/O mode setup
Jun 20 19:14:11.051111 kernel: x2apic enabled
Jun 20 19:14:11.051120 kernel: APIC: Switched APIC routing to: physical x2apic
Jun 20 19:14:11.051130 kernel: Hyper-V: Host Build 10.0.26100.1261-1-0
Jun 20 19:14:11.051140 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Jun 20 19:14:11.051149 kernel: Hyper-V: Disabling IBT because of Hyper-V bug
Jun 20 19:14:11.051159 kernel: Hyper-V: Using IPI hypercalls
Jun 20 19:14:11.051168 kernel: APIC: send_IPI() replaced with hv_send_ipi()
Jun 20 19:14:11.051179 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask()
Jun 20 19:14:11.051188 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself()
Jun 20 19:14:11.051198 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself()
Jun 20 19:14:11.051208 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all()
Jun 20 19:14:11.051217 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self()
Jun 20 19:14:11.051227 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212735f0517, max_idle_ns: 440795237604 ns
Jun 20 19:14:11.051237 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 4600.00 BogoMIPS (lpj=2300001)
Jun 20 19:14:11.051246 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jun 20 19:14:11.051257 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Jun 20 19:14:11.051267 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Jun 20 19:14:11.051276 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jun 20 19:14:11.051285 kernel: Spectre V2 : Mitigation: Retpolines
Jun 20 19:14:11.051294 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jun 20 19:14:11.051303 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Jun 20 19:14:11.051312 kernel: RETBleed: Vulnerable
Jun 20 19:14:11.051322 kernel: Speculative Store Bypass: Vulnerable
Jun 20 19:14:11.051330 kernel: ITS: Mitigation: Aligned branch/return thunks
Jun 20 19:14:11.051339 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jun 20 19:14:11.051348 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jun 20 19:14:11.051359 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jun 20 19:14:11.051367 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Jun 20 19:14:11.051376 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Jun 20 19:14:11.051385 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Jun 20 19:14:11.051394 kernel: x86/fpu: Supporting XSAVE feature 0x800: 'Control-flow User registers'
Jun 20 19:14:11.051403 kernel: x86/fpu: Supporting XSAVE feature 0x20000: 'AMX Tile config'
Jun 20 19:14:11.051412 kernel: x86/fpu: Supporting XSAVE feature 0x40000: 'AMX Tile data'
Jun 20 19:14:11.051421 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jun 20 19:14:11.051430 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Jun 20 19:14:11.051439 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Jun 20 19:14:11.051448 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Jun 20 19:14:11.051458 kernel: x86/fpu: xstate_offset[11]: 2432, xstate_sizes[11]: 16
Jun 20 19:14:11.051467 kernel: x86/fpu: xstate_offset[17]: 2496, xstate_sizes[17]: 64
Jun 20 19:14:11.051476 kernel: x86/fpu: xstate_offset[18]: 2560, xstate_sizes[18]: 8192
Jun 20 19:14:11.051485 kernel: x86/fpu: Enabled xstate features 0x608e7, context size is 10752 bytes, using 'compacted' format.
Jun 20 19:14:11.051494 kernel: Freeing SMP alternatives memory: 32K
Jun 20 19:14:11.051503 kernel: pid_max: default: 32768 minimum: 301
Jun 20 19:14:11.051525 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jun 20 19:14:11.051534 kernel: landlock: Up and running.
Jun 20 19:14:11.051541 kernel: SELinux: Initializing.
Jun 20 19:14:11.051549 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jun 20 19:14:11.051557 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jun 20 19:14:11.051567 kernel: smpboot: CPU0: Intel INTEL(R) XEON(R) PLATINUM 8573C (family: 0x6, model: 0xcf, stepping: 0x2)
Jun 20 19:14:11.051577 kernel: Performance Events: unsupported p6 CPU model 207 no PMU driver, software events only.
Jun 20 19:14:11.051586 kernel: signal: max sigframe size: 11952
Jun 20 19:14:11.051595 kernel: rcu: Hierarchical SRCU implementation.
Jun 20 19:14:11.051603 kernel: rcu: Max phase no-delay instances is 400.
Jun 20 19:14:11.051611 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jun 20 19:14:11.051619 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jun 20 19:14:11.051629 kernel: smp: Bringing up secondary CPUs ...
Jun 20 19:14:11.051637 kernel: smpboot: x86: Booting SMP configuration:
Jun 20 19:14:11.051646 kernel: .... node #0, CPUs: #1
Jun 20 19:14:11.051656 kernel: smp: Brought up 1 node, 2 CPUs
Jun 20 19:14:11.051665 kernel: smpboot: Total of 2 processors activated (9200.00 BogoMIPS)
Jun 20 19:14:11.051675 kernel: Memory: 8077024K/8383228K available (14336K kernel code, 2430K rwdata, 9956K rodata, 54424K init, 2544K bss, 299988K reserved, 0K cma-reserved)
Jun 20 19:14:11.051684 kernel: devtmpfs: initialized
Jun 20 19:14:11.051693 kernel: x86/mm: Memory block size: 128MB
Jun 20 19:14:11.051702 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes)
Jun 20 19:14:11.051712 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jun 20 19:14:11.051721 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jun 20 19:14:11.051730 kernel: pinctrl core: initialized pinctrl subsystem
Jun 20 19:14:11.051741 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jun 20 19:14:11.051750 kernel: audit: initializing netlink subsys (disabled)
Jun 20 19:14:11.051759 kernel: audit: type=2000 audit(1750446847.029:1): state=initialized audit_enabled=0 res=1
Jun 20 19:14:11.051768 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jun 20 19:14:11.051777 kernel: thermal_sys: Registered thermal governor 'user_space'
Jun 20 19:14:11.051786 kernel: cpuidle: using governor menu
Jun 20 19:14:11.051795 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jun 20 19:14:11.051804 kernel: dca service started, version 1.12.1
Jun 20 19:14:11.051813 kernel: e820: reserve RAM buffer [mem 0x044fe000-0x07ffffff]
Jun 20 19:14:11.051824 kernel: e820: reserve RAM buffer [mem 0x3ff1f000-0x3fffffff]
Jun 20 19:14:11.051834 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jun 20 19:14:11.051843 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jun 20 19:14:11.051852 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jun 20 19:14:11.051862 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jun 20 19:14:11.051871 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jun 20 19:14:11.051880 kernel: ACPI: Added _OSI(Module Device)
Jun 20 19:14:11.051890 kernel: ACPI: Added _OSI(Processor Device)
Jun 20 19:14:11.051901 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jun 20 19:14:11.051910 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jun 20 19:14:11.051919 kernel: ACPI: Interpreter enabled
Jun 20 19:14:11.051929 kernel: ACPI: PM: (supports S0 S5)
Jun 20 19:14:11.051938 kernel: ACPI: Using IOAPIC for interrupt routing
Jun 20 19:14:11.051947 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jun 20 19:14:11.051957 kernel: PCI: Ignoring E820 reservations for host bridge windows
Jun 20 19:14:11.051966 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F
Jun 20 19:14:11.051975 kernel: iommu: Default domain type: Translated
Jun 20 19:14:11.051984 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jun 20 19:14:11.051995 kernel: efivars: Registered efivars operations
Jun 20 19:14:11.052004 kernel: PCI: Using ACPI for IRQ routing
Jun 20 19:14:11.052014 kernel: PCI: System does not support PCI
Jun 20 19:14:11.052023 kernel: vgaarb: loaded
Jun 20 19:14:11.052032 kernel: clocksource: Switched to clocksource tsc-early
Jun 20 19:14:11.052041 kernel: VFS: Disk quotas dquot_6.6.0
Jun 20 19:14:11.052051 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jun 20 19:14:11.052060 kernel: pnp: PnP ACPI init
Jun 20 19:14:11.052069 kernel: pnp: PnP ACPI: found 3 devices
Jun 20 19:14:11.052080 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jun 20 19:14:11.052090 kernel: NET: Registered PF_INET protocol family
Jun 20 19:14:11.052099 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jun 20 19:14:11.052108 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Jun 20 19:14:11.052118 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jun 20 19:14:11.052127 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jun 20 19:14:11.052137 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Jun 20 19:14:11.052146 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Jun 20 19:14:11.052157 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jun 20 19:14:11.052167 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jun 20 19:14:11.052176 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jun 20 19:14:11.052185 kernel: NET: Registered PF_XDP protocol family
Jun 20 19:14:11.052195 kernel: PCI: CLS 0 bytes, default 64
Jun 20 19:14:11.052204 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jun 20 19:14:11.052213 kernel: software IO TLB: mapped [mem 0x000000003a9d3000-0x000000003e9d3000] (64MB)
Jun 20 19:14:11.052223 kernel: RAPL PMU: API unit is 2^-32 Joules, 1 fixed counters, 10737418240 ms ovfl timer
Jun 20 19:14:11.052232 kernel: RAPL PMU: hw unit of domain psys 2^-0 Joules
Jun 20 19:14:11.052243 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212735f0517, max_idle_ns: 440795237604 ns
Jun 20 19:14:11.052253 kernel: clocksource: Switched to clocksource tsc
Jun 20 19:14:11.052262 kernel: Initialise system trusted keyrings
Jun 20 19:14:11.052272 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Jun 20 19:14:11.052281 kernel: Key type asymmetric registered
Jun 20 19:14:11.052290 kernel: Asymmetric key parser 'x509' registered
Jun 20 19:14:11.052299 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jun 20 19:14:11.052309 kernel: io scheduler mq-deadline registered
Jun 20 19:14:11.052318 kernel: io scheduler kyber registered
Jun 20 19:14:11.052329 kernel: io scheduler bfq registered
Jun 20 19:14:11.052339 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jun 20 19:14:11.052348 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jun 20 19:14:11.052358 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jun 20 19:14:11.052367 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Jun 20 19:14:11.052377 kernel: serial8250: ttyS2 at I/O 0x3e8 (irq = 4, base_baud = 115200) is a 16550A
Jun 20 19:14:11.052386 kernel: i8042: PNP: No PS/2 controller found.
Jun 20 19:14:11.059006 kernel: rtc_cmos 00:02: registered as rtc0
Jun 20 19:14:11.059147 kernel: rtc_cmos 00:02: setting system clock to 2025-06-20T19:14:10 UTC (1750446850)
Jun 20 19:14:11.059225 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram
Jun 20 19:14:11.059237 kernel: intel_pstate: Intel P-state driver initializing
Jun 20 19:14:11.059247 kernel: efifb: probing for efifb
Jun 20 19:14:11.059256 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Jun 20 19:14:11.059266 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Jun 20 19:14:11.059275 kernel: efifb: scrolling: redraw
Jun 20 19:14:11.059285 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jun 20 19:14:11.059294 kernel: Console: switching to colour frame buffer device 128x48
Jun 20 19:14:11.059306 kernel: fb0: EFI VGA frame buffer device
Jun 20 19:14:11.059315 kernel: pstore: Using crash dump compression: deflate
Jun 20 19:14:11.059325 kernel: pstore: Registered efi_pstore as persistent store backend
Jun 20 19:14:11.059334 kernel: NET: Registered PF_INET6 protocol family
Jun 20 19:14:11.059343 kernel: Segment Routing with IPv6
Jun 20 19:14:11.059353 kernel: In-situ OAM (IOAM) with IPv6
Jun 20 19:14:11.059362 kernel: NET: Registered PF_PACKET protocol family
Jun 20 19:14:11.059371 kernel: Key type dns_resolver registered
Jun 20 19:14:11.059381 kernel: IPI shorthand broadcast: enabled
Jun 20 19:14:11.059392 kernel: sched_clock: Marking stable (3150004069, 97994727)->(3627997219, -379998423)
Jun 20 19:14:11.059401 kernel: registered taskstats version 1
Jun 20 19:14:11.059410 kernel: Loading compiled-in X.509 certificates
Jun 20 19:14:11.059420 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.34-flatcar: 9a085d119111c823c157514215d0379e3a2f1b94'
Jun 20 19:14:11.059429 kernel: Demotion targets for Node 0: null
Jun 20 19:14:11.059438 kernel: Key type .fscrypt registered
Jun 20 19:14:11.059447 kernel: Key type fscrypt-provisioning registered
Jun 20 19:14:11.059457 kernel: ima: No TPM chip found, activating TPM-bypass!
Jun 20 19:14:11.059466 kernel: ima: Allocated hash algorithm: sha1
Jun 20 19:14:11.059478 kernel: ima: No architecture policies found
Jun 20 19:14:11.059487 kernel: clk: Disabling unused clocks
Jun 20 19:14:11.059496 kernel: Warning: unable to open an initial console.
Jun 20 19:14:11.059505 kernel: Freeing unused kernel image (initmem) memory: 54424K
Jun 20 19:14:11.059579 kernel: Write protecting the kernel read-only data: 24576k
Jun 20 19:14:11.059590 kernel: Freeing unused kernel image (rodata/data gap) memory: 284K
Jun 20 19:14:11.059599 kernel: Run /init as init process
Jun 20 19:14:11.059608 kernel: with arguments:
Jun 20 19:14:11.059617 kernel: /init
Jun 20 19:14:11.059628 kernel: with environment:
Jun 20 19:14:11.059637 kernel: HOME=/
Jun 20 19:14:11.059646 kernel: TERM=linux
Jun 20 19:14:11.059655 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jun 20 19:14:11.059666 systemd[1]: Successfully made /usr/ read-only.
Jun 20 19:14:11.059680 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jun 20 19:14:11.059691 systemd[1]: Detected virtualization microsoft.
Jun 20 19:14:11.059703 systemd[1]: Detected architecture x86-64.
Jun 20 19:14:11.059712 systemd[1]: Running in initrd.
Jun 20 19:14:11.059722 systemd[1]: No hostname configured, using default hostname.
Jun 20 19:14:11.059732 systemd[1]: Hostname set to .
Jun 20 19:14:11.059741 systemd[1]: Initializing machine ID from random generator.
Jun 20 19:14:11.059751 systemd[1]: Queued start job for default target initrd.target.
Jun 20 19:14:11.059761 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jun 20 19:14:11.059771 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jun 20 19:14:11.059784 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jun 20 19:14:11.059794 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jun 20 19:14:11.059804 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jun 20 19:14:11.059815 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jun 20 19:14:11.059826 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jun 20 19:14:11.059836 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jun 20 19:14:11.059846 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jun 20 19:14:11.059858 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jun 20 19:14:11.059868 systemd[1]: Reached target paths.target - Path Units.
Jun 20 19:14:11.059878 systemd[1]: Reached target slices.target - Slice Units.
Jun 20 19:14:11.059888 systemd[1]: Reached target swap.target - Swaps.
Jun 20 19:14:11.059898 systemd[1]: Reached target timers.target - Timer Units.
Jun 20 19:14:11.059908 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jun 20 19:14:11.059918 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jun 20 19:14:11.059928 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jun 20 19:14:11.059938 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jun 20 19:14:11.059950 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jun 20 19:14:11.059960 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jun 20 19:14:11.059970 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jun 20 19:14:11.059986 systemd[1]: Reached target sockets.target - Socket Units.
Jun 20 19:14:11.059996 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jun 20 19:14:11.060008 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jun 20 19:14:11.060018 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jun 20 19:14:11.060029 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Jun 20 19:14:11.060041 systemd[1]: Starting systemd-fsck-usr.service...
Jun 20 19:14:11.060051 systemd[1]: Starting systemd-journald.service - Journal Service...
Jun 20 19:14:11.060061 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jun 20 19:14:11.060087 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jun 20 19:14:11.060099 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jun 20 19:14:11.060130 systemd-journald[205]: Collecting audit messages is disabled.
Jun 20 19:14:11.060159 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jun 20 19:14:11.060169 systemd[1]: Finished systemd-fsck-usr.service.
Jun 20 19:14:11.060181 systemd-journald[205]: Journal started
Jun 20 19:14:11.060207 systemd-journald[205]: Runtime Journal (/run/log/journal/72ca12e155644c4fb948ef75baeb9149) is 8M, max 158.9M, 150.9M free.
Jun 20 19:14:11.047549 systemd-modules-load[206]: Inserted module 'overlay'
Jun 20 19:14:11.065708 systemd[1]: Started systemd-journald.service - Journal Service.
Jun 20 19:14:11.070636 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jun 20 19:14:11.078612 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jun 20 19:14:11.086159 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jun 20 19:14:11.093980 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jun 20 19:14:11.098959 kernel: Bridge firewalling registered
Jun 20 19:14:11.098600 systemd-modules-load[206]: Inserted module 'br_netfilter'
Jun 20 19:14:11.098659 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jun 20 19:14:11.102484 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jun 20 19:14:11.103280 systemd-tmpfiles[219]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Jun 20 19:14:11.109015 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jun 20 19:14:11.129683 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jun 20 19:14:11.133867 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jun 20 19:14:11.139097 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jun 20 19:14:11.140906 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jun 20 19:14:11.143675 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jun 20 19:14:11.163576 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 20 19:14:11.169704 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 20 19:14:11.174699 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jun 20 19:14:11.189831 systemd-resolved[233]: Positive Trust Anchors: Jun 20 19:14:11.191556 systemd-resolved[233]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jun 20 19:14:11.194150 systemd-resolved[233]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jun 20 19:14:11.207611 dracut-cmdline[246]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=b7bb3b1ced9c5d47870a8b74c6c30075189c27e25d75251cfa7215e4bbff75ea Jun 20 19:14:11.223132 systemd-resolved[233]: Defaulting to hostname 'linux'. Jun 20 19:14:11.226114 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jun 20 19:14:11.230659 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jun 20 19:14:11.267538 kernel: SCSI subsystem initialized Jun 20 19:14:11.276533 kernel: Loading iSCSI transport class v2.0-870. Jun 20 19:14:11.285543 kernel: iscsi: registered transport (tcp) Jun 20 19:14:11.304885 kernel: iscsi: registered transport (qla4xxx) Jun 20 19:14:11.304965 kernel: QLogic iSCSI HBA Driver Jun 20 19:14:11.319860 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... 
Jun 20 19:14:11.333174 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jun 20 19:14:11.337148 systemd[1]: Reached target network-pre.target - Preparation for Network. Jun 20 19:14:11.373483 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jun 20 19:14:11.376611 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jun 20 19:14:11.428540 kernel: raid6: avx512x4 gen() 43727 MB/s Jun 20 19:14:11.446523 kernel: raid6: avx512x2 gen() 42010 MB/s Jun 20 19:14:11.463522 kernel: raid6: avx512x1 gen() 25032 MB/s Jun 20 19:14:11.482529 kernel: raid6: avx2x4 gen() 33931 MB/s Jun 20 19:14:11.499521 kernel: raid6: avx2x2 gen() 37275 MB/s Jun 20 19:14:11.517777 kernel: raid6: avx2x1 gen() 25426 MB/s Jun 20 19:14:11.517795 kernel: raid6: using algorithm avx512x4 gen() 43727 MB/s Jun 20 19:14:11.537846 kernel: raid6: .... xor() 6988 MB/s, rmw enabled Jun 20 19:14:11.537888 kernel: raid6: using avx512x2 recovery algorithm Jun 20 19:14:11.556530 kernel: xor: automatically using best checksumming function avx Jun 20 19:14:11.679546 kernel: Btrfs loaded, zoned=no, fsverity=no Jun 20 19:14:11.684721 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jun 20 19:14:11.687715 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 20 19:14:11.710407 systemd-udevd[454]: Using default interface naming scheme 'v255'. Jun 20 19:14:11.714478 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 20 19:14:11.722926 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jun 20 19:14:11.746453 dracut-pre-trigger[466]: rd.md=0: removing MD RAID activation Jun 20 19:14:11.766152 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jun 20 19:14:11.768636 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... 
Jun 20 19:14:11.811943 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jun 20 19:14:11.818838 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jun 20 19:14:11.863546 kernel: cryptd: max_cpu_qlen set to 1000 Jun 20 19:14:11.887269 kernel: hv_vmbus: Vmbus version:5.3 Jun 20 19:14:11.893109 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jun 20 19:14:11.893180 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jun 20 19:14:11.897653 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jun 20 19:14:11.906648 kernel: AES CTR mode by8 optimization enabled Jun 20 19:14:11.917557 kernel: hv_vmbus: registering driver hyperv_keyboard Jun 20 19:14:11.917603 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Jun 20 19:14:11.917757 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 20 19:14:11.927533 kernel: hid: raw HID events driver (C) Jiri Kosina Jun 20 19:14:11.927581 kernel: pps_core: LinuxPPS API ver. 1 registered Jun 20 19:14:11.934338 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jun 20 19:14:11.934380 kernel: hv_vmbus: registering driver hid_hyperv Jun 20 19:14:11.940374 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Jun 20 19:14:11.939642 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jun 20 19:14:11.948622 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Jun 20 19:14:11.939743 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jun 20 19:14:11.947460 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Jun 20 19:14:11.955766 kernel: PTP clock support registered Jun 20 19:14:11.968804 kernel: hv_vmbus: registering driver hv_pci Jun 20 19:14:11.973565 kernel: hv_vmbus: registering driver hv_netvsc Jun 20 19:14:11.982764 kernel: hv_utils: Registering HyperV Utility Driver Jun 20 19:14:11.982881 kernel: hv_vmbus: registering driver hv_utils Jun 20 19:14:11.996543 kernel: hv_pci 7ad35d50-c05b-47ab-b3a0-56a9a845852b: PCI VMBus probing: Using version 0x10004 Jun 20 19:14:12.004203 kernel: hv_vmbus: registering driver hv_storvsc Jun 20 19:14:12.004314 kernel: hv_utils: Shutdown IC version 3.2 Jun 20 19:14:12.007766 kernel: hv_utils: Heartbeat IC version 3.0 Jun 20 19:14:12.007812 kernel: hv_utils: TimeSync IC version 4.0 Jun 20 19:14:11.591981 systemd-resolved[233]: Clock change detected. Flushing caches. Jun 20 19:14:11.598537 kernel: hv_pci 7ad35d50-c05b-47ab-b3a0-56a9a845852b: PCI host bridge to bus c05b:00 Jun 20 19:14:11.598687 systemd-journald[205]: Time jumped backwards, rotating. Jun 20 19:14:11.598725 kernel: pci_bus c05b:00: root bus resource [mem 0xfc0000000-0xfc007ffff window] Jun 20 19:14:11.601457 kernel: pci_bus c05b:00: No busn resource found for root bus, will use [bus 00-ff] Jun 20 19:14:11.598164 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Jun 20 19:14:11.616164 kernel: hv_netvsc f8615163-0000-1000-2000-7c1e52656f60 (unnamed net_device) (uninitialized): VF slot 1 added Jun 20 19:14:11.616298 kernel: scsi host0: storvsc_host_t Jun 20 19:14:11.616383 kernel: pci c05b:00:00.0: [1414:00a9] type 00 class 0x010802 PCIe Endpoint Jun 20 19:14:11.618847 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5 Jun 20 19:14:11.619044 kernel: pci c05b:00:00.0: BAR 0 [mem 0xfc0000000-0xfc007ffff 64bit] Jun 20 19:14:11.637354 kernel: pci c05b:00:00.0: 32.000 Gb/s available PCIe bandwidth, limited by 2.5 GT/s PCIe x16 link at c05b:00:00.0 (capable of 1024.000 Gb/s with 64.0 GT/s PCIe x16 link) Jun 20 19:14:11.637427 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Jun 20 19:14:11.637694 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jun 20 19:14:11.640007 kernel: pci_bus c05b:00: busn_res: [bus 00-ff] end is updated to 00 Jun 20 19:14:11.641265 kernel: pci c05b:00:00.0: BAR 0 [mem 0xfc0000000-0xfc007ffff 64bit]: assigned Jun 20 19:14:11.642616 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Jun 20 19:14:11.662789 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#180 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Jun 20 19:14:11.663004 kernel: nvme nvme0: pci function c05b:00:00.0 Jun 20 19:14:11.666985 kernel: nvme c05b:00:00.0: enabling device (0000 -> 0002) Jun 20 19:14:11.685850 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#245 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Jun 20 19:14:11.922012 kernel: nvme nvme0: 2/0/0 default/read/poll queues Jun 20 19:14:11.927896 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jun 20 19:14:12.119852 kernel: nvme nvme0: using unchecked data buffer Jun 20 19:14:12.257522 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - MSFT NVMe Accelerator v1.0 EFI-SYSTEM. Jun 20 19:14:12.301074 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - MSFT NVMe Accelerator v1.0 USR-A. 
Jun 20 19:14:12.304081 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - MSFT NVMe Accelerator v1.0 USR-A. Jun 20 19:14:12.328094 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jun 20 19:14:12.338243 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - MSFT NVMe Accelerator v1.0 OEM. Jun 20 19:14:12.350998 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - MSFT NVMe Accelerator v1.0 ROOT. Jun 20 19:14:12.352334 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jun 20 19:14:12.352789 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 20 19:14:12.352813 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jun 20 19:14:12.353569 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jun 20 19:14:12.358638 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jun 20 19:14:12.390216 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. 
Jun 20 19:14:12.395027 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jun 20 19:14:12.402858 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jun 20 19:14:12.641956 kernel: hv_pci 00000001-7870-47b5-b203-907d12ca697e: PCI VMBus probing: Using version 0x10004 Jun 20 19:14:12.642207 kernel: hv_pci 00000001-7870-47b5-b203-907d12ca697e: PCI host bridge to bus 7870:00 Jun 20 19:14:12.644904 kernel: pci_bus 7870:00: root bus resource [mem 0xfc2000000-0xfc4007fff window] Jun 20 19:14:12.646592 kernel: pci_bus 7870:00: No busn resource found for root bus, will use [bus 00-ff] Jun 20 19:14:12.651982 kernel: pci 7870:00:00.0: [1414:00ba] type 00 class 0x020000 PCIe Endpoint Jun 20 19:14:12.655838 kernel: pci 7870:00:00.0: BAR 0 [mem 0xfc2000000-0xfc3ffffff 64bit pref] Jun 20 19:14:12.660852 kernel: pci 7870:00:00.0: BAR 4 [mem 0xfc4000000-0xfc4007fff 64bit pref] Jun 20 19:14:12.662976 kernel: pci 7870:00:00.0: enabling Extended Tags Jun 20 19:14:12.681007 kernel: pci_bus 7870:00: busn_res: [bus 00-ff] end is updated to 00 Jun 20 19:14:12.681186 kernel: pci 7870:00:00.0: BAR 0 [mem 0xfc2000000-0xfc3ffffff 64bit pref]: assigned Jun 20 19:14:12.684890 kernel: pci 7870:00:00.0: BAR 4 [mem 0xfc4000000-0xfc4007fff 64bit pref]: assigned Jun 20 19:14:12.689920 kernel: mana 7870:00:00.0: enabling device (0000 -> 0002) Jun 20 19:14:12.698845 kernel: mana 7870:00:00.0: Microsoft Azure Network Adapter protocol version: 0.1.1 Jun 20 19:14:12.702060 kernel: hv_netvsc f8615163-0000-1000-2000-7c1e52656f60 eth0: VF registering: eth1 Jun 20 19:14:12.702243 kernel: mana 7870:00:00.0 eth1: joined to eth0 Jun 20 19:14:12.705848 kernel: mana 7870:00:00.0 enP30832s1: renamed from eth1 Jun 20 19:14:13.441585 disk-uuid[674]: The operation has completed successfully. Jun 20 19:14:13.443320 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jun 20 19:14:13.501237 systemd[1]: disk-uuid.service: Deactivated successfully. Jun 20 19:14:13.501332 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. 
Jun 20 19:14:13.533819 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jun 20 19:14:13.551222 sh[714]: Success Jun 20 19:14:13.579915 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jun 20 19:14:13.580004 kernel: device-mapper: uevent: version 1.0.3 Jun 20 19:14:13.580028 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jun 20 19:14:13.590850 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Jun 20 19:14:13.780254 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jun 20 19:14:13.785928 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jun 20 19:14:13.794189 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jun 20 19:14:13.809347 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' Jun 20 19:14:13.809397 kernel: BTRFS: device fsid 048b924a-9f97-43f5-98d6-0fff18874966 devid 1 transid 41 /dev/mapper/usr (254:0) scanned by mount (727) Jun 20 19:14:13.813852 kernel: BTRFS info (device dm-0): first mount of filesystem 048b924a-9f97-43f5-98d6-0fff18874966 Jun 20 19:14:13.813899 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jun 20 19:14:13.814915 kernel: BTRFS info (device dm-0): using free-space-tree Jun 20 19:14:14.034849 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jun 20 19:14:14.039294 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jun 20 19:14:14.042480 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jun 20 19:14:14.043206 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jun 20 19:14:14.051240 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Jun 20 19:14:14.099855 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/nvme0n1p6 (259:5) scanned by mount (761) Jun 20 19:14:14.103805 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 40288228-7b4b-4005-945b-574c4c10ab32 Jun 20 19:14:14.104057 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jun 20 19:14:14.104073 kernel: BTRFS info (device nvme0n1p6): using free-space-tree Jun 20 19:14:14.131131 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jun 20 19:14:14.137976 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jun 20 19:14:14.144418 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 40288228-7b4b-4005-945b-574c4c10ab32 Jun 20 19:14:14.146125 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jun 20 19:14:14.151754 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jun 20 19:14:14.172375 systemd-networkd[894]: lo: Link UP Jun 20 19:14:14.172383 systemd-networkd[894]: lo: Gained carrier Jun 20 19:14:14.174205 systemd-networkd[894]: Enumeration completed Jun 20 19:14:14.174286 systemd[1]: Started systemd-networkd.service - Network Configuration. Jun 20 19:14:14.174653 systemd-networkd[894]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 20 19:14:14.176918 kernel: mana 7870:00:00.0 enP30832s1: Configured vPort 0 PD 18 DB 16 Jun 20 19:14:14.174657 systemd-networkd[894]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jun 20 19:14:14.182917 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64 Jun 20 19:14:14.184969 kernel: hv_netvsc f8615163-0000-1000-2000-7c1e52656f60 eth0: Data path switched to VF: enP30832s1 Jun 20 19:14:14.185448 systemd-networkd[894]: enP30832s1: Link UP Jun 20 19:14:14.185545 systemd-networkd[894]: eth0: Link UP Jun 20 19:14:14.185687 systemd-networkd[894]: eth0: Gained carrier Jun 20 19:14:14.185696 systemd-networkd[894]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 20 19:14:14.191062 systemd[1]: Reached target network.target - Network. Jun 20 19:14:14.191200 systemd-networkd[894]: enP30832s1: Gained carrier Jun 20 19:14:14.205868 systemd-networkd[894]: eth0: DHCPv4 address 10.200.4.5/24, gateway 10.200.4.1 acquired from 168.63.129.16 Jun 20 19:14:14.780093 ignition[897]: Ignition 2.21.0 Jun 20 19:14:14.780106 ignition[897]: Stage: fetch-offline Jun 20 19:14:14.780190 ignition[897]: no configs at "/usr/lib/ignition/base.d" Jun 20 19:14:14.782680 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jun 20 19:14:14.780197 ignition[897]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 20 19:14:14.780281 ignition[897]: parsed url from cmdline: "" Jun 20 19:14:14.780284 ignition[897]: no config URL provided Jun 20 19:14:14.780288 ignition[897]: reading system config file "/usr/lib/ignition/user.ign" Jun 20 19:14:14.780293 ignition[897]: no config at "/usr/lib/ignition/user.ign" Jun 20 19:14:14.780298 ignition[897]: failed to fetch config: resource requires networking Jun 20 19:14:14.780549 ignition[897]: Ignition finished successfully Jun 20 19:14:14.796215 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Jun 20 19:14:14.817788 ignition[905]: Ignition 2.21.0 Jun 20 19:14:14.817801 ignition[905]: Stage: fetch Jun 20 19:14:14.818037 ignition[905]: no configs at "/usr/lib/ignition/base.d" Jun 20 19:14:14.818046 ignition[905]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 20 19:14:14.818121 ignition[905]: parsed url from cmdline: "" Jun 20 19:14:14.818124 ignition[905]: no config URL provided Jun 20 19:14:14.818128 ignition[905]: reading system config file "/usr/lib/ignition/user.ign" Jun 20 19:14:14.818133 ignition[905]: no config at "/usr/lib/ignition/user.ign" Jun 20 19:14:14.818165 ignition[905]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Jun 20 19:14:14.877010 ignition[905]: GET result: OK Jun 20 19:14:14.877398 ignition[905]: config has been read from IMDS userdata Jun 20 19:14:14.877433 ignition[905]: parsing config with SHA512: 530ccf978d2837a15c685b571d5f26b223f8371f3596ea13f54f428a811993c14ec1469a8eea829f87815a67d5d24f788d5e719ca39dace1dc29c0df8eaaebce Jun 20 19:14:14.884228 unknown[905]: fetched base config from "system" Jun 20 19:14:14.884238 unknown[905]: fetched base config from "system" Jun 20 19:14:14.884593 ignition[905]: fetch: fetch complete Jun 20 19:14:14.884243 unknown[905]: fetched user config from "azure" Jun 20 19:14:14.884598 ignition[905]: fetch: fetch passed Jun 20 19:14:14.887308 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jun 20 19:14:14.884639 ignition[905]: Ignition finished successfully Jun 20 19:14:14.892310 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jun 20 19:14:14.923266 ignition[911]: Ignition 2.21.0 Jun 20 19:14:14.923277 ignition[911]: Stage: kargs Jun 20 19:14:14.925868 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jun 20 19:14:14.923503 ignition[911]: no configs at "/usr/lib/ignition/base.d" Jun 20 19:14:14.928511 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Jun 20 19:14:14.923511 ignition[911]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 20 19:14:14.924424 ignition[911]: kargs: kargs passed Jun 20 19:14:14.924463 ignition[911]: Ignition finished successfully Jun 20 19:14:14.956961 ignition[917]: Ignition 2.21.0 Jun 20 19:14:14.956970 ignition[917]: Stage: disks Jun 20 19:14:14.957189 ignition[917]: no configs at "/usr/lib/ignition/base.d" Jun 20 19:14:14.959171 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jun 20 19:14:14.957197 ignition[917]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 20 19:14:14.960156 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jun 20 19:14:14.957995 ignition[917]: disks: disks passed Jun 20 19:14:14.960450 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jun 20 19:14:14.958034 ignition[917]: Ignition finished successfully Jun 20 19:14:14.960486 systemd[1]: Reached target local-fs.target - Local File Systems. Jun 20 19:14:14.960801 systemd[1]: Reached target sysinit.target - System Initialization. Jun 20 19:14:14.960837 systemd[1]: Reached target basic.target - Basic System. Jun 20 19:14:14.961959 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jun 20 19:14:15.038953 systemd-fsck[926]: ROOT: clean, 15/7326000 files, 477845/7359488 blocks Jun 20 19:14:15.044491 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jun 20 19:14:15.049720 systemd[1]: Mounting sysroot.mount - /sysroot... Jun 20 19:14:15.275849 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 6290a154-3512-46a6-a5f5-a7fb62c65caa r/w with ordered data mode. Quota mode: none. Jun 20 19:14:15.276604 systemd[1]: Mounted sysroot.mount - /sysroot. Jun 20 19:14:15.278627 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jun 20 19:14:15.297135 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Jun 20 19:14:15.311936 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jun 20 19:14:15.317035 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jun 20 19:14:15.323845 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/nvme0n1p6 (259:5) scanned by mount (935) Jun 20 19:14:15.321175 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jun 20 19:14:15.321285 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jun 20 19:14:15.333072 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 40288228-7b4b-4005-945b-574c4c10ab32 Jun 20 19:14:15.333096 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jun 20 19:14:15.333114 kernel: BTRFS info (device nvme0n1p6): using free-space-tree Jun 20 19:14:15.337320 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jun 20 19:14:15.337739 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jun 20 19:14:15.344254 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jun 20 19:14:15.734768 coreos-metadata[937]: Jun 20 19:14:15.734 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jun 20 19:14:15.738967 coreos-metadata[937]: Jun 20 19:14:15.738 INFO Fetch successful Jun 20 19:14:15.741892 coreos-metadata[937]: Jun 20 19:14:15.739 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Jun 20 19:14:15.753518 coreos-metadata[937]: Jun 20 19:14:15.753 INFO Fetch successful Jun 20 19:14:15.766284 coreos-metadata[937]: Jun 20 19:14:15.766 INFO wrote hostname ci-4344.1.0-a-657d644de8 to /sysroot/etc/hostname Jun 20 19:14:15.770101 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. 
Jun 20 19:14:15.811756 initrd-setup-root[965]: cut: /sysroot/etc/passwd: No such file or directory Jun 20 19:14:15.840024 initrd-setup-root[972]: cut: /sysroot/etc/group: No such file or directory Jun 20 19:14:15.856414 initrd-setup-root[979]: cut: /sysroot/etc/shadow: No such file or directory Jun 20 19:14:15.861243 initrd-setup-root[986]: cut: /sysroot/etc/gshadow: No such file or directory Jun 20 19:14:15.979044 systemd-networkd[894]: enP30832s1: Gained IPv6LL Jun 20 19:14:16.234980 systemd-networkd[894]: eth0: Gained IPv6LL Jun 20 19:14:16.573307 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jun 20 19:14:16.579093 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jun 20 19:14:16.583524 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jun 20 19:14:16.593384 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jun 20 19:14:16.598986 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 40288228-7b4b-4005-945b-574c4c10ab32 Jun 20 19:14:16.617408 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jun 20 19:14:16.624151 ignition[1058]: INFO : Ignition 2.21.0 Jun 20 19:14:16.624151 ignition[1058]: INFO : Stage: mount Jun 20 19:14:16.631959 ignition[1058]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 20 19:14:16.631959 ignition[1058]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 20 19:14:16.631959 ignition[1058]: INFO : mount: mount passed Jun 20 19:14:16.631959 ignition[1058]: INFO : Ignition finished successfully Jun 20 19:14:16.626534 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jun 20 19:14:16.630368 systemd[1]: Starting ignition-files.service - Ignition (files)... Jun 20 19:14:16.659731 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Jun 20 19:14:16.684628 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/nvme0n1p6 (259:5) scanned by mount (1070) Jun 20 19:14:16.684751 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 40288228-7b4b-4005-945b-574c4c10ab32 Jun 20 19:14:16.686047 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jun 20 19:14:16.687099 kernel: BTRFS info (device nvme0n1p6): using free-space-tree Jun 20 19:14:16.692372 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jun 20 19:14:16.717736 ignition[1087]: INFO : Ignition 2.21.0 Jun 20 19:14:16.717736 ignition[1087]: INFO : Stage: files Jun 20 19:14:16.720249 ignition[1087]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 20 19:14:16.720249 ignition[1087]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 20 19:14:16.720249 ignition[1087]: DEBUG : files: compiled without relabeling support, skipping Jun 20 19:14:16.731955 ignition[1087]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jun 20 19:14:16.731955 ignition[1087]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jun 20 19:14:16.769099 ignition[1087]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jun 20 19:14:16.771004 ignition[1087]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jun 20 19:14:16.771004 ignition[1087]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jun 20 19:14:16.769487 unknown[1087]: wrote ssh authorized keys file for user: core Jun 20 19:14:16.777754 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jun 20 19:14:16.777754 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jun 20 19:14:16.804351 ignition[1087]: INFO : files: 
createFilesystemsFiles: createFiles: op(3): GET result: OK Jun 20 19:14:16.901379 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jun 20 19:14:16.901379 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jun 20 19:14:16.909326 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jun 20 19:14:16.909326 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jun 20 19:14:16.909326 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jun 20 19:14:16.909326 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jun 20 19:14:16.909326 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jun 20 19:14:16.909326 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jun 20 19:14:16.909326 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jun 20 19:14:16.931941 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jun 20 19:14:16.931941 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jun 20 19:14:16.931941 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Jun 20 
19:14:16.931941 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jun 20 19:14:16.931941 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jun 20 19:14:16.931941 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1
Jun 20 19:14:17.836319 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jun 20 19:14:21.265486 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jun 20 19:14:21.265486 ignition[1087]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jun 20 19:14:21.289302 ignition[1087]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jun 20 19:14:21.297331 ignition[1087]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jun 20 19:14:21.297331 ignition[1087]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jun 20 19:14:21.297331 ignition[1087]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Jun 20 19:14:21.310428 ignition[1087]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Jun 20 19:14:21.310428 ignition[1087]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Jun 20 19:14:21.310428 ignition[1087]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jun 20 19:14:21.310428 ignition[1087]: INFO : files: files passed
Jun 20 19:14:21.310428 ignition[1087]: INFO : Ignition finished successfully
Jun 20 19:14:21.300285 systemd[1]: Finished ignition-files.service - Ignition (files).
Jun 20 19:14:21.306657 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jun 20 19:14:21.318087 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jun 20 19:14:21.327335 systemd[1]: ignition-quench.service: Deactivated successfully.
Jun 20 19:14:21.327412 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jun 20 19:14:21.348594 initrd-setup-root-after-ignition[1116]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jun 20 19:14:21.348594 initrd-setup-root-after-ignition[1116]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jun 20 19:14:21.353740 initrd-setup-root-after-ignition[1120]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jun 20 19:14:21.356739 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jun 20 19:14:21.361981 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jun 20 19:14:21.367080 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jun 20 19:14:21.405301 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jun 20 19:14:21.405405 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jun 20 19:14:21.410288 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jun 20 19:14:21.411243 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jun 20 19:14:21.411366 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jun 20 19:14:21.412941 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jun 20 19:14:21.450120 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jun 20 19:14:21.454528 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jun 20 19:14:21.475242 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jun 20 19:14:21.477209 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jun 20 19:14:21.477511 systemd[1]: Stopped target timers.target - Timer Units.
Jun 20 19:14:21.478190 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jun 20 19:14:21.478318 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jun 20 19:14:21.492593 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jun 20 19:14:21.496983 systemd[1]: Stopped target basic.target - Basic System.
Jun 20 19:14:21.498438 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jun 20 19:14:21.500647 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jun 20 19:14:21.503459 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jun 20 19:14:21.503803 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Jun 20 19:14:21.504415 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jun 20 19:14:21.504662 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jun 20 19:14:21.514018 systemd[1]: Stopped target sysinit.target - System Initialization.
Jun 20 19:14:21.522034 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jun 20 19:14:21.524820 systemd[1]: Stopped target swap.target - Swaps.
Jun 20 19:14:21.527487 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jun 20 19:14:21.527636 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jun 20 19:14:21.534912 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jun 20 19:14:21.536530 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jun 20 19:14:21.538046 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jun 20 19:14:21.538346 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jun 20 19:14:21.538701 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jun 20 19:14:21.538822 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jun 20 19:14:21.539523 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jun 20 19:14:21.539621 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jun 20 19:14:21.539982 systemd[1]: ignition-files.service: Deactivated successfully.
Jun 20 19:14:21.540092 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jun 20 19:14:21.540592 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Jun 20 19:14:21.540691 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jun 20 19:14:21.543035 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jun 20 19:14:21.543351 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jun 20 19:14:21.543487 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jun 20 19:14:21.548034 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jun 20 19:14:21.548324 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jun 20 19:14:21.548456 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jun 20 19:14:21.548802 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jun 20 19:14:21.548929 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jun 20 19:14:21.553744 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jun 20 19:14:21.553819 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jun 20 19:14:21.637626 ignition[1140]: INFO : Ignition 2.21.0
Jun 20 19:14:21.637626 ignition[1140]: INFO : Stage: umount
Jun 20 19:14:21.641336 ignition[1140]: INFO : no configs at "/usr/lib/ignition/base.d"
Jun 20 19:14:21.641336 ignition[1140]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jun 20 19:14:21.641336 ignition[1140]: INFO : umount: umount passed
Jun 20 19:14:21.641336 ignition[1140]: INFO : Ignition finished successfully
Jun 20 19:14:21.640554 systemd[1]: ignition-mount.service: Deactivated successfully.
Jun 20 19:14:21.640667 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jun 20 19:14:21.642802 systemd[1]: ignition-disks.service: Deactivated successfully.
Jun 20 19:14:21.646233 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jun 20 19:14:21.648727 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jun 20 19:14:21.648771 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jun 20 19:14:21.651953 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jun 20 19:14:21.651990 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jun 20 19:14:21.657102 systemd[1]: Stopped target network.target - Network.
Jun 20 19:14:21.660558 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jun 20 19:14:21.660624 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jun 20 19:14:21.665603 systemd[1]: Stopped target paths.target - Path Units.
Jun 20 19:14:21.669052 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jun 20 19:14:21.670945 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jun 20 19:14:21.674263 systemd[1]: Stopped target slices.target - Slice Units.
Jun 20 19:14:21.677204 systemd[1]: Stopped target sockets.target - Socket Units.
Jun 20 19:14:21.678971 systemd[1]: iscsid.socket: Deactivated successfully.
Jun 20 19:14:21.679015 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jun 20 19:14:21.683938 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jun 20 19:14:21.683977 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jun 20 19:14:21.687906 systemd[1]: ignition-setup.service: Deactivated successfully.
Jun 20 19:14:21.687965 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jun 20 19:14:21.690961 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jun 20 19:14:21.690999 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jun 20 19:14:21.696027 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jun 20 19:14:21.699151 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jun 20 19:14:21.707105 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jun 20 19:14:21.707755 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jun 20 19:14:21.707970 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jun 20 19:14:21.724399 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Jun 20 19:14:21.724558 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jun 20 19:14:21.724636 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jun 20 19:14:21.731119 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Jun 20 19:14:21.731327 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jun 20 19:14:21.731407 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jun 20 19:14:21.734469 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Jun 20 19:14:21.737138 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jun 20 19:14:21.737188 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jun 20 19:14:21.741892 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jun 20 19:14:21.741948 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jun 20 19:14:21.745646 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jun 20 19:14:21.753882 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jun 20 19:14:21.753938 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jun 20 19:14:21.757297 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jun 20 19:14:21.757335 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jun 20 19:14:21.762530 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jun 20 19:14:21.762573 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jun 20 19:14:21.765694 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jun 20 19:14:21.765749 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jun 20 19:14:21.771290 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jun 20 19:14:21.792194 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jun 20 19:14:21.792263 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jun 20 19:14:21.792600 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jun 20 19:14:21.792725 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jun 20 19:14:21.798463 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jun 20 19:14:21.798526 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jun 20 19:14:21.804930 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jun 20 19:14:21.804965 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jun 20 19:14:21.807900 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jun 20 19:14:21.807947 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jun 20 19:14:21.820421 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jun 20 19:14:21.832405 kernel: hv_netvsc f8615163-0000-1000-2000-7c1e52656f60 eth0: Data path switched from VF: enP30832s1
Jun 20 19:14:21.832619 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64
Jun 20 19:14:21.820478 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jun 20 19:14:21.820860 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jun 20 19:14:21.820894 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jun 20 19:14:21.823951 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jun 20 19:14:21.824082 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Jun 20 19:14:21.824129 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Jun 20 19:14:21.827212 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jun 20 19:14:21.827261 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jun 20 19:14:21.839756 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jun 20 19:14:21.839805 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jun 20 19:14:21.856848 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Jun 20 19:14:21.856902 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Jun 20 19:14:21.856939 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jun 20 19:14:21.857273 systemd[1]: network-cleanup.service: Deactivated successfully.
Jun 20 19:14:21.857381 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jun 20 19:14:21.862108 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jun 20 19:14:21.862186 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jun 20 19:14:21.863523 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jun 20 19:14:21.864972 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jun 20 19:14:21.885656 systemd[1]: Switching root.
Jun 20 19:14:21.932922 systemd-journald[205]: Journal stopped
Jun 20 19:14:25.262140 systemd-journald[205]: Received SIGTERM from PID 1 (systemd).
Jun 20 19:14:25.262174 kernel: SELinux: policy capability network_peer_controls=1
Jun 20 19:14:25.262187 kernel: SELinux: policy capability open_perms=1
Jun 20 19:14:25.262197 kernel: SELinux: policy capability extended_socket_class=1
Jun 20 19:14:25.262206 kernel: SELinux: policy capability always_check_network=0
Jun 20 19:14:25.262215 kernel: SELinux: policy capability cgroup_seclabel=1
Jun 20 19:14:25.262228 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jun 20 19:14:25.262237 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jun 20 19:14:25.262247 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jun 20 19:14:25.262257 kernel: SELinux: policy capability userspace_initial_context=0
Jun 20 19:14:25.262267 kernel: audit: type=1403 audit(1750446863.155:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jun 20 19:14:25.262279 systemd[1]: Successfully loaded SELinux policy in 108.761ms.
Jun 20 19:14:25.262291 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 7.111ms.
Jun 20 19:14:25.262306 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jun 20 19:14:25.262318 systemd[1]: Detected virtualization microsoft.
Jun 20 19:14:25.262329 systemd[1]: Detected architecture x86-64.
Jun 20 19:14:25.262339 systemd[1]: Detected first boot.
Jun 20 19:14:25.262351 systemd[1]: Hostname set to .
Jun 20 19:14:25.262363 systemd[1]: Initializing machine ID from random generator.
Jun 20 19:14:25.262372 zram_generator::config[1183]: No configuration found.
Jun 20 19:14:25.262383 kernel: Guest personality initialized and is inactive
Jun 20 19:14:25.262392 kernel: VMCI host device registered (name=vmci, major=10, minor=124)
Jun 20 19:14:25.262401 kernel: Initialized host personality
Jun 20 19:14:25.262409 kernel: NET: Registered PF_VSOCK protocol family
Jun 20 19:14:25.262419 systemd[1]: Populated /etc with preset unit settings.
Jun 20 19:14:25.262432 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Jun 20 19:14:25.262443 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jun 20 19:14:25.262453 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jun 20 19:14:25.262463 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jun 20 19:14:25.262474 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jun 20 19:14:25.262485 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jun 20 19:14:25.262495 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jun 20 19:14:25.262508 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jun 20 19:14:25.262519 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jun 20 19:14:25.262530 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jun 20 19:14:25.262540 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jun 20 19:14:25.262551 systemd[1]: Created slice user.slice - User and Session Slice.
Jun 20 19:14:25.262562 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jun 20 19:14:25.262573 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jun 20 19:14:25.262584 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jun 20 19:14:25.262598 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jun 20 19:14:25.262611 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jun 20 19:14:25.262623 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jun 20 19:14:25.262636 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jun 20 19:14:25.262647 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jun 20 19:14:25.262658 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jun 20 19:14:25.262669 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jun 20 19:14:25.262680 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jun 20 19:14:25.262694 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jun 20 19:14:25.262705 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jun 20 19:14:25.262716 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jun 20 19:14:25.262727 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jun 20 19:14:25.262738 systemd[1]: Reached target slices.target - Slice Units.
Jun 20 19:14:25.262749 systemd[1]: Reached target swap.target - Swaps.
Jun 20 19:14:25.262761 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jun 20 19:14:25.262772 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jun 20 19:14:25.262785 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Jun 20 19:14:25.262796 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jun 20 19:14:25.262807 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jun 20 19:14:25.262818 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jun 20 19:14:25.265245 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jun 20 19:14:25.265264 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jun 20 19:14:25.265276 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jun 20 19:14:25.265287 systemd[1]: Mounting media.mount - External Media Directory...
Jun 20 19:14:25.265300 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jun 20 19:14:25.265371 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jun 20 19:14:25.265383 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jun 20 19:14:25.265395 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jun 20 19:14:25.265407 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jun 20 19:14:25.265420 systemd[1]: Reached target machines.target - Containers.
Jun 20 19:14:25.265431 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jun 20 19:14:25.265442 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jun 20 19:14:25.265452 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jun 20 19:14:25.265463 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jun 20 19:14:25.265473 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jun 20 19:14:25.265484 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jun 20 19:14:25.265495 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jun 20 19:14:25.265507 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jun 20 19:14:25.265520 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jun 20 19:14:25.265532 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jun 20 19:14:25.265543 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jun 20 19:14:25.265555 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jun 20 19:14:25.265566 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jun 20 19:14:25.265578 systemd[1]: Stopped systemd-fsck-usr.service.
Jun 20 19:14:25.265590 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jun 20 19:14:25.265602 systemd[1]: Starting systemd-journald.service - Journal Service...
Jun 20 19:14:25.265615 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jun 20 19:14:25.265627 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jun 20 19:14:25.265639 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jun 20 19:14:25.265651 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Jun 20 19:14:25.265662 kernel: fuse: init (API version 7.41)
Jun 20 19:14:25.265673 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jun 20 19:14:25.265684 systemd[1]: verity-setup.service: Deactivated successfully.
Jun 20 19:14:25.265696 systemd[1]: Stopped verity-setup.service.
Jun 20 19:14:25.265710 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jun 20 19:14:25.265721 kernel: loop: module loaded
Jun 20 19:14:25.265732 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jun 20 19:14:25.265743 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jun 20 19:14:25.265754 systemd[1]: Mounted media.mount - External Media Directory.
Jun 20 19:14:25.265766 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jun 20 19:14:25.265777 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jun 20 19:14:25.265788 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jun 20 19:14:25.265800 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jun 20 19:14:25.265813 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jun 20 19:14:25.265850 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jun 20 19:14:25.265864 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jun 20 19:14:25.265875 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jun 20 19:14:25.265887 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jun 20 19:14:25.265899 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jun 20 19:14:25.265910 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jun 20 19:14:25.265948 systemd-journald[1276]: Collecting audit messages is disabled.
Jun 20 19:14:25.265979 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jun 20 19:14:25.265991 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jun 20 19:14:25.266004 systemd-journald[1276]: Journal started
Jun 20 19:14:25.266033 systemd-journald[1276]: Runtime Journal (/run/log/journal/53fb128686794ab59cc30fba2aed926f) is 8M, max 158.9M, 150.9M free.
Jun 20 19:14:24.791223 systemd[1]: Queued start job for default target multi-user.target.
Jun 20 19:14:24.801546 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Jun 20 19:14:24.802021 systemd[1]: systemd-journald.service: Deactivated successfully.
Jun 20 19:14:25.271955 systemd[1]: Started systemd-journald.service - Journal Service.
Jun 20 19:14:25.275329 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jun 20 19:14:25.275595 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jun 20 19:14:25.278411 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jun 20 19:14:25.283293 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jun 20 19:14:25.286208 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jun 20 19:14:25.291326 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Jun 20 19:14:25.294423 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jun 20 19:14:25.308528 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jun 20 19:14:25.313120 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jun 20 19:14:25.320960 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jun 20 19:14:25.322892 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jun 20 19:14:25.322994 systemd[1]: Reached target local-fs.target - Local File Systems.
Jun 20 19:14:25.329995 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Jun 20 19:14:25.331667 kernel: ACPI: bus type drm_connector registered
Jun 20 19:14:25.335384 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jun 20 19:14:25.337739 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jun 20 19:14:25.339313 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jun 20 19:14:25.342913 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jun 20 19:14:25.345275 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jun 20 19:14:25.348296 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jun 20 19:14:25.351153 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jun 20 19:14:25.352618 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jun 20 19:14:25.356949 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jun 20 19:14:25.365965 systemd-journald[1276]: Time spent on flushing to /var/log/journal/53fb128686794ab59cc30fba2aed926f is 48.068ms for 982 entries.
Jun 20 19:14:25.365965 systemd-journald[1276]: System Journal (/var/log/journal/53fb128686794ab59cc30fba2aed926f) is 11.8M, max 2.6G, 2.6G free.
Jun 20 19:14:25.452859 systemd-journald[1276]: Received client request to flush runtime journal.
Jun 20 19:14:25.452934 systemd-journald[1276]: /var/log/journal/53fb128686794ab59cc30fba2aed926f/system.journal: Realtime clock jumped backwards relative to last journal entry, rotating.
Jun 20 19:14:25.452960 systemd-journald[1276]: Rotating system journal.
Jun 20 19:14:25.452977 kernel: loop0: detected capacity change from 0 to 113872
Jun 20 19:14:25.370911 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jun 20 19:14:25.375795 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jun 20 19:14:25.376100 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jun 20 19:14:25.378902 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jun 20 19:14:25.385085 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jun 20 19:14:25.392407 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jun 20 19:14:25.401059 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jun 20 19:14:25.406978 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Jun 20 19:14:25.451206 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jun 20 19:14:25.453987 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jun 20 19:14:25.458555 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Jun 20 19:14:25.660042 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jun 20 19:14:25.662502 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jun 20 19:14:25.735853 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jun 20 19:14:25.741692 systemd-tmpfiles[1340]: ACLs are not supported, ignoring.
Jun 20 19:14:25.741709 systemd-tmpfiles[1340]: ACLs are not supported, ignoring.
Jun 20 19:14:25.749173 kernel: loop1: detected capacity change from 0 to 221472
Jun 20 19:14:25.749319 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jun 20 19:14:25.797853 kernel: loop2: detected capacity change from 0 to 146240
Jun 20 19:14:25.803438 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jun 20 19:14:26.087865 kernel: loop3: detected capacity change from 0 to 28496
Jun 20 19:14:26.146950 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jun 20 19:14:26.152270 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jun 20 19:14:26.185327 systemd-udevd[1348]: Using default interface naming scheme 'v255'.
Jun 20 19:14:26.327265 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jun 20 19:14:26.332982 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jun 20 19:14:26.368888 kernel: loop4: detected capacity change from 0 to 113872
Jun 20 19:14:26.389859 kernel: loop5: detected capacity change from 0 to 221472
Jun 20 19:14:26.399695 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jun 20 19:14:26.423128 kernel: loop6: detected capacity change from 0 to 146240
Jun 20 19:14:26.424396 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jun 20 19:14:26.465524 kernel: loop7: detected capacity change from 0 to 28496
Jun 20 19:14:26.484577 (sd-merge)[1376]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'.
Jun 20 19:14:26.488813 (sd-merge)[1376]: Merged extensions into '/usr'.
Jun 20 19:14:26.497930 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#139 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Jun 20 19:14:26.501287 systemd[1]: Reload requested from client PID 1324 ('systemd-sysext') (unit systemd-sysext.service)... Jun 20 19:14:26.501302 systemd[1]: Reloading... Jun 20 19:14:26.567851 kernel: mousedev: PS/2 mouse device common for all mice Jun 20 19:14:26.571897 kernel: hv_vmbus: registering driver hyperv_fb Jun 20 19:14:26.609641 kernel: hv_vmbus: registering driver hv_balloon Jun 20 19:14:26.643854 zram_generator::config[1432]: No configuration found. Jun 20 19:14:26.660850 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Jun 20 19:14:26.680862 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Jun 20 19:14:26.684570 kernel: Console: switching to colour dummy device 80x25 Jun 20 19:14:26.691629 kernel: Console: switching to colour frame buffer device 128x48 Jun 20 19:14:26.744863 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Jun 20 19:14:26.794393 systemd-networkd[1354]: lo: Link UP Jun 20 19:14:26.795780 systemd-networkd[1354]: lo: Gained carrier Jun 20 19:14:26.804576 systemd-networkd[1354]: Enumeration completed Jun 20 19:14:26.807326 systemd-networkd[1354]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 20 19:14:26.807593 systemd-networkd[1354]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jun 20 19:14:26.813040 kernel: mana 7870:00:00.0 enP30832s1: Configured vPort 0 PD 18 DB 16 Jun 20 19:14:26.826202 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64 Jun 20 19:14:26.830929 kernel: hv_netvsc f8615163-0000-1000-2000-7c1e52656f60 eth0: Data path switched to VF: enP30832s1 Jun 20 19:14:26.831812 systemd-networkd[1354]: enP30832s1: Link UP Jun 20 19:14:26.832028 systemd-networkd[1354]: eth0: Link UP Jun 20 19:14:26.832082 systemd-networkd[1354]: eth0: Gained carrier Jun 20 19:14:26.832137 systemd-networkd[1354]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 20 19:14:26.837152 systemd-networkd[1354]: enP30832s1: Gained carrier Jun 20 19:14:26.843067 systemd-networkd[1354]: eth0: DHCPv4 address 10.200.4.5/24, gateway 10.200.4.1 acquired from 168.63.129.16 Jun 20 19:14:26.881356 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 20 19:14:27.033303 systemd[1]: Reloading finished in 531 ms. Jun 20 19:14:27.052909 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jun 20 19:14:27.056073 systemd[1]: Started systemd-networkd.service - Network Configuration. Jun 20 19:14:27.057678 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jun 20 19:14:27.107402 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - MSFT NVMe Accelerator v1.0 OEM. Jun 20 19:14:27.118915 systemd[1]: Starting ensure-sysext.service... Jun 20 19:14:27.122128 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jun 20 19:14:27.129150 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... 
Jun 20 19:14:27.137063 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jun 20 19:14:27.144331 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jun 20 19:14:27.152820 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 20 19:14:27.180623 kernel: kvm_intel: Using Hyper-V Enlightened VMCS Jun 20 19:14:27.180901 systemd[1]: Reload requested from client PID 1518 ('systemctl') (unit ensure-sysext.service)... Jun 20 19:14:27.180922 systemd[1]: Reloading... Jun 20 19:14:27.191328 systemd-tmpfiles[1522]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jun 20 19:14:27.191359 systemd-tmpfiles[1522]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jun 20 19:14:27.191577 systemd-tmpfiles[1522]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jun 20 19:14:27.191791 systemd-tmpfiles[1522]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jun 20 19:14:27.192480 systemd-tmpfiles[1522]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jun 20 19:14:27.192710 systemd-tmpfiles[1522]: ACLs are not supported, ignoring. Jun 20 19:14:27.192754 systemd-tmpfiles[1522]: ACLs are not supported, ignoring. Jun 20 19:14:27.218352 systemd-tmpfiles[1522]: Detected autofs mount point /boot during canonicalization of boot. Jun 20 19:14:27.218362 systemd-tmpfiles[1522]: Skipping /boot Jun 20 19:14:27.239101 systemd-tmpfiles[1522]: Detected autofs mount point /boot during canonicalization of boot. Jun 20 19:14:27.239116 systemd-tmpfiles[1522]: Skipping /boot Jun 20 19:14:27.274012 zram_generator::config[1564]: No configuration found. 
Jun 20 19:14:27.364379 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 20 19:14:27.468586 systemd[1]: Reloading finished in 287 ms. Jun 20 19:14:27.491580 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jun 20 19:14:27.494538 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jun 20 19:14:27.496695 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jun 20 19:14:27.500272 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jun 20 19:14:27.509664 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jun 20 19:14:27.514051 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jun 20 19:14:27.519146 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jun 20 19:14:27.524067 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jun 20 19:14:27.533048 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jun 20 19:14:27.541165 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 20 19:14:27.541396 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 20 19:14:27.545273 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 20 19:14:27.551040 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 20 19:14:27.557896 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Jun 20 19:14:27.559579 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 20 19:14:27.559704 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jun 20 19:14:27.559800 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 20 19:14:27.572741 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 20 19:14:27.573156 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 20 19:14:27.577048 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jun 20 19:14:27.581261 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 20 19:14:27.581395 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jun 20 19:14:27.581565 systemd[1]: Reached target time-set.target - System Time Set. Jun 20 19:14:27.587744 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 20 19:14:27.588903 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jun 20 19:14:27.594258 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 20 19:14:27.594419 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
Jun 20 19:14:27.597692 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 20 19:14:27.597923 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 20 19:14:27.602336 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 20 19:14:27.602560 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 20 19:14:27.606396 systemd[1]: modprobe@drm.service: Deactivated successfully. Jun 20 19:14:27.606564 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jun 20 19:14:27.609612 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jun 20 19:14:27.622785 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jun 20 19:14:27.625361 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jun 20 19:14:27.625918 systemd[1]: Finished ensure-sysext.service. Jun 20 19:14:27.655628 systemd-resolved[1626]: Positive Trust Anchors: Jun 20 19:14:27.655638 systemd-resolved[1626]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jun 20 19:14:27.655667 systemd-resolved[1626]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jun 20 19:14:27.670649 systemd-resolved[1626]: Using system hostname 'ci-4344.1.0-a-657d644de8'. 
Jun 20 19:14:27.672406 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jun 20 19:14:27.674095 systemd[1]: Reached target network.target - Network. Jun 20 19:14:27.676969 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jun 20 19:14:27.682947 augenrules[1659]: No rules Jun 20 19:14:27.684037 systemd[1]: audit-rules.service: Deactivated successfully. Jun 20 19:14:27.684238 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jun 20 19:14:27.890620 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jun 20 19:14:27.894098 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jun 20 19:14:28.586987 systemd-networkd[1354]: eth0: Gained IPv6LL Jun 20 19:14:28.589105 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jun 20 19:14:28.593217 systemd[1]: Reached target network-online.target - Network is Online. Jun 20 19:14:28.651022 systemd-networkd[1354]: enP30832s1: Gained IPv6LL Jun 20 19:14:28.876553 ldconfig[1319]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jun 20 19:14:28.886300 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jun 20 19:14:28.891193 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jun 20 19:14:28.912586 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jun 20 19:14:28.916109 systemd[1]: Reached target sysinit.target - System Initialization. Jun 20 19:14:28.917385 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jun 20 19:14:28.920914 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. 
Jun 20 19:14:28.923873 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Jun 20 19:14:28.927002 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jun 20 19:14:28.929943 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jun 20 19:14:28.932885 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jun 20 19:14:28.935895 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jun 20 19:14:28.935935 systemd[1]: Reached target paths.target - Path Units. Jun 20 19:14:28.937021 systemd[1]: Reached target timers.target - Timer Units. Jun 20 19:14:28.951288 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jun 20 19:14:28.954088 systemd[1]: Starting docker.socket - Docker Socket for the API... Jun 20 19:14:28.959046 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jun 20 19:14:28.960916 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jun 20 19:14:28.963925 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jun 20 19:14:28.977417 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jun 20 19:14:28.979161 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jun 20 19:14:28.981393 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jun 20 19:14:28.985635 systemd[1]: Reached target sockets.target - Socket Units. Jun 20 19:14:28.986960 systemd[1]: Reached target basic.target - Basic System. Jun 20 19:14:28.988249 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jun 20 19:14:28.988272 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. 
Jun 20 19:14:28.990685 systemd[1]: Starting chronyd.service - NTP client/server... Jun 20 19:14:28.994859 systemd[1]: Starting containerd.service - containerd container runtime... Jun 20 19:14:29.002032 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jun 20 19:14:29.006941 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jun 20 19:14:29.011000 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jun 20 19:14:29.019131 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jun 20 19:14:29.022954 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jun 20 19:14:29.024781 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jun 20 19:14:29.027262 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Jun 20 19:14:29.030914 systemd[1]: hv_fcopy_uio_daemon.service - Hyper-V FCOPY UIO daemon was skipped because of an unmet condition check (ConditionPathExists=/sys/bus/vmbus/devices/eb765408-105f-49b6-b4aa-c123b64d17d4/uio). Jun 20 19:14:29.034546 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Jun 20 19:14:29.038993 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Jun 20 19:14:29.041423 jq[1677]: false Jun 20 19:14:29.042118 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 19:14:29.046314 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jun 20 19:14:29.053016 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jun 20 19:14:29.058285 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... 
Jun 20 19:14:29.060792 KVP[1683]: KVP starting; pid is:1683 Jun 20 19:14:29.069063 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jun 20 19:14:29.075380 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jun 20 19:14:29.077691 KVP[1683]: KVP LIC Version: 3.1 Jun 20 19:14:29.077847 kernel: hv_utils: KVP IC version 4.0 Jun 20 19:14:29.080289 google_oslogin_nss_cache[1682]: oslogin_cache_refresh[1682]: Refreshing passwd entry cache Jun 20 19:14:29.081995 oslogin_cache_refresh[1682]: Refreshing passwd entry cache Jun 20 19:14:29.083593 systemd[1]: Starting systemd-logind.service - User Login Management... Jun 20 19:14:29.086252 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jun 20 19:14:29.087151 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jun 20 19:14:29.088527 systemd[1]: Starting update-engine.service - Update Engine... Jun 20 19:14:29.093114 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jun 20 19:14:29.094751 (chronyd)[1672]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS Jun 20 19:14:29.101612 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jun 20 19:14:29.104525 google_oslogin_nss_cache[1682]: oslogin_cache_refresh[1682]: Failure getting users, quitting Jun 20 19:14:29.104525 google_oslogin_nss_cache[1682]: oslogin_cache_refresh[1682]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. 
Jun 20 19:14:29.104525 google_oslogin_nss_cache[1682]: oslogin_cache_refresh[1682]: Refreshing group entry cache Jun 20 19:14:29.104053 oslogin_cache_refresh[1682]: Failure getting users, quitting Jun 20 19:14:29.104071 oslogin_cache_refresh[1682]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jun 20 19:14:29.104111 oslogin_cache_refresh[1682]: Refreshing group entry cache Jun 20 19:14:29.106372 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jun 20 19:14:29.107878 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jun 20 19:14:29.111900 extend-filesystems[1681]: Found /dev/nvme0n1p6 Jun 20 19:14:29.125100 systemd[1]: motdgen.service: Deactivated successfully. Jun 20 19:14:29.127042 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jun 20 19:14:29.134313 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jun 20 19:14:29.134554 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jun 20 19:14:29.135441 extend-filesystems[1681]: Found /dev/nvme0n1p9 Jun 20 19:14:29.142428 google_oslogin_nss_cache[1682]: oslogin_cache_refresh[1682]: Failure getting groups, quitting Jun 20 19:14:29.142428 google_oslogin_nss_cache[1682]: oslogin_cache_refresh[1682]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jun 20 19:14:29.138286 oslogin_cache_refresh[1682]: Failure getting groups, quitting Jun 20 19:14:29.138300 oslogin_cache_refresh[1682]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jun 20 19:14:29.140858 chronyd[1713]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG) Jun 20 19:14:29.143816 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jun 20 19:14:29.145109 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. 
Jun 20 19:14:29.147902 jq[1697]: true Jun 20 19:14:29.148575 extend-filesystems[1681]: Checking size of /dev/nvme0n1p9 Jun 20 19:14:29.159853 chronyd[1713]: Timezone right/UTC failed leap second check, ignoring Jun 20 19:14:29.160071 chronyd[1713]: Loaded seccomp filter (level 2) Jun 20 19:14:29.168677 systemd[1]: Started chronyd.service - NTP client/server. Jun 20 19:14:29.178699 update_engine[1696]: I20250620 19:14:29.178615 1696 main.cc:92] Flatcar Update Engine starting Jun 20 19:14:29.186846 (ntainerd)[1726]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jun 20 19:14:29.201092 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jun 20 19:14:29.201311 extend-filesystems[1681]: Old size kept for /dev/nvme0n1p9 Jun 20 19:14:29.205227 systemd[1]: extend-filesystems.service: Deactivated successfully. Jun 20 19:14:29.205436 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jun 20 19:14:29.216325 tar[1704]: linux-amd64/helm Jun 20 19:14:29.218449 jq[1723]: true Jun 20 19:14:29.269964 dbus-daemon[1675]: [system] SELinux support is enabled Jun 20 19:14:29.270137 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jun 20 19:14:29.281754 update_engine[1696]: I20250620 19:14:29.276906 1696 update_check_scheduler.cc:74] Next update check in 4m39s Jun 20 19:14:29.278741 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jun 20 19:14:29.278767 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jun 20 19:14:29.285946 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). 
Jun 20 19:14:29.285972 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jun 20 19:14:29.288285 systemd[1]: Started update-engine.service - Update Engine. Jun 20 19:14:29.292190 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jun 20 19:14:29.306663 systemd-logind[1695]: New seat seat0. Jun 20 19:14:29.313286 systemd-logind[1695]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jun 20 19:14:29.313542 systemd[1]: Started systemd-logind.service - User Login Management. Jun 20 19:14:29.343096 bash[1757]: Updated "/home/core/.ssh/authorized_keys" Jun 20 19:14:29.344588 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jun 20 19:14:29.350067 coreos-metadata[1674]: Jun 20 19:14:29.346 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jun 20 19:14:29.349464 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jun 20 19:14:29.356184 coreos-metadata[1674]: Jun 20 19:14:29.355 INFO Fetch successful Jun 20 19:14:29.356184 coreos-metadata[1674]: Jun 20 19:14:29.355 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Jun 20 19:14:29.361433 coreos-metadata[1674]: Jun 20 19:14:29.361 INFO Fetch successful Jun 20 19:14:29.361899 coreos-metadata[1674]: Jun 20 19:14:29.361 INFO Fetching http://168.63.129.16/machine/4635ea0b-c7d8-4e3c-bcc1-76086add62d9/2a1498a4%2Dd7cc%2D43c5%2Da862%2D975a6d5ba020.%5Fci%2D4344.1.0%2Da%2D657d644de8?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Jun 20 19:14:29.365222 coreos-metadata[1674]: Jun 20 19:14:29.364 INFO Fetch successful Jun 20 19:14:29.365222 coreos-metadata[1674]: Jun 20 19:14:29.364 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Jun 20 19:14:29.381945 coreos-metadata[1674]: Jun 20 19:14:29.381 INFO Fetch successful Jun 20 19:14:29.459894 systemd[1]: Finished 
coreos-metadata.service - Flatcar Metadata Agent. Jun 20 19:14:29.463496 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jun 20 19:14:29.637894 locksmithd[1763]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jun 20 19:14:29.643401 sshd_keygen[1731]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jun 20 19:14:29.714378 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jun 20 19:14:29.721987 systemd[1]: Starting issuegen.service - Generate /run/issue... Jun 20 19:14:29.727030 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Jun 20 19:14:29.760146 systemd[1]: issuegen.service: Deactivated successfully. Jun 20 19:14:29.760568 systemd[1]: Finished issuegen.service - Generate /run/issue. Jun 20 19:14:29.768853 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jun 20 19:14:29.787409 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Jun 20 19:14:29.808581 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jun 20 19:14:29.815436 systemd[1]: Started getty@tty1.service - Getty on tty1. Jun 20 19:14:29.818860 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jun 20 19:14:29.821460 systemd[1]: Reached target getty.target - Login Prompts. 
Jun 20 19:14:29.939800 containerd[1726]: time="2025-06-20T19:14:29Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jun 20 19:14:29.945849 containerd[1726]: time="2025-06-20T19:14:29.945055610Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 Jun 20 19:14:29.970390 containerd[1726]: time="2025-06-20T19:14:29.970341533Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="11.914µs" Jun 20 19:14:29.970521 containerd[1726]: time="2025-06-20T19:14:29.970507518Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jun 20 19:14:29.970572 containerd[1726]: time="2025-06-20T19:14:29.970563161Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jun 20 19:14:29.970764 containerd[1726]: time="2025-06-20T19:14:29.970754479Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jun 20 19:14:29.970892 containerd[1726]: time="2025-06-20T19:14:29.970881357Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jun 20 19:14:29.970954 containerd[1726]: time="2025-06-20T19:14:29.970946258Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jun 20 19:14:29.971042 containerd[1726]: time="2025-06-20T19:14:29.971031503Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jun 20 19:14:29.971076 containerd[1726]: time="2025-06-20T19:14:29.971068677Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jun 20 
19:14:29.971875 containerd[1726]: time="2025-06-20T19:14:29.971850933Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jun 20 19:14:29.972076 containerd[1726]: time="2025-06-20T19:14:29.972064985Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jun 20 19:14:29.972120 containerd[1726]: time="2025-06-20T19:14:29.972111941Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jun 20 19:14:29.972155 containerd[1726]: time="2025-06-20T19:14:29.972147489Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jun 20 19:14:29.972271 containerd[1726]: time="2025-06-20T19:14:29.972262078Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jun 20 19:14:29.972502 containerd[1726]: time="2025-06-20T19:14:29.972491337Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jun 20 19:14:29.973060 containerd[1726]: time="2025-06-20T19:14:29.973047129Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jun 20 19:14:29.973107 containerd[1726]: time="2025-06-20T19:14:29.973099299Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jun 20 19:14:29.973184 containerd[1726]: time="2025-06-20T19:14:29.973174286Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jun 20 19:14:29.973567 
containerd[1726]: time="2025-06-20T19:14:29.973547140Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jun 20 19:14:29.973662 containerd[1726]: time="2025-06-20T19:14:29.973654630Z" level=info msg="metadata content store policy set" policy=shared Jun 20 19:14:29.994751 containerd[1726]: time="2025-06-20T19:14:29.994701948Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jun 20 19:14:29.994945 containerd[1726]: time="2025-06-20T19:14:29.994929673Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jun 20 19:14:29.997891 containerd[1726]: time="2025-06-20T19:14:29.996247420Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jun 20 19:14:29.997891 containerd[1726]: time="2025-06-20T19:14:29.996273310Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jun 20 19:14:29.997891 containerd[1726]: time="2025-06-20T19:14:29.996291260Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jun 20 19:14:29.997891 containerd[1726]: time="2025-06-20T19:14:29.996304017Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jun 20 19:14:29.997891 containerd[1726]: time="2025-06-20T19:14:29.996317741Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jun 20 19:14:29.997891 containerd[1726]: time="2025-06-20T19:14:29.996330756Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jun 20 19:14:29.997891 containerd[1726]: time="2025-06-20T19:14:29.996343562Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jun 20 19:14:29.997891 containerd[1726]: 
time="2025-06-20T19:14:29.996354721Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jun 20 19:14:29.997891 containerd[1726]: time="2025-06-20T19:14:29.996366974Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jun 20 19:14:29.997891 containerd[1726]: time="2025-06-20T19:14:29.996381310Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jun 20 19:14:29.997891 containerd[1726]: time="2025-06-20T19:14:29.996508857Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jun 20 19:14:29.997891 containerd[1726]: time="2025-06-20T19:14:29.996527968Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jun 20 19:14:29.997891 containerd[1726]: time="2025-06-20T19:14:29.996550301Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jun 20 19:14:29.997891 containerd[1726]: time="2025-06-20T19:14:29.996564202Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jun 20 19:14:29.998256 containerd[1726]: time="2025-06-20T19:14:29.996575752Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jun 20 19:14:29.998256 containerd[1726]: time="2025-06-20T19:14:29.996587396Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jun 20 19:14:29.998256 containerd[1726]: time="2025-06-20T19:14:29.996599843Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jun 20 19:14:29.998256 containerd[1726]: time="2025-06-20T19:14:29.996610020Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jun 20 19:14:29.998256 containerd[1726]: time="2025-06-20T19:14:29.996622909Z" level=info msg="loading plugin" 
id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jun 20 19:14:29.998256 containerd[1726]: time="2025-06-20T19:14:29.996634063Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jun 20 19:14:29.998256 containerd[1726]: time="2025-06-20T19:14:29.996645483Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jun 20 19:14:29.998256 containerd[1726]: time="2025-06-20T19:14:29.996719797Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jun 20 19:14:29.998256 containerd[1726]: time="2025-06-20T19:14:29.996735232Z" level=info msg="Start snapshots syncer" Jun 20 19:14:29.998256 containerd[1726]: time="2025-06-20T19:14:29.996757524Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jun 20 19:14:29.998503 containerd[1726]: time="2025-06-20T19:14:29.997047802Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jun 20 19:14:29.998503 containerd[1726]: time="2025-06-20T19:14:29.997097774Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jun 20 19:14:29.998639 containerd[1726]: time="2025-06-20T19:14:29.997178136Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jun 20 19:14:29.998639 containerd[1726]: time="2025-06-20T19:14:29.997287143Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jun 20 19:14:29.998639 containerd[1726]: time="2025-06-20T19:14:29.997308367Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jun 20 19:14:29.998639 containerd[1726]: time="2025-06-20T19:14:29.997322031Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jun 20 19:14:29.998639 containerd[1726]: time="2025-06-20T19:14:29.997332972Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jun 20 19:14:29.998639 containerd[1726]: time="2025-06-20T19:14:29.997344948Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jun 20 19:14:29.998639 containerd[1726]: time="2025-06-20T19:14:29.997355808Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jun 20 19:14:29.998639 containerd[1726]: time="2025-06-20T19:14:29.997366194Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jun 20 19:14:29.998639 containerd[1726]: time="2025-06-20T19:14:29.997390597Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jun 20 19:14:29.998639 containerd[1726]: time="2025-06-20T19:14:29.997402701Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jun 20 19:14:29.998639 containerd[1726]: time="2025-06-20T19:14:29.997418504Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jun 20 19:14:29.998639 containerd[1726]: time="2025-06-20T19:14:29.997445780Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jun 20 19:14:29.998639 containerd[1726]: time="2025-06-20T19:14:29.997459739Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jun 20 19:14:29.998639 containerd[1726]: time="2025-06-20T19:14:29.997469916Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jun 20 19:14:29.998947 containerd[1726]: time="2025-06-20T19:14:29.997480023Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jun 20 19:14:29.998947 containerd[1726]: time="2025-06-20T19:14:29.997487818Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jun 20 19:14:29.998947 containerd[1726]: time="2025-06-20T19:14:29.997497607Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jun 20 19:14:29.998947 containerd[1726]: time="2025-06-20T19:14:29.997507681Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jun 20 19:14:29.998947 containerd[1726]: time="2025-06-20T19:14:29.997524208Z" level=info msg="runtime interface created" Jun 20 19:14:29.998947 containerd[1726]: time="2025-06-20T19:14:29.997529243Z" level=info msg="created NRI interface" Jun 20 19:14:29.998947 containerd[1726]: time="2025-06-20T19:14:29.997537288Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jun 20 19:14:29.998947 containerd[1726]: time="2025-06-20T19:14:29.997548636Z" level=info msg="Connect containerd service" Jun 20 19:14:29.998947 containerd[1726]: time="2025-06-20T19:14:29.997571420Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jun 20 19:14:30.002851 
containerd[1726]: time="2025-06-20T19:14:30.002639350Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jun 20 19:14:30.089605 tar[1704]: linux-amd64/LICENSE Jun 20 19:14:30.089605 tar[1704]: linux-amd64/README.md Jun 20 19:14:30.104808 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jun 20 19:14:30.482603 containerd[1726]: time="2025-06-20T19:14:30.482257695Z" level=info msg="Start subscribing containerd event" Jun 20 19:14:30.482603 containerd[1726]: time="2025-06-20T19:14:30.482322058Z" level=info msg="Start recovering state" Jun 20 19:14:30.482603 containerd[1726]: time="2025-06-20T19:14:30.482432653Z" level=info msg="Start event monitor" Jun 20 19:14:30.482603 containerd[1726]: time="2025-06-20T19:14:30.482449265Z" level=info msg="Start cni network conf syncer for default" Jun 20 19:14:30.482603 containerd[1726]: time="2025-06-20T19:14:30.482457076Z" level=info msg="Start streaming server" Jun 20 19:14:30.482603 containerd[1726]: time="2025-06-20T19:14:30.482467175Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jun 20 19:14:30.482603 containerd[1726]: time="2025-06-20T19:14:30.482474626Z" level=info msg="runtime interface starting up..." Jun 20 19:14:30.482603 containerd[1726]: time="2025-06-20T19:14:30.482482527Z" level=info msg="starting plugins..." Jun 20 19:14:30.482603 containerd[1726]: time="2025-06-20T19:14:30.482493671Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jun 20 19:14:30.483196 containerd[1726]: time="2025-06-20T19:14:30.483171109Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jun 20 19:14:30.483279 containerd[1726]: time="2025-06-20T19:14:30.483216732Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Jun 20 19:14:30.483390 systemd[1]: Started containerd.service - containerd container runtime. Jun 20 19:14:30.486445 containerd[1726]: time="2025-06-20T19:14:30.486249214Z" level=info msg="containerd successfully booted in 0.546889s" Jun 20 19:14:30.655369 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 19:14:30.658231 systemd[1]: Reached target multi-user.target - Multi-User System. Jun 20 19:14:30.662191 systemd[1]: Startup finished in 3.303s (kernel) + 12.720s (initrd) + 7.613s (userspace) = 23.637s. Jun 20 19:14:30.668266 (kubelet)[1844]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 20 19:14:30.801506 login[1820]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jun 20 19:14:30.803751 login[1821]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jun 20 19:14:30.817687 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jun 20 19:14:30.819049 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jun 20 19:14:30.828898 systemd-logind[1695]: New session 2 of user core. Jun 20 19:14:30.833590 systemd-logind[1695]: New session 1 of user core. Jun 20 19:14:30.847211 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jun 20 19:14:30.850673 systemd[1]: Starting user@500.service - User Manager for UID 500... Jun 20 19:14:30.864316 (systemd)[1855]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jun 20 19:14:30.868923 systemd-logind[1695]: New session c1 of user core. Jun 20 19:14:31.086014 systemd[1855]: Queued start job for default target default.target. Jun 20 19:14:31.091880 systemd[1855]: Created slice app.slice - User Application Slice. Jun 20 19:14:31.092520 systemd[1855]: Reached target paths.target - Paths. 
Jun 20 19:14:31.092717 systemd[1855]: Reached target timers.target - Timers. Jun 20 19:14:31.095947 systemd[1855]: Starting dbus.socket - D-Bus User Message Bus Socket... Jun 20 19:14:31.106686 systemd[1855]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jun 20 19:14:31.107527 systemd[1855]: Reached target sockets.target - Sockets. Jun 20 19:14:31.107567 systemd[1855]: Reached target basic.target - Basic System. Jun 20 19:14:31.107813 systemd[1]: Started user@500.service - User Manager for UID 500. Jun 20 19:14:31.108052 systemd[1855]: Reached target default.target - Main User Target. Jun 20 19:14:31.108094 systemd[1855]: Startup finished in 231ms. Jun 20 19:14:31.115172 systemd[1]: Started session-1.scope - Session 1 of User core. Jun 20 19:14:31.116662 systemd[1]: Started session-2.scope - Session 2 of User core. Jun 20 19:14:31.148943 waagent[1817]: 2025-06-20T19:14:31.148865Z INFO Daemon Daemon Azure Linux Agent Version: 2.12.0.4 Jun 20 19:14:31.149928 waagent[1817]: 2025-06-20T19:14:31.149355Z INFO Daemon Daemon OS: flatcar 4344.1.0 Jun 20 19:14:31.151024 waagent[1817]: 2025-06-20T19:14:31.150759Z INFO Daemon Daemon Python: 3.11.12 Jun 20 19:14:31.154939 waagent[1817]: 2025-06-20T19:14:31.154758Z INFO Daemon Daemon Run daemon Jun 20 19:14:31.158850 waagent[1817]: 2025-06-20T19:14:31.156556Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4344.1.0' Jun 20 19:14:31.161847 waagent[1817]: 2025-06-20T19:14:31.160064Z INFO Daemon Daemon Using waagent for provisioning Jun 20 19:14:31.163314 waagent[1817]: 2025-06-20T19:14:31.162602Z INFO Daemon Daemon Activate resource disk Jun 20 19:14:31.164666 waagent[1817]: 2025-06-20T19:14:31.164532Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Jun 20 19:14:31.167248 waagent[1817]: 2025-06-20T19:14:31.167204Z INFO Daemon Daemon Found device: None Jun 20 19:14:31.167450 waagent[1817]: 2025-06-20T19:14:31.167428Z 
ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Jun 20 19:14:31.169854 waagent[1817]: 2025-06-20T19:14:31.169072Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Jun 20 19:14:31.172361 waagent[1817]: 2025-06-20T19:14:31.172312Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jun 20 19:14:31.173043 waagent[1817]: 2025-06-20T19:14:31.172782Z INFO Daemon Daemon Running default provisioning handler Jun 20 19:14:31.186856 waagent[1817]: 2025-06-20T19:14:31.186434Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Jun 20 19:14:31.188364 waagent[1817]: 2025-06-20T19:14:31.188326Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Jun 20 19:14:31.188661 waagent[1817]: 2025-06-20T19:14:31.188638Z INFO Daemon Daemon cloud-init is enabled: False Jun 20 19:14:31.190841 waagent[1817]: 2025-06-20T19:14:31.189985Z INFO Daemon Daemon Copying ovf-env.xml Jun 20 19:14:31.232298 waagent[1817]: 2025-06-20T19:14:31.232215Z INFO Daemon Daemon Successfully mounted dvd Jun 20 19:14:31.259929 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Jun 20 19:14:31.263238 waagent[1817]: 2025-06-20T19:14:31.263184Z INFO Daemon Daemon Detect protocol endpoint Jun 20 19:14:31.263546 waagent[1817]: 2025-06-20T19:14:31.263519Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jun 20 19:14:31.263721 waagent[1817]: 2025-06-20T19:14:31.263702Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Jun 20 19:14:31.263968 waagent[1817]: 2025-06-20T19:14:31.263945Z INFO Daemon Daemon Test for route to 168.63.129.16 Jun 20 19:14:31.264271 waagent[1817]: 2025-06-20T19:14:31.264252Z INFO Daemon Daemon Route to 168.63.129.16 exists Jun 20 19:14:31.264460 waagent[1817]: 2025-06-20T19:14:31.264445Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Jun 20 19:14:31.277719 waagent[1817]: 2025-06-20T19:14:31.277690Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Jun 20 19:14:31.280802 waagent[1817]: 2025-06-20T19:14:31.280183Z INFO Daemon Daemon Wire protocol version:2012-11-30 Jun 20 19:14:31.280802 waagent[1817]: 2025-06-20T19:14:31.280314Z INFO Daemon Daemon Server preferred version:2015-04-05 Jun 20 19:14:31.439329 waagent[1817]: 2025-06-20T19:14:31.439230Z INFO Daemon Daemon Initializing goal state during protocol detection Jun 20 19:14:31.439709 waagent[1817]: 2025-06-20T19:14:31.439550Z INFO Daemon Daemon Forcing an update of the goal state. Jun 20 19:14:31.446862 waagent[1817]: 2025-06-20T19:14:31.446191Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Jun 20 19:14:31.461822 waagent[1817]: 2025-06-20T19:14:31.461777Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.175 Jun 20 19:14:31.463610 waagent[1817]: 2025-06-20T19:14:31.463558Z INFO Daemon Jun 20 19:14:31.465857 waagent[1817]: 2025-06-20T19:14:31.463725Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 644b429b-992e-404c-be21-c321d84a670d eTag: 15625139095002204538 source: Fabric] Jun 20 19:14:31.465857 waagent[1817]: 2025-06-20T19:14:31.464342Z INFO Daemon The vmSettings originated via Fabric; will ignore them. 
Jun 20 19:14:31.465857 waagent[1817]: 2025-06-20T19:14:31.464623Z INFO Daemon Jun 20 19:14:31.465857 waagent[1817]: 2025-06-20T19:14:31.464774Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Jun 20 19:14:31.471738 waagent[1817]: 2025-06-20T19:14:31.471700Z INFO Daemon Daemon Downloading artifacts profile blob Jun 20 19:14:31.491283 kubelet[1844]: E0620 19:14:31.491207 1844 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 20 19:14:31.493392 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 20 19:14:31.493653 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 20 19:14:31.494068 systemd[1]: kubelet.service: Consumed 1.018s CPU time, 266.8M memory peak. Jun 20 19:14:31.547610 waagent[1817]: 2025-06-20T19:14:31.547538Z INFO Daemon Downloaded certificate {'thumbprint': '54315AA38DC1021F13EDB4FB6D3E3CD0EE84A5F6', 'hasPrivateKey': True} Jun 20 19:14:31.550203 waagent[1817]: 2025-06-20T19:14:31.550160Z INFO Daemon Fetch goal state completed Jun 20 19:14:31.558266 waagent[1817]: 2025-06-20T19:14:31.558218Z INFO Daemon Daemon Starting provisioning Jun 20 19:14:31.559354 waagent[1817]: 2025-06-20T19:14:31.558658Z INFO Daemon Daemon Handle ovf-env.xml. 
Jun 20 19:14:31.559354 waagent[1817]: 2025-06-20T19:14:31.558959Z INFO Daemon Daemon Set hostname [ci-4344.1.0-a-657d644de8] Jun 20 19:14:31.573464 waagent[1817]: 2025-06-20T19:14:31.573414Z INFO Daemon Daemon Publish hostname [ci-4344.1.0-a-657d644de8] Jun 20 19:14:31.575082 waagent[1817]: 2025-06-20T19:14:31.574086Z INFO Daemon Daemon Examine /proc/net/route for primary interface Jun 20 19:14:31.575082 waagent[1817]: 2025-06-20T19:14:31.574434Z INFO Daemon Daemon Primary interface is [eth0] Jun 20 19:14:31.582679 systemd-networkd[1354]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 20 19:14:31.582687 systemd-networkd[1354]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jun 20 19:14:31.582720 systemd-networkd[1354]: eth0: DHCP lease lost Jun 20 19:14:31.583733 waagent[1817]: 2025-06-20T19:14:31.583681Z INFO Daemon Daemon Create user account if not exists Jun 20 19:14:31.584697 waagent[1817]: 2025-06-20T19:14:31.584276Z INFO Daemon Daemon User core already exists, skip useradd Jun 20 19:14:31.584697 waagent[1817]: 2025-06-20T19:14:31.584454Z INFO Daemon Daemon Configure sudoer Jun 20 19:14:31.592849 waagent[1817]: 2025-06-20T19:14:31.592746Z INFO Daemon Daemon Configure sshd Jun 20 19:14:31.597278 waagent[1817]: 2025-06-20T19:14:31.597233Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Jun 20 19:14:31.600143 waagent[1817]: 2025-06-20T19:14:31.599449Z INFO Daemon Daemon Deploy ssh public key. 
Jun 20 19:14:31.602896 systemd-networkd[1354]: eth0: DHCPv4 address 10.200.4.5/24, gateway 10.200.4.1 acquired from 168.63.129.16 Jun 20 19:14:32.687647 waagent[1817]: 2025-06-20T19:14:32.687605Z INFO Daemon Daemon Provisioning complete Jun 20 19:14:32.705467 waagent[1817]: 2025-06-20T19:14:32.705421Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Jun 20 19:14:32.710590 waagent[1817]: 2025-06-20T19:14:32.705685Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Jun 20 19:14:32.710590 waagent[1817]: 2025-06-20T19:14:32.705916Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.12.0.4 is the most current agent Jun 20 19:14:32.814304 waagent[1908]: 2025-06-20T19:14:32.814218Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.12.0.4) Jun 20 19:14:32.814648 waagent[1908]: 2025-06-20T19:14:32.814353Z INFO ExtHandler ExtHandler OS: flatcar 4344.1.0 Jun 20 19:14:32.814648 waagent[1908]: 2025-06-20T19:14:32.814394Z INFO ExtHandler ExtHandler Python: 3.11.12 Jun 20 19:14:32.814648 waagent[1908]: 2025-06-20T19:14:32.814434Z INFO ExtHandler ExtHandler CPU Arch: x86_64 Jun 20 19:14:32.876550 waagent[1908]: 2025-06-20T19:14:32.876466Z INFO ExtHandler ExtHandler Distro: flatcar-4344.1.0; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.12; Arch: x86_64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.22.0; Jun 20 19:14:32.876737 waagent[1908]: 2025-06-20T19:14:32.876711Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jun 20 19:14:32.876790 waagent[1908]: 2025-06-20T19:14:32.876769Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Jun 20 19:14:32.885751 waagent[1908]: 2025-06-20T19:14:32.885691Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Jun 20 19:14:32.892324 waagent[1908]: 2025-06-20T19:14:32.892285Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.175 Jun 20 
19:14:32.892703 waagent[1908]: 2025-06-20T19:14:32.892671Z INFO ExtHandler Jun 20 19:14:32.892752 waagent[1908]: 2025-06-20T19:14:32.892729Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 6d6c7d7f-160f-4a98-84d0-3dd20ab758ce eTag: 15625139095002204538 source: Fabric] Jun 20 19:14:32.892996 waagent[1908]: 2025-06-20T19:14:32.892968Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. Jun 20 19:14:32.893362 waagent[1908]: 2025-06-20T19:14:32.893333Z INFO ExtHandler Jun 20 19:14:32.893400 waagent[1908]: 2025-06-20T19:14:32.893376Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Jun 20 19:14:32.898139 waagent[1908]: 2025-06-20T19:14:32.898110Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jun 20 19:14:32.967368 waagent[1908]: 2025-06-20T19:14:32.967247Z INFO ExtHandler Downloaded certificate {'thumbprint': '54315AA38DC1021F13EDB4FB6D3E3CD0EE84A5F6', 'hasPrivateKey': True} Jun 20 19:14:32.967744 waagent[1908]: 2025-06-20T19:14:32.967712Z INFO ExtHandler Fetch goal state completed Jun 20 19:14:32.981388 waagent[1908]: 2025-06-20T19:14:32.981321Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.3.3 11 Feb 2025 (Library: OpenSSL 3.3.3 11 Feb 2025) Jun 20 19:14:32.986048 waagent[1908]: 2025-06-20T19:14:32.986000Z INFO ExtHandler ExtHandler WALinuxAgent-2.12.0.4 running as process 1908 Jun 20 19:14:32.986181 waagent[1908]: 2025-06-20T19:14:32.986157Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Jun 20 19:14:32.986440 waagent[1908]: 2025-06-20T19:14:32.986417Z INFO ExtHandler ExtHandler ******** AutoUpdate.UpdateToLatestVersion is set to False, not processing the operation ******** Jun 20 19:14:32.987544 waagent[1908]: 2025-06-20T19:14:32.987504Z INFO ExtHandler ExtHandler [CGI] Cgroup monitoring is not supported on ['flatcar', '4344.1.0', '', 'Flatcar Container Linux by Kinvolk'] Jun 20 19:14:32.987892 waagent[1908]: 
2025-06-20T19:14:32.987823Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case distro: ['flatcar', '4344.1.0', '', 'Flatcar Container Linux by Kinvolk'] went from supported to unsupported Jun 20 19:14:32.988021 waagent[1908]: 2025-06-20T19:14:32.987997Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False Jun 20 19:14:32.988415 waagent[1908]: 2025-06-20T19:14:32.988388Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Jun 20 19:14:33.004412 waagent[1908]: 2025-06-20T19:14:33.004377Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Jun 20 19:14:33.004584 waagent[1908]: 2025-06-20T19:14:33.004563Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Jun 20 19:14:33.010238 waagent[1908]: 2025-06-20T19:14:33.010200Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Jun 20 19:14:33.015874 systemd[1]: Reload requested from client PID 1923 ('systemctl') (unit waagent.service)... Jun 20 19:14:33.015888 systemd[1]: Reloading... Jun 20 19:14:33.099860 zram_generator::config[1970]: No configuration found. Jun 20 19:14:33.176171 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 20 19:14:33.276398 systemd[1]: Reloading finished in 260 ms. 
Jun 20 19:14:33.292776 waagent[1908]: 2025-06-20T19:14:33.292463Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Jun 20 19:14:33.292776 waagent[1908]: 2025-06-20T19:14:33.292629Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Jun 20 19:14:33.325911 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#195 cmd 0x4a status: scsi 0x0 srb 0x20 hv 0xc0000001 Jun 20 19:14:33.563114 waagent[1908]: 2025-06-20T19:14:33.562978Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Jun 20 19:14:33.563357 waagent[1908]: 2025-06-20T19:14:33.563329Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True] Jun 20 19:14:33.564026 waagent[1908]: 2025-06-20T19:14:33.563987Z INFO ExtHandler ExtHandler Starting env monitor service. Jun 20 19:14:33.564419 waagent[1908]: 2025-06-20T19:14:33.564391Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jun 20 19:14:33.564530 waagent[1908]: 2025-06-20T19:14:33.564503Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jun 20 19:14:33.564574 waagent[1908]: 2025-06-20T19:14:33.564545Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Jun 20 19:14:33.564770 waagent[1908]: 2025-06-20T19:14:33.564730Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Jun 20 19:14:33.564938 waagent[1908]: 2025-06-20T19:14:33.564900Z INFO ExtHandler ExtHandler Start Extension Telemetry service. 
Jun 20 19:14:33.564998 waagent[1908]: 2025-06-20T19:14:33.564975Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Jun 20 19:14:33.565202 waagent[1908]: 2025-06-20T19:14:33.565180Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Jun 20 19:14:33.565251 waagent[1908]: 2025-06-20T19:14:33.565229Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Jun 20 19:14:33.565645 waagent[1908]: 2025-06-20T19:14:33.565607Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Jun 20 19:14:33.565843 waagent[1908]: 2025-06-20T19:14:33.565792Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Jun 20 19:14:33.566191 waagent[1908]: 2025-06-20T19:14:33.566164Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Jun 20 19:14:33.566191 waagent[1908]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Jun 20 19:14:33.566191 waagent[1908]: eth0 00000000 0104C80A 0003 0 0 1024 00000000 0 0 0 Jun 20 19:14:33.566191 waagent[1908]: eth0 0004C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Jun 20 19:14:33.566191 waagent[1908]: eth0 0104C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Jun 20 19:14:33.566191 waagent[1908]: eth0 10813FA8 0104C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jun 20 19:14:33.566191 waagent[1908]: eth0 FEA9FEA9 0104C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jun 20 19:14:33.566509 waagent[1908]: 2025-06-20T19:14:33.566475Z INFO EnvHandler ExtHandler Configure routes Jun 20 19:14:33.566575 waagent[1908]: 2025-06-20T19:14:33.566554Z INFO EnvHandler ExtHandler Gateway:None Jun 20 19:14:33.566620 waagent[1908]: 2025-06-20T19:14:33.566599Z INFO EnvHandler ExtHandler Routes:None Jun 20 19:14:33.567027 waagent[1908]: 2025-06-20T19:14:33.567001Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Jun 20 19:14:33.578863 waagent[1908]: 2025-06-20T19:14:33.577680Z 
INFO ExtHandler ExtHandler Jun 20 19:14:33.578863 waagent[1908]: 2025-06-20T19:14:33.577745Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: b77ae1d2-19d2-4877-865f-79811d20a583 correlation 59d28de0-d66d-4f18-ba25-7a0fa58400d0 created: 2025-06-20T19:13:42.338676Z] Jun 20 19:14:33.578863 waagent[1908]: 2025-06-20T19:14:33.578095Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Jun 20 19:14:33.578863 waagent[1908]: 2025-06-20T19:14:33.578578Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 0 ms] Jun 20 19:14:33.607895 waagent[1908]: 2025-06-20T19:14:33.607809Z WARNING ExtHandler ExtHandler Failed to get firewall packets: 'iptables -w -t security -L OUTPUT --zero OUTPUT -nxv' failed: 2 (iptables v1.8.11 (nf_tables): Illegal option `--numeric' with this command Jun 20 19:14:33.607895 waagent[1908]: Try `iptables -h' or 'iptables --help' for more information.) 
Jun 20 19:14:33.608824 waagent[1908]: 2025-06-20T19:14:33.608580Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.12.0.4 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 3D38EF2E-DDF0-458A-A65E-C5C9632918BD;DroppedPackets: -1;UpdateGSErrors: 0;AutoUpdate: 0;UpdateMode: SelfUpdate;] Jun 20 19:14:33.610423 waagent[1908]: 2025-06-20T19:14:33.610369Z INFO MonitorHandler ExtHandler Network interfaces: Jun 20 19:14:33.610423 waagent[1908]: Executing ['ip', '-a', '-o', 'link']: Jun 20 19:14:33.610423 waagent[1908]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Jun 20 19:14:33.610423 waagent[1908]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 7c:1e:52:65:6f:60 brd ff:ff:ff:ff:ff:ff\ alias Network Device Jun 20 19:14:33.610423 waagent[1908]: 3: enP30832s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 7c:1e:52:65:6f:60 brd ff:ff:ff:ff:ff:ff\ altname enP30832p0s0 Jun 20 19:14:33.610423 waagent[1908]: Executing ['ip', '-4', '-a', '-o', 'address']: Jun 20 19:14:33.610423 waagent[1908]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Jun 20 19:14:33.610423 waagent[1908]: 2: eth0 inet 10.200.4.5/24 metric 1024 brd 10.200.4.255 scope global eth0\ valid_lft forever preferred_lft forever Jun 20 19:14:33.610423 waagent[1908]: Executing ['ip', '-6', '-a', '-o', 'address']: Jun 20 19:14:33.610423 waagent[1908]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Jun 20 19:14:33.610423 waagent[1908]: 2: eth0 inet6 fe80::7e1e:52ff:fe65:6f60/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Jun 20 19:14:33.610423 waagent[1908]: 3: enP30832s1 inet6 fe80::7e1e:52ff:fe65:6f60/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Jun 20 19:14:33.639187 waagent[1908]: 
2025-06-20T19:14:33.639136Z INFO EnvHandler ExtHandler Created firewall rules for the Azure Fabric: Jun 20 19:14:33.639187 waagent[1908]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jun 20 19:14:33.639187 waagent[1908]: pkts bytes target prot opt in out source destination Jun 20 19:14:33.639187 waagent[1908]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jun 20 19:14:33.639187 waagent[1908]: pkts bytes target prot opt in out source destination Jun 20 19:14:33.639187 waagent[1908]: Chain OUTPUT (policy ACCEPT 2 packets, 112 bytes) Jun 20 19:14:33.639187 waagent[1908]: pkts bytes target prot opt in out source destination Jun 20 19:14:33.639187 waagent[1908]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jun 20 19:14:33.639187 waagent[1908]: 4 586 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jun 20 19:14:33.639187 waagent[1908]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jun 20 19:14:33.641946 waagent[1908]: 2025-06-20T19:14:33.641897Z INFO EnvHandler ExtHandler Current Firewall rules: Jun 20 19:14:33.641946 waagent[1908]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jun 20 19:14:33.641946 waagent[1908]: pkts bytes target prot opt in out source destination Jun 20 19:14:33.641946 waagent[1908]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jun 20 19:14:33.641946 waagent[1908]: pkts bytes target prot opt in out source destination Jun 20 19:14:33.641946 waagent[1908]: Chain OUTPUT (policy ACCEPT 2 packets, 112 bytes) Jun 20 19:14:33.641946 waagent[1908]: pkts bytes target prot opt in out source destination Jun 20 19:14:33.641946 waagent[1908]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jun 20 19:14:33.641946 waagent[1908]: 5 646 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jun 20 19:14:33.641946 waagent[1908]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jun 20 19:14:41.654546 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. 
Jun 20 19:14:41.656191 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 19:14:42.101120 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 19:14:42.110077 (kubelet)[2059]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 20 19:14:42.146621 kubelet[2059]: E0620 19:14:42.146547 2059 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 20 19:14:42.149695 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 20 19:14:42.149854 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 20 19:14:42.150195 systemd[1]: kubelet.service: Consumed 138ms CPU time, 108.4M memory peak. Jun 20 19:14:52.154655 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jun 20 19:14:52.156219 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 19:14:52.696100 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jun 20 19:14:52.709072 (kubelet)[2074]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 20 19:14:52.743452 kubelet[2074]: E0620 19:14:52.743387 2074 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 20 19:14:52.745479 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 20 19:14:52.745616 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 20 19:14:52.745957 systemd[1]: kubelet.service: Consumed 133ms CPU time, 110.7M memory peak. Jun 20 19:14:52.947581 chronyd[1713]: Selected source PHC0 Jun 20 19:14:59.381565 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jun 20 19:14:59.382901 systemd[1]: Started sshd@0-10.200.4.5:22-10.200.16.10:39646.service - OpenSSH per-connection server daemon (10.200.16.10:39646). Jun 20 19:15:00.036567 sshd[2082]: Accepted publickey for core from 10.200.16.10 port 39646 ssh2: RSA SHA256:xD0kfKmJ7EC4AAoCWFs/jHoVnPZ/qqmZ1Ve/vcfGzM8 Jun 20 19:15:00.037756 sshd-session[2082]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:15:00.042257 systemd-logind[1695]: New session 3 of user core. Jun 20 19:15:00.048993 systemd[1]: Started session-3.scope - Session 3 of User core. Jun 20 19:15:00.563055 systemd[1]: Started sshd@1-10.200.4.5:22-10.200.16.10:39650.service - OpenSSH per-connection server daemon (10.200.16.10:39650). 
Jun 20 19:15:01.162863 sshd[2087]: Accepted publickey for core from 10.200.16.10 port 39650 ssh2: RSA SHA256:xD0kfKmJ7EC4AAoCWFs/jHoVnPZ/qqmZ1Ve/vcfGzM8 Jun 20 19:15:01.164074 sshd-session[2087]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:15:01.168562 systemd-logind[1695]: New session 4 of user core. Jun 20 19:15:01.175998 systemd[1]: Started session-4.scope - Session 4 of User core. Jun 20 19:15:01.581619 sshd[2089]: Connection closed by 10.200.16.10 port 39650 Jun 20 19:15:01.582459 sshd-session[2087]: pam_unix(sshd:session): session closed for user core Jun 20 19:15:01.585257 systemd[1]: sshd@1-10.200.4.5:22-10.200.16.10:39650.service: Deactivated successfully. Jun 20 19:15:01.586871 systemd[1]: session-4.scope: Deactivated successfully. Jun 20 19:15:01.588554 systemd-logind[1695]: Session 4 logged out. Waiting for processes to exit. Jun 20 19:15:01.589644 systemd-logind[1695]: Removed session 4. Jun 20 19:15:01.685897 systemd[1]: Started sshd@2-10.200.4.5:22-10.200.16.10:39660.service - OpenSSH per-connection server daemon (10.200.16.10:39660). Jun 20 19:15:02.285358 sshd[2095]: Accepted publickey for core from 10.200.16.10 port 39660 ssh2: RSA SHA256:xD0kfKmJ7EC4AAoCWFs/jHoVnPZ/qqmZ1Ve/vcfGzM8 Jun 20 19:15:02.286569 sshd-session[2095]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:15:02.291071 systemd-logind[1695]: New session 5 of user core. Jun 20 19:15:02.297008 systemd[1]: Started session-5.scope - Session 5 of User core. Jun 20 19:15:02.699317 sshd[2097]: Connection closed by 10.200.16.10 port 39660 Jun 20 19:15:02.701568 sshd-session[2095]: pam_unix(sshd:session): session closed for user core Jun 20 19:15:02.704103 systemd[1]: sshd@2-10.200.4.5:22-10.200.16.10:39660.service: Deactivated successfully. Jun 20 19:15:02.705677 systemd[1]: session-5.scope: Deactivated successfully. Jun 20 19:15:02.707166 systemd-logind[1695]: Session 5 logged out. 
Waiting for processes to exit. Jun 20 19:15:02.708484 systemd-logind[1695]: Removed session 5. Jun 20 19:15:02.803199 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jun 20 19:15:02.804737 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 19:15:02.808078 systemd[1]: Started sshd@3-10.200.4.5:22-10.200.16.10:39666.service - OpenSSH per-connection server daemon (10.200.16.10:39666). Jun 20 19:15:03.268194 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 19:15:03.274048 (kubelet)[2113]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 20 19:15:03.307228 kubelet[2113]: E0620 19:15:03.307157 2113 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 20 19:15:03.309265 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 20 19:15:03.309401 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 20 19:15:03.309718 systemd[1]: kubelet.service: Consumed 131ms CPU time, 108.4M memory peak. Jun 20 19:15:03.402966 sshd[2104]: Accepted publickey for core from 10.200.16.10 port 39666 ssh2: RSA SHA256:xD0kfKmJ7EC4AAoCWFs/jHoVnPZ/qqmZ1Ve/vcfGzM8 Jun 20 19:15:03.404179 sshd-session[2104]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:15:03.408952 systemd-logind[1695]: New session 6 of user core. Jun 20 19:15:03.415004 systemd[1]: Started session-6.scope - Session 6 of User core. 
Jun 20 19:15:03.821744 sshd[2121]: Connection closed by 10.200.16.10 port 39666 Jun 20 19:15:03.822370 sshd-session[2104]: pam_unix(sshd:session): session closed for user core Jun 20 19:15:03.825139 systemd[1]: sshd@3-10.200.4.5:22-10.200.16.10:39666.service: Deactivated successfully. Jun 20 19:15:03.826820 systemd[1]: session-6.scope: Deactivated successfully. Jun 20 19:15:03.828267 systemd-logind[1695]: Session 6 logged out. Waiting for processes to exit. Jun 20 19:15:03.829490 systemd-logind[1695]: Removed session 6. Jun 20 19:15:03.939798 systemd[1]: Started sshd@4-10.200.4.5:22-10.200.16.10:39680.service - OpenSSH per-connection server daemon (10.200.16.10:39680). Jun 20 19:15:04.539807 sshd[2127]: Accepted publickey for core from 10.200.16.10 port 39680 ssh2: RSA SHA256:xD0kfKmJ7EC4AAoCWFs/jHoVnPZ/qqmZ1Ve/vcfGzM8 Jun 20 19:15:04.541051 sshd-session[2127]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:15:04.545536 systemd-logind[1695]: New session 7 of user core. Jun 20 19:15:04.550987 systemd[1]: Started session-7.scope - Session 7 of User core. Jun 20 19:15:04.944338 sudo[2130]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jun 20 19:15:04.944565 sudo[2130]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jun 20 19:15:04.960089 sudo[2130]: pam_unix(sudo:session): session closed for user root Jun 20 19:15:05.058227 sshd[2129]: Connection closed by 10.200.16.10 port 39680 Jun 20 19:15:05.059216 sshd-session[2127]: pam_unix(sshd:session): session closed for user core Jun 20 19:15:05.062362 systemd[1]: sshd@4-10.200.4.5:22-10.200.16.10:39680.service: Deactivated successfully. Jun 20 19:15:05.063955 systemd[1]: session-7.scope: Deactivated successfully. Jun 20 19:15:05.065269 systemd-logind[1695]: Session 7 logged out. Waiting for processes to exit. Jun 20 19:15:05.066585 systemd-logind[1695]: Removed session 7. 
Jun 20 19:15:05.163005 systemd[1]: Started sshd@5-10.200.4.5:22-10.200.16.10:39688.service - OpenSSH per-connection server daemon (10.200.16.10:39688). Jun 20 19:15:05.761779 sshd[2136]: Accepted publickey for core from 10.200.16.10 port 39688 ssh2: RSA SHA256:xD0kfKmJ7EC4AAoCWFs/jHoVnPZ/qqmZ1Ve/vcfGzM8 Jun 20 19:15:05.763017 sshd-session[2136]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:15:05.767543 systemd-logind[1695]: New session 8 of user core. Jun 20 19:15:05.773023 systemd[1]: Started session-8.scope - Session 8 of User core. Jun 20 19:15:06.088397 sudo[2140]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jun 20 19:15:06.088624 sudo[2140]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jun 20 19:15:06.095587 sudo[2140]: pam_unix(sudo:session): session closed for user root Jun 20 19:15:06.099791 sudo[2139]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jun 20 19:15:06.100097 sudo[2139]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jun 20 19:15:06.108170 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jun 20 19:15:06.142693 augenrules[2162]: No rules Jun 20 19:15:06.143751 systemd[1]: audit-rules.service: Deactivated successfully. Jun 20 19:15:06.143993 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jun 20 19:15:06.144913 sudo[2139]: pam_unix(sudo:session): session closed for user root Jun 20 19:15:06.238915 sshd[2138]: Connection closed by 10.200.16.10 port 39688 Jun 20 19:15:06.239551 sshd-session[2136]: pam_unix(sshd:session): session closed for user core Jun 20 19:15:06.242301 systemd[1]: sshd@5-10.200.4.5:22-10.200.16.10:39688.service: Deactivated successfully. Jun 20 19:15:06.243925 systemd[1]: session-8.scope: Deactivated successfully. 
Jun 20 19:15:06.245301 systemd-logind[1695]: Session 8 logged out. Waiting for processes to exit. Jun 20 19:15:06.246616 systemd-logind[1695]: Removed session 8. Jun 20 19:15:06.342868 systemd[1]: Started sshd@6-10.200.4.5:22-10.200.16.10:39700.service - OpenSSH per-connection server daemon (10.200.16.10:39700). Jun 20 19:15:06.940994 sshd[2171]: Accepted publickey for core from 10.200.16.10 port 39700 ssh2: RSA SHA256:xD0kfKmJ7EC4AAoCWFs/jHoVnPZ/qqmZ1Ve/vcfGzM8 Jun 20 19:15:06.942195 sshd-session[2171]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:15:06.946711 systemd-logind[1695]: New session 9 of user core. Jun 20 19:15:06.952981 systemd[1]: Started session-9.scope - Session 9 of User core. Jun 20 19:15:07.263691 sudo[2174]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jun 20 19:15:07.263934 sudo[2174]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jun 20 19:15:08.352050 systemd[1]: Starting docker.service - Docker Application Container Engine... Jun 20 19:15:08.361162 (dockerd)[2192]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jun 20 19:15:08.840749 dockerd[2192]: time="2025-06-20T19:15:08.840689815Z" level=info msg="Starting up" Jun 20 19:15:08.842110 dockerd[2192]: time="2025-06-20T19:15:08.842071708Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jun 20 19:15:08.949555 dockerd[2192]: time="2025-06-20T19:15:08.949506617Z" level=info msg="Loading containers: start." Jun 20 19:15:08.983859 kernel: Initializing XFRM netlink socket Jun 20 19:15:09.237980 systemd-networkd[1354]: docker0: Link UP Jun 20 19:15:09.252238 dockerd[2192]: time="2025-06-20T19:15:09.252192486Z" level=info msg="Loading containers: done." 
Jun 20 19:15:09.277978 dockerd[2192]: time="2025-06-20T19:15:09.277932295Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jun 20 19:15:09.278159 dockerd[2192]: time="2025-06-20T19:15:09.278027687Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 Jun 20 19:15:09.278159 dockerd[2192]: time="2025-06-20T19:15:09.278127789Z" level=info msg="Initializing buildkit" Jun 20 19:15:09.319815 dockerd[2192]: time="2025-06-20T19:15:09.319763393Z" level=info msg="Completed buildkit initialization" Jun 20 19:15:09.326464 dockerd[2192]: time="2025-06-20T19:15:09.326408521Z" level=info msg="Daemon has completed initialization" Jun 20 19:15:09.326464 dockerd[2192]: time="2025-06-20T19:15:09.326490562Z" level=info msg="API listen on /run/docker.sock" Jun 20 19:15:09.326672 systemd[1]: Started docker.service - Docker Application Container Engine. Jun 20 19:15:10.353331 containerd[1726]: time="2025-06-20T19:15:10.353288698Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\"" Jun 20 19:15:11.023403 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4021762895.mount: Deactivated successfully. 
Jun 20 19:15:12.182669 containerd[1726]: time="2025-06-20T19:15:12.182605832Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:15:12.186027 containerd[1726]: time="2025-06-20T19:15:12.185988491Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.10: active requests=0, bytes read=28077752" Jun 20 19:15:12.188697 containerd[1726]: time="2025-06-20T19:15:12.188654967Z" level=info msg="ImageCreate event name:\"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:15:12.192273 containerd[1726]: time="2025-06-20T19:15:12.192210549Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:15:12.192989 containerd[1726]: time="2025-06-20T19:15:12.192765733Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.10\" with image id \"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\", size \"28074544\" in 1.839434363s" Jun 20 19:15:12.192989 containerd[1726]: time="2025-06-20T19:15:12.192800892Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\" returns image reference \"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\"" Jun 20 19:15:12.193431 containerd[1726]: time="2025-06-20T19:15:12.193404003Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\"" Jun 20 19:15:13.404688 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. 
Jun 20 19:15:13.409035 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 19:15:13.902197 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 19:15:13.906060 (kubelet)[2457]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 20 19:15:13.941716 kubelet[2457]: E0620 19:15:13.941635 2457 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 20 19:15:13.943515 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 20 19:15:13.943670 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 20 19:15:13.944019 systemd[1]: kubelet.service: Consumed 141ms CPU time, 110.5M memory peak. 
Jun 20 19:15:13.952613 containerd[1726]: time="2025-06-20T19:15:13.952560525Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:15:13.955364 containerd[1726]: time="2025-06-20T19:15:13.955326175Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.10: active requests=0, bytes read=24713302" Jun 20 19:15:13.958902 containerd[1726]: time="2025-06-20T19:15:13.958841951Z" level=info msg="ImageCreate event name:\"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:15:13.962698 containerd[1726]: time="2025-06-20T19:15:13.962654321Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:15:13.963394 containerd[1726]: time="2025-06-20T19:15:13.963262386Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.10\" with image id \"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\", size \"26315128\" in 1.769768245s" Jun 20 19:15:13.963394 containerd[1726]: time="2025-06-20T19:15:13.963295655Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\" returns image reference \"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\"" Jun 20 19:15:13.963983 containerd[1726]: time="2025-06-20T19:15:13.963966498Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\"" Jun 20 19:15:14.835013 kernel: hv_balloon: Max. 
dynamic memory size: 8192 MB Jun 20 19:15:15.025604 update_engine[1696]: I20250620 19:15:15.025542 1696 update_attempter.cc:509] Updating boot flags... Jun 20 19:15:15.041864 containerd[1726]: time="2025-06-20T19:15:15.041236680Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:15:15.043968 containerd[1726]: time="2025-06-20T19:15:15.043930356Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.10: active requests=0, bytes read=18783679" Jun 20 19:15:15.047178 containerd[1726]: time="2025-06-20T19:15:15.047136172Z" level=info msg="ImageCreate event name:\"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:15:15.056084 containerd[1726]: time="2025-06-20T19:15:15.056056936Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:15:15.057887 containerd[1726]: time="2025-06-20T19:15:15.056966974Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.10\" with image id \"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\", size \"20385523\" in 1.092949536s" Jun 20 19:15:15.057887 containerd[1726]: time="2025-06-20T19:15:15.057007173Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\" returns image reference \"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\"" Jun 20 19:15:15.058345 containerd[1726]: time="2025-06-20T19:15:15.058236772Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\"" Jun 20 19:15:16.163982 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount1414053957.mount: Deactivated successfully. Jun 20 19:15:16.545745 containerd[1726]: time="2025-06-20T19:15:16.545616465Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:15:16.550045 containerd[1726]: time="2025-06-20T19:15:16.549999867Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.10: active requests=0, bytes read=30383951" Jun 20 19:15:16.552942 containerd[1726]: time="2025-06-20T19:15:16.552895609Z" level=info msg="ImageCreate event name:\"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:15:16.556629 containerd[1726]: time="2025-06-20T19:15:16.556574410Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:15:16.557160 containerd[1726]: time="2025-06-20T19:15:16.556920382Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.10\" with image id \"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\", repo tag \"registry.k8s.io/kube-proxy:v1.31.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\", size \"30382962\" in 1.498651535s" Jun 20 19:15:16.557160 containerd[1726]: time="2025-06-20T19:15:16.556952628Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\" returns image reference \"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\"" Jun 20 19:15:16.557392 containerd[1726]: time="2025-06-20T19:15:16.557374478Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jun 20 19:15:17.166127 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1172364030.mount: Deactivated successfully. 
Jun 20 19:15:18.146229 containerd[1726]: time="2025-06-20T19:15:18.146175084Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:15:18.148835 containerd[1726]: time="2025-06-20T19:15:18.148790545Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565249" Jun 20 19:15:18.151536 containerd[1726]: time="2025-06-20T19:15:18.151479570Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:15:18.155123 containerd[1726]: time="2025-06-20T19:15:18.155070346Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:15:18.155843 containerd[1726]: time="2025-06-20T19:15:18.155718004Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.598277367s" Jun 20 19:15:18.155843 containerd[1726]: time="2025-06-20T19:15:18.155751380Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jun 20 19:15:18.156429 containerd[1726]: time="2025-06-20T19:15:18.156412302Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jun 20 19:15:18.738137 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3547803635.mount: Deactivated successfully. 
Jun 20 19:15:18.754230 containerd[1726]: time="2025-06-20T19:15:18.754174600Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 20 19:15:18.757162 containerd[1726]: time="2025-06-20T19:15:18.757125614Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146" Jun 20 19:15:18.759905 containerd[1726]: time="2025-06-20T19:15:18.759884263Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 20 19:15:18.763263 containerd[1726]: time="2025-06-20T19:15:18.763219213Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 20 19:15:18.763841 containerd[1726]: time="2025-06-20T19:15:18.763598819Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 607.10017ms" Jun 20 19:15:18.763841 containerd[1726]: time="2025-06-20T19:15:18.763632626Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jun 20 19:15:18.764093 containerd[1726]: time="2025-06-20T19:15:18.764069991Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Jun 20 19:15:19.415047 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1288817440.mount: Deactivated 
successfully. Jun 20 19:15:21.558411 containerd[1726]: time="2025-06-20T19:15:21.558346400Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:15:21.560756 containerd[1726]: time="2025-06-20T19:15:21.560716846Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56780021" Jun 20 19:15:21.563339 containerd[1726]: time="2025-06-20T19:15:21.563295642Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:15:21.566965 containerd[1726]: time="2025-06-20T19:15:21.566918685Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:15:21.567796 containerd[1726]: time="2025-06-20T19:15:21.567636237Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 2.803490693s" Jun 20 19:15:21.567796 containerd[1726]: time="2025-06-20T19:15:21.567668190Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Jun 20 19:15:24.154697 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Jun 20 19:15:24.158941 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 19:15:24.384137 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jun 20 19:15:24.384238 systemd[1]: kubelet.service: Failed with result 'signal'. 
Jun 20 19:15:24.384557 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 19:15:24.387166 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 19:15:24.412507 systemd[1]: Reload requested from client PID 2652 ('systemctl') (unit session-9.scope)... Jun 20 19:15:24.412522 systemd[1]: Reloading... Jun 20 19:15:24.500877 zram_generator::config[2698]: No configuration found. Jun 20 19:15:24.601723 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 20 19:15:24.702222 systemd[1]: Reloading finished in 289 ms. Jun 20 19:15:24.736980 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jun 20 19:15:24.737054 systemd[1]: kubelet.service: Failed with result 'signal'. Jun 20 19:15:24.737373 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 19:15:24.737422 systemd[1]: kubelet.service: Consumed 77ms CPU time, 74.4M memory peak. Jun 20 19:15:24.739513 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 19:15:25.359316 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 19:15:25.364087 (kubelet)[2765]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jun 20 19:15:25.402314 kubelet[2765]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 20 19:15:25.402314 kubelet[2765]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Jun 20 19:15:25.402314 kubelet[2765]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jun 20 19:15:25.402711 kubelet[2765]: I0620 19:15:25.402395 2765 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jun 20 19:15:25.673912 kubelet[2765]: I0620 19:15:25.673081 2765 server.go:491] "Kubelet version" kubeletVersion="v1.31.8"
Jun 20 19:15:25.673912 kubelet[2765]: I0620 19:15:25.673116 2765 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jun 20 19:15:25.673912 kubelet[2765]: I0620 19:15:25.673542 2765 server.go:934] "Client rotation is on, will bootstrap in background"
Jun 20 19:15:25.697674 kubelet[2765]: E0620 19:15:25.697638 2765 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.4.5:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.4.5:6443: connect: connection refused" logger="UnhandledError"
Jun 20 19:15:25.698471 kubelet[2765]: I0620 19:15:25.698453 2765 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jun 20 19:15:25.704596 kubelet[2765]: I0620 19:15:25.704562 2765 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jun 20 19:15:25.707861 kubelet[2765]: I0620 19:15:25.707824 2765 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jun 20 19:15:25.707961 kubelet[2765]: I0620 19:15:25.707949 2765 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Jun 20 19:15:25.708088 kubelet[2765]: I0620 19:15:25.708054 2765 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jun 20 19:15:25.708235 kubelet[2765]: I0620 19:15:25.708088 2765 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4344.1.0-a-657d644de8","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jun 20 19:15:25.708355 kubelet[2765]: I0620 19:15:25.708244 2765 topology_manager.go:138] "Creating topology manager with none policy"
Jun 20 19:15:25.708355 kubelet[2765]: I0620 19:15:25.708254 2765 container_manager_linux.go:300] "Creating device plugin manager"
Jun 20 19:15:25.708398 kubelet[2765]: I0620 19:15:25.708364 2765 state_mem.go:36] "Initialized new in-memory state store"
Jun 20 19:15:25.711127 kubelet[2765]: I0620 19:15:25.710901 2765 kubelet.go:408] "Attempting to sync node with API server"
Jun 20 19:15:25.711127 kubelet[2765]: I0620 19:15:25.710922 2765 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Jun 20 19:15:25.711127 kubelet[2765]: I0620 19:15:25.710955 2765 kubelet.go:314] "Adding apiserver pod source"
Jun 20 19:15:25.711127 kubelet[2765]: I0620 19:15:25.710980 2765 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jun 20 19:15:25.718870 kubelet[2765]: W0620 19:15:25.718749 2765 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.4.5:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4344.1.0-a-657d644de8&limit=500&resourceVersion=0": dial tcp 10.200.4.5:6443: connect: connection refused
Jun 20 19:15:25.718870 kubelet[2765]: E0620 19:15:25.718806 2765 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.4.5:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4344.1.0-a-657d644de8&limit=500&resourceVersion=0\": dial tcp 10.200.4.5:6443: connect: connection refused" logger="UnhandledError"
Jun 20 19:15:25.719720 kubelet[2765]: I0620 19:15:25.719029 2765 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1"
Jun 20 19:15:25.719720 kubelet[2765]: I0620 19:15:25.719431 2765 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jun 20 19:15:25.719720 kubelet[2765]: W0620 19:15:25.719482 2765 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jun 20 19:15:25.719720 kubelet[2765]: W0620 19:15:25.719598 2765 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.4.5:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.4.5:6443: connect: connection refused
Jun 20 19:15:25.719720 kubelet[2765]: E0620 19:15:25.719644 2765 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.4.5:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.4.5:6443: connect: connection refused" logger="UnhandledError"
Jun 20 19:15:25.721640 kubelet[2765]: I0620 19:15:25.721623 2765 server.go:1274] "Started kubelet"
Jun 20 19:15:25.722647 kubelet[2765]: I0620 19:15:25.722249 2765 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jun 20 19:15:25.731843 kubelet[2765]: I0620 19:15:25.731675 2765 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jun 20 19:15:25.732059 kubelet[2765]: I0620 19:15:25.732044 2765 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jun 20 19:15:25.732730 kubelet[2765]: I0620 19:15:25.732715 2765 server.go:449] "Adding debug handlers to kubelet server"
Jun 20 19:15:25.737026 kubelet[2765]: E0620 19:15:25.735744 2765 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.4.5:6443/api/v1/namespaces/default/events\": dial tcp 10.200.4.5:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4344.1.0-a-657d644de8.184ad63465cf72a9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4344.1.0-a-657d644de8,UID:ci-4344.1.0-a-657d644de8,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4344.1.0-a-657d644de8,},FirstTimestamp:2025-06-20 19:15:25.721596585 +0000 UTC m=+0.353482112,LastTimestamp:2025-06-20 19:15:25.721596585 +0000 UTC m=+0.353482112,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4344.1.0-a-657d644de8,}"
Jun 20 19:15:25.738856 kubelet[2765]: I0620 19:15:25.738692 2765 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jun 20 19:15:25.739753 kubelet[2765]: I0620 19:15:25.739734 2765 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jun 20 19:15:25.743438 kubelet[2765]: E0620 19:15:25.743420 2765 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jun 20 19:15:25.743842 kubelet[2765]: E0620 19:15:25.743614 2765 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4344.1.0-a-657d644de8\" not found"
Jun 20 19:15:25.743842 kubelet[2765]: I0620 19:15:25.743645 2765 volume_manager.go:289] "Starting Kubelet Volume Manager"
Jun 20 19:15:25.743929 kubelet[2765]: I0620 19:15:25.743857 2765 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Jun 20 19:15:25.743929 kubelet[2765]: I0620 19:15:25.743899 2765 reconciler.go:26] "Reconciler: start to sync state"
Jun 20 19:15:25.744602 kubelet[2765]: I0620 19:15:25.744583 2765 factory.go:221] Registration of the systemd container factory successfully
Jun 20 19:15:25.744667 kubelet[2765]: I0620 19:15:25.744657 2765 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jun 20 19:15:25.745061 kubelet[2765]: W0620 19:15:25.745025 2765 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.4.5:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.4.5:6443: connect: connection refused
Jun 20 19:15:25.745113 kubelet[2765]: E0620 19:15:25.745074 2765 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.4.5:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.4.5:6443: connect: connection refused" logger="UnhandledError"
Jun 20 19:15:25.746706 kubelet[2765]: E0620 19:15:25.745917 2765 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.4.5:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4344.1.0-a-657d644de8?timeout=10s\": dial tcp 10.200.4.5:6443: connect: connection refused" interval="200ms"
Jun 20 19:15:25.746706 kubelet[2765]: I0620 19:15:25.746066 2765 factory.go:221] Registration of the containerd container factory successfully
Jun 20 19:15:25.767519 kubelet[2765]: I0620 19:15:25.767481 2765 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jun 20 19:15:25.769365 kubelet[2765]: I0620 19:15:25.769340 2765 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jun 20 19:15:25.769440 kubelet[2765]: I0620 19:15:25.769371 2765 status_manager.go:217] "Starting to sync pod status with apiserver"
Jun 20 19:15:25.769440 kubelet[2765]: I0620 19:15:25.769389 2765 kubelet.go:2321] "Starting kubelet main sync loop"
Jun 20 19:15:25.769440 kubelet[2765]: E0620 19:15:25.769424 2765 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jun 20 19:15:25.770646 kubelet[2765]: W0620 19:15:25.770613 2765 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.4.5:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.4.5:6443: connect: connection refused
Jun 20 19:15:25.770726 kubelet[2765]: E0620 19:15:25.770657 2765 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.4.5:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.4.5:6443: connect: connection refused" logger="UnhandledError"
Jun 20 19:15:25.773464 kubelet[2765]: I0620 19:15:25.773448 2765 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jun 20 19:15:25.773464 kubelet[2765]: I0620 19:15:25.773463 2765 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jun 20 19:15:25.773555 kubelet[2765]: I0620 19:15:25.773476 2765 state_mem.go:36] "Initialized new in-memory state store"
Jun 20 19:15:25.778534 kubelet[2765]: I0620 19:15:25.778516 2765 policy_none.go:49] "None policy: Start"
Jun 20 19:15:25.779050 kubelet[2765]: I0620 19:15:25.779021 2765 memory_manager.go:170] "Starting memorymanager" policy="None"
Jun 20 19:15:25.779128 kubelet[2765]: I0620 19:15:25.779088 2765 state_mem.go:35] "Initializing new in-memory state store"
Jun 20 19:15:25.788409 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Jun 20 19:15:25.798142 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Jun 20 19:15:25.800907 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Jun 20 19:15:25.811366 kubelet[2765]: I0620 19:15:25.811340 2765 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jun 20 19:15:25.811544 kubelet[2765]: I0620 19:15:25.811529 2765 eviction_manager.go:189] "Eviction manager: starting control loop"
Jun 20 19:15:25.811589 kubelet[2765]: I0620 19:15:25.811545 2765 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jun 20 19:15:25.811915 kubelet[2765]: I0620 19:15:25.811902 2765 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jun 20 19:15:25.813513 kubelet[2765]: E0620 19:15:25.813487 2765 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4344.1.0-a-657d644de8\" not found"
Jun 20 19:15:25.878823 systemd[1]: Created slice kubepods-burstable-podf771b37cc2dd59f0914161678316ad80.slice - libcontainer container kubepods-burstable-podf771b37cc2dd59f0914161678316ad80.slice.
Jun 20 19:15:25.899144 systemd[1]: Created slice kubepods-burstable-podcf7c2d19568c175d6d47640b6b8dd4b9.slice - libcontainer container kubepods-burstable-podcf7c2d19568c175d6d47640b6b8dd4b9.slice.
Jun 20 19:15:25.910143 systemd[1]: Created slice kubepods-burstable-pod030b87c616fc3a9e69b06c26f903de85.slice - libcontainer container kubepods-burstable-pod030b87c616fc3a9e69b06c26f903de85.slice.
Jun 20 19:15:25.913664 kubelet[2765]: I0620 19:15:25.913625 2765 kubelet_node_status.go:72] "Attempting to register node" node="ci-4344.1.0-a-657d644de8"
Jun 20 19:15:25.914033 kubelet[2765]: E0620 19:15:25.913998 2765 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.4.5:6443/api/v1/nodes\": dial tcp 10.200.4.5:6443: connect: connection refused" node="ci-4344.1.0-a-657d644de8"
Jun 20 19:15:25.946887 kubelet[2765]: E0620 19:15:25.946756 2765 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.4.5:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4344.1.0-a-657d644de8?timeout=10s\": dial tcp 10.200.4.5:6443: connect: connection refused" interval="400ms"
Jun 20 19:15:26.045205 kubelet[2765]: I0620 19:15:26.045143 2765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/030b87c616fc3a9e69b06c26f903de85-ca-certs\") pod \"kube-controller-manager-ci-4344.1.0-a-657d644de8\" (UID: \"030b87c616fc3a9e69b06c26f903de85\") " pod="kube-system/kube-controller-manager-ci-4344.1.0-a-657d644de8"
Jun 20 19:15:26.045205 kubelet[2765]: I0620 19:15:26.045196 2765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f771b37cc2dd59f0914161678316ad80-kubeconfig\") pod \"kube-scheduler-ci-4344.1.0-a-657d644de8\" (UID: \"f771b37cc2dd59f0914161678316ad80\") " pod="kube-system/kube-scheduler-ci-4344.1.0-a-657d644de8"
Jun 20 19:15:26.045205 kubelet[2765]: I0620 19:15:26.045217 2765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/030b87c616fc3a9e69b06c26f903de85-kubeconfig\") pod \"kube-controller-manager-ci-4344.1.0-a-657d644de8\" (UID: \"030b87c616fc3a9e69b06c26f903de85\") " pod="kube-system/kube-controller-manager-ci-4344.1.0-a-657d644de8"
Jun 20 19:15:26.045411 kubelet[2765]: I0620 19:15:26.045238 2765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/030b87c616fc3a9e69b06c26f903de85-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4344.1.0-a-657d644de8\" (UID: \"030b87c616fc3a9e69b06c26f903de85\") " pod="kube-system/kube-controller-manager-ci-4344.1.0-a-657d644de8"
Jun 20 19:15:26.045411 kubelet[2765]: I0620 19:15:26.045255 2765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cf7c2d19568c175d6d47640b6b8dd4b9-ca-certs\") pod \"kube-apiserver-ci-4344.1.0-a-657d644de8\" (UID: \"cf7c2d19568c175d6d47640b6b8dd4b9\") " pod="kube-system/kube-apiserver-ci-4344.1.0-a-657d644de8"
Jun 20 19:15:26.045411 kubelet[2765]: I0620 19:15:26.045312 2765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cf7c2d19568c175d6d47640b6b8dd4b9-k8s-certs\") pod \"kube-apiserver-ci-4344.1.0-a-657d644de8\" (UID: \"cf7c2d19568c175d6d47640b6b8dd4b9\") " pod="kube-system/kube-apiserver-ci-4344.1.0-a-657d644de8"
Jun 20 19:15:26.045411 kubelet[2765]: I0620 19:15:26.045333 2765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cf7c2d19568c175d6d47640b6b8dd4b9-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4344.1.0-a-657d644de8\" (UID: \"cf7c2d19568c175d6d47640b6b8dd4b9\") " pod="kube-system/kube-apiserver-ci-4344.1.0-a-657d644de8"
Jun 20 19:15:26.045411 kubelet[2765]: I0620 19:15:26.045349 2765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/030b87c616fc3a9e69b06c26f903de85-flexvolume-dir\") pod \"kube-controller-manager-ci-4344.1.0-a-657d644de8\" (UID: \"030b87c616fc3a9e69b06c26f903de85\") " pod="kube-system/kube-controller-manager-ci-4344.1.0-a-657d644de8"
Jun 20 19:15:26.045527 kubelet[2765]: I0620 19:15:26.045369 2765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/030b87c616fc3a9e69b06c26f903de85-k8s-certs\") pod \"kube-controller-manager-ci-4344.1.0-a-657d644de8\" (UID: \"030b87c616fc3a9e69b06c26f903de85\") " pod="kube-system/kube-controller-manager-ci-4344.1.0-a-657d644de8"
Jun 20 19:15:26.115904 kubelet[2765]: I0620 19:15:26.115876 2765 kubelet_node_status.go:72] "Attempting to register node" node="ci-4344.1.0-a-657d644de8"
Jun 20 19:15:26.116247 kubelet[2765]: E0620 19:15:26.116223 2765 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.4.5:6443/api/v1/nodes\": dial tcp 10.200.4.5:6443: connect: connection refused" node="ci-4344.1.0-a-657d644de8"
Jun 20 19:15:26.197443 containerd[1726]: time="2025-06-20T19:15:26.197324263Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4344.1.0-a-657d644de8,Uid:f771b37cc2dd59f0914161678316ad80,Namespace:kube-system,Attempt:0,}"
Jun 20 19:15:26.209040 containerd[1726]: time="2025-06-20T19:15:26.209003347Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4344.1.0-a-657d644de8,Uid:cf7c2d19568c175d6d47640b6b8dd4b9,Namespace:kube-system,Attempt:0,}"
Jun 20 19:15:26.212808 containerd[1726]: time="2025-06-20T19:15:26.212765350Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4344.1.0-a-657d644de8,Uid:030b87c616fc3a9e69b06c26f903de85,Namespace:kube-system,Attempt:0,}"
Jun 20 19:15:26.347972 kubelet[2765]: E0620 19:15:26.347917 2765 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.4.5:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4344.1.0-a-657d644de8?timeout=10s\": dial tcp 10.200.4.5:6443: connect: connection refused" interval="800ms"
Jun 20 19:15:26.522500 kubelet[2765]: I0620 19:15:26.522394 2765 kubelet_node_status.go:72] "Attempting to register node" node="ci-4344.1.0-a-657d644de8"
Jun 20 19:15:26.523156 kubelet[2765]: E0620 19:15:26.523116 2765 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.4.5:6443/api/v1/nodes\": dial tcp 10.200.4.5:6443: connect: connection refused" node="ci-4344.1.0-a-657d644de8"
Jun 20 19:15:26.573128 kubelet[2765]: W0620 19:15:26.573054 2765 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.4.5:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.4.5:6443: connect: connection refused
Jun 20 19:15:26.573128 kubelet[2765]: E0620 19:15:26.573135 2765 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.4.5:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.4.5:6443: connect: connection refused" logger="UnhandledError"
Jun 20 19:15:26.659124 kubelet[2765]: W0620 19:15:26.659053 2765 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.4.5:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4344.1.0-a-657d644de8&limit=500&resourceVersion=0": dial tcp 10.200.4.5:6443: connect: connection refused
Jun 20 19:15:26.659273 kubelet[2765]: E0620 19:15:26.659137 2765 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.4.5:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4344.1.0-a-657d644de8&limit=500&resourceVersion=0\": dial tcp 10.200.4.5:6443: connect: connection refused" logger="UnhandledError"
Jun 20 19:15:27.049230 kubelet[2765]: W0620 19:15:27.049161 2765 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.4.5:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.4.5:6443: connect: connection refused
Jun 20 19:15:27.049230 kubelet[2765]: E0620 19:15:27.049233 2765 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.4.5:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.4.5:6443: connect: connection refused" logger="UnhandledError"
Jun 20 19:15:27.097101 containerd[1726]: time="2025-06-20T19:15:27.097059452Z" level=info msg="connecting to shim 0d545ed9143b8088a3e42e68cd2217ad1eed0c2fc74795daf025db98c7f5fc8b" address="unix:///run/containerd/s/bbaa2b72828078b9653506c3019e6023c97b1ff1276f0c04bbe31cbb82ce3d55" namespace=k8s.io protocol=ttrpc version=3
Jun 20 19:15:27.121495 containerd[1726]: time="2025-06-20T19:15:27.121424241Z" level=info msg="connecting to shim 49fa5aa02c6723e782e82576d8ddbecd10059f009e71ac04a1a5d0d1f092bd21" address="unix:///run/containerd/s/2229cdc0702914eece5f610692b4a55307470378d22cbd6ab4a0b8fe9473247a" namespace=k8s.io protocol=ttrpc version=3
Jun 20 19:15:27.134001 systemd[1]: Started cri-containerd-0d545ed9143b8088a3e42e68cd2217ad1eed0c2fc74795daf025db98c7f5fc8b.scope - libcontainer container 0d545ed9143b8088a3e42e68cd2217ad1eed0c2fc74795daf025db98c7f5fc8b.
Jun 20 19:15:27.134992 containerd[1726]: time="2025-06-20T19:15:27.134778407Z" level=info msg="connecting to shim 7b6d233f35c7039b65b5d13b834577b23ee24dddebe10fd43b9d539f5e0b1de3" address="unix:///run/containerd/s/b9697bfe90a47f31bae8afa7ce9b00ae52bcc9c516ab3a6ea2f4f538562c4b04" namespace=k8s.io protocol=ttrpc version=3
Jun 20 19:15:27.150782 kubelet[2765]: E0620 19:15:27.150656 2765 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.4.5:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4344.1.0-a-657d644de8?timeout=10s\": dial tcp 10.200.4.5:6443: connect: connection refused" interval="1.6s"
Jun 20 19:15:27.158983 systemd[1]: Started cri-containerd-49fa5aa02c6723e782e82576d8ddbecd10059f009e71ac04a1a5d0d1f092bd21.scope - libcontainer container 49fa5aa02c6723e782e82576d8ddbecd10059f009e71ac04a1a5d0d1f092bd21.
Jun 20 19:15:27.172426 systemd[1]: Started cri-containerd-7b6d233f35c7039b65b5d13b834577b23ee24dddebe10fd43b9d539f5e0b1de3.scope - libcontainer container 7b6d233f35c7039b65b5d13b834577b23ee24dddebe10fd43b9d539f5e0b1de3.
Jun 20 19:15:27.197363 kubelet[2765]: W0620 19:15:27.197099 2765 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.4.5:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.4.5:6443: connect: connection refused
Jun 20 19:15:27.199024 kubelet[2765]: E0620 19:15:27.198979 2765 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.4.5:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.4.5:6443: connect: connection refused" logger="UnhandledError"
Jun 20 19:15:27.228856 containerd[1726]: time="2025-06-20T19:15:27.228277380Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4344.1.0-a-657d644de8,Uid:030b87c616fc3a9e69b06c26f903de85,Namespace:kube-system,Attempt:0,} returns sandbox id \"0d545ed9143b8088a3e42e68cd2217ad1eed0c2fc74795daf025db98c7f5fc8b\""
Jun 20 19:15:27.233605 containerd[1726]: time="2025-06-20T19:15:27.233276931Z" level=info msg="CreateContainer within sandbox \"0d545ed9143b8088a3e42e68cd2217ad1eed0c2fc74795daf025db98c7f5fc8b\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Jun 20 19:15:27.239824 containerd[1726]: time="2025-06-20T19:15:27.239788507Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4344.1.0-a-657d644de8,Uid:cf7c2d19568c175d6d47640b6b8dd4b9,Namespace:kube-system,Attempt:0,} returns sandbox id \"49fa5aa02c6723e782e82576d8ddbecd10059f009e71ac04a1a5d0d1f092bd21\""
Jun 20 19:15:27.244521 containerd[1726]: time="2025-06-20T19:15:27.244413635Z" level=info msg="CreateContainer within sandbox \"49fa5aa02c6723e782e82576d8ddbecd10059f009e71ac04a1a5d0d1f092bd21\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Jun 20 19:15:27.259776 containerd[1726]: time="2025-06-20T19:15:27.259745255Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4344.1.0-a-657d644de8,Uid:f771b37cc2dd59f0914161678316ad80,Namespace:kube-system,Attempt:0,} returns sandbox id \"7b6d233f35c7039b65b5d13b834577b23ee24dddebe10fd43b9d539f5e0b1de3\""
Jun 20 19:15:27.264610 containerd[1726]: time="2025-06-20T19:15:27.264169527Z" level=info msg="CreateContainer within sandbox \"7b6d233f35c7039b65b5d13b834577b23ee24dddebe10fd43b9d539f5e0b1de3\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Jun 20 19:15:27.270036 containerd[1726]: time="2025-06-20T19:15:27.270013188Z" level=info msg="Container 5eea35c856fbb90ecaa5201960f31923c6fc5513027cc5e8c49d9f5d3dff69d5: CDI devices from CRI Config.CDIDevices: []"
Jun 20 19:15:27.280632 containerd[1726]: time="2025-06-20T19:15:27.280599642Z" level=info msg="Container 9f6dc4eda7092d1f3ee1062fdb4b96c09306c95199ad40946ded2051df8fe8e2: CDI devices from CRI Config.CDIDevices: []"
Jun 20 19:15:27.302041 containerd[1726]: time="2025-06-20T19:15:27.301937843Z" level=info msg="CreateContainer within sandbox \"0d545ed9143b8088a3e42e68cd2217ad1eed0c2fc74795daf025db98c7f5fc8b\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"5eea35c856fbb90ecaa5201960f31923c6fc5513027cc5e8c49d9f5d3dff69d5\""
Jun 20 19:15:27.302688 containerd[1726]: time="2025-06-20T19:15:27.302624605Z" level=info msg="StartContainer for \"5eea35c856fbb90ecaa5201960f31923c6fc5513027cc5e8c49d9f5d3dff69d5\""
Jun 20 19:15:27.303533 containerd[1726]: time="2025-06-20T19:15:27.303499290Z" level=info msg="connecting to shim 5eea35c856fbb90ecaa5201960f31923c6fc5513027cc5e8c49d9f5d3dff69d5" address="unix:///run/containerd/s/bbaa2b72828078b9653506c3019e6023c97b1ff1276f0c04bbe31cbb82ce3d55" protocol=ttrpc version=3
Jun 20 19:15:27.305858 containerd[1726]: time="2025-06-20T19:15:27.305283815Z" level=info msg="Container 44df8a2438810e2f9d6b6a9f9face34a86f92c1012222b8c1165c532b1bb7014: CDI devices from CRI Config.CDIDevices: []"
Jun 20 19:15:27.313667 containerd[1726]: time="2025-06-20T19:15:27.313639805Z" level=info msg="CreateContainer within sandbox \"49fa5aa02c6723e782e82576d8ddbecd10059f009e71ac04a1a5d0d1f092bd21\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"9f6dc4eda7092d1f3ee1062fdb4b96c09306c95199ad40946ded2051df8fe8e2\""
Jun 20 19:15:27.314394 containerd[1726]: time="2025-06-20T19:15:27.314371310Z" level=info msg="StartContainer for \"9f6dc4eda7092d1f3ee1062fdb4b96c09306c95199ad40946ded2051df8fe8e2\""
Jun 20 19:15:27.315454 containerd[1726]: time="2025-06-20T19:15:27.315219452Z" level=info msg="connecting to shim 9f6dc4eda7092d1f3ee1062fdb4b96c09306c95199ad40946ded2051df8fe8e2" address="unix:///run/containerd/s/2229cdc0702914eece5f610692b4a55307470378d22cbd6ab4a0b8fe9473247a" protocol=ttrpc version=3
Jun 20 19:15:27.322672 containerd[1726]: time="2025-06-20T19:15:27.322623885Z" level=info msg="CreateContainer within sandbox \"7b6d233f35c7039b65b5d13b834577b23ee24dddebe10fd43b9d539f5e0b1de3\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"44df8a2438810e2f9d6b6a9f9face34a86f92c1012222b8c1165c532b1bb7014\""
Jun 20 19:15:27.323119 containerd[1726]: time="2025-06-20T19:15:27.323095155Z" level=info msg="StartContainer for \"44df8a2438810e2f9d6b6a9f9face34a86f92c1012222b8c1165c532b1bb7014\""
Jun 20 19:15:27.325225 systemd[1]: Started cri-containerd-5eea35c856fbb90ecaa5201960f31923c6fc5513027cc5e8c49d9f5d3dff69d5.scope - libcontainer container 5eea35c856fbb90ecaa5201960f31923c6fc5513027cc5e8c49d9f5d3dff69d5.
Jun 20 19:15:27.327221 kubelet[2765]: I0620 19:15:27.327195 2765 kubelet_node_status.go:72] "Attempting to register node" node="ci-4344.1.0-a-657d644de8"
Jun 20 19:15:27.327618 kubelet[2765]: E0620 19:15:27.327587 2765 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.4.5:6443/api/v1/nodes\": dial tcp 10.200.4.5:6443: connect: connection refused" node="ci-4344.1.0-a-657d644de8"
Jun 20 19:15:27.328352 containerd[1726]: time="2025-06-20T19:15:27.328029616Z" level=info msg="connecting to shim 44df8a2438810e2f9d6b6a9f9face34a86f92c1012222b8c1165c532b1bb7014" address="unix:///run/containerd/s/b9697bfe90a47f31bae8afa7ce9b00ae52bcc9c516ab3a6ea2f4f538562c4b04" protocol=ttrpc version=3
Jun 20 19:15:27.345064 systemd[1]: Started cri-containerd-9f6dc4eda7092d1f3ee1062fdb4b96c09306c95199ad40946ded2051df8fe8e2.scope - libcontainer container 9f6dc4eda7092d1f3ee1062fdb4b96c09306c95199ad40946ded2051df8fe8e2.
Jun 20 19:15:27.353151 systemd[1]: Started cri-containerd-44df8a2438810e2f9d6b6a9f9face34a86f92c1012222b8c1165c532b1bb7014.scope - libcontainer container 44df8a2438810e2f9d6b6a9f9face34a86f92c1012222b8c1165c532b1bb7014.
Jun 20 19:15:27.429620 containerd[1726]: time="2025-06-20T19:15:27.429484913Z" level=info msg="StartContainer for \"44df8a2438810e2f9d6b6a9f9face34a86f92c1012222b8c1165c532b1bb7014\" returns successfully"
Jun 20 19:15:27.455852 containerd[1726]: time="2025-06-20T19:15:27.455608181Z" level=info msg="StartContainer for \"9f6dc4eda7092d1f3ee1062fdb4b96c09306c95199ad40946ded2051df8fe8e2\" returns successfully"
Jun 20 19:15:27.456343 containerd[1726]: time="2025-06-20T19:15:27.456311978Z" level=info msg="StartContainer for \"5eea35c856fbb90ecaa5201960f31923c6fc5513027cc5e8c49d9f5d3dff69d5\" returns successfully"
Jun 20 19:15:28.931936 kubelet[2765]: I0620 19:15:28.931903 2765 kubelet_node_status.go:72] "Attempting to register node" node="ci-4344.1.0-a-657d644de8"
Jun 20 19:15:29.194279 kubelet[2765]: E0620 19:15:29.193888 2765 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4344.1.0-a-657d644de8\" not found" node="ci-4344.1.0-a-657d644de8"
Jun 20 19:15:29.256482 kubelet[2765]: I0620 19:15:29.256431 2765 kubelet_node_status.go:75] "Successfully registered node" node="ci-4344.1.0-a-657d644de8"
Jun 20 19:15:29.305852 kubelet[2765]: E0620 19:15:29.305651 2765 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4344.1.0-a-657d644de8.184ad63465cf72a9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4344.1.0-a-657d644de8,UID:ci-4344.1.0-a-657d644de8,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4344.1.0-a-657d644de8,},FirstTimestamp:2025-06-20 19:15:25.721596585 +0000 UTC m=+0.353482112,LastTimestamp:2025-06-20 19:15:25.721596585 +0000 UTC m=+0.353482112,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4344.1.0-a-657d644de8,}"
Jun 20 19:15:29.720106 kubelet[2765]: I0620 19:15:29.720050 2765 apiserver.go:52] "Watching apiserver"
Jun 20 19:15:29.744518 kubelet[2765]: I0620 19:15:29.744475 2765 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world"
Jun 20 19:15:29.888773 kubelet[2765]: E0620 19:15:29.888734 2765 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-4344.1.0-a-657d644de8\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4344.1.0-a-657d644de8"
Jun 20 19:15:31.315539 systemd[1]: Reload requested from client PID 3038 ('systemctl') (unit session-9.scope)...
Jun 20 19:15:31.315555 systemd[1]: Reloading...
Jun 20 19:15:31.399900 zram_generator::config[3084]: No configuration found.
Jun 20 19:15:31.536416 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jun 20 19:15:31.651157 systemd[1]: Reloading finished in 335 ms.
Jun 20 19:15:31.679607 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jun 20 19:15:31.687126 systemd[1]: kubelet.service: Deactivated successfully.
Jun 20 19:15:31.687345 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jun 20 19:15:31.687392 systemd[1]: kubelet.service: Consumed 721ms CPU time, 129.2M memory peak.
Jun 20 19:15:31.690372 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jun 20 19:15:32.631950 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jun 20 19:15:32.641153 (kubelet)[3151]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jun 20 19:15:32.682037 kubelet[3151]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 20 19:15:32.682037 kubelet[3151]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jun 20 19:15:32.682037 kubelet[3151]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 20 19:15:32.682445 kubelet[3151]: I0620 19:15:32.682098 3151 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jun 20 19:15:32.688586 kubelet[3151]: I0620 19:15:32.688554 3151 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jun 20 19:15:32.688586 kubelet[3151]: I0620 19:15:32.688576 3151 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jun 20 19:15:32.688792 kubelet[3151]: I0620 19:15:32.688779 3151 server.go:934] "Client rotation is on, will bootstrap in background" Jun 20 19:15:32.689748 kubelet[3151]: I0620 19:15:32.689727 3151 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Jun 20 19:15:32.691854 kubelet[3151]: I0620 19:15:32.691782 3151 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 20 19:15:32.699561 kubelet[3151]: I0620 19:15:32.699470 3151 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jun 20 19:15:32.702697 kubelet[3151]: I0620 19:15:32.702671 3151 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jun 20 19:15:32.702814 kubelet[3151]: I0620 19:15:32.702775 3151 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jun 20 19:15:32.702981 kubelet[3151]: I0620 19:15:32.702894 3151 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jun 20 19:15:32.703135 kubelet[3151]: I0620 19:15:32.702920 3151 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4344.1.0-a-657d644de8","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"
Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jun 20 19:15:32.703235 kubelet[3151]: I0620 19:15:32.703147 3151 topology_manager.go:138] "Creating topology manager with none policy" Jun 20 19:15:32.703235 kubelet[3151]: I0620 19:15:32.703157 3151 container_manager_linux.go:300] "Creating device plugin manager" Jun 20 19:15:32.703235 kubelet[3151]: I0620 19:15:32.703186 3151 state_mem.go:36] "Initialized new in-memory state store" Jun 20 19:15:32.703307 kubelet[3151]: I0620 19:15:32.703303 3151 kubelet.go:408] "Attempting to sync node with API server" Jun 20 19:15:32.703329 kubelet[3151]: I0620 19:15:32.703315 3151 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jun 20 19:15:32.704143 kubelet[3151]: I0620 19:15:32.704000 3151 kubelet.go:314] "Adding apiserver pod source" Jun 20 19:15:32.704143 kubelet[3151]: I0620 19:15:32.704055 3151 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jun 20 19:15:32.713886 kubelet[3151]: I0620 19:15:32.712603 3151 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jun 20 19:15:32.713886 kubelet[3151]: I0620 19:15:32.713022 3151 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jun 20 19:15:32.715026 kubelet[3151]: I0620 19:15:32.713914 3151 server.go:1274] "Started kubelet" Jun 20 19:15:32.716366 kubelet[3151]: I0620 19:15:32.716341 3151 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jun 
20 19:15:32.723116 kubelet[3151]: I0620 19:15:32.723081 3151 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jun 20 19:15:32.724168 kubelet[3151]: I0620 19:15:32.724148 3151 server.go:449] "Adding debug handlers to kubelet server" Jun 20 19:15:32.725542 kubelet[3151]: I0620 19:15:32.725504 3151 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jun 20 19:15:32.725625 kubelet[3151]: I0620 19:15:32.725616 3151 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jun 20 19:15:32.725718 kubelet[3151]: I0620 19:15:32.725707 3151 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jun 20 19:15:32.725761 kubelet[3151]: I0620 19:15:32.725526 3151 volume_manager.go:289] "Starting Kubelet Volume Manager" Jun 20 19:15:32.726027 kubelet[3151]: I0620 19:15:32.726013 3151 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jun 20 19:15:32.726624 kubelet[3151]: I0620 19:15:32.726161 3151 reconciler.go:26] "Reconciler: start to sync state" Jun 20 19:15:32.727307 kubelet[3151]: I0620 19:15:32.727291 3151 factory.go:221] Registration of the systemd container factory successfully Jun 20 19:15:32.727484 kubelet[3151]: I0620 19:15:32.727469 3151 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jun 20 19:15:32.732849 kubelet[3151]: I0620 19:15:32.732049 3151 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jun 20 19:15:32.732849 kubelet[3151]: E0620 19:15:32.732776 3151 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jun 20 19:15:32.733849 kubelet[3151]: I0620 19:15:32.733107 3151 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jun 20 19:15:32.733849 kubelet[3151]: I0620 19:15:32.733132 3151 status_manager.go:217] "Starting to sync pod status with apiserver" Jun 20 19:15:32.733849 kubelet[3151]: I0620 19:15:32.733149 3151 kubelet.go:2321] "Starting kubelet main sync loop" Jun 20 19:15:32.733849 kubelet[3151]: E0620 19:15:32.733187 3151 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jun 20 19:15:32.737306 kubelet[3151]: I0620 19:15:32.737282 3151 factory.go:221] Registration of the containerd container factory successfully Jun 20 19:15:32.787304 kubelet[3151]: I0620 19:15:32.787277 3151 cpu_manager.go:214] "Starting CPU manager" policy="none" Jun 20 19:15:32.787485 kubelet[3151]: I0620 19:15:32.787313 3151 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jun 20 19:15:32.787485 kubelet[3151]: I0620 19:15:32.787334 3151 state_mem.go:36] "Initialized new in-memory state store" Jun 20 19:15:32.787485 kubelet[3151]: I0620 19:15:32.787480 3151 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jun 20 19:15:32.787586 kubelet[3151]: I0620 19:15:32.787490 3151 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jun 20 19:15:32.787586 kubelet[3151]: I0620 19:15:32.787511 3151 policy_none.go:49] "None policy: Start" Jun 20 19:15:32.788016 kubelet[3151]: I0620 19:15:32.788006 3151 memory_manager.go:170] "Starting memorymanager" policy="None" Jun 20 19:15:32.788093 kubelet[3151]: I0620 19:15:32.788086 3151 state_mem.go:35] "Initializing new in-memory state store" Jun 20 19:15:32.788261 kubelet[3151]: I0620 19:15:32.788248 3151 state_mem.go:75] "Updated machine memory state" Jun 20 19:15:32.792128 kubelet[3151]: I0620 19:15:32.792077 3151 
manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jun 20 19:15:32.792256 kubelet[3151]: I0620 19:15:32.792246 3151 eviction_manager.go:189] "Eviction manager: starting control loop" Jun 20 19:15:32.792289 kubelet[3151]: I0620 19:15:32.792261 3151 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jun 20 19:15:32.792482 kubelet[3151]: I0620 19:15:32.792471 3151 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jun 20 19:15:32.839506 kubelet[3151]: W0620 19:15:32.839440 3151 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jun 20 19:15:32.843568 kubelet[3151]: W0620 19:15:32.843531 3151 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jun 20 19:15:32.843931 kubelet[3151]: W0620 19:15:32.843878 3151 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jun 20 19:15:32.899995 kubelet[3151]: I0620 19:15:32.899816 3151 kubelet_node_status.go:72] "Attempting to register node" node="ci-4344.1.0-a-657d644de8" Jun 20 19:15:32.912875 kubelet[3151]: I0620 19:15:32.912723 3151 kubelet_node_status.go:111] "Node was previously registered" node="ci-4344.1.0-a-657d644de8" Jun 20 19:15:32.912875 kubelet[3151]: I0620 19:15:32.912806 3151 kubelet_node_status.go:75] "Successfully registered node" node="ci-4344.1.0-a-657d644de8" Jun 20 19:15:32.927567 kubelet[3151]: I0620 19:15:32.927536 3151 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cf7c2d19568c175d6d47640b6b8dd4b9-k8s-certs\") pod \"kube-apiserver-ci-4344.1.0-a-657d644de8\" 
(UID: \"cf7c2d19568c175d6d47640b6b8dd4b9\") " pod="kube-system/kube-apiserver-ci-4344.1.0-a-657d644de8" Jun 20 19:15:32.927821 kubelet[3151]: I0620 19:15:32.927752 3151 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/030b87c616fc3a9e69b06c26f903de85-ca-certs\") pod \"kube-controller-manager-ci-4344.1.0-a-657d644de8\" (UID: \"030b87c616fc3a9e69b06c26f903de85\") " pod="kube-system/kube-controller-manager-ci-4344.1.0-a-657d644de8" Jun 20 19:15:32.927821 kubelet[3151]: I0620 19:15:32.927778 3151 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/030b87c616fc3a9e69b06c26f903de85-k8s-certs\") pod \"kube-controller-manager-ci-4344.1.0-a-657d644de8\" (UID: \"030b87c616fc3a9e69b06c26f903de85\") " pod="kube-system/kube-controller-manager-ci-4344.1.0-a-657d644de8" Jun 20 19:15:32.927952 kubelet[3151]: I0620 19:15:32.927856 3151 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/030b87c616fc3a9e69b06c26f903de85-kubeconfig\") pod \"kube-controller-manager-ci-4344.1.0-a-657d644de8\" (UID: \"030b87c616fc3a9e69b06c26f903de85\") " pod="kube-system/kube-controller-manager-ci-4344.1.0-a-657d644de8" Jun 20 19:15:32.927952 kubelet[3151]: I0620 19:15:32.927893 3151 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/030b87c616fc3a9e69b06c26f903de85-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4344.1.0-a-657d644de8\" (UID: \"030b87c616fc3a9e69b06c26f903de85\") " pod="kube-system/kube-controller-manager-ci-4344.1.0-a-657d644de8" Jun 20 19:15:32.927952 kubelet[3151]: I0620 19:15:32.927934 3151 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f771b37cc2dd59f0914161678316ad80-kubeconfig\") pod \"kube-scheduler-ci-4344.1.0-a-657d644de8\" (UID: \"f771b37cc2dd59f0914161678316ad80\") " pod="kube-system/kube-scheduler-ci-4344.1.0-a-657d644de8" Jun 20 19:15:32.928039 kubelet[3151]: I0620 19:15:32.927952 3151 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cf7c2d19568c175d6d47640b6b8dd4b9-ca-certs\") pod \"kube-apiserver-ci-4344.1.0-a-657d644de8\" (UID: \"cf7c2d19568c175d6d47640b6b8dd4b9\") " pod="kube-system/kube-apiserver-ci-4344.1.0-a-657d644de8" Jun 20 19:15:32.928039 kubelet[3151]: I0620 19:15:32.927969 3151 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/030b87c616fc3a9e69b06c26f903de85-flexvolume-dir\") pod \"kube-controller-manager-ci-4344.1.0-a-657d644de8\" (UID: \"030b87c616fc3a9e69b06c26f903de85\") " pod="kube-system/kube-controller-manager-ci-4344.1.0-a-657d644de8" Jun 20 19:15:32.928039 kubelet[3151]: I0620 19:15:32.928013 3151 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cf7c2d19568c175d6d47640b6b8dd4b9-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4344.1.0-a-657d644de8\" (UID: \"cf7c2d19568c175d6d47640b6b8dd4b9\") " pod="kube-system/kube-apiserver-ci-4344.1.0-a-657d644de8" Jun 20 19:15:33.706161 kubelet[3151]: I0620 19:15:33.706114 3151 apiserver.go:52] "Watching apiserver" Jun 20 19:15:33.725838 kubelet[3151]: I0620 19:15:33.725804 3151 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jun 20 19:15:33.782775 kubelet[3151]: W0620 19:15:33.782656 3151 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a 
DNS label is recommended: [must not contain dots] Jun 20 19:15:33.783445 kubelet[3151]: E0620 19:15:33.782970 3151 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4344.1.0-a-657d644de8\" already exists" pod="kube-system/kube-apiserver-ci-4344.1.0-a-657d644de8" Jun 20 19:15:33.801977 kubelet[3151]: I0620 19:15:33.801916 3151 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4344.1.0-a-657d644de8" podStartSLOduration=1.801898703 podStartE2EDuration="1.801898703s" podCreationTimestamp="2025-06-20 19:15:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 19:15:33.792809626 +0000 UTC m=+1.147708571" watchObservedRunningTime="2025-06-20 19:15:33.801898703 +0000 UTC m=+1.156797649" Jun 20 19:15:33.812104 kubelet[3151]: I0620 19:15:33.812051 3151 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4344.1.0-a-657d644de8" podStartSLOduration=1.812032461 podStartE2EDuration="1.812032461s" podCreationTimestamp="2025-06-20 19:15:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 19:15:33.80240985 +0000 UTC m=+1.157308799" watchObservedRunningTime="2025-06-20 19:15:33.812032461 +0000 UTC m=+1.166931398" Jun 20 19:15:33.821117 kubelet[3151]: I0620 19:15:33.821073 3151 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4344.1.0-a-657d644de8" podStartSLOduration=1.8210595170000001 podStartE2EDuration="1.821059517s" podCreationTimestamp="2025-06-20 19:15:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 19:15:33.81243786 +0000 UTC m=+1.167336804" watchObservedRunningTime="2025-06-20 
19:15:33.821059517 +0000 UTC m=+1.175958474" Jun 20 19:15:36.018010 kubelet[3151]: I0620 19:15:36.017972 3151 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jun 20 19:15:36.018509 containerd[1726]: time="2025-06-20T19:15:36.018355414Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jun 20 19:15:36.018704 kubelet[3151]: I0620 19:15:36.018607 3151 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jun 20 19:15:36.168988 systemd[1]: Created slice kubepods-besteffort-pod0de50b73_c582_4a46_a956_cae2d5f132d7.slice - libcontainer container kubepods-besteffort-pod0de50b73_c582_4a46_a956_cae2d5f132d7.slice. Jun 20 19:15:36.247519 kubelet[3151]: I0620 19:15:36.247353 3151 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0de50b73-c582-4a46-a956-cae2d5f132d7-xtables-lock\") pod \"kube-proxy-2bxcc\" (UID: \"0de50b73-c582-4a46-a956-cae2d5f132d7\") " pod="kube-system/kube-proxy-2bxcc" Jun 20 19:15:36.247519 kubelet[3151]: I0620 19:15:36.247400 3151 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0de50b73-c582-4a46-a956-cae2d5f132d7-lib-modules\") pod \"kube-proxy-2bxcc\" (UID: \"0de50b73-c582-4a46-a956-cae2d5f132d7\") " pod="kube-system/kube-proxy-2bxcc" Jun 20 19:15:36.247519 kubelet[3151]: I0620 19:15:36.247421 3151 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6ms2\" (UniqueName: \"kubernetes.io/projected/0de50b73-c582-4a46-a956-cae2d5f132d7-kube-api-access-h6ms2\") pod \"kube-proxy-2bxcc\" (UID: \"0de50b73-c582-4a46-a956-cae2d5f132d7\") " pod="kube-system/kube-proxy-2bxcc" Jun 20 19:15:36.247519 kubelet[3151]: I0620 19:15:36.247440 3151 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/0de50b73-c582-4a46-a956-cae2d5f132d7-kube-proxy\") pod \"kube-proxy-2bxcc\" (UID: \"0de50b73-c582-4a46-a956-cae2d5f132d7\") " pod="kube-system/kube-proxy-2bxcc" Jun 20 19:15:36.352251 kubelet[3151]: E0620 19:15:36.352127 3151 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jun 20 19:15:36.352251 kubelet[3151]: E0620 19:15:36.352158 3151 projected.go:194] Error preparing data for projected volume kube-api-access-h6ms2 for pod kube-system/kube-proxy-2bxcc: configmap "kube-root-ca.crt" not found Jun 20 19:15:36.352251 kubelet[3151]: E0620 19:15:36.352222 3151 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0de50b73-c582-4a46-a956-cae2d5f132d7-kube-api-access-h6ms2 podName:0de50b73-c582-4a46-a956-cae2d5f132d7 nodeName:}" failed. No retries permitted until 2025-06-20 19:15:36.852200677 +0000 UTC m=+4.207099612 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-h6ms2" (UniqueName: "kubernetes.io/projected/0de50b73-c582-4a46-a956-cae2d5f132d7-kube-api-access-h6ms2") pod "kube-proxy-2bxcc" (UID: "0de50b73-c582-4a46-a956-cae2d5f132d7") : configmap "kube-root-ca.crt" not found Jun 20 19:15:37.078737 containerd[1726]: time="2025-06-20T19:15:37.078693784Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2bxcc,Uid:0de50b73-c582-4a46-a956-cae2d5f132d7,Namespace:kube-system,Attempt:0,}" Jun 20 19:15:37.127652 containerd[1726]: time="2025-06-20T19:15:37.126809849Z" level=info msg="connecting to shim 70de7336f3cf83ad3a595e003fc1e0942472af4f31fa28ec58a595a58e2d4f92" address="unix:///run/containerd/s/27b310d1faed353ee16d66635127eb5138456d589e5806abe8fb037bb5a5fcf3" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:15:37.145993 kubelet[3151]: W0620 19:15:37.145959 3151 reflector.go:561] object-"tigera-operator"/"kubernetes-services-endpoint": failed to list *v1.ConfigMap: configmaps "kubernetes-services-endpoint" is forbidden: User "system:node:ci-4344.1.0-a-657d644de8" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node 'ci-4344.1.0-a-657d644de8' and this object Jun 20 19:15:37.146357 kubelet[3151]: E0620 19:15:37.146010 3151 reflector.go:158] "Unhandled Error" err="object-\"tigera-operator\"/\"kubernetes-services-endpoint\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kubernetes-services-endpoint\" is forbidden: User \"system:node:ci-4344.1.0-a-657d644de8\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"tigera-operator\": no relationship found between node 'ci-4344.1.0-a-657d644de8' and this object" logger="UnhandledError" Jun 20 19:15:37.149282 kubelet[3151]: W0620 19:15:37.148267 3151 reflector.go:561] object-"tigera-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User 
"system:node:ci-4344.1.0-a-657d644de8" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node 'ci-4344.1.0-a-657d644de8' and this object Jun 20 19:15:37.149282 kubelet[3151]: E0620 19:15:37.148325 3151 reflector.go:158] "Unhandled Error" err="object-\"tigera-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:ci-4344.1.0-a-657d644de8\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"tigera-operator\": no relationship found between node 'ci-4344.1.0-a-657d644de8' and this object" logger="UnhandledError" Jun 20 19:15:37.151876 kubelet[3151]: I0620 19:15:37.151847 3151 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-skt52\" (UniqueName: \"kubernetes.io/projected/a402c792-a6e9-4439-8e94-cdf844b0bae2-kube-api-access-skt52\") pod \"tigera-operator-6c78c649f6-qw4md\" (UID: \"a402c792-a6e9-4439-8e94-cdf844b0bae2\") " pod="tigera-operator/tigera-operator-6c78c649f6-qw4md" Jun 20 19:15:37.151976 kubelet[3151]: I0620 19:15:37.151889 3151 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/a402c792-a6e9-4439-8e94-cdf844b0bae2-var-lib-calico\") pod \"tigera-operator-6c78c649f6-qw4md\" (UID: \"a402c792-a6e9-4439-8e94-cdf844b0bae2\") " pod="tigera-operator/tigera-operator-6c78c649f6-qw4md" Jun 20 19:15:37.155406 systemd[1]: Created slice kubepods-besteffort-poda402c792_a6e9_4439_8e94_cdf844b0bae2.slice - libcontainer container kubepods-besteffort-poda402c792_a6e9_4439_8e94_cdf844b0bae2.slice. Jun 20 19:15:37.174969 systemd[1]: Started cri-containerd-70de7336f3cf83ad3a595e003fc1e0942472af4f31fa28ec58a595a58e2d4f92.scope - libcontainer container 70de7336f3cf83ad3a595e003fc1e0942472af4f31fa28ec58a595a58e2d4f92. 
Jun 20 19:15:37.198546 containerd[1726]: time="2025-06-20T19:15:37.198509096Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2bxcc,Uid:0de50b73-c582-4a46-a956-cae2d5f132d7,Namespace:kube-system,Attempt:0,} returns sandbox id \"70de7336f3cf83ad3a595e003fc1e0942472af4f31fa28ec58a595a58e2d4f92\"" Jun 20 19:15:37.201194 containerd[1726]: time="2025-06-20T19:15:37.201163853Z" level=info msg="CreateContainer within sandbox \"70de7336f3cf83ad3a595e003fc1e0942472af4f31fa28ec58a595a58e2d4f92\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jun 20 19:15:37.229902 containerd[1726]: time="2025-06-20T19:15:37.229283039Z" level=info msg="Container c4e52742a6d2b9633186a7f64133d88128891c36ecee789b584e0aa22bc79624: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:15:37.246927 containerd[1726]: time="2025-06-20T19:15:37.246889705Z" level=info msg="CreateContainer within sandbox \"70de7336f3cf83ad3a595e003fc1e0942472af4f31fa28ec58a595a58e2d4f92\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"c4e52742a6d2b9633186a7f64133d88128891c36ecee789b584e0aa22bc79624\"" Jun 20 19:15:37.247610 containerd[1726]: time="2025-06-20T19:15:37.247566666Z" level=info msg="StartContainer for \"c4e52742a6d2b9633186a7f64133d88128891c36ecee789b584e0aa22bc79624\"" Jun 20 19:15:37.249112 containerd[1726]: time="2025-06-20T19:15:37.249075749Z" level=info msg="connecting to shim c4e52742a6d2b9633186a7f64133d88128891c36ecee789b584e0aa22bc79624" address="unix:///run/containerd/s/27b310d1faed353ee16d66635127eb5138456d589e5806abe8fb037bb5a5fcf3" protocol=ttrpc version=3 Jun 20 19:15:37.265992 systemd[1]: Started cri-containerd-c4e52742a6d2b9633186a7f64133d88128891c36ecee789b584e0aa22bc79624.scope - libcontainer container c4e52742a6d2b9633186a7f64133d88128891c36ecee789b584e0aa22bc79624. 
Jun 20 19:15:37.298324 containerd[1726]: time="2025-06-20T19:15:37.298284825Z" level=info msg="StartContainer for \"c4e52742a6d2b9633186a7f64133d88128891c36ecee789b584e0aa22bc79624\" returns successfully" Jun 20 19:15:38.062570 containerd[1726]: time="2025-06-20T19:15:38.062525393Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6c78c649f6-qw4md,Uid:a402c792-a6e9-4439-8e94-cdf844b0bae2,Namespace:tigera-operator,Attempt:0,}" Jun 20 19:15:38.105371 containerd[1726]: time="2025-06-20T19:15:38.105289634Z" level=info msg="connecting to shim d7aa00c7d64c807f7e517acb7f9c7b31421a2cb397209ece9e23f2aae5f9b359" address="unix:///run/containerd/s/3e57c4138a696ac6f284fa19871533454fa24a9b50b369ec7a4108ce82fc0480" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:15:38.129999 systemd[1]: Started cri-containerd-d7aa00c7d64c807f7e517acb7f9c7b31421a2cb397209ece9e23f2aae5f9b359.scope - libcontainer container d7aa00c7d64c807f7e517acb7f9c7b31421a2cb397209ece9e23f2aae5f9b359. Jun 20 19:15:38.169744 containerd[1726]: time="2025-06-20T19:15:38.169689527Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6c78c649f6-qw4md,Uid:a402c792-a6e9-4439-8e94-cdf844b0bae2,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"d7aa00c7d64c807f7e517acb7f9c7b31421a2cb397209ece9e23f2aae5f9b359\"" Jun 20 19:15:38.171335 containerd[1726]: time="2025-06-20T19:15:38.171301368Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.1\"" Jun 20 19:15:39.210959 kubelet[3151]: I0620 19:15:39.210887 3151 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-2bxcc" podStartSLOduration=3.210866242 podStartE2EDuration="3.210866242s" podCreationTimestamp="2025-06-20 19:15:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 19:15:37.791778619 +0000 UTC m=+5.146677747" watchObservedRunningTime="2025-06-20 
19:15:39.210866242 +0000 UTC m=+6.565765188" Jun 20 19:15:40.021132 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2857542329.mount: Deactivated successfully. Jun 20 19:15:40.457347 containerd[1726]: time="2025-06-20T19:15:40.457307675Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:15:40.459499 containerd[1726]: time="2025-06-20T19:15:40.459461379Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.1: active requests=0, bytes read=25059858" Jun 20 19:15:40.462336 containerd[1726]: time="2025-06-20T19:15:40.462276484Z" level=info msg="ImageCreate event name:\"sha256:9fe1a04a0e6c440395d63018f1a72bb1ed07d81ed81be41e9b8adcc35a64164c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:15:40.466069 containerd[1726]: time="2025-06-20T19:15:40.466033704Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:a2a468d1ac1b6a7049c1c2505cd933461fcadb127b5c3f98f03bd8e402bce456\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:15:40.466676 containerd[1726]: time="2025-06-20T19:15:40.466467778Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.1\" with image id \"sha256:9fe1a04a0e6c440395d63018f1a72bb1ed07d81ed81be41e9b8adcc35a64164c\", repo tag \"quay.io/tigera/operator:v1.38.1\", repo digest \"quay.io/tigera/operator@sha256:a2a468d1ac1b6a7049c1c2505cd933461fcadb127b5c3f98f03bd8e402bce456\", size \"25055853\" in 2.295128696s" Jun 20 19:15:40.466676 containerd[1726]: time="2025-06-20T19:15:40.466497064Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.1\" returns image reference \"sha256:9fe1a04a0e6c440395d63018f1a72bb1ed07d81ed81be41e9b8adcc35a64164c\"" Jun 20 19:15:40.468521 containerd[1726]: time="2025-06-20T19:15:40.468492592Z" level=info msg="CreateContainer within sandbox \"d7aa00c7d64c807f7e517acb7f9c7b31421a2cb397209ece9e23f2aae5f9b359\" for container 
&ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jun 20 19:15:40.486305 containerd[1726]: time="2025-06-20T19:15:40.486273954Z" level=info msg="Container 0f543ed9c12503f5bb395ababed6645e28f81e8ca8609fd2ea8f42e2376a58e4: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:15:40.490926 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount42567141.mount: Deactivated successfully. Jun 20 19:15:40.499710 containerd[1726]: time="2025-06-20T19:15:40.499683755Z" level=info msg="CreateContainer within sandbox \"d7aa00c7d64c807f7e517acb7f9c7b31421a2cb397209ece9e23f2aae5f9b359\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"0f543ed9c12503f5bb395ababed6645e28f81e8ca8609fd2ea8f42e2376a58e4\"" Jun 20 19:15:40.500282 containerd[1726]: time="2025-06-20T19:15:40.500254772Z" level=info msg="StartContainer for \"0f543ed9c12503f5bb395ababed6645e28f81e8ca8609fd2ea8f42e2376a58e4\"" Jun 20 19:15:40.501033 containerd[1726]: time="2025-06-20T19:15:40.500999946Z" level=info msg="connecting to shim 0f543ed9c12503f5bb395ababed6645e28f81e8ca8609fd2ea8f42e2376a58e4" address="unix:///run/containerd/s/3e57c4138a696ac6f284fa19871533454fa24a9b50b369ec7a4108ce82fc0480" protocol=ttrpc version=3 Jun 20 19:15:40.520099 systemd[1]: Started cri-containerd-0f543ed9c12503f5bb395ababed6645e28f81e8ca8609fd2ea8f42e2376a58e4.scope - libcontainer container 0f543ed9c12503f5bb395ababed6645e28f81e8ca8609fd2ea8f42e2376a58e4. 
Jun 20 19:15:40.548213 containerd[1726]: time="2025-06-20T19:15:40.548173134Z" level=info msg="StartContainer for \"0f543ed9c12503f5bb395ababed6645e28f81e8ca8609fd2ea8f42e2376a58e4\" returns successfully" Jun 20 19:15:40.800435 kubelet[3151]: I0620 19:15:40.800197 3151 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6c78c649f6-qw4md" podStartSLOduration=1.503731844 podStartE2EDuration="3.800178059s" podCreationTimestamp="2025-06-20 19:15:37 +0000 UTC" firstStartedPulling="2025-06-20 19:15:38.170802739 +0000 UTC m=+5.525701691" lastFinishedPulling="2025-06-20 19:15:40.467248964 +0000 UTC m=+7.822147906" observedRunningTime="2025-06-20 19:15:40.800018893 +0000 UTC m=+8.154917843" watchObservedRunningTime="2025-06-20 19:15:40.800178059 +0000 UTC m=+8.155077006" Jun 20 19:15:46.414940 sudo[2174]: pam_unix(sudo:session): session closed for user root Jun 20 19:15:46.611533 sshd[2173]: Connection closed by 10.200.16.10 port 39700 Jun 20 19:15:46.612207 sshd-session[2171]: pam_unix(sshd:session): session closed for user core Jun 20 19:15:46.616785 systemd-logind[1695]: Session 9 logged out. Waiting for processes to exit. Jun 20 19:15:46.618769 systemd[1]: sshd@6-10.200.4.5:22-10.200.16.10:39700.service: Deactivated successfully. Jun 20 19:15:46.623801 systemd[1]: session-9.scope: Deactivated successfully. Jun 20 19:15:46.626128 systemd[1]: session-9.scope: Consumed 4.108s CPU time, 223.8M memory peak. Jun 20 19:15:46.631045 systemd-logind[1695]: Removed session 9. Jun 20 19:15:51.242689 systemd[1]: Created slice kubepods-besteffort-pod7ea61090_1592_4f38_8f5b_e785da7bac63.slice - libcontainer container kubepods-besteffort-pod7ea61090_1592_4f38_8f5b_e785da7bac63.slice. 
Jun 20 19:15:51.337142 kubelet[3151]: I0620 19:15:51.337089 3151 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4fz2x\" (UniqueName: \"kubernetes.io/projected/7ea61090-1592-4f38-8f5b-e785da7bac63-kube-api-access-4fz2x\") pod \"calico-typha-74fdc8bb75-kzq7b\" (UID: \"7ea61090-1592-4f38-8f5b-e785da7bac63\") " pod="calico-system/calico-typha-74fdc8bb75-kzq7b" Jun 20 19:15:51.337142 kubelet[3151]: I0620 19:15:51.337146 3151 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/7ea61090-1592-4f38-8f5b-e785da7bac63-typha-certs\") pod \"calico-typha-74fdc8bb75-kzq7b\" (UID: \"7ea61090-1592-4f38-8f5b-e785da7bac63\") " pod="calico-system/calico-typha-74fdc8bb75-kzq7b" Jun 20 19:15:51.337586 kubelet[3151]: I0620 19:15:51.337168 3151 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7ea61090-1592-4f38-8f5b-e785da7bac63-tigera-ca-bundle\") pod \"calico-typha-74fdc8bb75-kzq7b\" (UID: \"7ea61090-1592-4f38-8f5b-e785da7bac63\") " pod="calico-system/calico-typha-74fdc8bb75-kzq7b" Jun 20 19:15:51.547530 containerd[1726]: time="2025-06-20T19:15:51.547396335Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-74fdc8bb75-kzq7b,Uid:7ea61090-1592-4f38-8f5b-e785da7bac63,Namespace:calico-system,Attempt:0,}" Jun 20 19:15:51.570465 systemd[1]: Created slice kubepods-besteffort-podec682242_f941_400f_ab0d_421682dbb239.slice - libcontainer container kubepods-besteffort-podec682242_f941_400f_ab0d_421682dbb239.slice. 
Jun 20 19:15:51.597385 containerd[1726]: time="2025-06-20T19:15:51.597297993Z" level=info msg="connecting to shim 5cee88e4dde21d3f0fcb66e87653a63a5d37821307d7b7f3f0326b354d85f780" address="unix:///run/containerd/s/a0888431a015d46992948a290d6d3819e2adacec76d46113cdbc010023f9cce4" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:15:51.632991 systemd[1]: Started cri-containerd-5cee88e4dde21d3f0fcb66e87653a63a5d37821307d7b7f3f0326b354d85f780.scope - libcontainer container 5cee88e4dde21d3f0fcb66e87653a63a5d37821307d7b7f3f0326b354d85f780. Jun 20 19:15:51.639259 kubelet[3151]: I0620 19:15:51.639158 3151 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/ec682242-f941-400f-ab0d-421682dbb239-cni-log-dir\") pod \"calico-node-tnznw\" (UID: \"ec682242-f941-400f-ab0d-421682dbb239\") " pod="calico-system/calico-node-tnznw" Jun 20 19:15:51.639259 kubelet[3151]: I0620 19:15:51.639210 3151 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ec682242-f941-400f-ab0d-421682dbb239-lib-modules\") pod \"calico-node-tnznw\" (UID: \"ec682242-f941-400f-ab0d-421682dbb239\") " pod="calico-system/calico-node-tnznw" Jun 20 19:15:51.639259 kubelet[3151]: I0620 19:15:51.639230 3151 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/ec682242-f941-400f-ab0d-421682dbb239-policysync\") pod \"calico-node-tnznw\" (UID: \"ec682242-f941-400f-ab0d-421682dbb239\") " pod="calico-system/calico-node-tnznw" Jun 20 19:15:51.639620 kubelet[3151]: I0620 19:15:51.639344 3151 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/ec682242-f941-400f-ab0d-421682dbb239-var-run-calico\") pod \"calico-node-tnznw\" (UID: 
\"ec682242-f941-400f-ab0d-421682dbb239\") " pod="calico-system/calico-node-tnznw" Jun 20 19:15:51.639620 kubelet[3151]: I0620 19:15:51.639366 3151 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/ec682242-f941-400f-ab0d-421682dbb239-cni-bin-dir\") pod \"calico-node-tnznw\" (UID: \"ec682242-f941-400f-ab0d-421682dbb239\") " pod="calico-system/calico-node-tnznw" Jun 20 19:15:51.639620 kubelet[3151]: I0620 19:15:51.639382 3151 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/ec682242-f941-400f-ab0d-421682dbb239-node-certs\") pod \"calico-node-tnznw\" (UID: \"ec682242-f941-400f-ab0d-421682dbb239\") " pod="calico-system/calico-node-tnznw" Jun 20 19:15:51.639620 kubelet[3151]: I0620 19:15:51.639398 3151 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/ec682242-f941-400f-ab0d-421682dbb239-cni-net-dir\") pod \"calico-node-tnznw\" (UID: \"ec682242-f941-400f-ab0d-421682dbb239\") " pod="calico-system/calico-node-tnznw" Jun 20 19:15:51.639620 kubelet[3151]: I0620 19:15:51.639519 3151 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ec682242-f941-400f-ab0d-421682dbb239-xtables-lock\") pod \"calico-node-tnznw\" (UID: \"ec682242-f941-400f-ab0d-421682dbb239\") " pod="calico-system/calico-node-tnznw" Jun 20 19:15:51.639943 kubelet[3151]: I0620 19:15:51.639542 3151 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/ec682242-f941-400f-ab0d-421682dbb239-var-lib-calico\") pod \"calico-node-tnznw\" (UID: \"ec682242-f941-400f-ab0d-421682dbb239\") " pod="calico-system/calico-node-tnznw" Jun 20 
19:15:51.639943 kubelet[3151]: I0620 19:15:51.639650 3151 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pfn4r\" (UniqueName: \"kubernetes.io/projected/ec682242-f941-400f-ab0d-421682dbb239-kube-api-access-pfn4r\") pod \"calico-node-tnznw\" (UID: \"ec682242-f941-400f-ab0d-421682dbb239\") " pod="calico-system/calico-node-tnznw" Jun 20 19:15:51.639943 kubelet[3151]: I0620 19:15:51.639682 3151 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ec682242-f941-400f-ab0d-421682dbb239-tigera-ca-bundle\") pod \"calico-node-tnznw\" (UID: \"ec682242-f941-400f-ab0d-421682dbb239\") " pod="calico-system/calico-node-tnznw" Jun 20 19:15:51.639943 kubelet[3151]: I0620 19:15:51.639699 3151 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/ec682242-f941-400f-ab0d-421682dbb239-flexvol-driver-host\") pod \"calico-node-tnznw\" (UID: \"ec682242-f941-400f-ab0d-421682dbb239\") " pod="calico-system/calico-node-tnznw" Jun 20 19:15:51.681147 containerd[1726]: time="2025-06-20T19:15:51.681040966Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-74fdc8bb75-kzq7b,Uid:7ea61090-1592-4f38-8f5b-e785da7bac63,Namespace:calico-system,Attempt:0,} returns sandbox id \"5cee88e4dde21d3f0fcb66e87653a63a5d37821307d7b7f3f0326b354d85f780\"" Jun 20 19:15:51.683047 containerd[1726]: time="2025-06-20T19:15:51.683013741Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.1\"" Jun 20 19:15:51.741926 kubelet[3151]: E0620 19:15:51.741894 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:15:51.741926 kubelet[3151]: W0620 19:15:51.741919 3151 driver-call.go:149] FlexVolume: driver call failed: executable: 
/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:15:51.742090 kubelet[3151]: E0620 19:15:51.741942 3151 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Jun 20 19:15:51.849383 kubelet[3151]: E0620 19:15:51.848622 3151 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4b5m5" podUID="48026b39-3c6c-4056-8581-3b693c168b53" Jun 20 19:15:51.876433 containerd[1726]: time="2025-06-20T19:15:51.876142776Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-tnznw,Uid:ec682242-f941-400f-ab0d-421682dbb239,Namespace:calico-system,Attempt:0,}" Jun 20 19:15:51.916282 containerd[1726]: time="2025-06-20T19:15:51.916140602Z" level=info msg="connecting to shim 6fc85b19501f7980d2d07e698e682616d0f2e02897e2595139a4a8a5a0b80290" address="unix:///run/containerd/s/98f90904052c37598a619e40134a52206f6473d54e880cf63f354221d4f0a7d6" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:15:51.935010 systemd[1]: Started cri-containerd-6fc85b19501f7980d2d07e698e682616d0f2e02897e2595139a4a8a5a0b80290.scope - libcontainer container 6fc85b19501f7980d2d07e698e682616d0f2e02897e2595139a4a8a5a0b80290.
Jun 20 19:15:51.960155 containerd[1726]: time="2025-06-20T19:15:51.960079392Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-tnznw,Uid:ec682242-f941-400f-ab0d-421682dbb239,Namespace:calico-system,Attempt:0,} returns sandbox id \"6fc85b19501f7980d2d07e698e682616d0f2e02897e2595139a4a8a5a0b80290\"" Jun 20 19:15:52.043203 kubelet[3151]: E0620 19:15:52.043171 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:15:52.043203 kubelet[3151]: W0620 19:15:52.043194 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:15:52.043203 kubelet[3151]: E0620 19:15:52.043216 3151 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Jun 20 19:15:52.043557 kubelet[3151]: I0620 19:15:52.043246 3151 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/48026b39-3c6c-4056-8581-3b693c168b53-socket-dir\") pod \"csi-node-driver-4b5m5\" (UID: \"48026b39-3c6c-4056-8581-3b693c168b53\") " pod="calico-system/csi-node-driver-4b5m5" Jun 20 19:15:52.043557 kubelet[3151]: E0620 19:15:52.043362 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:15:52.043557 kubelet[3151]: W0620 19:15:52.043371 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:15:52.043557 kubelet[3151]: E0620 19:15:52.043392 3151 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:15:52.043557 kubelet[3151]: I0620 19:15:52.043407 3151 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/48026b39-3c6c-4056-8581-3b693c168b53-varrun\") pod \"csi-node-driver-4b5m5\" (UID: \"48026b39-3c6c-4056-8581-3b693c168b53\") " pod="calico-system/csi-node-driver-4b5m5" Jun 20 19:15:52.043696 kubelet[3151]: E0620 19:15:52.043564 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:15:52.043696 kubelet[3151]: W0620 19:15:52.043574 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:15:52.043696 kubelet[3151]: E0620 19:15:52.043588 3151 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:15:52.043772 kubelet[3151]: E0620 19:15:52.043732 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:15:52.043772 kubelet[3151]: W0620 19:15:52.043738 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:15:52.043772 kubelet[3151]: E0620 19:15:52.043751 3151 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:15:52.043903 kubelet[3151]: E0620 19:15:52.043892 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:15:52.043903 kubelet[3151]: W0620 19:15:52.043902 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:15:52.043957 kubelet[3151]: E0620 19:15:52.043915 3151 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:15:52.044020 kubelet[3151]: I0620 19:15:52.043933 3151 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/48026b39-3c6c-4056-8581-3b693c168b53-kubelet-dir\") pod \"csi-node-driver-4b5m5\" (UID: \"48026b39-3c6c-4056-8581-3b693c168b53\") " pod="calico-system/csi-node-driver-4b5m5" Jun 20 19:15:52.044074 kubelet[3151]: E0620 19:15:52.044056 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:15:52.044074 kubelet[3151]: W0620 19:15:52.044063 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:15:52.044139 kubelet[3151]: E0620 19:15:52.044074 3151 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:15:52.044216 kubelet[3151]: E0620 19:15:52.044206 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:15:52.044216 kubelet[3151]: W0620 19:15:52.044214 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:15:52.044279 kubelet[3151]: E0620 19:15:52.044223 3151 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:15:52.044375 kubelet[3151]: E0620 19:15:52.044359 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:15:52.044375 kubelet[3151]: W0620 19:15:52.044368 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:15:52.044473 kubelet[3151]: E0620 19:15:52.044384 3151 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:15:52.044473 kubelet[3151]: I0620 19:15:52.044402 3151 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9d7m\" (UniqueName: \"kubernetes.io/projected/48026b39-3c6c-4056-8581-3b693c168b53-kube-api-access-t9d7m\") pod \"csi-node-driver-4b5m5\" (UID: \"48026b39-3c6c-4056-8581-3b693c168b53\") " pod="calico-system/csi-node-driver-4b5m5" Jun 20 19:15:52.044613 kubelet[3151]: E0620 19:15:52.044569 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:15:52.044613 kubelet[3151]: W0620 19:15:52.044576 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:15:52.044613 kubelet[3151]: E0620 19:15:52.044593 3151 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:15:52.044613 kubelet[3151]: I0620 19:15:52.044608 3151 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/48026b39-3c6c-4056-8581-3b693c168b53-registration-dir\") pod \"csi-node-driver-4b5m5\" (UID: \"48026b39-3c6c-4056-8581-3b693c168b53\") " pod="calico-system/csi-node-driver-4b5m5" Jun 20 19:15:52.044852 kubelet[3151]: E0620 19:15:52.044757 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:15:52.044852 kubelet[3151]: W0620 19:15:52.044764 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:15:52.044852 kubelet[3151]: E0620 19:15:52.044777 3151 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:15:52.045169 kubelet[3151]: E0620 19:15:52.044891 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:15:52.045169 kubelet[3151]: W0620 19:15:52.044899 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:15:52.045169 kubelet[3151]: E0620 19:15:52.044911 3151 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:15:52.045169 kubelet[3151]: E0620 19:15:52.045017 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:15:52.045169 kubelet[3151]: W0620 19:15:52.045022 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:15:52.045169 kubelet[3151]: E0620 19:15:52.045031 3151 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:15:52.045430 kubelet[3151]: E0620 19:15:52.045418 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:15:52.045552 kubelet[3151]: W0620 19:15:52.045473 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:15:52.045552 kubelet[3151]: E0620 19:15:52.045498 3151 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:15:52.045917 kubelet[3151]: E0620 19:15:52.045904 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:15:52.046084 kubelet[3151]: W0620 19:15:52.046009 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:15:52.046084 kubelet[3151]: E0620 19:15:52.046041 3151 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:15:52.046451 kubelet[3151]: E0620 19:15:52.046440 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:15:52.046592 kubelet[3151]: W0620 19:15:52.046515 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:15:52.046592 kubelet[3151]: E0620 19:15:52.046530 3151 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:15:52.147489 kubelet[3151]: E0620 19:15:52.146785 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:15:52.147489 kubelet[3151]: W0620 19:15:52.146821 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:15:52.147489 kubelet[3151]: E0620 19:15:52.146875 3151 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:15:52.147489 kubelet[3151]: E0620 19:15:52.147106 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:15:52.147489 kubelet[3151]: W0620 19:15:52.147128 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:15:52.147489 kubelet[3151]: E0620 19:15:52.147141 3151 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:15:52.147489 kubelet[3151]: E0620 19:15:52.147291 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:15:52.147489 kubelet[3151]: W0620 19:15:52.147297 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:15:52.147489 kubelet[3151]: E0620 19:15:52.147304 3151 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:15:52.148242 kubelet[3151]: E0620 19:15:52.147901 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:15:52.148242 kubelet[3151]: W0620 19:15:52.147923 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:15:52.148242 kubelet[3151]: E0620 19:15:52.148000 3151 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:15:52.148690 kubelet[3151]: E0620 19:15:52.148678 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:15:52.148908 kubelet[3151]: W0620 19:15:52.148897 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:15:52.149201 kubelet[3151]: E0620 19:15:52.149098 3151 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:15:52.149412 kubelet[3151]: E0620 19:15:52.149384 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:15:52.149581 kubelet[3151]: W0620 19:15:52.149457 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:15:52.150788 kubelet[3151]: E0620 19:15:52.150151 3151 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:15:52.150788 kubelet[3151]: E0620 19:15:52.150418 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:15:52.150788 kubelet[3151]: W0620 19:15:52.150426 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:15:52.150788 kubelet[3151]: E0620 19:15:52.150524 3151 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:15:52.150788 kubelet[3151]: E0620 19:15:52.150564 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:15:52.150788 kubelet[3151]: W0620 19:15:52.150569 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:15:52.150788 kubelet[3151]: E0620 19:15:52.150708 3151 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:15:52.151285 kubelet[3151]: E0620 19:15:52.151048 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:15:52.151285 kubelet[3151]: W0620 19:15:52.151057 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:15:52.151285 kubelet[3151]: E0620 19:15:52.151116 3151 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:15:52.151285 kubelet[3151]: E0620 19:15:52.151207 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:15:52.151285 kubelet[3151]: W0620 19:15:52.151211 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:15:52.151285 kubelet[3151]: E0620 19:15:52.151220 3151 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:15:52.151562 kubelet[3151]: E0620 19:15:52.151476 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:15:52.151562 kubelet[3151]: W0620 19:15:52.151482 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:15:52.151562 kubelet[3151]: E0620 19:15:52.151489 3151 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:15:52.151726 kubelet[3151]: E0620 19:15:52.151680 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:15:52.151726 kubelet[3151]: W0620 19:15:52.151688 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:15:52.151816 kubelet[3151]: E0620 19:15:52.151782 3151 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:15:52.151946 kubelet[3151]: E0620 19:15:52.151939 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:15:52.152030 kubelet[3151]: W0620 19:15:52.152023 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:15:52.152108 kubelet[3151]: E0620 19:15:52.152100 3151 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:15:52.152231 kubelet[3151]: E0620 19:15:52.152226 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:15:52.152274 kubelet[3151]: W0620 19:15:52.152264 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:15:52.152374 kubelet[3151]: E0620 19:15:52.152367 3151 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:15:52.152465 kubelet[3151]: E0620 19:15:52.152452 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:15:52.152465 kubelet[3151]: W0620 19:15:52.152458 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:15:52.152607 kubelet[3151]: E0620 19:15:52.152596 3151 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:15:52.152675 kubelet[3151]: E0620 19:15:52.152671 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:15:52.152731 kubelet[3151]: W0620 19:15:52.152709 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:15:52.152873 kubelet[3151]: E0620 19:15:52.152768 3151 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:15:52.153010 kubelet[3151]: E0620 19:15:52.153004 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:15:52.153051 kubelet[3151]: W0620 19:15:52.153045 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:15:52.153141 kubelet[3151]: E0620 19:15:52.153098 3151 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:15:52.153275 kubelet[3151]: E0620 19:15:52.153260 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:15:52.153275 kubelet[3151]: W0620 19:15:52.153267 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:15:52.153407 kubelet[3151]: E0620 19:15:52.153333 3151 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:15:52.153708 kubelet[3151]: E0620 19:15:52.153612 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:15:52.153708 kubelet[3151]: W0620 19:15:52.153619 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:15:52.153708 kubelet[3151]: E0620 19:15:52.153633 3151 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:15:52.153880 kubelet[3151]: E0620 19:15:52.153874 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:15:52.153932 kubelet[3151]: W0620 19:15:52.153905 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:15:52.154041 kubelet[3151]: E0620 19:15:52.153982 3151 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:15:52.154140 kubelet[3151]: E0620 19:15:52.154126 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:15:52.154140 kubelet[3151]: W0620 19:15:52.154133 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:15:52.154303 kubelet[3151]: E0620 19:15:52.154279 3151 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:15:52.154402 kubelet[3151]: E0620 19:15:52.154365 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:15:52.154402 kubelet[3151]: W0620 19:15:52.154371 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:15:52.154481 kubelet[3151]: E0620 19:15:52.154462 3151 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:15:52.154610 kubelet[3151]: E0620 19:15:52.154591 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:15:52.154610 kubelet[3151]: W0620 19:15:52.154600 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:15:52.154724 kubelet[3151]: E0620 19:15:52.154672 3151 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:15:52.154915 kubelet[3151]: E0620 19:15:52.154850 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:15:52.154915 kubelet[3151]: W0620 19:15:52.154857 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:15:52.154915 kubelet[3151]: E0620 19:15:52.154871 3151 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:15:52.155157 kubelet[3151]: E0620 19:15:52.155103 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:15:52.155157 kubelet[3151]: W0620 19:15:52.155110 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:15:52.155157 kubelet[3151]: E0620 19:15:52.155118 3151 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:15:52.163208 kubelet[3151]: E0620 19:15:52.163151 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:15:52.163208 kubelet[3151]: W0620 19:15:52.163166 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:15:52.163208 kubelet[3151]: E0620 19:15:52.163178 3151 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:15:52.981631 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3562795034.mount: Deactivated successfully. 
Jun 20 19:15:53.733986 kubelet[3151]: E0620 19:15:53.733916 3151 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4b5m5" podUID="48026b39-3c6c-4056-8581-3b693c168b53" Jun 20 19:15:55.734117 kubelet[3151]: E0620 19:15:55.733982 3151 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4b5m5" podUID="48026b39-3c6c-4056-8581-3b693c168b53" Jun 20 19:15:57.734419 kubelet[3151]: E0620 19:15:57.734366 3151 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4b5m5" podUID="48026b39-3c6c-4056-8581-3b693c168b53" Jun 20 19:15:59.734017 kubelet[3151]: E0620 19:15:59.733968 3151 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4b5m5" podUID="48026b39-3c6c-4056-8581-3b693c168b53" Jun 20 19:16:01.733999 kubelet[3151]: E0620 19:16:01.733960 3151 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4b5m5" podUID="48026b39-3c6c-4056-8581-3b693c168b53" Jun 20 19:16:03.733645 kubelet[3151]: E0620 19:16:03.733597 3151 pod_workers.go:1301] 
"Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4b5m5" podUID="48026b39-3c6c-4056-8581-3b693c168b53" Jun 20 19:16:05.733983 kubelet[3151]: E0620 19:16:05.733929 3151 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4b5m5" podUID="48026b39-3c6c-4056-8581-3b693c168b53" Jun 20 19:16:07.733675 kubelet[3151]: E0620 19:16:07.733621 3151 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4b5m5" podUID="48026b39-3c6c-4056-8581-3b693c168b53" Jun 20 19:16:09.733695 kubelet[3151]: E0620 19:16:09.733641 3151 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4b5m5" podUID="48026b39-3c6c-4056-8581-3b693c168b53" Jun 20 19:16:11.266776 containerd[1726]: time="2025-06-20T19:16:11.266723724Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:16:11.269125 containerd[1726]: time="2025-06-20T19:16:11.269084919Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.1: active requests=0, bytes read=35227888" Jun 20 19:16:11.272279 containerd[1726]: time="2025-06-20T19:16:11.272237057Z" level=info msg="ImageCreate event 
name:\"sha256:11d920cd1d8c935bdf3cb40dd9e67f22c3624df627bdd58cf6d0e503230688d7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:16:11.275428 containerd[1726]: time="2025-06-20T19:16:11.275381131Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f1edaa4eaa6349a958c409e0dab2d6ee7d1234e5f0eeefc9f508d0b1c9d7d0d1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:16:11.275769 containerd[1726]: time="2025-06-20T19:16:11.275743639Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.1\" with image id \"sha256:11d920cd1d8c935bdf3cb40dd9e67f22c3624df627bdd58cf6d0e503230688d7\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f1edaa4eaa6349a958c409e0dab2d6ee7d1234e5f0eeefc9f508d0b1c9d7d0d1\", size \"35227742\" in 19.59267716s" Jun 20 19:16:11.275817 containerd[1726]: time="2025-06-20T19:16:11.275778857Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.1\" returns image reference \"sha256:11d920cd1d8c935bdf3cb40dd9e67f22c3624df627bdd58cf6d0e503230688d7\"" Jun 20 19:16:11.277649 containerd[1726]: time="2025-06-20T19:16:11.277620685Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.1\"" Jun 20 19:16:11.291738 containerd[1726]: time="2025-06-20T19:16:11.291708186Z" level=info msg="CreateContainer within sandbox \"5cee88e4dde21d3f0fcb66e87653a63a5d37821307d7b7f3f0326b354d85f780\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jun 20 19:16:11.308845 containerd[1726]: time="2025-06-20T19:16:11.307633354Z" level=info msg="Container 8c4e718be7c5c3081345764d036baed92b5c91bd7bf3393b64393676f448f073: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:16:11.325156 containerd[1726]: time="2025-06-20T19:16:11.325126484Z" level=info msg="CreateContainer within sandbox \"5cee88e4dde21d3f0fcb66e87653a63a5d37821307d7b7f3f0326b354d85f780\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} 
returns container id \"8c4e718be7c5c3081345764d036baed92b5c91bd7bf3393b64393676f448f073\"" Jun 20 19:16:11.325627 containerd[1726]: time="2025-06-20T19:16:11.325606192Z" level=info msg="StartContainer for \"8c4e718be7c5c3081345764d036baed92b5c91bd7bf3393b64393676f448f073\"" Jun 20 19:16:11.326842 containerd[1726]: time="2025-06-20T19:16:11.326794915Z" level=info msg="connecting to shim 8c4e718be7c5c3081345764d036baed92b5c91bd7bf3393b64393676f448f073" address="unix:///run/containerd/s/a0888431a015d46992948a290d6d3819e2adacec76d46113cdbc010023f9cce4" protocol=ttrpc version=3 Jun 20 19:16:11.346976 systemd[1]: Started cri-containerd-8c4e718be7c5c3081345764d036baed92b5c91bd7bf3393b64393676f448f073.scope - libcontainer container 8c4e718be7c5c3081345764d036baed92b5c91bd7bf3393b64393676f448f073. Jun 20 19:16:11.395511 containerd[1726]: time="2025-06-20T19:16:11.395410396Z" level=info msg="StartContainer for \"8c4e718be7c5c3081345764d036baed92b5c91bd7bf3393b64393676f448f073\" returns successfully" Jun 20 19:16:11.733587 kubelet[3151]: E0620 19:16:11.733525 3151 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4b5m5" podUID="48026b39-3c6c-4056-8581-3b693c168b53" Jun 20 19:16:11.865390 kubelet[3151]: E0620 19:16:11.865330 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:16:11.865390 kubelet[3151]: W0620 19:16:11.865355 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:16:11.865870 kubelet[3151]: E0620 19:16:11.865473 3151 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from 
directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:16:11.865870 kubelet[3151]: E0620 19:16:11.865812 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:16:11.866042 kubelet[3151]: W0620 19:16:11.865821 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:16:11.866042 kubelet[3151]: E0620 19:16:11.865952 3151 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:16:11.866258 kubelet[3151]: E0620 19:16:11.866206 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:16:11.866258 kubelet[3151]: W0620 19:16:11.866217 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:16:11.866258 kubelet[3151]: E0620 19:16:11.866229 3151 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:16:11.866580 kubelet[3151]: E0620 19:16:11.866546 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:16:11.866580 kubelet[3151]: W0620 19:16:11.866573 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:16:11.866658 kubelet[3151]: E0620 19:16:11.866585 3151 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:16:11.867230 kubelet[3151]: E0620 19:16:11.867153 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:16:11.867230 kubelet[3151]: W0620 19:16:11.867174 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:16:11.867230 kubelet[3151]: E0620 19:16:11.867198 3151 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:16:11.867482 kubelet[3151]: E0620 19:16:11.867476 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:16:11.867525 kubelet[3151]: W0620 19:16:11.867519 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:16:11.867525 kubelet[3151]: E0620 19:16:11.867548 3151 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:16:11.867804 kubelet[3151]: E0620 19:16:11.867740 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:16:11.867804 kubelet[3151]: W0620 19:16:11.867748 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:16:11.867804 kubelet[3151]: E0620 19:16:11.867760 3151 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:16:11.867977 kubelet[3151]: E0620 19:16:11.867956 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:16:11.868086 kubelet[3151]: W0620 19:16:11.867964 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:16:11.868086 kubelet[3151]: E0620 19:16:11.868018 3151 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:16:11.868204 kubelet[3151]: E0620 19:16:11.868197 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:16:11.868311 kubelet[3151]: W0620 19:16:11.868239 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:16:11.868311 kubelet[3151]: E0620 19:16:11.868247 3151 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:16:11.868438 kubelet[3151]: E0620 19:16:11.868402 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:16:11.868438 kubelet[3151]: W0620 19:16:11.868409 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:16:11.868438 kubelet[3151]: E0620 19:16:11.868416 3151 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:16:11.868655 kubelet[3151]: E0620 19:16:11.868599 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:16:11.868655 kubelet[3151]: W0620 19:16:11.868606 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:16:11.868655 kubelet[3151]: E0620 19:16:11.868613 3151 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:16:11.868808 kubelet[3151]: E0620 19:16:11.868771 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:16:11.868808 kubelet[3151]: W0620 19:16:11.868778 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:16:11.868808 kubelet[3151]: E0620 19:16:11.868787 3151 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:16:11.869058 kubelet[3151]: E0620 19:16:11.868988 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:16:11.869058 kubelet[3151]: W0620 19:16:11.868995 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:16:11.869058 kubelet[3151]: E0620 19:16:11.869007 3151 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:16:11.869212 kubelet[3151]: E0620 19:16:11.869181 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:16:11.869212 kubelet[3151]: W0620 19:16:11.869188 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:16:11.869212 kubelet[3151]: E0620 19:16:11.869195 3151 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:16:11.869448 kubelet[3151]: E0620 19:16:11.869384 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:16:11.869448 kubelet[3151]: W0620 19:16:11.869391 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:16:11.869448 kubelet[3151]: E0620 19:16:11.869398 3151 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:16:11.873408 kubelet[3151]: I0620 19:16:11.873170 3151 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-74fdc8bb75-kzq7b" podStartSLOduration=1.278899601 podStartE2EDuration="20.873154569s" podCreationTimestamp="2025-06-20 19:15:51 +0000 UTC" firstStartedPulling="2025-06-20 19:15:51.682353837 +0000 UTC m=+19.037252768" lastFinishedPulling="2025-06-20 19:16:11.276608793 +0000 UTC m=+38.631507736" observedRunningTime="2025-06-20 19:16:11.859950198 +0000 UTC m=+39.214849142" watchObservedRunningTime="2025-06-20 19:16:11.873154569 +0000 UTC m=+39.228053514" Jun 20 19:16:11.968005 kubelet[3151]: E0620 19:16:11.967971 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:16:11.968005 kubelet[3151]: W0620 19:16:11.967996 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:16:11.968203 kubelet[3151]: E0620 19:16:11.968019 3151 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:16:11.968203 kubelet[3151]: E0620 19:16:11.968171 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:16:11.968203 kubelet[3151]: W0620 19:16:11.968177 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:16:11.968203 kubelet[3151]: E0620 19:16:11.968185 3151 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:16:11.968353 kubelet[3151]: E0620 19:16:11.968339 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:16:11.968353 kubelet[3151]: W0620 19:16:11.968351 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:16:11.968414 kubelet[3151]: E0620 19:16:11.968365 3151 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:16:11.968474 kubelet[3151]: E0620 19:16:11.968463 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:16:11.968474 kubelet[3151]: W0620 19:16:11.968471 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:16:11.968527 kubelet[3151]: E0620 19:16:11.968478 3151 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:16:11.968592 kubelet[3151]: E0620 19:16:11.968581 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:16:11.968592 kubelet[3151]: W0620 19:16:11.968589 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:16:11.968645 kubelet[3151]: E0620 19:16:11.968598 3151 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:16:11.968758 kubelet[3151]: E0620 19:16:11.968732 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:16:11.968758 kubelet[3151]: W0620 19:16:11.968754 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:16:11.968821 kubelet[3151]: E0620 19:16:11.968772 3151 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:16:11.968955 kubelet[3151]: E0620 19:16:11.968945 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:16:11.968989 kubelet[3151]: W0620 19:16:11.968956 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:16:11.968989 kubelet[3151]: E0620 19:16:11.968965 3151 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:16:11.969156 kubelet[3151]: E0620 19:16:11.969093 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:16:11.969156 kubelet[3151]: W0620 19:16:11.969100 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:16:11.969156 kubelet[3151]: E0620 19:16:11.969107 3151 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:16:11.969253 kubelet[3151]: E0620 19:16:11.969196 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:16:11.969253 kubelet[3151]: W0620 19:16:11.969201 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:16:11.969253 kubelet[3151]: E0620 19:16:11.969211 3151 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:16:11.969377 kubelet[3151]: E0620 19:16:11.969292 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:16:11.969377 kubelet[3151]: W0620 19:16:11.969297 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:16:11.969377 kubelet[3151]: E0620 19:16:11.969307 3151 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:16:11.969462 kubelet[3151]: E0620 19:16:11.969420 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:16:11.969462 kubelet[3151]: W0620 19:16:11.969425 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:16:11.969462 kubelet[3151]: E0620 19:16:11.969437 3151 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:16:11.969820 kubelet[3151]: E0620 19:16:11.969713 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:16:11.969820 kubelet[3151]: W0620 19:16:11.969725 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:16:11.969820 kubelet[3151]: E0620 19:16:11.969738 3151 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:16:11.969941 kubelet[3151]: E0620 19:16:11.969856 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:16:11.969941 kubelet[3151]: W0620 19:16:11.969863 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:16:11.969992 kubelet[3151]: E0620 19:16:11.969946 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:16:11.969992 kubelet[3151]: W0620 19:16:11.969951 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:16:11.969992 kubelet[3151]: E0620 19:16:11.969958 3151 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:16:11.970060 kubelet[3151]: E0620 19:16:11.970041 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:16:11.970060 kubelet[3151]: W0620 19:16:11.970046 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:16:11.970060 kubelet[3151]: E0620 19:16:11.970052 3151 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:16:11.970132 kubelet[3151]: E0620 19:16:11.970123 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:16:11.970132 kubelet[3151]: W0620 19:16:11.970128 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:16:11.970174 kubelet[3151]: E0620 19:16:11.970133 3151 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:16:11.970269 kubelet[3151]: E0620 19:16:11.970234 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:16:11.970269 kubelet[3151]: W0620 19:16:11.970240 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:16:11.970269 kubelet[3151]: E0620 19:16:11.970246 3151 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:16:11.970269 kubelet[3151]: E0620 19:16:11.970247 3151 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:16:11.970518 kubelet[3151]: E0620 19:16:11.970480 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:16:11.970518 kubelet[3151]: W0620 19:16:11.970505 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:16:11.970518 kubelet[3151]: E0620 19:16:11.970512 3151 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:16:12.872577 containerd[1726]: time="2025-06-20T19:16:12.872523556Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:16:12.874791 kubelet[3151]: E0620 19:16:12.874657 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:16:12.874791 kubelet[3151]: W0620 19:16:12.874692 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:16:12.874791 kubelet[3151]: E0620 19:16:12.874719 3151 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:16:12.875151 kubelet[3151]: E0620 19:16:12.874870 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:16:12.875151 kubelet[3151]: W0620 19:16:12.874876 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:16:12.875151 kubelet[3151]: E0620 19:16:12.874886 3151 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:16:12.875151 kubelet[3151]: E0620 19:16:12.874988 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:16:12.875151 kubelet[3151]: W0620 19:16:12.874993 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:16:12.875151 kubelet[3151]: E0620 19:16:12.875000 3151 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:16:12.875151 kubelet[3151]: E0620 19:16:12.875087 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:16:12.875151 kubelet[3151]: W0620 19:16:12.875092 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:16:12.875151 kubelet[3151]: E0620 19:16:12.875099 3151 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:16:12.875369 kubelet[3151]: E0620 19:16:12.875196 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:16:12.875369 kubelet[3151]: W0620 19:16:12.875201 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:16:12.875369 kubelet[3151]: E0620 19:16:12.875208 3151 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:16:12.875369 kubelet[3151]: E0620 19:16:12.875302 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:16:12.875369 kubelet[3151]: W0620 19:16:12.875307 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:16:12.875369 kubelet[3151]: E0620 19:16:12.875313 3151 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:16:12.875515 kubelet[3151]: E0620 19:16:12.875399 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:16:12.875515 kubelet[3151]: W0620 19:16:12.875404 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:16:12.875515 kubelet[3151]: E0620 19:16:12.875411 3151 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:16:12.877537 containerd[1726]: time="2025-06-20T19:16:12.877508715Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.1: active requests=0, bytes read=4441627" Jun 20 19:16:12.880405 containerd[1726]: time="2025-06-20T19:16:12.880319091Z" level=info msg="ImageCreate event name:\"sha256:2eb0d46821080fd806e1b7f8ca42889800fcb3f0af912b6fbb09a13b21454d48\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:16:12.884372 containerd[1726]: time="2025-06-20T19:16:12.884313790Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:b9246fe925ee5b8a5c7dfe1d1c3c29063cbfd512663088b135a015828c20401e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:16:12.884793 containerd[1726]: time="2025-06-20T19:16:12.884672735Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.1\" with image id \"sha256:2eb0d46821080fd806e1b7f8ca42889800fcb3f0af912b6fbb09a13b21454d48\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:b9246fe925ee5b8a5c7dfe1d1c3c29063cbfd512663088b135a015828c20401e\", size \"5934290\" in 1.607012702s" Jun 20 19:16:12.884793 containerd[1726]: time="2025-06-20T19:16:12.884713486Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.1\" returns image reference \"sha256:2eb0d46821080fd806e1b7f8ca42889800fcb3f0af912b6fbb09a13b21454d48\"" Jun 20 19:16:12.887049 containerd[1726]: time="2025-06-20T19:16:12.887017970Z" level=info msg="CreateContainer within sandbox \"6fc85b19501f7980d2d07e698e682616d0f2e02897e2595139a4a8a5a0b80290\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jun 20 19:16:12.906092 containerd[1726]: time="2025-06-20T19:16:12.906057167Z" level=info msg="Container 8c71d2be3d4bfe2efff23c84ddfc9adae42f043623fcb932d8597b3fb15683be: CDI devices from CRI 
Config.CDIDevices: []" Jun 20 19:16:12.912405 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2872857490.mount: Deactivated successfully. Jun 20 19:16:12.923970 containerd[1726]: time="2025-06-20T19:16:12.923939656Z" level=info msg="CreateContainer within sandbox \"6fc85b19501f7980d2d07e698e682616d0f2e02897e2595139a4a8a5a0b80290\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"8c71d2be3d4bfe2efff23c84ddfc9adae42f043623fcb932d8597b3fb15683be\"" Jun 20 19:16:12.925863 containerd[1726]: time="2025-06-20T19:16:12.924436768Z" level=info msg="StartContainer for \"8c71d2be3d4bfe2efff23c84ddfc9adae42f043623fcb932d8597b3fb15683be\"" Jun 20 19:16:12.925935 containerd[1726]: time="2025-06-20T19:16:12.925853177Z" level=info msg="connecting to shim 8c71d2be3d4bfe2efff23c84ddfc9adae42f043623fcb932d8597b3fb15683be" address="unix:///run/containerd/s/98f90904052c37598a619e40134a52206f6473d54e880cf63f354221d4f0a7d6" protocol=ttrpc version=3 Jun 20 19:16:12.949992 systemd[1]: Started cri-containerd-8c71d2be3d4bfe2efff23c84ddfc9adae42f043623fcb932d8597b3fb15683be.scope - libcontainer container 8c71d2be3d4bfe2efff23c84ddfc9adae42f043623fcb932d8597b3fb15683be. Jun 20 19:16:12.974110 kubelet[3151]: E0620 19:16:12.974088 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:16:12.974293 kubelet[3151]: W0620 19:16:12.974149 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:16:12.974293 kubelet[3151]: E0620 19:16:12.974173 3151 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:16:12.980166 kubelet[3151]: E0620 19:16:12.980145 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:16:12.980166 kubelet[3151]: W0620 19:16:12.980160 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:16:12.980258 kubelet[3151]: E0620 19:16:12.980172 3151 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:16:12.982960 containerd[1726]: time="2025-06-20T19:16:12.982856717Z" level=info msg="StartContainer for \"8c71d2be3d4bfe2efff23c84ddfc9adae42f043623fcb932d8597b3fb15683be\" returns successfully" Jun 20 19:16:12.987538 systemd[1]: cri-containerd-8c71d2be3d4bfe2efff23c84ddfc9adae42f043623fcb932d8597b3fb15683be.scope: Deactivated successfully. Jun 20 19:16:12.989890 containerd[1726]: time="2025-06-20T19:16:12.989849988Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8c71d2be3d4bfe2efff23c84ddfc9adae42f043623fcb932d8597b3fb15683be\" id:\"8c71d2be3d4bfe2efff23c84ddfc9adae42f043623fcb932d8597b3fb15683be\" pid:3846 exited_at:{seconds:1750446972 nanos:989140413}" Jun 20 19:16:12.989975 containerd[1726]: time="2025-06-20T19:16:12.989905865Z" level=info msg="received exit event container_id:\"8c71d2be3d4bfe2efff23c84ddfc9adae42f043623fcb932d8597b3fb15683be\" id:\"8c71d2be3d4bfe2efff23c84ddfc9adae42f043623fcb932d8597b3fb15683be\" pid:3846 exited_at:{seconds:1750446972 nanos:989140413}" Jun 20 19:16:13.007748 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8c71d2be3d4bfe2efff23c84ddfc9adae42f043623fcb932d8597b3fb15683be-rootfs.mount: Deactivated successfully. 
Jun 20 19:16:13.734408 kubelet[3151]: E0620 19:16:13.734359 3151 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4b5m5" podUID="48026b39-3c6c-4056-8581-3b693c168b53" Jun 20 19:16:13.854230 containerd[1726]: time="2025-06-20T19:16:13.854150439Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.1\"" Jun 20 19:16:15.734411 kubelet[3151]: E0620 19:16:15.734261 3151 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4b5m5" podUID="48026b39-3c6c-4056-8581-3b693c168b53" Jun 20 19:16:17.521059 containerd[1726]: time="2025-06-20T19:16:17.521003047Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:16:17.523270 containerd[1726]: time="2025-06-20T19:16:17.523225351Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.1: active requests=0, bytes read=70405879" Jun 20 19:16:17.525865 containerd[1726]: time="2025-06-20T19:16:17.525794080Z" level=info msg="ImageCreate event name:\"sha256:0d2cd976ff6ee711927e02b1c2ba0b532275ff85d5dc05fc413cc660d5bec68e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:16:17.530040 containerd[1726]: time="2025-06-20T19:16:17.529973289Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:930b33311eec7523e36d95977281681d74d33efff937302b26516b2bc03a5fe9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:16:17.530689 containerd[1726]: time="2025-06-20T19:16:17.530534547Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.1\" 
with image id \"sha256:0d2cd976ff6ee711927e02b1c2ba0b532275ff85d5dc05fc413cc660d5bec68e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:930b33311eec7523e36d95977281681d74d33efff937302b26516b2bc03a5fe9\", size \"71898582\" in 3.676335146s" Jun 20 19:16:17.530689 containerd[1726]: time="2025-06-20T19:16:17.530565825Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.1\" returns image reference \"sha256:0d2cd976ff6ee711927e02b1c2ba0b532275ff85d5dc05fc413cc660d5bec68e\"" Jun 20 19:16:17.533295 containerd[1726]: time="2025-06-20T19:16:17.533256475Z" level=info msg="CreateContainer within sandbox \"6fc85b19501f7980d2d07e698e682616d0f2e02897e2595139a4a8a5a0b80290\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jun 20 19:16:17.550650 containerd[1726]: time="2025-06-20T19:16:17.550615995Z" level=info msg="Container 853d9edd5241c3422a0b88978c7b32dbbeff253fbf49040e3949c156b8aa3058: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:16:17.556901 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3862992486.mount: Deactivated successfully. 
Jun 20 19:16:17.569686 containerd[1726]: time="2025-06-20T19:16:17.569655521Z" level=info msg="CreateContainer within sandbox \"6fc85b19501f7980d2d07e698e682616d0f2e02897e2595139a4a8a5a0b80290\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"853d9edd5241c3422a0b88978c7b32dbbeff253fbf49040e3949c156b8aa3058\"" Jun 20 19:16:17.570388 containerd[1726]: time="2025-06-20T19:16:17.570186520Z" level=info msg="StartContainer for \"853d9edd5241c3422a0b88978c7b32dbbeff253fbf49040e3949c156b8aa3058\"" Jun 20 19:16:17.571958 containerd[1726]: time="2025-06-20T19:16:17.571775527Z" level=info msg="connecting to shim 853d9edd5241c3422a0b88978c7b32dbbeff253fbf49040e3949c156b8aa3058" address="unix:///run/containerd/s/98f90904052c37598a619e40134a52206f6473d54e880cf63f354221d4f0a7d6" protocol=ttrpc version=3 Jun 20 19:16:17.591990 systemd[1]: Started cri-containerd-853d9edd5241c3422a0b88978c7b32dbbeff253fbf49040e3949c156b8aa3058.scope - libcontainer container 853d9edd5241c3422a0b88978c7b32dbbeff253fbf49040e3949c156b8aa3058. 
Jun 20 19:16:17.627939 containerd[1726]: time="2025-06-20T19:16:17.627869971Z" level=info msg="StartContainer for \"853d9edd5241c3422a0b88978c7b32dbbeff253fbf49040e3949c156b8aa3058\" returns successfully" Jun 20 19:16:17.733845 kubelet[3151]: E0620 19:16:17.733786 3151 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4b5m5" podUID="48026b39-3c6c-4056-8581-3b693c168b53" Jun 20 19:16:18.877023 containerd[1726]: time="2025-06-20T19:16:18.876950235Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jun 20 19:16:18.878813 systemd[1]: cri-containerd-853d9edd5241c3422a0b88978c7b32dbbeff253fbf49040e3949c156b8aa3058.scope: Deactivated successfully. Jun 20 19:16:18.879266 systemd[1]: cri-containerd-853d9edd5241c3422a0b88978c7b32dbbeff253fbf49040e3949c156b8aa3058.scope: Consumed 425ms CPU time, 191.4M memory peak, 171.2M written to disk. 
Jun 20 19:16:18.881168 containerd[1726]: time="2025-06-20T19:16:18.881132455Z" level=info msg="received exit event container_id:\"853d9edd5241c3422a0b88978c7b32dbbeff253fbf49040e3949c156b8aa3058\" id:\"853d9edd5241c3422a0b88978c7b32dbbeff253fbf49040e3949c156b8aa3058\" pid:3921 exited_at:{seconds:1750446978 nanos:880377371}" Jun 20 19:16:18.881310 containerd[1726]: time="2025-06-20T19:16:18.881184691Z" level=info msg="TaskExit event in podsandbox handler container_id:\"853d9edd5241c3422a0b88978c7b32dbbeff253fbf49040e3949c156b8aa3058\" id:\"853d9edd5241c3422a0b88978c7b32dbbeff253fbf49040e3949c156b8aa3058\" pid:3921 exited_at:{seconds:1750446978 nanos:880377371}" Jun 20 19:16:18.899623 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-853d9edd5241c3422a0b88978c7b32dbbeff253fbf49040e3949c156b8aa3058-rootfs.mount: Deactivated successfully. Jun 20 19:16:18.906981 kubelet[3151]: I0620 19:16:18.906955 3151 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jun 20 19:16:18.952871 systemd[1]: Created slice kubepods-burstable-podea2fa56d_4ea5_4fae_a52f_930d8c9fbc53.slice - libcontainer container kubepods-burstable-podea2fa56d_4ea5_4fae_a52f_930d8c9fbc53.slice. Jun 20 19:16:18.969281 systemd[1]: Created slice kubepods-besteffort-pod26571497_5c9d_4e20_967a_26a95bf40e1f.slice - libcontainer container kubepods-besteffort-pod26571497_5c9d_4e20_967a_26a95bf40e1f.slice. Jun 20 19:16:18.994980 systemd[1]: Created slice kubepods-burstable-podb2972c76_0691_4f97_ac20_8d5147d7f50a.slice - libcontainer container kubepods-burstable-podb2972c76_0691_4f97_ac20_8d5147d7f50a.slice. Jun 20 19:16:19.004269 systemd[1]: Created slice kubepods-besteffort-pod51986139_459e_4222_b31e_78a26677e7c0.slice - libcontainer container kubepods-besteffort-pod51986139_459e_4222_b31e_78a26677e7c0.slice. 
Jun 20 19:16:19.016234 systemd[1]: Created slice kubepods-besteffort-pod16f3dad5_c3e2_404c_8ef5_6e879cea8e8b.slice - libcontainer container kubepods-besteffort-pod16f3dad5_c3e2_404c_8ef5_6e879cea8e8b.slice. Jun 20 19:16:19.023282 systemd[1]: Created slice kubepods-besteffort-pod76edfd16_5b35_43e9_9a66_a035afe11c46.slice - libcontainer container kubepods-besteffort-pod76edfd16_5b35_43e9_9a66_a035afe11c46.slice. Jun 20 19:16:19.030964 systemd[1]: Created slice kubepods-besteffort-pod8e45c42b_c96e_4058_a6aa_3130d834d808.slice - libcontainer container kubepods-besteffort-pod8e45c42b_c96e_4058_a6aa_3130d834d808.slice. Jun 20 19:16:19.038769 systemd[1]: Created slice kubepods-besteffort-podbb494847_e0c1_435c_8cf2_807c46e8aca7.slice - libcontainer container kubepods-besteffort-podbb494847_e0c1_435c_8cf2_807c46e8aca7.slice. Jun 20 19:16:19.115766 kubelet[3151]: I0620 19:16:19.115716 3151 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8tzxb\" (UniqueName: \"kubernetes.io/projected/51986139-459e-4222-b31e-78a26677e7c0-kube-api-access-8tzxb\") pod \"calico-apiserver-7fc66dd6d7-qdqzw\" (UID: \"51986139-459e-4222-b31e-78a26677e7c0\") " pod="calico-apiserver/calico-apiserver-7fc66dd6d7-qdqzw" Jun 20 19:16:19.115766 kubelet[3151]: I0620 19:16:19.115773 3151 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b2972c76-0691-4f97-ac20-8d5147d7f50a-config-volume\") pod \"coredns-7c65d6cfc9-pzkgt\" (UID: \"b2972c76-0691-4f97-ac20-8d5147d7f50a\") " pod="kube-system/coredns-7c65d6cfc9-pzkgt" Jun 20 19:16:19.115766 kubelet[3151]: I0620 19:16:19.115795 3151 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/bb494847-e0c1-435c-8cf2-807c46e8aca7-goldmane-key-pair\") pod \"goldmane-dc7b455cb-d49s2\" (UID: 
\"bb494847-e0c1-435c-8cf2-807c46e8aca7\") " pod="calico-system/goldmane-dc7b455cb-d49s2" Jun 20 19:16:19.116992 kubelet[3151]: I0620 19:16:19.115819 3151 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/8e45c42b-c96e-4058-a6aa-3130d834d808-calico-apiserver-certs\") pod \"calico-apiserver-6675744cfd-8csx6\" (UID: \"8e45c42b-c96e-4058-a6aa-3130d834d808\") " pod="calico-apiserver/calico-apiserver-6675744cfd-8csx6" Jun 20 19:16:19.117182 kubelet[3151]: I0620 19:16:19.117005 3151 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bb494847-e0c1-435c-8cf2-807c46e8aca7-config\") pod \"goldmane-dc7b455cb-d49s2\" (UID: \"bb494847-e0c1-435c-8cf2-807c46e8aca7\") " pod="calico-system/goldmane-dc7b455cb-d49s2" Jun 20 19:16:19.117182 kubelet[3151]: I0620 19:16:19.117080 3151 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hphx8\" (UniqueName: \"kubernetes.io/projected/16f3dad5-c3e2-404c-8ef5-6e879cea8e8b-kube-api-access-hphx8\") pod \"calico-apiserver-7fc66dd6d7-4s89b\" (UID: \"16f3dad5-c3e2-404c-8ef5-6e879cea8e8b\") " pod="calico-apiserver/calico-apiserver-7fc66dd6d7-4s89b" Jun 20 19:16:19.117182 kubelet[3151]: I0620 19:16:19.117123 3151 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2fvqp\" (UniqueName: \"kubernetes.io/projected/ea2fa56d-4ea5-4fae-a52f-930d8c9fbc53-kube-api-access-2fvqp\") pod \"coredns-7c65d6cfc9-b5rj7\" (UID: \"ea2fa56d-4ea5-4fae-a52f-930d8c9fbc53\") " pod="kube-system/coredns-7c65d6cfc9-b5rj7" Jun 20 19:16:19.117182 kubelet[3151]: I0620 19:16:19.117149 3151 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8x8r\" (UniqueName: 
\"kubernetes.io/projected/b2972c76-0691-4f97-ac20-8d5147d7f50a-kube-api-access-m8x8r\") pod \"coredns-7c65d6cfc9-pzkgt\" (UID: \"b2972c76-0691-4f97-ac20-8d5147d7f50a\") " pod="kube-system/coredns-7c65d6cfc9-pzkgt" Jun 20 19:16:19.117182 kubelet[3151]: I0620 19:16:19.117171 3151 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xbx4\" (UniqueName: \"kubernetes.io/projected/8e45c42b-c96e-4058-a6aa-3130d834d808-kube-api-access-2xbx4\") pod \"calico-apiserver-6675744cfd-8csx6\" (UID: \"8e45c42b-c96e-4058-a6aa-3130d834d808\") " pod="calico-apiserver/calico-apiserver-6675744cfd-8csx6" Jun 20 19:16:19.117325 kubelet[3151]: I0620 19:16:19.117208 3151 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/26571497-5c9d-4e20-967a-26a95bf40e1f-tigera-ca-bundle\") pod \"calico-kube-controllers-7fcd5cccd8-wjrwq\" (UID: \"26571497-5c9d-4e20-967a-26a95bf40e1f\") " pod="calico-system/calico-kube-controllers-7fcd5cccd8-wjrwq" Jun 20 19:16:19.117325 kubelet[3151]: I0620 19:16:19.117228 3151 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/16f3dad5-c3e2-404c-8ef5-6e879cea8e8b-calico-apiserver-certs\") pod \"calico-apiserver-7fc66dd6d7-4s89b\" (UID: \"16f3dad5-c3e2-404c-8ef5-6e879cea8e8b\") " pod="calico-apiserver/calico-apiserver-7fc66dd6d7-4s89b" Jun 20 19:16:19.117325 kubelet[3151]: I0620 19:16:19.117274 3151 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrcmf\" (UniqueName: \"kubernetes.io/projected/bb494847-e0c1-435c-8cf2-807c46e8aca7-kube-api-access-rrcmf\") pod \"goldmane-dc7b455cb-d49s2\" (UID: \"bb494847-e0c1-435c-8cf2-807c46e8aca7\") " pod="calico-system/goldmane-dc7b455cb-d49s2" Jun 20 19:16:19.117325 kubelet[3151]: I0620 19:16:19.117295 
3151 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/76edfd16-5b35-43e9-9a66-a035afe11c46-whisker-ca-bundle\") pod \"whisker-b67f89bc9-79l47\" (UID: \"76edfd16-5b35-43e9-9a66-a035afe11c46\") " pod="calico-system/whisker-b67f89bc9-79l47" Jun 20 19:16:19.117421 kubelet[3151]: I0620 19:16:19.117313 3151 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmw62\" (UniqueName: \"kubernetes.io/projected/76edfd16-5b35-43e9-9a66-a035afe11c46-kube-api-access-dmw62\") pod \"whisker-b67f89bc9-79l47\" (UID: \"76edfd16-5b35-43e9-9a66-a035afe11c46\") " pod="calico-system/whisker-b67f89bc9-79l47" Jun 20 19:16:19.117421 kubelet[3151]: I0620 19:16:19.117377 3151 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6dn99\" (UniqueName: \"kubernetes.io/projected/26571497-5c9d-4e20-967a-26a95bf40e1f-kube-api-access-6dn99\") pod \"calico-kube-controllers-7fcd5cccd8-wjrwq\" (UID: \"26571497-5c9d-4e20-967a-26a95bf40e1f\") " pod="calico-system/calico-kube-controllers-7fcd5cccd8-wjrwq" Jun 20 19:16:19.117472 kubelet[3151]: I0620 19:16:19.117427 3151 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ea2fa56d-4ea5-4fae-a52f-930d8c9fbc53-config-volume\") pod \"coredns-7c65d6cfc9-b5rj7\" (UID: \"ea2fa56d-4ea5-4fae-a52f-930d8c9fbc53\") " pod="kube-system/coredns-7c65d6cfc9-b5rj7" Jun 20 19:16:19.117472 kubelet[3151]: I0620 19:16:19.117446 3151 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/51986139-459e-4222-b31e-78a26677e7c0-calico-apiserver-certs\") pod \"calico-apiserver-7fc66dd6d7-qdqzw\" (UID: \"51986139-459e-4222-b31e-78a26677e7c0\") " 
pod="calico-apiserver/calico-apiserver-7fc66dd6d7-qdqzw" Jun 20 19:16:19.117931 kubelet[3151]: I0620 19:16:19.117498 3151 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bb494847-e0c1-435c-8cf2-807c46e8aca7-goldmane-ca-bundle\") pod \"goldmane-dc7b455cb-d49s2\" (UID: \"bb494847-e0c1-435c-8cf2-807c46e8aca7\") " pod="calico-system/goldmane-dc7b455cb-d49s2" Jun 20 19:16:19.117931 kubelet[3151]: I0620 19:16:19.117538 3151 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/76edfd16-5b35-43e9-9a66-a035afe11c46-whisker-backend-key-pair\") pod \"whisker-b67f89bc9-79l47\" (UID: \"76edfd16-5b35-43e9-9a66-a035afe11c46\") " pod="calico-system/whisker-b67f89bc9-79l47" Jun 20 19:16:19.283408 containerd[1726]: time="2025-06-20T19:16:19.283123396Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7fcd5cccd8-wjrwq,Uid:26571497-5c9d-4e20-967a-26a95bf40e1f,Namespace:calico-system,Attempt:0,}" Jun 20 19:16:19.301380 containerd[1726]: time="2025-06-20T19:16:19.301335438Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-pzkgt,Uid:b2972c76-0691-4f97-ac20-8d5147d7f50a,Namespace:kube-system,Attempt:0,}" Jun 20 19:16:19.316331 containerd[1726]: time="2025-06-20T19:16:19.316160037Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fc66dd6d7-qdqzw,Uid:51986139-459e-4222-b31e-78a26677e7c0,Namespace:calico-apiserver,Attempt:0,}" Jun 20 19:16:19.321215 containerd[1726]: time="2025-06-20T19:16:19.321177994Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fc66dd6d7-4s89b,Uid:16f3dad5-c3e2-404c-8ef5-6e879cea8e8b,Namespace:calico-apiserver,Attempt:0,}" Jun 20 19:16:19.325977 containerd[1726]: time="2025-06-20T19:16:19.325951621Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:whisker-b67f89bc9-79l47,Uid:76edfd16-5b35-43e9-9a66-a035afe11c46,Namespace:calico-system,Attempt:0,}" Jun 20 19:16:19.335495 containerd[1726]: time="2025-06-20T19:16:19.335473853Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6675744cfd-8csx6,Uid:8e45c42b-c96e-4058-a6aa-3130d834d808,Namespace:calico-apiserver,Attempt:0,}" Jun 20 19:16:19.342117 containerd[1726]: time="2025-06-20T19:16:19.342090499Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-dc7b455cb-d49s2,Uid:bb494847-e0c1-435c-8cf2-807c46e8aca7,Namespace:calico-system,Attempt:0,}" Jun 20 19:16:19.560377 containerd[1726]: time="2025-06-20T19:16:19.560184119Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-b5rj7,Uid:ea2fa56d-4ea5-4fae-a52f-930d8c9fbc53,Namespace:kube-system,Attempt:0,}" Jun 20 19:16:19.739488 systemd[1]: Created slice kubepods-besteffort-pod48026b39_3c6c_4056_8581_3b693c168b53.slice - libcontainer container kubepods-besteffort-pod48026b39_3c6c_4056_8581_3b693c168b53.slice. Jun 20 19:16:19.741925 containerd[1726]: time="2025-06-20T19:16:19.741881612Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4b5m5,Uid:48026b39-3c6c-4056-8581-3b693c168b53,Namespace:calico-system,Attempt:0,}" Jun 20 19:16:19.892091 containerd[1726]: time="2025-06-20T19:16:19.891977215Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.1\"" Jun 20 19:16:20.024213 containerd[1726]: time="2025-06-20T19:16:20.024164071Z" level=error msg="Failed to destroy network for sandbox \"fbe1a6eaa90d1bad33ef07a79590dadd21b5f5935c7e28f919d61f3d2a39a5fb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:16:20.027161 systemd[1]: run-netns-cni\x2d4f614769\x2d0a67\x2d77c9\x2d7da2\x2d232fb0092b0a.mount: Deactivated successfully. 
Jun 20 19:16:20.038995 containerd[1726]: time="2025-06-20T19:16:20.038940265Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7fcd5cccd8-wjrwq,Uid:26571497-5c9d-4e20-967a-26a95bf40e1f,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"fbe1a6eaa90d1bad33ef07a79590dadd21b5f5935c7e28f919d61f3d2a39a5fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:16:20.039377 kubelet[3151]: E0620 19:16:20.039327 3151 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fbe1a6eaa90d1bad33ef07a79590dadd21b5f5935c7e28f919d61f3d2a39a5fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:16:20.039677 kubelet[3151]: E0620 19:16:20.039417 3151 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fbe1a6eaa90d1bad33ef07a79590dadd21b5f5935c7e28f919d61f3d2a39a5fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7fcd5cccd8-wjrwq" Jun 20 19:16:20.039677 kubelet[3151]: E0620 19:16:20.039442 3151 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fbe1a6eaa90d1bad33ef07a79590dadd21b5f5935c7e28f919d61f3d2a39a5fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-kube-controllers-7fcd5cccd8-wjrwq" Jun 20 19:16:20.039677 kubelet[3151]: E0620 19:16:20.039490 3151 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7fcd5cccd8-wjrwq_calico-system(26571497-5c9d-4e20-967a-26a95bf40e1f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7fcd5cccd8-wjrwq_calico-system(26571497-5c9d-4e20-967a-26a95bf40e1f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fbe1a6eaa90d1bad33ef07a79590dadd21b5f5935c7e28f919d61f3d2a39a5fb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7fcd5cccd8-wjrwq" podUID="26571497-5c9d-4e20-967a-26a95bf40e1f" Jun 20 19:16:20.100592 containerd[1726]: time="2025-06-20T19:16:20.100524363Z" level=error msg="Failed to destroy network for sandbox \"366021936e5eac09fa1e62360aed4539e773c1077f34863c1a05f1c8ab25e24a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:16:20.104237 systemd[1]: run-netns-cni\x2d243a1d5c\x2dc978\x2d7116\x2dba28\x2d08288e0f7c50.mount: Deactivated successfully. 
Jun 20 19:16:20.112790 containerd[1726]: time="2025-06-20T19:16:20.112711412Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-b67f89bc9-79l47,Uid:76edfd16-5b35-43e9-9a66-a035afe11c46,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"366021936e5eac09fa1e62360aed4539e773c1077f34863c1a05f1c8ab25e24a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:16:20.132007 containerd[1726]: time="2025-06-20T19:16:20.131958894Z" level=error msg="Failed to destroy network for sandbox \"23444e2168f1717d2c5a47d3baecf7aa489a41e9b6ecb4af1a5d0eecc9a4201d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:16:20.137762 systemd[1]: run-netns-cni\x2db41cb96b\x2dd652\x2d909c\x2d2a27\x2d348483a43708.mount: Deactivated successfully. 
Jun 20 19:16:20.141260 kubelet[3151]: E0620 19:16:20.140160 3151 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"366021936e5eac09fa1e62360aed4539e773c1077f34863c1a05f1c8ab25e24a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:16:20.141260 kubelet[3151]: E0620 19:16:20.140913 3151 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"366021936e5eac09fa1e62360aed4539e773c1077f34863c1a05f1c8ab25e24a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-b67f89bc9-79l47" Jun 20 19:16:20.141260 kubelet[3151]: E0620 19:16:20.140966 3151 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"366021936e5eac09fa1e62360aed4539e773c1077f34863c1a05f1c8ab25e24a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-b67f89bc9-79l47" Jun 20 19:16:20.141425 kubelet[3151]: E0620 19:16:20.141041 3151 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-b67f89bc9-79l47_calico-system(76edfd16-5b35-43e9-9a66-a035afe11c46)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-b67f89bc9-79l47_calico-system(76edfd16-5b35-43e9-9a66-a035afe11c46)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"366021936e5eac09fa1e62360aed4539e773c1077f34863c1a05f1c8ab25e24a\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-b67f89bc9-79l47" podUID="76edfd16-5b35-43e9-9a66-a035afe11c46" Jun 20 19:16:20.146549 containerd[1726]: time="2025-06-20T19:16:20.146312749Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-pzkgt,Uid:b2972c76-0691-4f97-ac20-8d5147d7f50a,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"23444e2168f1717d2c5a47d3baecf7aa489a41e9b6ecb4af1a5d0eecc9a4201d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:16:20.147661 kubelet[3151]: E0620 19:16:20.147546 3151 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"23444e2168f1717d2c5a47d3baecf7aa489a41e9b6ecb4af1a5d0eecc9a4201d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:16:20.147661 kubelet[3151]: E0620 19:16:20.147606 3151 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"23444e2168f1717d2c5a47d3baecf7aa489a41e9b6ecb4af1a5d0eecc9a4201d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-pzkgt" Jun 20 19:16:20.149369 kubelet[3151]: E0620 19:16:20.147709 3151 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"23444e2168f1717d2c5a47d3baecf7aa489a41e9b6ecb4af1a5d0eecc9a4201d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-pzkgt" Jun 20 19:16:20.149369 kubelet[3151]: E0620 19:16:20.147757 3151 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-pzkgt_kube-system(b2972c76-0691-4f97-ac20-8d5147d7f50a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-pzkgt_kube-system(b2972c76-0691-4f97-ac20-8d5147d7f50a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"23444e2168f1717d2c5a47d3baecf7aa489a41e9b6ecb4af1a5d0eecc9a4201d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-pzkgt" podUID="b2972c76-0691-4f97-ac20-8d5147d7f50a" Jun 20 19:16:20.159450 containerd[1726]: time="2025-06-20T19:16:20.159408832Z" level=error msg="Failed to destroy network for sandbox \"b47c83078fe27a42f77480f3b87139c654221b0d421556e251418b55a8dc4a1a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:16:20.161449 containerd[1726]: time="2025-06-20T19:16:20.161406902Z" level=error msg="Failed to destroy network for sandbox \"6db5ed000fa0ca0dbd7d3cd7360f4a8453aa8311955884e988d924ee7cea7ba5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:16:20.164222 containerd[1726]: time="2025-06-20T19:16:20.164099958Z" level=error msg="Failed to destroy network for 
sandbox \"18826350b644a1e2d4a315a5841ad35988dc1a5173bb34b3207cbca0e73ebfd8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:16:20.165339 systemd[1]: run-netns-cni\x2df56d68b8\x2df74f\x2dae69\x2df4c3\x2dfc88b54d419d.mount: Deactivated successfully. Jun 20 19:16:20.171132 containerd[1726]: time="2025-06-20T19:16:20.171090826Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fc66dd6d7-4s89b,Uid:16f3dad5-c3e2-404c-8ef5-6e879cea8e8b,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b47c83078fe27a42f77480f3b87139c654221b0d421556e251418b55a8dc4a1a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:16:20.171765 kubelet[3151]: E0620 19:16:20.171325 3151 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b47c83078fe27a42f77480f3b87139c654221b0d421556e251418b55a8dc4a1a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:16:20.171765 kubelet[3151]: E0620 19:16:20.171384 3151 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b47c83078fe27a42f77480f3b87139c654221b0d421556e251418b55a8dc4a1a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7fc66dd6d7-4s89b" Jun 20 19:16:20.171765 kubelet[3151]: E0620 19:16:20.171405 
3151 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b47c83078fe27a42f77480f3b87139c654221b0d421556e251418b55a8dc4a1a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7fc66dd6d7-4s89b" Jun 20 19:16:20.173880 kubelet[3151]: E0620 19:16:20.171458 3151 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7fc66dd6d7-4s89b_calico-apiserver(16f3dad5-c3e2-404c-8ef5-6e879cea8e8b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7fc66dd6d7-4s89b_calico-apiserver(16f3dad5-c3e2-404c-8ef5-6e879cea8e8b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b47c83078fe27a42f77480f3b87139c654221b0d421556e251418b55a8dc4a1a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7fc66dd6d7-4s89b" podUID="16f3dad5-c3e2-404c-8ef5-6e879cea8e8b" Jun 20 19:16:20.177309 containerd[1726]: time="2025-06-20T19:16:20.176622656Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fc66dd6d7-qdqzw,Uid:51986139-459e-4222-b31e-78a26677e7c0,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6db5ed000fa0ca0dbd7d3cd7360f4a8453aa8311955884e988d924ee7cea7ba5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:16:20.177425 kubelet[3151]: E0620 19:16:20.176823 3151 log.go:32] "RunPodSandbox from runtime service failed" 
err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6db5ed000fa0ca0dbd7d3cd7360f4a8453aa8311955884e988d924ee7cea7ba5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:16:20.177425 kubelet[3151]: E0620 19:16:20.176976 3151 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6db5ed000fa0ca0dbd7d3cd7360f4a8453aa8311955884e988d924ee7cea7ba5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7fc66dd6d7-qdqzw" Jun 20 19:16:20.177425 kubelet[3151]: E0620 19:16:20.176998 3151 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6db5ed000fa0ca0dbd7d3cd7360f4a8453aa8311955884e988d924ee7cea7ba5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7fc66dd6d7-qdqzw" Jun 20 19:16:20.177527 kubelet[3151]: E0620 19:16:20.177039 3151 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7fc66dd6d7-qdqzw_calico-apiserver(51986139-459e-4222-b31e-78a26677e7c0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7fc66dd6d7-qdqzw_calico-apiserver(51986139-459e-4222-b31e-78a26677e7c0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6db5ed000fa0ca0dbd7d3cd7360f4a8453aa8311955884e988d924ee7cea7ba5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7fc66dd6d7-qdqzw" podUID="51986139-459e-4222-b31e-78a26677e7c0" Jun 20 19:16:20.182066 containerd[1726]: time="2025-06-20T19:16:20.181976356Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4b5m5,Uid:48026b39-3c6c-4056-8581-3b693c168b53,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"18826350b644a1e2d4a315a5841ad35988dc1a5173bb34b3207cbca0e73ebfd8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:16:20.182287 kubelet[3151]: E0620 19:16:20.182262 3151 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"18826350b644a1e2d4a315a5841ad35988dc1a5173bb34b3207cbca0e73ebfd8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:16:20.182425 kubelet[3151]: E0620 19:16:20.182403 3151 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"18826350b644a1e2d4a315a5841ad35988dc1a5173bb34b3207cbca0e73ebfd8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4b5m5" Jun 20 19:16:20.182525 kubelet[3151]: E0620 19:16:20.182472 3151 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"18826350b644a1e2d4a315a5841ad35988dc1a5173bb34b3207cbca0e73ebfd8\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4b5m5" Jun 20 19:16:20.182714 kubelet[3151]: E0620 19:16:20.182683 3151 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-4b5m5_calico-system(48026b39-3c6c-4056-8581-3b693c168b53)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-4b5m5_calico-system(48026b39-3c6c-4056-8581-3b693c168b53)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"18826350b644a1e2d4a315a5841ad35988dc1a5173bb34b3207cbca0e73ebfd8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-4b5m5" podUID="48026b39-3c6c-4056-8581-3b693c168b53" Jun 20 19:16:20.193496 containerd[1726]: time="2025-06-20T19:16:20.193462300Z" level=error msg="Failed to destroy network for sandbox \"7e1a8b7862ac5626be9d09d6d713c91d5a9dd3263e39093a1a0aa15c89501d3d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:16:20.197474 containerd[1726]: time="2025-06-20T19:16:20.197425199Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-dc7b455cb-d49s2,Uid:bb494847-e0c1-435c-8cf2-807c46e8aca7,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7e1a8b7862ac5626be9d09d6d713c91d5a9dd3263e39093a1a0aa15c89501d3d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:16:20.197915 kubelet[3151]: E0620 
19:16:20.197612 3151 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7e1a8b7862ac5626be9d09d6d713c91d5a9dd3263e39093a1a0aa15c89501d3d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:16:20.197915 kubelet[3151]: E0620 19:16:20.197669 3151 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7e1a8b7862ac5626be9d09d6d713c91d5a9dd3263e39093a1a0aa15c89501d3d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-dc7b455cb-d49s2" Jun 20 19:16:20.197915 kubelet[3151]: E0620 19:16:20.197698 3151 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7e1a8b7862ac5626be9d09d6d713c91d5a9dd3263e39093a1a0aa15c89501d3d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-dc7b455cb-d49s2" Jun 20 19:16:20.198032 kubelet[3151]: E0620 19:16:20.197740 3151 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-dc7b455cb-d49s2_calico-system(bb494847-e0c1-435c-8cf2-807c46e8aca7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-dc7b455cb-d49s2_calico-system(bb494847-e0c1-435c-8cf2-807c46e8aca7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7e1a8b7862ac5626be9d09d6d713c91d5a9dd3263e39093a1a0aa15c89501d3d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file 
or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-dc7b455cb-d49s2" podUID="bb494847-e0c1-435c-8cf2-807c46e8aca7" Jun 20 19:16:20.200040 containerd[1726]: time="2025-06-20T19:16:20.199939492Z" level=error msg="Failed to destroy network for sandbox \"b776bf14e4b4eb6a0f9e8e9a4ff5c6dfb0293e05877847d104c4bb96070205c9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:16:20.203062 containerd[1726]: time="2025-06-20T19:16:20.203033269Z" level=error msg="Failed to destroy network for sandbox \"a230eacf2e1b19ba97664e4a3754bff56238ac961fa9b73692f5689499dd4668\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:16:20.204921 containerd[1726]: time="2025-06-20T19:16:20.204892163Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-b5rj7,Uid:ea2fa56d-4ea5-4fae-a52f-930d8c9fbc53,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b776bf14e4b4eb6a0f9e8e9a4ff5c6dfb0293e05877847d104c4bb96070205c9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:16:20.205212 kubelet[3151]: E0620 19:16:20.205181 3151 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b776bf14e4b4eb6a0f9e8e9a4ff5c6dfb0293e05877847d104c4bb96070205c9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 
19:16:20.205265 kubelet[3151]: E0620 19:16:20.205230 3151 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b776bf14e4b4eb6a0f9e8e9a4ff5c6dfb0293e05877847d104c4bb96070205c9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-b5rj7" Jun 20 19:16:20.205265 kubelet[3151]: E0620 19:16:20.205250 3151 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b776bf14e4b4eb6a0f9e8e9a4ff5c6dfb0293e05877847d104c4bb96070205c9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-b5rj7" Jun 20 19:16:20.205347 kubelet[3151]: E0620 19:16:20.205294 3151 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-b5rj7_kube-system(ea2fa56d-4ea5-4fae-a52f-930d8c9fbc53)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-b5rj7_kube-system(ea2fa56d-4ea5-4fae-a52f-930d8c9fbc53)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b776bf14e4b4eb6a0f9e8e9a4ff5c6dfb0293e05877847d104c4bb96070205c9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-b5rj7" podUID="ea2fa56d-4ea5-4fae-a52f-930d8c9fbc53" Jun 20 19:16:20.209166 containerd[1726]: time="2025-06-20T19:16:20.209134649Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-6675744cfd-8csx6,Uid:8e45c42b-c96e-4058-a6aa-3130d834d808,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a230eacf2e1b19ba97664e4a3754bff56238ac961fa9b73692f5689499dd4668\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:16:20.209606 kubelet[3151]: E0620 19:16:20.209476 3151 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a230eacf2e1b19ba97664e4a3754bff56238ac961fa9b73692f5689499dd4668\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:16:20.209606 kubelet[3151]: E0620 19:16:20.209519 3151 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a230eacf2e1b19ba97664e4a3754bff56238ac961fa9b73692f5689499dd4668\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6675744cfd-8csx6" Jun 20 19:16:20.209606 kubelet[3151]: E0620 19:16:20.209541 3151 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a230eacf2e1b19ba97664e4a3754bff56238ac961fa9b73692f5689499dd4668\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6675744cfd-8csx6" Jun 20 19:16:20.209693 kubelet[3151]: E0620 19:16:20.209579 3151 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6675744cfd-8csx6_calico-apiserver(8e45c42b-c96e-4058-a6aa-3130d834d808)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6675744cfd-8csx6_calico-apiserver(8e45c42b-c96e-4058-a6aa-3130d834d808)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a230eacf2e1b19ba97664e4a3754bff56238ac961fa9b73692f5689499dd4668\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6675744cfd-8csx6" podUID="8e45c42b-c96e-4058-a6aa-3130d834d808" Jun 20 19:16:20.900256 systemd[1]: run-netns-cni\x2dad97aaa5\x2dc9de\x2de08d\x2d4b70\x2d5a6cff1a6653.mount: Deactivated successfully. Jun 20 19:16:20.900519 systemd[1]: run-netns-cni\x2df8714005\x2dbd1c\x2de73e\x2dd004\x2d5f40c24f84d6.mount: Deactivated successfully. Jun 20 19:16:20.900627 systemd[1]: run-netns-cni\x2d6fe950fe\x2d629f\x2d38e1\x2d378b\x2d166bc15521f6.mount: Deactivated successfully. Jun 20 19:16:20.900731 systemd[1]: run-netns-cni\x2dc9a61d4a\x2dd691\x2d0819\x2da438\x2dbbaa2f34afdc.mount: Deactivated successfully. Jun 20 19:16:20.900839 systemd[1]: run-netns-cni\x2d30c08235\x2db6a8\x2de41c\x2da6e1\x2d3babfd4d5220.mount: Deactivated successfully. Jun 20 19:16:24.322854 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3579041010.mount: Deactivated successfully. 
Jun 20 19:16:24.348774 containerd[1726]: time="2025-06-20T19:16:24.348725070Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:16:24.351051 containerd[1726]: time="2025-06-20T19:16:24.351016375Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.1: active requests=0, bytes read=156518913" Jun 20 19:16:24.353581 containerd[1726]: time="2025-06-20T19:16:24.353535037Z" level=info msg="ImageCreate event name:\"sha256:9ac26af2ca9c35e475f921a9bcf40c7c0ce106819208883b006e64c489251722\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:16:24.356895 containerd[1726]: time="2025-06-20T19:16:24.356845294Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:8da6d025e5cf2ff5080c801ac8611bedb513e5922500fcc8161d8164e4679597\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:16:24.357257 containerd[1726]: time="2025-06-20T19:16:24.357132608Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.1\" with image id \"sha256:9ac26af2ca9c35e475f921a9bcf40c7c0ce106819208883b006e64c489251722\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:8da6d025e5cf2ff5080c801ac8611bedb513e5922500fcc8161d8164e4679597\", size \"156518775\" in 4.465106784s" Jun 20 19:16:24.357257 containerd[1726]: time="2025-06-20T19:16:24.357164506Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.1\" returns image reference \"sha256:9ac26af2ca9c35e475f921a9bcf40c7c0ce106819208883b006e64c489251722\"" Jun 20 19:16:24.373764 containerd[1726]: time="2025-06-20T19:16:24.373733012Z" level=info msg="CreateContainer within sandbox \"6fc85b19501f7980d2d07e698e682616d0f2e02897e2595139a4a8a5a0b80290\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jun 20 19:16:24.389857 containerd[1726]: time="2025-06-20T19:16:24.389073795Z" level=info msg="Container 
1d933b96dabc45903f8c9443835ccebc96daeec1e6159b99df5e62de2b9e981d: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:16:24.409596 containerd[1726]: time="2025-06-20T19:16:24.409559676Z" level=info msg="CreateContainer within sandbox \"6fc85b19501f7980d2d07e698e682616d0f2e02897e2595139a4a8a5a0b80290\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"1d933b96dabc45903f8c9443835ccebc96daeec1e6159b99df5e62de2b9e981d\"" Jun 20 19:16:24.410143 containerd[1726]: time="2025-06-20T19:16:24.410121058Z" level=info msg="StartContainer for \"1d933b96dabc45903f8c9443835ccebc96daeec1e6159b99df5e62de2b9e981d\"" Jun 20 19:16:24.412518 containerd[1726]: time="2025-06-20T19:16:24.412438335Z" level=info msg="connecting to shim 1d933b96dabc45903f8c9443835ccebc96daeec1e6159b99df5e62de2b9e981d" address="unix:///run/containerd/s/98f90904052c37598a619e40134a52206f6473d54e880cf63f354221d4f0a7d6" protocol=ttrpc version=3 Jun 20 19:16:24.431989 systemd[1]: Started cri-containerd-1d933b96dabc45903f8c9443835ccebc96daeec1e6159b99df5e62de2b9e981d.scope - libcontainer container 1d933b96dabc45903f8c9443835ccebc96daeec1e6159b99df5e62de2b9e981d. Jun 20 19:16:24.473811 containerd[1726]: time="2025-06-20T19:16:24.473772124Z" level=info msg="StartContainer for \"1d933b96dabc45903f8c9443835ccebc96daeec1e6159b99df5e62de2b9e981d\" returns successfully" Jun 20 19:16:24.692913 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jun 20 19:16:24.693047 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Jun 20 19:16:24.954774 kubelet[3151]: I0620 19:16:24.954651 3151 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/76edfd16-5b35-43e9-9a66-a035afe11c46-whisker-ca-bundle\") pod \"76edfd16-5b35-43e9-9a66-a035afe11c46\" (UID: \"76edfd16-5b35-43e9-9a66-a035afe11c46\") " Jun 20 19:16:24.955540 kubelet[3151]: I0620 19:16:24.954947 3151 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/76edfd16-5b35-43e9-9a66-a035afe11c46-whisker-backend-key-pair\") pod \"76edfd16-5b35-43e9-9a66-a035afe11c46\" (UID: \"76edfd16-5b35-43e9-9a66-a035afe11c46\") " Jun 20 19:16:24.955540 kubelet[3151]: I0620 19:16:24.955062 3151 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dmw62\" (UniqueName: \"kubernetes.io/projected/76edfd16-5b35-43e9-9a66-a035afe11c46-kube-api-access-dmw62\") pod \"76edfd16-5b35-43e9-9a66-a035afe11c46\" (UID: \"76edfd16-5b35-43e9-9a66-a035afe11c46\") " Jun 20 19:16:24.964929 kubelet[3151]: I0620 19:16:24.963662 3151 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-tnznw" podStartSLOduration=1.566878177 podStartE2EDuration="33.96364292s" podCreationTimestamp="2025-06-20 19:15:51 +0000 UTC" firstStartedPulling="2025-06-20 19:15:51.961326318 +0000 UTC m=+19.316225258" lastFinishedPulling="2025-06-20 19:16:24.358091053 +0000 UTC m=+51.712990001" observedRunningTime="2025-06-20 19:16:24.951745032 +0000 UTC m=+52.306643978" watchObservedRunningTime="2025-06-20 19:16:24.96364292 +0000 UTC m=+52.318541867" Jun 20 19:16:24.969385 kubelet[3151]: I0620 19:16:24.969311 3151 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/76edfd16-5b35-43e9-9a66-a035afe11c46-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod 
"76edfd16-5b35-43e9-9a66-a035afe11c46" (UID: "76edfd16-5b35-43e9-9a66-a035afe11c46"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jun 20 19:16:24.969385 kubelet[3151]: I0620 19:16:24.969371 3151 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76edfd16-5b35-43e9-9a66-a035afe11c46-kube-api-access-dmw62" (OuterVolumeSpecName: "kube-api-access-dmw62") pod "76edfd16-5b35-43e9-9a66-a035afe11c46" (UID: "76edfd16-5b35-43e9-9a66-a035afe11c46"). InnerVolumeSpecName "kube-api-access-dmw62". PluginName "kubernetes.io/projected", VolumeGidValue "" Jun 20 19:16:24.975211 kubelet[3151]: I0620 19:16:24.975149 3151 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76edfd16-5b35-43e9-9a66-a035afe11c46-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "76edfd16-5b35-43e9-9a66-a035afe11c46" (UID: "76edfd16-5b35-43e9-9a66-a035afe11c46"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jun 20 19:16:25.056872 kubelet[3151]: I0620 19:16:25.056242 3151 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/76edfd16-5b35-43e9-9a66-a035afe11c46-whisker-ca-bundle\") on node \"ci-4344.1.0-a-657d644de8\" DevicePath \"\"" Jun 20 19:16:25.056872 kubelet[3151]: I0620 19:16:25.056280 3151 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/76edfd16-5b35-43e9-9a66-a035afe11c46-whisker-backend-key-pair\") on node \"ci-4344.1.0-a-657d644de8\" DevicePath \"\"" Jun 20 19:16:25.056872 kubelet[3151]: I0620 19:16:25.056292 3151 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dmw62\" (UniqueName: \"kubernetes.io/projected/76edfd16-5b35-43e9-9a66-a035afe11c46-kube-api-access-dmw62\") on node \"ci-4344.1.0-a-657d644de8\" DevicePath \"\"" Jun 20 19:16:25.102198 containerd[1726]: time="2025-06-20T19:16:25.102117549Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1d933b96dabc45903f8c9443835ccebc96daeec1e6159b99df5e62de2b9e981d\" id:\"a2757a0ae4a8bbd78de7c22656c000966c4370c79c37c4cbc33e9e41fee4c537\" pid:4274 exit_status:1 exited_at:{seconds:1750446985 nanos:101711258}" Jun 20 19:16:25.205203 systemd[1]: Removed slice kubepods-besteffort-pod76edfd16_5b35_43e9_9a66_a035afe11c46.slice - libcontainer container kubepods-besteffort-pod76edfd16_5b35_43e9_9a66_a035afe11c46.slice. Jun 20 19:16:25.280920 systemd[1]: Created slice kubepods-besteffort-pod6fe00a01_7057_48d8_82ab_51d02bc43551.slice - libcontainer container kubepods-besteffort-pod6fe00a01_7057_48d8_82ab_51d02bc43551.slice. Jun 20 19:16:25.322616 systemd[1]: var-lib-kubelet-pods-76edfd16\x2d5b35\x2d43e9\x2d9a66\x2da035afe11c46-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ddmw62.mount: Deactivated successfully. 
Jun 20 19:16:25.322725 systemd[1]: var-lib-kubelet-pods-76edfd16\x2d5b35\x2d43e9\x2d9a66\x2da035afe11c46-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jun 20 19:16:25.358033 kubelet[3151]: I0620 19:16:25.357881 3151 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6fe00a01-7057-48d8-82ab-51d02bc43551-whisker-ca-bundle\") pod \"whisker-69f478c6d4-gvbm9\" (UID: \"6fe00a01-7057-48d8-82ab-51d02bc43551\") " pod="calico-system/whisker-69f478c6d4-gvbm9" Jun 20 19:16:25.358033 kubelet[3151]: I0620 19:16:25.357944 3151 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/6fe00a01-7057-48d8-82ab-51d02bc43551-whisker-backend-key-pair\") pod \"whisker-69f478c6d4-gvbm9\" (UID: \"6fe00a01-7057-48d8-82ab-51d02bc43551\") " pod="calico-system/whisker-69f478c6d4-gvbm9" Jun 20 19:16:25.358033 kubelet[3151]: I0620 19:16:25.357966 3151 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f97db\" (UniqueName: \"kubernetes.io/projected/6fe00a01-7057-48d8-82ab-51d02bc43551-kube-api-access-f97db\") pod \"whisker-69f478c6d4-gvbm9\" (UID: \"6fe00a01-7057-48d8-82ab-51d02bc43551\") " pod="calico-system/whisker-69f478c6d4-gvbm9" Jun 20 19:16:25.586268 containerd[1726]: time="2025-06-20T19:16:25.586140293Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-69f478c6d4-gvbm9,Uid:6fe00a01-7057-48d8-82ab-51d02bc43551,Namespace:calico-system,Attempt:0,}" Jun 20 19:16:25.681778 systemd-networkd[1354]: calib166094a1c8: Link UP Jun 20 19:16:25.683680 systemd-networkd[1354]: calib166094a1c8: Gained carrier Jun 20 19:16:25.702863 containerd[1726]: 2025-06-20 19:16:25.611 [INFO][4301] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jun 20 
19:16:25.702863 containerd[1726]: 2025-06-20 19:16:25.619 [INFO][4301] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344.1.0--a--657d644de8-k8s-whisker--69f478c6d4--gvbm9-eth0 whisker-69f478c6d4- calico-system 6fe00a01-7057-48d8-82ab-51d02bc43551 957 0 2025-06-20 19:16:25 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:69f478c6d4 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4344.1.0-a-657d644de8 whisker-69f478c6d4-gvbm9 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calib166094a1c8 [] [] }} ContainerID="dd84bedd63d4ecfce205949487da9160b59d62ae22cfd53b1a9187bf136758bb" Namespace="calico-system" Pod="whisker-69f478c6d4-gvbm9" WorkloadEndpoint="ci--4344.1.0--a--657d644de8-k8s-whisker--69f478c6d4--gvbm9-" Jun 20 19:16:25.702863 containerd[1726]: 2025-06-20 19:16:25.619 [INFO][4301] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="dd84bedd63d4ecfce205949487da9160b59d62ae22cfd53b1a9187bf136758bb" Namespace="calico-system" Pod="whisker-69f478c6d4-gvbm9" WorkloadEndpoint="ci--4344.1.0--a--657d644de8-k8s-whisker--69f478c6d4--gvbm9-eth0" Jun 20 19:16:25.702863 containerd[1726]: 2025-06-20 19:16:25.643 [INFO][4312] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="dd84bedd63d4ecfce205949487da9160b59d62ae22cfd53b1a9187bf136758bb" HandleID="k8s-pod-network.dd84bedd63d4ecfce205949487da9160b59d62ae22cfd53b1a9187bf136758bb" Workload="ci--4344.1.0--a--657d644de8-k8s-whisker--69f478c6d4--gvbm9-eth0" Jun 20 19:16:25.703098 containerd[1726]: 2025-06-20 19:16:25.643 [INFO][4312] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="dd84bedd63d4ecfce205949487da9160b59d62ae22cfd53b1a9187bf136758bb" HandleID="k8s-pod-network.dd84bedd63d4ecfce205949487da9160b59d62ae22cfd53b1a9187bf136758bb" 
Workload="ci--4344.1.0--a--657d644de8-k8s-whisker--69f478c6d4--gvbm9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d50a0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4344.1.0-a-657d644de8", "pod":"whisker-69f478c6d4-gvbm9", "timestamp":"2025-06-20 19:16:25.643624976 +0000 UTC"}, Hostname:"ci-4344.1.0-a-657d644de8", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 20 19:16:25.703098 containerd[1726]: 2025-06-20 19:16:25.643 [INFO][4312] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 20 19:16:25.703098 containerd[1726]: 2025-06-20 19:16:25.643 [INFO][4312] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jun 20 19:16:25.703098 containerd[1726]: 2025-06-20 19:16:25.643 [INFO][4312] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344.1.0-a-657d644de8' Jun 20 19:16:25.703098 containerd[1726]: 2025-06-20 19:16:25.648 [INFO][4312] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.dd84bedd63d4ecfce205949487da9160b59d62ae22cfd53b1a9187bf136758bb" host="ci-4344.1.0-a-657d644de8" Jun 20 19:16:25.703098 containerd[1726]: 2025-06-20 19:16:25.651 [INFO][4312] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344.1.0-a-657d644de8" Jun 20 19:16:25.703098 containerd[1726]: 2025-06-20 19:16:25.654 [INFO][4312] ipam/ipam.go 511: Trying affinity for 192.168.8.0/26 host="ci-4344.1.0-a-657d644de8" Jun 20 19:16:25.703098 containerd[1726]: 2025-06-20 19:16:25.656 [INFO][4312] ipam/ipam.go 158: Attempting to load block cidr=192.168.8.0/26 host="ci-4344.1.0-a-657d644de8" Jun 20 19:16:25.703098 containerd[1726]: 2025-06-20 19:16:25.657 [INFO][4312] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.8.0/26 host="ci-4344.1.0-a-657d644de8" Jun 20 19:16:25.703311 
containerd[1726]: 2025-06-20 19:16:25.657 [INFO][4312] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.8.0/26 handle="k8s-pod-network.dd84bedd63d4ecfce205949487da9160b59d62ae22cfd53b1a9187bf136758bb" host="ci-4344.1.0-a-657d644de8" Jun 20 19:16:25.703311 containerd[1726]: 2025-06-20 19:16:25.659 [INFO][4312] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.dd84bedd63d4ecfce205949487da9160b59d62ae22cfd53b1a9187bf136758bb Jun 20 19:16:25.703311 containerd[1726]: 2025-06-20 19:16:25.665 [INFO][4312] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.8.0/26 handle="k8s-pod-network.dd84bedd63d4ecfce205949487da9160b59d62ae22cfd53b1a9187bf136758bb" host="ci-4344.1.0-a-657d644de8" Jun 20 19:16:25.703311 containerd[1726]: 2025-06-20 19:16:25.672 [INFO][4312] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.8.1/26] block=192.168.8.0/26 handle="k8s-pod-network.dd84bedd63d4ecfce205949487da9160b59d62ae22cfd53b1a9187bf136758bb" host="ci-4344.1.0-a-657d644de8" Jun 20 19:16:25.703311 containerd[1726]: 2025-06-20 19:16:25.672 [INFO][4312] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.8.1/26] handle="k8s-pod-network.dd84bedd63d4ecfce205949487da9160b59d62ae22cfd53b1a9187bf136758bb" host="ci-4344.1.0-a-657d644de8" Jun 20 19:16:25.703311 containerd[1726]: 2025-06-20 19:16:25.672 [INFO][4312] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jun 20 19:16:25.703311 containerd[1726]: 2025-06-20 19:16:25.672 [INFO][4312] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.8.1/26] IPv6=[] ContainerID="dd84bedd63d4ecfce205949487da9160b59d62ae22cfd53b1a9187bf136758bb" HandleID="k8s-pod-network.dd84bedd63d4ecfce205949487da9160b59d62ae22cfd53b1a9187bf136758bb" Workload="ci--4344.1.0--a--657d644de8-k8s-whisker--69f478c6d4--gvbm9-eth0" Jun 20 19:16:25.703459 containerd[1726]: 2025-06-20 19:16:25.675 [INFO][4301] cni-plugin/k8s.go 418: Populated endpoint ContainerID="dd84bedd63d4ecfce205949487da9160b59d62ae22cfd53b1a9187bf136758bb" Namespace="calico-system" Pod="whisker-69f478c6d4-gvbm9" WorkloadEndpoint="ci--4344.1.0--a--657d644de8-k8s-whisker--69f478c6d4--gvbm9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.0--a--657d644de8-k8s-whisker--69f478c6d4--gvbm9-eth0", GenerateName:"whisker-69f478c6d4-", Namespace:"calico-system", SelfLink:"", UID:"6fe00a01-7057-48d8-82ab-51d02bc43551", ResourceVersion:"957", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 16, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"69f478c6d4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.0-a-657d644de8", ContainerID:"", Pod:"whisker-69f478c6d4-gvbm9", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.8.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.whisker"}, InterfaceName:"calib166094a1c8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:16:25.703459 containerd[1726]: 2025-06-20 19:16:25.675 [INFO][4301] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.8.1/32] ContainerID="dd84bedd63d4ecfce205949487da9160b59d62ae22cfd53b1a9187bf136758bb" Namespace="calico-system" Pod="whisker-69f478c6d4-gvbm9" WorkloadEndpoint="ci--4344.1.0--a--657d644de8-k8s-whisker--69f478c6d4--gvbm9-eth0" Jun 20 19:16:25.703548 containerd[1726]: 2025-06-20 19:16:25.675 [INFO][4301] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib166094a1c8 ContainerID="dd84bedd63d4ecfce205949487da9160b59d62ae22cfd53b1a9187bf136758bb" Namespace="calico-system" Pod="whisker-69f478c6d4-gvbm9" WorkloadEndpoint="ci--4344.1.0--a--657d644de8-k8s-whisker--69f478c6d4--gvbm9-eth0" Jun 20 19:16:25.703548 containerd[1726]: 2025-06-20 19:16:25.686 [INFO][4301] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="dd84bedd63d4ecfce205949487da9160b59d62ae22cfd53b1a9187bf136758bb" Namespace="calico-system" Pod="whisker-69f478c6d4-gvbm9" WorkloadEndpoint="ci--4344.1.0--a--657d644de8-k8s-whisker--69f478c6d4--gvbm9-eth0" Jun 20 19:16:25.703603 containerd[1726]: 2025-06-20 19:16:25.686 [INFO][4301] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="dd84bedd63d4ecfce205949487da9160b59d62ae22cfd53b1a9187bf136758bb" Namespace="calico-system" Pod="whisker-69f478c6d4-gvbm9" WorkloadEndpoint="ci--4344.1.0--a--657d644de8-k8s-whisker--69f478c6d4--gvbm9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.0--a--657d644de8-k8s-whisker--69f478c6d4--gvbm9-eth0", GenerateName:"whisker-69f478c6d4-", Namespace:"calico-system", SelfLink:"", UID:"6fe00a01-7057-48d8-82ab-51d02bc43551", 
ResourceVersion:"957", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 16, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"69f478c6d4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.0-a-657d644de8", ContainerID:"dd84bedd63d4ecfce205949487da9160b59d62ae22cfd53b1a9187bf136758bb", Pod:"whisker-69f478c6d4-gvbm9", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.8.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calib166094a1c8", MAC:"a6:80:a8:5a:c9:2e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:16:25.703671 containerd[1726]: 2025-06-20 19:16:25.701 [INFO][4301] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="dd84bedd63d4ecfce205949487da9160b59d62ae22cfd53b1a9187bf136758bb" Namespace="calico-system" Pod="whisker-69f478c6d4-gvbm9" WorkloadEndpoint="ci--4344.1.0--a--657d644de8-k8s-whisker--69f478c6d4--gvbm9-eth0" Jun 20 19:16:25.737664 containerd[1726]: time="2025-06-20T19:16:25.737606188Z" level=info msg="connecting to shim dd84bedd63d4ecfce205949487da9160b59d62ae22cfd53b1a9187bf136758bb" address="unix:///run/containerd/s/8cb6c6e524a7c5e50154cf1ff48031aee023cd8ca4f276059c472a9f6b9d65f0" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:16:25.760016 systemd[1]: Started cri-containerd-dd84bedd63d4ecfce205949487da9160b59d62ae22cfd53b1a9187bf136758bb.scope - libcontainer container 
dd84bedd63d4ecfce205949487da9160b59d62ae22cfd53b1a9187bf136758bb. Jun 20 19:16:25.799344 containerd[1726]: time="2025-06-20T19:16:25.799286712Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-69f478c6d4-gvbm9,Uid:6fe00a01-7057-48d8-82ab-51d02bc43551,Namespace:calico-system,Attempt:0,} returns sandbox id \"dd84bedd63d4ecfce205949487da9160b59d62ae22cfd53b1a9187bf136758bb\"" Jun 20 19:16:25.801237 containerd[1726]: time="2025-06-20T19:16:25.800982459Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.1\"" Jun 20 19:16:25.964463 containerd[1726]: time="2025-06-20T19:16:25.964421410Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1d933b96dabc45903f8c9443835ccebc96daeec1e6159b99df5e62de2b9e981d\" id:\"b62b16eba7d080146554689dedde5d1050dcd36cd33e81c3fc93db4efc233591\" pid:4385 exit_status:1 exited_at:{seconds:1750446985 nanos:964122107}" Jun 20 19:16:26.168136 containerd[1726]: time="2025-06-20T19:16:26.168084768Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1d933b96dabc45903f8c9443835ccebc96daeec1e6159b99df5e62de2b9e981d\" id:\"bdeb93854b04ccfe97907ee626a6f90e7128229b683a354abaae2a9b16cb4157\" pid:4408 exit_status:1 exited_at:{seconds:1750446986 nanos:167687258}" Jun 20 19:16:26.731372 systemd-networkd[1354]: calib166094a1c8: Gained IPv6LL Jun 20 19:16:26.741150 kubelet[3151]: I0620 19:16:26.740984 3151 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="76edfd16-5b35-43e9-9a66-a035afe11c46" path="/var/lib/kubelet/pods/76edfd16-5b35-43e9-9a66-a035afe11c46/volumes" Jun 20 19:16:26.791534 systemd-networkd[1354]: vxlan.calico: Link UP Jun 20 19:16:26.791917 systemd-networkd[1354]: vxlan.calico: Gained carrier Jun 20 19:16:27.201759 containerd[1726]: time="2025-06-20T19:16:27.201711661Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:16:27.205079 containerd[1726]: 
time="2025-06-20T19:16:27.204983559Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.1: active requests=0, bytes read=4661202" Jun 20 19:16:27.208100 containerd[1726]: time="2025-06-20T19:16:27.208065294Z" level=info msg="ImageCreate event name:\"sha256:f9c2addb6553484a4cf8cf5e38959c95aff70d213991bb2626aab9eb9b0ce51c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:16:27.211858 containerd[1726]: time="2025-06-20T19:16:27.211623004Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:7f323954f2f741238d256690a674536bf562d4b4bd7cd6bab3c21a0a1327e1fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:16:27.212206 containerd[1726]: time="2025-06-20T19:16:27.212182981Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.1\" with image id \"sha256:f9c2addb6553484a4cf8cf5e38959c95aff70d213991bb2626aab9eb9b0ce51c\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:7f323954f2f741238d256690a674536bf562d4b4bd7cd6bab3c21a0a1327e1fc\", size \"6153897\" in 1.411171907s" Jun 20 19:16:27.212250 containerd[1726]: time="2025-06-20T19:16:27.212213058Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.1\" returns image reference \"sha256:f9c2addb6553484a4cf8cf5e38959c95aff70d213991bb2626aab9eb9b0ce51c\"" Jun 20 19:16:27.214810 containerd[1726]: time="2025-06-20T19:16:27.214759832Z" level=info msg="CreateContainer within sandbox \"dd84bedd63d4ecfce205949487da9160b59d62ae22cfd53b1a9187bf136758bb\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Jun 20 19:16:27.232770 containerd[1726]: time="2025-06-20T19:16:27.231463052Z" level=info msg="Container c985e6c568bd3df98c04a193968c7fb135759721742e82fc5dbfe4141d563d8f: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:16:27.251772 containerd[1726]: time="2025-06-20T19:16:27.251734820Z" level=info msg="CreateContainer within sandbox 
\"dd84bedd63d4ecfce205949487da9160b59d62ae22cfd53b1a9187bf136758bb\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"c985e6c568bd3df98c04a193968c7fb135759721742e82fc5dbfe4141d563d8f\"" Jun 20 19:16:27.253666 containerd[1726]: time="2025-06-20T19:16:27.252310249Z" level=info msg="StartContainer for \"c985e6c568bd3df98c04a193968c7fb135759721742e82fc5dbfe4141d563d8f\"" Jun 20 19:16:27.253666 containerd[1726]: time="2025-06-20T19:16:27.253629627Z" level=info msg="connecting to shim c985e6c568bd3df98c04a193968c7fb135759721742e82fc5dbfe4141d563d8f" address="unix:///run/containerd/s/8cb6c6e524a7c5e50154cf1ff48031aee023cd8ca4f276059c472a9f6b9d65f0" protocol=ttrpc version=3 Jun 20 19:16:27.273977 systemd[1]: Started cri-containerd-c985e6c568bd3df98c04a193968c7fb135759721742e82fc5dbfe4141d563d8f.scope - libcontainer container c985e6c568bd3df98c04a193968c7fb135759721742e82fc5dbfe4141d563d8f. Jun 20 19:16:27.321172 containerd[1726]: time="2025-06-20T19:16:27.321100475Z" level=info msg="StartContainer for \"c985e6c568bd3df98c04a193968c7fb135759721742e82fc5dbfe4141d563d8f\" returns successfully" Jun 20 19:16:27.324262 containerd[1726]: time="2025-06-20T19:16:27.323918200Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.1\"" Jun 20 19:16:28.331001 systemd-networkd[1354]: vxlan.calico: Gained IPv6LL Jun 20 19:16:29.746569 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3461304559.mount: Deactivated successfully. 
Jun 20 19:16:29.797723 containerd[1726]: time="2025-06-20T19:16:29.797674646Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:16:29.800068 containerd[1726]: time="2025-06-20T19:16:29.799982028Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.1: active requests=0, bytes read=33086345" Jun 20 19:16:29.803114 containerd[1726]: time="2025-06-20T19:16:29.803087737Z" level=info msg="ImageCreate event name:\"sha256:a8d73c8fd22b3a7a28e9baab63169fb459bc504d71d871f96225c4f2d5e660a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:16:29.806760 containerd[1726]: time="2025-06-20T19:16:29.806711858Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:4b8bcb8b4fc05026ba811bf0b25b736086c1b8b26a83a9039a84dd3a06b06bd4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:16:29.807527 containerd[1726]: time="2025-06-20T19:16:29.807161730Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.1\" with image id \"sha256:a8d73c8fd22b3a7a28e9baab63169fb459bc504d71d871f96225c4f2d5e660a5\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:4b8bcb8b4fc05026ba811bf0b25b736086c1b8b26a83a9039a84dd3a06b06bd4\", size \"33086175\" in 2.483209036s" Jun 20 19:16:29.807527 containerd[1726]: time="2025-06-20T19:16:29.807194391Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.1\" returns image reference \"sha256:a8d73c8fd22b3a7a28e9baab63169fb459bc504d71d871f96225c4f2d5e660a5\"" Jun 20 19:16:29.809606 containerd[1726]: time="2025-06-20T19:16:29.809580314Z" level=info msg="CreateContainer within sandbox \"dd84bedd63d4ecfce205949487da9160b59d62ae22cfd53b1a9187bf136758bb\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Jun 20 19:16:29.827728 
containerd[1726]: time="2025-06-20T19:16:29.824597967Z" level=info msg="Container 03bbe60374b46edb74dcc9bbc6e2a0b976f6ae000c3331ad3a51bb6ae8bd4521: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:16:29.840700 containerd[1726]: time="2025-06-20T19:16:29.840670314Z" level=info msg="CreateContainer within sandbox \"dd84bedd63d4ecfce205949487da9160b59d62ae22cfd53b1a9187bf136758bb\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"03bbe60374b46edb74dcc9bbc6e2a0b976f6ae000c3331ad3a51bb6ae8bd4521\"" Jun 20 19:16:29.841357 containerd[1726]: time="2025-06-20T19:16:29.841303742Z" level=info msg="StartContainer for \"03bbe60374b46edb74dcc9bbc6e2a0b976f6ae000c3331ad3a51bb6ae8bd4521\"" Jun 20 19:16:29.842528 containerd[1726]: time="2025-06-20T19:16:29.842468630Z" level=info msg="connecting to shim 03bbe60374b46edb74dcc9bbc6e2a0b976f6ae000c3331ad3a51bb6ae8bd4521" address="unix:///run/containerd/s/8cb6c6e524a7c5e50154cf1ff48031aee023cd8ca4f276059c472a9f6b9d65f0" protocol=ttrpc version=3 Jun 20 19:16:29.866979 systemd[1]: Started cri-containerd-03bbe60374b46edb74dcc9bbc6e2a0b976f6ae000c3331ad3a51bb6ae8bd4521.scope - libcontainer container 03bbe60374b46edb74dcc9bbc6e2a0b976f6ae000c3331ad3a51bb6ae8bd4521. 
Jun 20 19:16:29.919124 containerd[1726]: time="2025-06-20T19:16:29.919009785Z" level=info msg="StartContainer for \"03bbe60374b46edb74dcc9bbc6e2a0b976f6ae000c3331ad3a51bb6ae8bd4521\" returns successfully" Jun 20 19:16:29.943814 kubelet[3151]: I0620 19:16:29.943745 3151 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-69f478c6d4-gvbm9" podStartSLOduration=0.936305568 podStartE2EDuration="4.943632264s" podCreationTimestamp="2025-06-20 19:16:25 +0000 UTC" firstStartedPulling="2025-06-20 19:16:25.800592844 +0000 UTC m=+53.155491781" lastFinishedPulling="2025-06-20 19:16:29.807919532 +0000 UTC m=+57.162818477" observedRunningTime="2025-06-20 19:16:29.941806724 +0000 UTC m=+57.296705669" watchObservedRunningTime="2025-06-20 19:16:29.943632264 +0000 UTC m=+57.298531207" Jun 20 19:16:31.734570 containerd[1726]: time="2025-06-20T19:16:31.734524790Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fc66dd6d7-4s89b,Uid:16f3dad5-c3e2-404c-8ef5-6e879cea8e8b,Namespace:calico-apiserver,Attempt:0,}" Jun 20 19:16:31.830179 systemd-networkd[1354]: cali424a608b863: Link UP Jun 20 19:16:31.832206 systemd-networkd[1354]: cali424a608b863: Gained carrier Jun 20 19:16:31.852697 containerd[1726]: 2025-06-20 19:16:31.769 [INFO][4696] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344.1.0--a--657d644de8-k8s-calico--apiserver--7fc66dd6d7--4s89b-eth0 calico-apiserver-7fc66dd6d7- calico-apiserver 16f3dad5-c3e2-404c-8ef5-6e879cea8e8b 890 0 2025-06-20 19:15:47 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7fc66dd6d7 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4344.1.0-a-657d644de8 calico-apiserver-7fc66dd6d7-4s89b eth0 calico-apiserver [] [] [kns.calico-apiserver 
ksa.calico-apiserver.calico-apiserver] cali424a608b863 [] [] }} ContainerID="3a0eeca5b0b826687e2f5e1847f9382c5f92ad03f7a1ac3acb5911f2013bf656" Namespace="calico-apiserver" Pod="calico-apiserver-7fc66dd6d7-4s89b" WorkloadEndpoint="ci--4344.1.0--a--657d644de8-k8s-calico--apiserver--7fc66dd6d7--4s89b-" Jun 20 19:16:31.852697 containerd[1726]: 2025-06-20 19:16:31.769 [INFO][4696] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3a0eeca5b0b826687e2f5e1847f9382c5f92ad03f7a1ac3acb5911f2013bf656" Namespace="calico-apiserver" Pod="calico-apiserver-7fc66dd6d7-4s89b" WorkloadEndpoint="ci--4344.1.0--a--657d644de8-k8s-calico--apiserver--7fc66dd6d7--4s89b-eth0" Jun 20 19:16:31.852697 containerd[1726]: 2025-06-20 19:16:31.789 [INFO][4707] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3a0eeca5b0b826687e2f5e1847f9382c5f92ad03f7a1ac3acb5911f2013bf656" HandleID="k8s-pod-network.3a0eeca5b0b826687e2f5e1847f9382c5f92ad03f7a1ac3acb5911f2013bf656" Workload="ci--4344.1.0--a--657d644de8-k8s-calico--apiserver--7fc66dd6d7--4s89b-eth0" Jun 20 19:16:31.852925 containerd[1726]: 2025-06-20 19:16:31.790 [INFO][4707] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3a0eeca5b0b826687e2f5e1847f9382c5f92ad03f7a1ac3acb5911f2013bf656" HandleID="k8s-pod-network.3a0eeca5b0b826687e2f5e1847f9382c5f92ad03f7a1ac3acb5911f2013bf656" Workload="ci--4344.1.0--a--657d644de8-k8s-calico--apiserver--7fc66dd6d7--4s89b-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c5840), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4344.1.0-a-657d644de8", "pod":"calico-apiserver-7fc66dd6d7-4s89b", "timestamp":"2025-06-20 19:16:31.789942281 +0000 UTC"}, Hostname:"ci-4344.1.0-a-657d644de8", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 20 19:16:31.852925 
containerd[1726]: 2025-06-20 19:16:31.790 [INFO][4707] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 20 19:16:31.852925 containerd[1726]: 2025-06-20 19:16:31.790 [INFO][4707] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jun 20 19:16:31.852925 containerd[1726]: 2025-06-20 19:16:31.790 [INFO][4707] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344.1.0-a-657d644de8' Jun 20 19:16:31.852925 containerd[1726]: 2025-06-20 19:16:31.794 [INFO][4707] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3a0eeca5b0b826687e2f5e1847f9382c5f92ad03f7a1ac3acb5911f2013bf656" host="ci-4344.1.0-a-657d644de8" Jun 20 19:16:31.852925 containerd[1726]: 2025-06-20 19:16:31.798 [INFO][4707] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344.1.0-a-657d644de8" Jun 20 19:16:31.852925 containerd[1726]: 2025-06-20 19:16:31.801 [INFO][4707] ipam/ipam.go 511: Trying affinity for 192.168.8.0/26 host="ci-4344.1.0-a-657d644de8" Jun 20 19:16:31.852925 containerd[1726]: 2025-06-20 19:16:31.803 [INFO][4707] ipam/ipam.go 158: Attempting to load block cidr=192.168.8.0/26 host="ci-4344.1.0-a-657d644de8" Jun 20 19:16:31.852925 containerd[1726]: 2025-06-20 19:16:31.804 [INFO][4707] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.8.0/26 host="ci-4344.1.0-a-657d644de8" Jun 20 19:16:31.853132 containerd[1726]: 2025-06-20 19:16:31.805 [INFO][4707] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.8.0/26 handle="k8s-pod-network.3a0eeca5b0b826687e2f5e1847f9382c5f92ad03f7a1ac3acb5911f2013bf656" host="ci-4344.1.0-a-657d644de8" Jun 20 19:16:31.853132 containerd[1726]: 2025-06-20 19:16:31.806 [INFO][4707] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.3a0eeca5b0b826687e2f5e1847f9382c5f92ad03f7a1ac3acb5911f2013bf656 Jun 20 19:16:31.853132 containerd[1726]: 2025-06-20 19:16:31.810 [INFO][4707] ipam/ipam.go 1243: Writing block in order to claim IPs 
block=192.168.8.0/26 handle="k8s-pod-network.3a0eeca5b0b826687e2f5e1847f9382c5f92ad03f7a1ac3acb5911f2013bf656" host="ci-4344.1.0-a-657d644de8" Jun 20 19:16:31.853132 containerd[1726]: 2025-06-20 19:16:31.823 [INFO][4707] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.8.2/26] block=192.168.8.0/26 handle="k8s-pod-network.3a0eeca5b0b826687e2f5e1847f9382c5f92ad03f7a1ac3acb5911f2013bf656" host="ci-4344.1.0-a-657d644de8" Jun 20 19:16:31.853132 containerd[1726]: 2025-06-20 19:16:31.823 [INFO][4707] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.8.2/26] handle="k8s-pod-network.3a0eeca5b0b826687e2f5e1847f9382c5f92ad03f7a1ac3acb5911f2013bf656" host="ci-4344.1.0-a-657d644de8" Jun 20 19:16:31.853132 containerd[1726]: 2025-06-20 19:16:31.823 [INFO][4707] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jun 20 19:16:31.853132 containerd[1726]: 2025-06-20 19:16:31.823 [INFO][4707] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.8.2/26] IPv6=[] ContainerID="3a0eeca5b0b826687e2f5e1847f9382c5f92ad03f7a1ac3acb5911f2013bf656" HandleID="k8s-pod-network.3a0eeca5b0b826687e2f5e1847f9382c5f92ad03f7a1ac3acb5911f2013bf656" Workload="ci--4344.1.0--a--657d644de8-k8s-calico--apiserver--7fc66dd6d7--4s89b-eth0" Jun 20 19:16:31.853305 containerd[1726]: 2025-06-20 19:16:31.825 [INFO][4696] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3a0eeca5b0b826687e2f5e1847f9382c5f92ad03f7a1ac3acb5911f2013bf656" Namespace="calico-apiserver" Pod="calico-apiserver-7fc66dd6d7-4s89b" WorkloadEndpoint="ci--4344.1.0--a--657d644de8-k8s-calico--apiserver--7fc66dd6d7--4s89b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.0--a--657d644de8-k8s-calico--apiserver--7fc66dd6d7--4s89b-eth0", GenerateName:"calico-apiserver-7fc66dd6d7-", Namespace:"calico-apiserver", SelfLink:"", UID:"16f3dad5-c3e2-404c-8ef5-6e879cea8e8b", ResourceVersion:"890", Generation:0, 
CreationTimestamp:time.Date(2025, time.June, 20, 19, 15, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7fc66dd6d7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.0-a-657d644de8", ContainerID:"", Pod:"calico-apiserver-7fc66dd6d7-4s89b", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.8.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali424a608b863", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:16:31.853365 containerd[1726]: 2025-06-20 19:16:31.826 [INFO][4696] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.8.2/32] ContainerID="3a0eeca5b0b826687e2f5e1847f9382c5f92ad03f7a1ac3acb5911f2013bf656" Namespace="calico-apiserver" Pod="calico-apiserver-7fc66dd6d7-4s89b" WorkloadEndpoint="ci--4344.1.0--a--657d644de8-k8s-calico--apiserver--7fc66dd6d7--4s89b-eth0" Jun 20 19:16:31.853365 containerd[1726]: 2025-06-20 19:16:31.826 [INFO][4696] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali424a608b863 ContainerID="3a0eeca5b0b826687e2f5e1847f9382c5f92ad03f7a1ac3acb5911f2013bf656" Namespace="calico-apiserver" Pod="calico-apiserver-7fc66dd6d7-4s89b" WorkloadEndpoint="ci--4344.1.0--a--657d644de8-k8s-calico--apiserver--7fc66dd6d7--4s89b-eth0" Jun 20 19:16:31.853365 containerd[1726]: 2025-06-20 19:16:31.832 [INFO][4696] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3a0eeca5b0b826687e2f5e1847f9382c5f92ad03f7a1ac3acb5911f2013bf656" Namespace="calico-apiserver" Pod="calico-apiserver-7fc66dd6d7-4s89b" WorkloadEndpoint="ci--4344.1.0--a--657d644de8-k8s-calico--apiserver--7fc66dd6d7--4s89b-eth0" Jun 20 19:16:31.853435 containerd[1726]: 2025-06-20 19:16:31.833 [INFO][4696] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3a0eeca5b0b826687e2f5e1847f9382c5f92ad03f7a1ac3acb5911f2013bf656" Namespace="calico-apiserver" Pod="calico-apiserver-7fc66dd6d7-4s89b" WorkloadEndpoint="ci--4344.1.0--a--657d644de8-k8s-calico--apiserver--7fc66dd6d7--4s89b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.0--a--657d644de8-k8s-calico--apiserver--7fc66dd6d7--4s89b-eth0", GenerateName:"calico-apiserver-7fc66dd6d7-", Namespace:"calico-apiserver", SelfLink:"", UID:"16f3dad5-c3e2-404c-8ef5-6e879cea8e8b", ResourceVersion:"890", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 15, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7fc66dd6d7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.0-a-657d644de8", ContainerID:"3a0eeca5b0b826687e2f5e1847f9382c5f92ad03f7a1ac3acb5911f2013bf656", Pod:"calico-apiserver-7fc66dd6d7-4s89b", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.8.2/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali424a608b863", MAC:"c6:af:7e:29:a2:c5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:16:31.854230 containerd[1726]: 2025-06-20 19:16:31.848 [INFO][4696] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3a0eeca5b0b826687e2f5e1847f9382c5f92ad03f7a1ac3acb5911f2013bf656" Namespace="calico-apiserver" Pod="calico-apiserver-7fc66dd6d7-4s89b" WorkloadEndpoint="ci--4344.1.0--a--657d644de8-k8s-calico--apiserver--7fc66dd6d7--4s89b-eth0" Jun 20 19:16:31.897716 containerd[1726]: time="2025-06-20T19:16:31.897657617Z" level=info msg="connecting to shim 3a0eeca5b0b826687e2f5e1847f9382c5f92ad03f7a1ac3acb5911f2013bf656" address="unix:///run/containerd/s/8a7c680f36c2e67b3c79d68875fb66f30d519fcf0bca2a5d88232d040aca03de" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:16:31.920955 systemd[1]: Started cri-containerd-3a0eeca5b0b826687e2f5e1847f9382c5f92ad03f7a1ac3acb5911f2013bf656.scope - libcontainer container 3a0eeca5b0b826687e2f5e1847f9382c5f92ad03f7a1ac3acb5911f2013bf656. 
Jun 20 19:16:31.966604 containerd[1726]: time="2025-06-20T19:16:31.966565829Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fc66dd6d7-4s89b,Uid:16f3dad5-c3e2-404c-8ef5-6e879cea8e8b,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"3a0eeca5b0b826687e2f5e1847f9382c5f92ad03f7a1ac3acb5911f2013bf656\"" Jun 20 19:16:31.968241 containerd[1726]: time="2025-06-20T19:16:31.967980010Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.1\"" Jun 20 19:16:32.735735 containerd[1726]: time="2025-06-20T19:16:32.735349090Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4b5m5,Uid:48026b39-3c6c-4056-8581-3b693c168b53,Namespace:calico-system,Attempt:0,}" Jun 20 19:16:32.736648 containerd[1726]: time="2025-06-20T19:16:32.736097971Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-dc7b455cb-d49s2,Uid:bb494847-e0c1-435c-8cf2-807c46e8aca7,Namespace:calico-system,Attempt:0,}" Jun 20 19:16:32.871063 systemd-networkd[1354]: cali797b4bbc43f: Link UP Jun 20 19:16:32.871675 systemd-networkd[1354]: cali797b4bbc43f: Gained carrier Jun 20 19:16:32.898103 containerd[1726]: 2025-06-20 19:16:32.786 [INFO][4785] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344.1.0--a--657d644de8-k8s-goldmane--dc7b455cb--d49s2-eth0 goldmane-dc7b455cb- calico-system bb494847-e0c1-435c-8cf2-807c46e8aca7 892 0 2025-06-20 19:15:51 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:dc7b455cb projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4344.1.0-a-657d644de8 goldmane-dc7b455cb-d49s2 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali797b4bbc43f [] [] }} ContainerID="b2e6823f3c7014a9c7ffd56ebdce07b4ab0e26eb5cbcab83f1fc09a7d2151393" Namespace="calico-system" Pod="goldmane-dc7b455cb-d49s2" 
WorkloadEndpoint="ci--4344.1.0--a--657d644de8-k8s-goldmane--dc7b455cb--d49s2-" Jun 20 19:16:32.898103 containerd[1726]: 2025-06-20 19:16:32.786 [INFO][4785] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b2e6823f3c7014a9c7ffd56ebdce07b4ab0e26eb5cbcab83f1fc09a7d2151393" Namespace="calico-system" Pod="goldmane-dc7b455cb-d49s2" WorkloadEndpoint="ci--4344.1.0--a--657d644de8-k8s-goldmane--dc7b455cb--d49s2-eth0" Jun 20 19:16:32.898103 containerd[1726]: 2025-06-20 19:16:32.829 [INFO][4802] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b2e6823f3c7014a9c7ffd56ebdce07b4ab0e26eb5cbcab83f1fc09a7d2151393" HandleID="k8s-pod-network.b2e6823f3c7014a9c7ffd56ebdce07b4ab0e26eb5cbcab83f1fc09a7d2151393" Workload="ci--4344.1.0--a--657d644de8-k8s-goldmane--dc7b455cb--d49s2-eth0" Jun 20 19:16:32.898617 containerd[1726]: 2025-06-20 19:16:32.829 [INFO][4802] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b2e6823f3c7014a9c7ffd56ebdce07b4ab0e26eb5cbcab83f1fc09a7d2151393" HandleID="k8s-pod-network.b2e6823f3c7014a9c7ffd56ebdce07b4ab0e26eb5cbcab83f1fc09a7d2151393" Workload="ci--4344.1.0--a--657d644de8-k8s-goldmane--dc7b455cb--d49s2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5020), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4344.1.0-a-657d644de8", "pod":"goldmane-dc7b455cb-d49s2", "timestamp":"2025-06-20 19:16:32.829618864 +0000 UTC"}, Hostname:"ci-4344.1.0-a-657d644de8", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 20 19:16:32.898617 containerd[1726]: 2025-06-20 19:16:32.830 [INFO][4802] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 20 19:16:32.898617 containerd[1726]: 2025-06-20 19:16:32.830 [INFO][4802] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jun 20 19:16:32.898617 containerd[1726]: 2025-06-20 19:16:32.830 [INFO][4802] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344.1.0-a-657d644de8' Jun 20 19:16:32.898617 containerd[1726]: 2025-06-20 19:16:32.837 [INFO][4802] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b2e6823f3c7014a9c7ffd56ebdce07b4ab0e26eb5cbcab83f1fc09a7d2151393" host="ci-4344.1.0-a-657d644de8" Jun 20 19:16:32.898617 containerd[1726]: 2025-06-20 19:16:32.841 [INFO][4802] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344.1.0-a-657d644de8" Jun 20 19:16:32.898617 containerd[1726]: 2025-06-20 19:16:32.844 [INFO][4802] ipam/ipam.go 511: Trying affinity for 192.168.8.0/26 host="ci-4344.1.0-a-657d644de8" Jun 20 19:16:32.898617 containerd[1726]: 2025-06-20 19:16:32.846 [INFO][4802] ipam/ipam.go 158: Attempting to load block cidr=192.168.8.0/26 host="ci-4344.1.0-a-657d644de8" Jun 20 19:16:32.898617 containerd[1726]: 2025-06-20 19:16:32.848 [INFO][4802] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.8.0/26 host="ci-4344.1.0-a-657d644de8" Jun 20 19:16:32.899186 containerd[1726]: 2025-06-20 19:16:32.848 [INFO][4802] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.8.0/26 handle="k8s-pod-network.b2e6823f3c7014a9c7ffd56ebdce07b4ab0e26eb5cbcab83f1fc09a7d2151393" host="ci-4344.1.0-a-657d644de8" Jun 20 19:16:32.899186 containerd[1726]: 2025-06-20 19:16:32.849 [INFO][4802] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.b2e6823f3c7014a9c7ffd56ebdce07b4ab0e26eb5cbcab83f1fc09a7d2151393 Jun 20 19:16:32.899186 containerd[1726]: 2025-06-20 19:16:32.854 [INFO][4802] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.8.0/26 handle="k8s-pod-network.b2e6823f3c7014a9c7ffd56ebdce07b4ab0e26eb5cbcab83f1fc09a7d2151393" host="ci-4344.1.0-a-657d644de8" Jun 20 19:16:32.899186 containerd[1726]: 2025-06-20 19:16:32.861 [INFO][4802] ipam/ipam.go 1256: Successfully claimed 
IPs: [192.168.8.3/26] block=192.168.8.0/26 handle="k8s-pod-network.b2e6823f3c7014a9c7ffd56ebdce07b4ab0e26eb5cbcab83f1fc09a7d2151393" host="ci-4344.1.0-a-657d644de8" Jun 20 19:16:32.899186 containerd[1726]: 2025-06-20 19:16:32.861 [INFO][4802] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.8.3/26] handle="k8s-pod-network.b2e6823f3c7014a9c7ffd56ebdce07b4ab0e26eb5cbcab83f1fc09a7d2151393" host="ci-4344.1.0-a-657d644de8" Jun 20 19:16:32.899186 containerd[1726]: 2025-06-20 19:16:32.861 [INFO][4802] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jun 20 19:16:32.899186 containerd[1726]: 2025-06-20 19:16:32.861 [INFO][4802] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.8.3/26] IPv6=[] ContainerID="b2e6823f3c7014a9c7ffd56ebdce07b4ab0e26eb5cbcab83f1fc09a7d2151393" HandleID="k8s-pod-network.b2e6823f3c7014a9c7ffd56ebdce07b4ab0e26eb5cbcab83f1fc09a7d2151393" Workload="ci--4344.1.0--a--657d644de8-k8s-goldmane--dc7b455cb--d49s2-eth0" Jun 20 19:16:32.899644 containerd[1726]: 2025-06-20 19:16:32.865 [INFO][4785] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b2e6823f3c7014a9c7ffd56ebdce07b4ab0e26eb5cbcab83f1fc09a7d2151393" Namespace="calico-system" Pod="goldmane-dc7b455cb-d49s2" WorkloadEndpoint="ci--4344.1.0--a--657d644de8-k8s-goldmane--dc7b455cb--d49s2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.0--a--657d644de8-k8s-goldmane--dc7b455cb--d49s2-eth0", GenerateName:"goldmane-dc7b455cb-", Namespace:"calico-system", SelfLink:"", UID:"bb494847-e0c1-435c-8cf2-807c46e8aca7", ResourceVersion:"892", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 15, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"dc7b455cb", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.0-a-657d644de8", ContainerID:"", Pod:"goldmane-dc7b455cb-d49s2", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.8.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali797b4bbc43f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:16:32.899644 containerd[1726]: 2025-06-20 19:16:32.866 [INFO][4785] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.8.3/32] ContainerID="b2e6823f3c7014a9c7ffd56ebdce07b4ab0e26eb5cbcab83f1fc09a7d2151393" Namespace="calico-system" Pod="goldmane-dc7b455cb-d49s2" WorkloadEndpoint="ci--4344.1.0--a--657d644de8-k8s-goldmane--dc7b455cb--d49s2-eth0" Jun 20 19:16:32.899935 containerd[1726]: 2025-06-20 19:16:32.867 [INFO][4785] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali797b4bbc43f ContainerID="b2e6823f3c7014a9c7ffd56ebdce07b4ab0e26eb5cbcab83f1fc09a7d2151393" Namespace="calico-system" Pod="goldmane-dc7b455cb-d49s2" WorkloadEndpoint="ci--4344.1.0--a--657d644de8-k8s-goldmane--dc7b455cb--d49s2-eth0" Jun 20 19:16:32.899935 containerd[1726]: 2025-06-20 19:16:32.871 [INFO][4785] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b2e6823f3c7014a9c7ffd56ebdce07b4ab0e26eb5cbcab83f1fc09a7d2151393" Namespace="calico-system" Pod="goldmane-dc7b455cb-d49s2" WorkloadEndpoint="ci--4344.1.0--a--657d644de8-k8s-goldmane--dc7b455cb--d49s2-eth0" Jun 20 19:16:32.899985 containerd[1726]: 2025-06-20 19:16:32.873 [INFO][4785] cni-plugin/k8s.go 446: Added Mac, interface name, and active container 
ID to endpoint ContainerID="b2e6823f3c7014a9c7ffd56ebdce07b4ab0e26eb5cbcab83f1fc09a7d2151393" Namespace="calico-system" Pod="goldmane-dc7b455cb-d49s2" WorkloadEndpoint="ci--4344.1.0--a--657d644de8-k8s-goldmane--dc7b455cb--d49s2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.0--a--657d644de8-k8s-goldmane--dc7b455cb--d49s2-eth0", GenerateName:"goldmane-dc7b455cb-", Namespace:"calico-system", SelfLink:"", UID:"bb494847-e0c1-435c-8cf2-807c46e8aca7", ResourceVersion:"892", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 15, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"dc7b455cb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.0-a-657d644de8", ContainerID:"b2e6823f3c7014a9c7ffd56ebdce07b4ab0e26eb5cbcab83f1fc09a7d2151393", Pod:"goldmane-dc7b455cb-d49s2", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.8.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali797b4bbc43f", MAC:"d6:27:c2:01:41:cf", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:16:32.900050 containerd[1726]: 2025-06-20 19:16:32.891 [INFO][4785] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b2e6823f3c7014a9c7ffd56ebdce07b4ab0e26eb5cbcab83f1fc09a7d2151393" Namespace="calico-system" 
Pod="goldmane-dc7b455cb-d49s2" WorkloadEndpoint="ci--4344.1.0--a--657d644de8-k8s-goldmane--dc7b455cb--d49s2-eth0" Jun 20 19:16:32.943423 containerd[1726]: time="2025-06-20T19:16:32.942937608Z" level=info msg="connecting to shim b2e6823f3c7014a9c7ffd56ebdce07b4ab0e26eb5cbcab83f1fc09a7d2151393" address="unix:///run/containerd/s/a3ffb226458c5c876de47efe287db352593098c712c838033b34d9803f2bdf24" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:16:32.968169 systemd[1]: Started cri-containerd-b2e6823f3c7014a9c7ffd56ebdce07b4ab0e26eb5cbcab83f1fc09a7d2151393.scope - libcontainer container b2e6823f3c7014a9c7ffd56ebdce07b4ab0e26eb5cbcab83f1fc09a7d2151393. Jun 20 19:16:32.982761 systemd-networkd[1354]: cali3fd1a3ba59c: Link UP Jun 20 19:16:32.984741 systemd-networkd[1354]: cali3fd1a3ba59c: Gained carrier Jun 20 19:16:33.004372 containerd[1726]: 2025-06-20 19:16:32.791 [INFO][4776] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344.1.0--a--657d644de8-k8s-csi--node--driver--4b5m5-eth0 csi-node-driver- calico-system 48026b39-3c6c-4056-8581-3b693c168b53 736 0 2025-06-20 19:15:51 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:896496fb5 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4344.1.0-a-657d644de8 csi-node-driver-4b5m5 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali3fd1a3ba59c [] [] }} ContainerID="d4d85abd47b401728e0245562f900bb3993fdf9b0ff85ef43d551ef890e91d89" Namespace="calico-system" Pod="csi-node-driver-4b5m5" WorkloadEndpoint="ci--4344.1.0--a--657d644de8-k8s-csi--node--driver--4b5m5-" Jun 20 19:16:33.004372 containerd[1726]: 2025-06-20 19:16:32.793 [INFO][4776] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="d4d85abd47b401728e0245562f900bb3993fdf9b0ff85ef43d551ef890e91d89" Namespace="calico-system" Pod="csi-node-driver-4b5m5" WorkloadEndpoint="ci--4344.1.0--a--657d644de8-k8s-csi--node--driver--4b5m5-eth0" Jun 20 19:16:33.004372 containerd[1726]: 2025-06-20 19:16:32.835 [INFO][4807] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d4d85abd47b401728e0245562f900bb3993fdf9b0ff85ef43d551ef890e91d89" HandleID="k8s-pod-network.d4d85abd47b401728e0245562f900bb3993fdf9b0ff85ef43d551ef890e91d89" Workload="ci--4344.1.0--a--657d644de8-k8s-csi--node--driver--4b5m5-eth0" Jun 20 19:16:33.004878 containerd[1726]: 2025-06-20 19:16:32.836 [INFO][4807] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d4d85abd47b401728e0245562f900bb3993fdf9b0ff85ef43d551ef890e91d89" HandleID="k8s-pod-network.d4d85abd47b401728e0245562f900bb3993fdf9b0ff85ef43d551ef890e91d89" Workload="ci--4344.1.0--a--657d644de8-k8s-csi--node--driver--4b5m5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003320c0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4344.1.0-a-657d644de8", "pod":"csi-node-driver-4b5m5", "timestamp":"2025-06-20 19:16:32.835911005 +0000 UTC"}, Hostname:"ci-4344.1.0-a-657d644de8", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 20 19:16:33.004878 containerd[1726]: 2025-06-20 19:16:32.836 [INFO][4807] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 20 19:16:33.004878 containerd[1726]: 2025-06-20 19:16:32.861 [INFO][4807] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jun 20 19:16:33.004878 containerd[1726]: 2025-06-20 19:16:32.862 [INFO][4807] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344.1.0-a-657d644de8' Jun 20 19:16:33.004878 containerd[1726]: 2025-06-20 19:16:32.938 [INFO][4807] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d4d85abd47b401728e0245562f900bb3993fdf9b0ff85ef43d551ef890e91d89" host="ci-4344.1.0-a-657d644de8" Jun 20 19:16:33.004878 containerd[1726]: 2025-06-20 19:16:32.943 [INFO][4807] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344.1.0-a-657d644de8" Jun 20 19:16:33.004878 containerd[1726]: 2025-06-20 19:16:32.948 [INFO][4807] ipam/ipam.go 511: Trying affinity for 192.168.8.0/26 host="ci-4344.1.0-a-657d644de8" Jun 20 19:16:33.004878 containerd[1726]: 2025-06-20 19:16:32.950 [INFO][4807] ipam/ipam.go 158: Attempting to load block cidr=192.168.8.0/26 host="ci-4344.1.0-a-657d644de8" Jun 20 19:16:33.004878 containerd[1726]: 2025-06-20 19:16:32.958 [INFO][4807] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.8.0/26 host="ci-4344.1.0-a-657d644de8" Jun 20 19:16:33.005142 containerd[1726]: 2025-06-20 19:16:32.958 [INFO][4807] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.8.0/26 handle="k8s-pod-network.d4d85abd47b401728e0245562f900bb3993fdf9b0ff85ef43d551ef890e91d89" host="ci-4344.1.0-a-657d644de8" Jun 20 19:16:33.005142 containerd[1726]: 2025-06-20 19:16:32.961 [INFO][4807] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.d4d85abd47b401728e0245562f900bb3993fdf9b0ff85ef43d551ef890e91d89 Jun 20 19:16:33.005142 containerd[1726]: 2025-06-20 19:16:32.968 [INFO][4807] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.8.0/26 handle="k8s-pod-network.d4d85abd47b401728e0245562f900bb3993fdf9b0ff85ef43d551ef890e91d89" host="ci-4344.1.0-a-657d644de8" Jun 20 19:16:33.005142 containerd[1726]: 2025-06-20 19:16:32.978 [INFO][4807] ipam/ipam.go 1256: Successfully claimed 
IPs: [192.168.8.4/26] block=192.168.8.0/26 handle="k8s-pod-network.d4d85abd47b401728e0245562f900bb3993fdf9b0ff85ef43d551ef890e91d89" host="ci-4344.1.0-a-657d644de8" Jun 20 19:16:33.005142 containerd[1726]: 2025-06-20 19:16:32.978 [INFO][4807] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.8.4/26] handle="k8s-pod-network.d4d85abd47b401728e0245562f900bb3993fdf9b0ff85ef43d551ef890e91d89" host="ci-4344.1.0-a-657d644de8" Jun 20 19:16:33.005142 containerd[1726]: 2025-06-20 19:16:32.978 [INFO][4807] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jun 20 19:16:33.005142 containerd[1726]: 2025-06-20 19:16:32.978 [INFO][4807] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.8.4/26] IPv6=[] ContainerID="d4d85abd47b401728e0245562f900bb3993fdf9b0ff85ef43d551ef890e91d89" HandleID="k8s-pod-network.d4d85abd47b401728e0245562f900bb3993fdf9b0ff85ef43d551ef890e91d89" Workload="ci--4344.1.0--a--657d644de8-k8s-csi--node--driver--4b5m5-eth0" Jun 20 19:16:33.005711 containerd[1726]: 2025-06-20 19:16:32.980 [INFO][4776] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d4d85abd47b401728e0245562f900bb3993fdf9b0ff85ef43d551ef890e91d89" Namespace="calico-system" Pod="csi-node-driver-4b5m5" WorkloadEndpoint="ci--4344.1.0--a--657d644de8-k8s-csi--node--driver--4b5m5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.0--a--657d644de8-k8s-csi--node--driver--4b5m5-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"48026b39-3c6c-4056-8581-3b693c168b53", ResourceVersion:"736", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 15, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"896496fb5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", 
"pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.0-a-657d644de8", ContainerID:"", Pod:"csi-node-driver-4b5m5", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.8.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3fd1a3ba59c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:16:33.005791 containerd[1726]: 2025-06-20 19:16:32.980 [INFO][4776] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.8.4/32] ContainerID="d4d85abd47b401728e0245562f900bb3993fdf9b0ff85ef43d551ef890e91d89" Namespace="calico-system" Pod="csi-node-driver-4b5m5" WorkloadEndpoint="ci--4344.1.0--a--657d644de8-k8s-csi--node--driver--4b5m5-eth0" Jun 20 19:16:33.005791 containerd[1726]: 2025-06-20 19:16:32.980 [INFO][4776] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3fd1a3ba59c ContainerID="d4d85abd47b401728e0245562f900bb3993fdf9b0ff85ef43d551ef890e91d89" Namespace="calico-system" Pod="csi-node-driver-4b5m5" WorkloadEndpoint="ci--4344.1.0--a--657d644de8-k8s-csi--node--driver--4b5m5-eth0" Jun 20 19:16:33.005791 containerd[1726]: 2025-06-20 19:16:32.982 [INFO][4776] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d4d85abd47b401728e0245562f900bb3993fdf9b0ff85ef43d551ef890e91d89" Namespace="calico-system" Pod="csi-node-driver-4b5m5" WorkloadEndpoint="ci--4344.1.0--a--657d644de8-k8s-csi--node--driver--4b5m5-eth0" Jun 20 19:16:33.006338 containerd[1726]: 2025-06-20 19:16:32.982 
[INFO][4776] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d4d85abd47b401728e0245562f900bb3993fdf9b0ff85ef43d551ef890e91d89" Namespace="calico-system" Pod="csi-node-driver-4b5m5" WorkloadEndpoint="ci--4344.1.0--a--657d644de8-k8s-csi--node--driver--4b5m5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.0--a--657d644de8-k8s-csi--node--driver--4b5m5-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"48026b39-3c6c-4056-8581-3b693c168b53", ResourceVersion:"736", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 15, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"896496fb5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.0-a-657d644de8", ContainerID:"d4d85abd47b401728e0245562f900bb3993fdf9b0ff85ef43d551ef890e91d89", Pod:"csi-node-driver-4b5m5", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.8.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3fd1a3ba59c", MAC:"6a:c8:dd:3b:42:08", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:16:33.006413 containerd[1726]: 2025-06-20 19:16:32.998 [INFO][4776] 
cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d4d85abd47b401728e0245562f900bb3993fdf9b0ff85ef43d551ef890e91d89" Namespace="calico-system" Pod="csi-node-driver-4b5m5" WorkloadEndpoint="ci--4344.1.0--a--657d644de8-k8s-csi--node--driver--4b5m5-eth0" Jun 20 19:16:33.064810 containerd[1726]: time="2025-06-20T19:16:33.064693307Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-dc7b455cb-d49s2,Uid:bb494847-e0c1-435c-8cf2-807c46e8aca7,Namespace:calico-system,Attempt:0,} returns sandbox id \"b2e6823f3c7014a9c7ffd56ebdce07b4ab0e26eb5cbcab83f1fc09a7d2151393\"" Jun 20 19:16:33.070741 containerd[1726]: time="2025-06-20T19:16:33.070621210Z" level=info msg="connecting to shim d4d85abd47b401728e0245562f900bb3993fdf9b0ff85ef43d551ef890e91d89" address="unix:///run/containerd/s/b57adc0681725d0113ffa69aad8b95f5a0c91a36cd64c7cabd897af17fa1244d" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:16:33.102776 systemd[1]: Started cri-containerd-d4d85abd47b401728e0245562f900bb3993fdf9b0ff85ef43d551ef890e91d89.scope - libcontainer container d4d85abd47b401728e0245562f900bb3993fdf9b0ff85ef43d551ef890e91d89. 
Jun 20 19:16:33.143581 containerd[1726]: time="2025-06-20T19:16:33.143239951Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4b5m5,Uid:48026b39-3c6c-4056-8581-3b693c168b53,Namespace:calico-system,Attempt:0,} returns sandbox id \"d4d85abd47b401728e0245562f900bb3993fdf9b0ff85ef43d551ef890e91d89\"" Jun 20 19:16:33.643501 systemd-networkd[1354]: cali424a608b863: Gained IPv6LL Jun 20 19:16:33.735393 containerd[1726]: time="2025-06-20T19:16:33.735347116Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6675744cfd-8csx6,Uid:8e45c42b-c96e-4058-a6aa-3130d834d808,Namespace:calico-apiserver,Attempt:0,}" Jun 20 19:16:33.873156 systemd-networkd[1354]: cali819593f8cb0: Link UP Jun 20 19:16:33.873454 systemd-networkd[1354]: cali819593f8cb0: Gained carrier Jun 20 19:16:33.893745 containerd[1726]: 2025-06-20 19:16:33.782 [INFO][4932] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344.1.0--a--657d644de8-k8s-calico--apiserver--6675744cfd--8csx6-eth0 calico-apiserver-6675744cfd- calico-apiserver 8e45c42b-c96e-4058-a6aa-3130d834d808 891 0 2025-06-20 19:15:48 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6675744cfd projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4344.1.0-a-657d644de8 calico-apiserver-6675744cfd-8csx6 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali819593f8cb0 [] [] }} ContainerID="646ee236562b14e6be123c592f49ce1d5dc3994e1d9c8f2d82188fc4fb71c108" Namespace="calico-apiserver" Pod="calico-apiserver-6675744cfd-8csx6" WorkloadEndpoint="ci--4344.1.0--a--657d644de8-k8s-calico--apiserver--6675744cfd--8csx6-" Jun 20 19:16:33.893745 containerd[1726]: 2025-06-20 19:16:33.782 [INFO][4932] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="646ee236562b14e6be123c592f49ce1d5dc3994e1d9c8f2d82188fc4fb71c108" Namespace="calico-apiserver" Pod="calico-apiserver-6675744cfd-8csx6" WorkloadEndpoint="ci--4344.1.0--a--657d644de8-k8s-calico--apiserver--6675744cfd--8csx6-eth0" Jun 20 19:16:33.893745 containerd[1726]: 2025-06-20 19:16:33.815 [INFO][4944] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="646ee236562b14e6be123c592f49ce1d5dc3994e1d9c8f2d82188fc4fb71c108" HandleID="k8s-pod-network.646ee236562b14e6be123c592f49ce1d5dc3994e1d9c8f2d82188fc4fb71c108" Workload="ci--4344.1.0--a--657d644de8-k8s-calico--apiserver--6675744cfd--8csx6-eth0" Jun 20 19:16:33.894255 containerd[1726]: 2025-06-20 19:16:33.816 [INFO][4944] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="646ee236562b14e6be123c592f49ce1d5dc3994e1d9c8f2d82188fc4fb71c108" HandleID="k8s-pod-network.646ee236562b14e6be123c592f49ce1d5dc3994e1d9c8f2d82188fc4fb71c108" Workload="ci--4344.1.0--a--657d644de8-k8s-calico--apiserver--6675744cfd--8csx6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5230), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4344.1.0-a-657d644de8", "pod":"calico-apiserver-6675744cfd-8csx6", "timestamp":"2025-06-20 19:16:33.815974137 +0000 UTC"}, Hostname:"ci-4344.1.0-a-657d644de8", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 20 19:16:33.894255 containerd[1726]: 2025-06-20 19:16:33.816 [INFO][4944] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 20 19:16:33.894255 containerd[1726]: 2025-06-20 19:16:33.816 [INFO][4944] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jun 20 19:16:33.894255 containerd[1726]: 2025-06-20 19:16:33.816 [INFO][4944] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344.1.0-a-657d644de8' Jun 20 19:16:33.894255 containerd[1726]: 2025-06-20 19:16:33.823 [INFO][4944] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.646ee236562b14e6be123c592f49ce1d5dc3994e1d9c8f2d82188fc4fb71c108" host="ci-4344.1.0-a-657d644de8" Jun 20 19:16:33.894255 containerd[1726]: 2025-06-20 19:16:33.829 [INFO][4944] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344.1.0-a-657d644de8" Jun 20 19:16:33.894255 containerd[1726]: 2025-06-20 19:16:33.836 [INFO][4944] ipam/ipam.go 511: Trying affinity for 192.168.8.0/26 host="ci-4344.1.0-a-657d644de8" Jun 20 19:16:33.894255 containerd[1726]: 2025-06-20 19:16:33.838 [INFO][4944] ipam/ipam.go 158: Attempting to load block cidr=192.168.8.0/26 host="ci-4344.1.0-a-657d644de8" Jun 20 19:16:33.894255 containerd[1726]: 2025-06-20 19:16:33.840 [INFO][4944] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.8.0/26 host="ci-4344.1.0-a-657d644de8" Jun 20 19:16:33.894486 containerd[1726]: 2025-06-20 19:16:33.841 [INFO][4944] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.8.0/26 handle="k8s-pod-network.646ee236562b14e6be123c592f49ce1d5dc3994e1d9c8f2d82188fc4fb71c108" host="ci-4344.1.0-a-657d644de8" Jun 20 19:16:33.894486 containerd[1726]: 2025-06-20 19:16:33.843 [INFO][4944] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.646ee236562b14e6be123c592f49ce1d5dc3994e1d9c8f2d82188fc4fb71c108 Jun 20 19:16:33.894486 containerd[1726]: 2025-06-20 19:16:33.848 [INFO][4944] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.8.0/26 handle="k8s-pod-network.646ee236562b14e6be123c592f49ce1d5dc3994e1d9c8f2d82188fc4fb71c108" host="ci-4344.1.0-a-657d644de8" Jun 20 19:16:33.894486 containerd[1726]: 2025-06-20 19:16:33.861 [INFO][4944] ipam/ipam.go 1256: Successfully claimed 
IPs: [192.168.8.5/26] block=192.168.8.0/26 handle="k8s-pod-network.646ee236562b14e6be123c592f49ce1d5dc3994e1d9c8f2d82188fc4fb71c108" host="ci-4344.1.0-a-657d644de8" Jun 20 19:16:33.894486 containerd[1726]: 2025-06-20 19:16:33.861 [INFO][4944] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.8.5/26] handle="k8s-pod-network.646ee236562b14e6be123c592f49ce1d5dc3994e1d9c8f2d82188fc4fb71c108" host="ci-4344.1.0-a-657d644de8" Jun 20 19:16:33.894486 containerd[1726]: 2025-06-20 19:16:33.861 [INFO][4944] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jun 20 19:16:33.894486 containerd[1726]: 2025-06-20 19:16:33.861 [INFO][4944] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.8.5/26] IPv6=[] ContainerID="646ee236562b14e6be123c592f49ce1d5dc3994e1d9c8f2d82188fc4fb71c108" HandleID="k8s-pod-network.646ee236562b14e6be123c592f49ce1d5dc3994e1d9c8f2d82188fc4fb71c108" Workload="ci--4344.1.0--a--657d644de8-k8s-calico--apiserver--6675744cfd--8csx6-eth0" Jun 20 19:16:33.894648 containerd[1726]: 2025-06-20 19:16:33.865 [INFO][4932] cni-plugin/k8s.go 418: Populated endpoint ContainerID="646ee236562b14e6be123c592f49ce1d5dc3994e1d9c8f2d82188fc4fb71c108" Namespace="calico-apiserver" Pod="calico-apiserver-6675744cfd-8csx6" WorkloadEndpoint="ci--4344.1.0--a--657d644de8-k8s-calico--apiserver--6675744cfd--8csx6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.0--a--657d644de8-k8s-calico--apiserver--6675744cfd--8csx6-eth0", GenerateName:"calico-apiserver-6675744cfd-", Namespace:"calico-apiserver", SelfLink:"", UID:"8e45c42b-c96e-4058-a6aa-3130d834d808", ResourceVersion:"891", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 15, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", 
"pod-template-hash":"6675744cfd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.0-a-657d644de8", ContainerID:"", Pod:"calico-apiserver-6675744cfd-8csx6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.8.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali819593f8cb0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:16:33.894708 containerd[1726]: 2025-06-20 19:16:33.866 [INFO][4932] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.8.5/32] ContainerID="646ee236562b14e6be123c592f49ce1d5dc3994e1d9c8f2d82188fc4fb71c108" Namespace="calico-apiserver" Pod="calico-apiserver-6675744cfd-8csx6" WorkloadEndpoint="ci--4344.1.0--a--657d644de8-k8s-calico--apiserver--6675744cfd--8csx6-eth0" Jun 20 19:16:33.894708 containerd[1726]: 2025-06-20 19:16:33.866 [INFO][4932] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali819593f8cb0 ContainerID="646ee236562b14e6be123c592f49ce1d5dc3994e1d9c8f2d82188fc4fb71c108" Namespace="calico-apiserver" Pod="calico-apiserver-6675744cfd-8csx6" WorkloadEndpoint="ci--4344.1.0--a--657d644de8-k8s-calico--apiserver--6675744cfd--8csx6-eth0" Jun 20 19:16:33.894708 containerd[1726]: 2025-06-20 19:16:33.872 [INFO][4932] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="646ee236562b14e6be123c592f49ce1d5dc3994e1d9c8f2d82188fc4fb71c108" Namespace="calico-apiserver" Pod="calico-apiserver-6675744cfd-8csx6" 
WorkloadEndpoint="ci--4344.1.0--a--657d644de8-k8s-calico--apiserver--6675744cfd--8csx6-eth0" Jun 20 19:16:33.894782 containerd[1726]: 2025-06-20 19:16:33.872 [INFO][4932] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="646ee236562b14e6be123c592f49ce1d5dc3994e1d9c8f2d82188fc4fb71c108" Namespace="calico-apiserver" Pod="calico-apiserver-6675744cfd-8csx6" WorkloadEndpoint="ci--4344.1.0--a--657d644de8-k8s-calico--apiserver--6675744cfd--8csx6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.0--a--657d644de8-k8s-calico--apiserver--6675744cfd--8csx6-eth0", GenerateName:"calico-apiserver-6675744cfd-", Namespace:"calico-apiserver", SelfLink:"", UID:"8e45c42b-c96e-4058-a6aa-3130d834d808", ResourceVersion:"891", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 15, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6675744cfd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.0-a-657d644de8", ContainerID:"646ee236562b14e6be123c592f49ce1d5dc3994e1d9c8f2d82188fc4fb71c108", Pod:"calico-apiserver-6675744cfd-8csx6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.8.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali819593f8cb0", MAC:"5e:8f:38:7a:77:60", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:16:33.894848 containerd[1726]: 2025-06-20 19:16:33.888 [INFO][4932] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="646ee236562b14e6be123c592f49ce1d5dc3994e1d9c8f2d82188fc4fb71c108" Namespace="calico-apiserver" Pod="calico-apiserver-6675744cfd-8csx6" WorkloadEndpoint="ci--4344.1.0--a--657d644de8-k8s-calico--apiserver--6675744cfd--8csx6-eth0" Jun 20 19:16:33.955093 containerd[1726]: time="2025-06-20T19:16:33.955015286Z" level=info msg="connecting to shim 646ee236562b14e6be123c592f49ce1d5dc3994e1d9c8f2d82188fc4fb71c108" address="unix:///run/containerd/s/c789fea3ff3957b0fc96e5c8b60194b618aa968eccdd97e58208572c2038614e" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:16:33.989169 systemd[1]: Started cri-containerd-646ee236562b14e6be123c592f49ce1d5dc3994e1d9c8f2d82188fc4fb71c108.scope - libcontainer container 646ee236562b14e6be123c592f49ce1d5dc3994e1d9c8f2d82188fc4fb71c108. 
Jun 20 19:16:34.076458 containerd[1726]: time="2025-06-20T19:16:34.076401472Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6675744cfd-8csx6,Uid:8e45c42b-c96e-4058-a6aa-3130d834d808,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"646ee236562b14e6be123c592f49ce1d5dc3994e1d9c8f2d82188fc4fb71c108\"" Jun 20 19:16:34.194010 containerd[1726]: time="2025-06-20T19:16:34.193956048Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:16:34.199017 containerd[1726]: time="2025-06-20T19:16:34.198969444Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.1: active requests=0, bytes read=47305653" Jun 20 19:16:34.209333 containerd[1726]: time="2025-06-20T19:16:34.208998922Z" level=info msg="ImageCreate event name:\"sha256:5d29e6e796e41d7383da7c5b73fc136f7e486d40c52f79a04098396b7f85106c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:16:34.214106 containerd[1726]: time="2025-06-20T19:16:34.213966508Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:f6439af8b6022a48d2c6c75d92ec31fe177e7b6a90c58c78ca3964db2b94e21b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:16:34.215233 containerd[1726]: time="2025-06-20T19:16:34.215195652Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.1\" with image id \"sha256:5d29e6e796e41d7383da7c5b73fc136f7e486d40c52f79a04098396b7f85106c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:f6439af8b6022a48d2c6c75d92ec31fe177e7b6a90c58c78ca3964db2b94e21b\", size \"48798372\" in 2.247174651s" Jun 20 19:16:34.215446 containerd[1726]: time="2025-06-20T19:16:34.215430351Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.1\" returns image reference 
\"sha256:5d29e6e796e41d7383da7c5b73fc136f7e486d40c52f79a04098396b7f85106c\"" Jun 20 19:16:34.218536 containerd[1726]: time="2025-06-20T19:16:34.218098244Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.1\"" Jun 20 19:16:34.219455 containerd[1726]: time="2025-06-20T19:16:34.219427190Z" level=info msg="CreateContainer within sandbox \"3a0eeca5b0b826687e2f5e1847f9382c5f92ad03f7a1ac3acb5911f2013bf656\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jun 20 19:16:34.241042 containerd[1726]: time="2025-06-20T19:16:34.241004089Z" level=info msg="Container 58b8ec88757af4d8f39f43328452b07d2d251c3e2cd72160f7dce7db11e660a1: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:16:34.257175 containerd[1726]: time="2025-06-20T19:16:34.257141972Z" level=info msg="CreateContainer within sandbox \"3a0eeca5b0b826687e2f5e1847f9382c5f92ad03f7a1ac3acb5911f2013bf656\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"58b8ec88757af4d8f39f43328452b07d2d251c3e2cd72160f7dce7db11e660a1\"" Jun 20 19:16:34.257640 containerd[1726]: time="2025-06-20T19:16:34.257614389Z" level=info msg="StartContainer for \"58b8ec88757af4d8f39f43328452b07d2d251c3e2cd72160f7dce7db11e660a1\"" Jun 20 19:16:34.258955 containerd[1726]: time="2025-06-20T19:16:34.258886650Z" level=info msg="connecting to shim 58b8ec88757af4d8f39f43328452b07d2d251c3e2cd72160f7dce7db11e660a1" address="unix:///run/containerd/s/8a7c680f36c2e67b3c79d68875fb66f30d519fcf0bca2a5d88232d040aca03de" protocol=ttrpc version=3 Jun 20 19:16:34.276980 systemd[1]: Started cri-containerd-58b8ec88757af4d8f39f43328452b07d2d251c3e2cd72160f7dce7db11e660a1.scope - libcontainer container 58b8ec88757af4d8f39f43328452b07d2d251c3e2cd72160f7dce7db11e660a1. 
Jun 20 19:16:34.322235 containerd[1726]: time="2025-06-20T19:16:34.322112247Z" level=info msg="StartContainer for \"58b8ec88757af4d8f39f43328452b07d2d251c3e2cd72160f7dce7db11e660a1\" returns successfully" Jun 20 19:16:34.538986 systemd-networkd[1354]: cali797b4bbc43f: Gained IPv6LL Jun 20 19:16:34.736321 containerd[1726]: time="2025-06-20T19:16:34.736272712Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7fcd5cccd8-wjrwq,Uid:26571497-5c9d-4e20-967a-26a95bf40e1f,Namespace:calico-system,Attempt:0,}" Jun 20 19:16:34.736599 containerd[1726]: time="2025-06-20T19:16:34.736578598Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-b5rj7,Uid:ea2fa56d-4ea5-4fae-a52f-930d8c9fbc53,Namespace:kube-system,Attempt:0,}" Jun 20 19:16:34.904588 systemd-networkd[1354]: calib9292337a49: Link UP Jun 20 19:16:34.906100 systemd-networkd[1354]: calib9292337a49: Gained carrier Jun 20 19:16:34.924018 systemd-networkd[1354]: cali3fd1a3ba59c: Gained IPv6LL Jun 20 19:16:34.928278 containerd[1726]: 2025-06-20 19:16:34.809 [INFO][5044] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344.1.0--a--657d644de8-k8s-calico--kube--controllers--7fcd5cccd8--wjrwq-eth0 calico-kube-controllers-7fcd5cccd8- calico-system 26571497-5c9d-4e20-967a-26a95bf40e1f 888 0 2025-06-20 19:15:51 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7fcd5cccd8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4344.1.0-a-657d644de8 calico-kube-controllers-7fcd5cccd8-wjrwq eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calib9292337a49 [] [] }} ContainerID="7b7311e5ca078599a69b909a3c6cf1fc7756dafe45cc0a1996f7dc318e519728" Namespace="calico-system" 
Pod="calico-kube-controllers-7fcd5cccd8-wjrwq" WorkloadEndpoint="ci--4344.1.0--a--657d644de8-k8s-calico--kube--controllers--7fcd5cccd8--wjrwq-" Jun 20 19:16:34.928278 containerd[1726]: 2025-06-20 19:16:34.809 [INFO][5044] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7b7311e5ca078599a69b909a3c6cf1fc7756dafe45cc0a1996f7dc318e519728" Namespace="calico-system" Pod="calico-kube-controllers-7fcd5cccd8-wjrwq" WorkloadEndpoint="ci--4344.1.0--a--657d644de8-k8s-calico--kube--controllers--7fcd5cccd8--wjrwq-eth0" Jun 20 19:16:34.928278 containerd[1726]: 2025-06-20 19:16:34.855 [INFO][5069] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7b7311e5ca078599a69b909a3c6cf1fc7756dafe45cc0a1996f7dc318e519728" HandleID="k8s-pod-network.7b7311e5ca078599a69b909a3c6cf1fc7756dafe45cc0a1996f7dc318e519728" Workload="ci--4344.1.0--a--657d644de8-k8s-calico--kube--controllers--7fcd5cccd8--wjrwq-eth0" Jun 20 19:16:34.928682 containerd[1726]: 2025-06-20 19:16:34.856 [INFO][5069] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7b7311e5ca078599a69b909a3c6cf1fc7756dafe45cc0a1996f7dc318e519728" HandleID="k8s-pod-network.7b7311e5ca078599a69b909a3c6cf1fc7756dafe45cc0a1996f7dc318e519728" Workload="ci--4344.1.0--a--657d644de8-k8s-calico--kube--controllers--7fcd5cccd8--wjrwq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f790), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4344.1.0-a-657d644de8", "pod":"calico-kube-controllers-7fcd5cccd8-wjrwq", "timestamp":"2025-06-20 19:16:34.854633494 +0000 UTC"}, Hostname:"ci-4344.1.0-a-657d644de8", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 20 19:16:34.928682 containerd[1726]: 2025-06-20 19:16:34.856 [INFO][5069] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jun 20 19:16:34.928682 containerd[1726]: 2025-06-20 19:16:34.856 [INFO][5069] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jun 20 19:16:34.928682 containerd[1726]: 2025-06-20 19:16:34.856 [INFO][5069] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344.1.0-a-657d644de8' Jun 20 19:16:34.928682 containerd[1726]: 2025-06-20 19:16:34.863 [INFO][5069] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7b7311e5ca078599a69b909a3c6cf1fc7756dafe45cc0a1996f7dc318e519728" host="ci-4344.1.0-a-657d644de8" Jun 20 19:16:34.928682 containerd[1726]: 2025-06-20 19:16:34.868 [INFO][5069] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344.1.0-a-657d644de8" Jun 20 19:16:34.928682 containerd[1726]: 2025-06-20 19:16:34.872 [INFO][5069] ipam/ipam.go 511: Trying affinity for 192.168.8.0/26 host="ci-4344.1.0-a-657d644de8" Jun 20 19:16:34.928682 containerd[1726]: 2025-06-20 19:16:34.878 [INFO][5069] ipam/ipam.go 158: Attempting to load block cidr=192.168.8.0/26 host="ci-4344.1.0-a-657d644de8" Jun 20 19:16:34.928682 containerd[1726]: 2025-06-20 19:16:34.880 [INFO][5069] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.8.0/26 host="ci-4344.1.0-a-657d644de8" Jun 20 19:16:34.929720 containerd[1726]: 2025-06-20 19:16:34.880 [INFO][5069] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.8.0/26 handle="k8s-pod-network.7b7311e5ca078599a69b909a3c6cf1fc7756dafe45cc0a1996f7dc318e519728" host="ci-4344.1.0-a-657d644de8" Jun 20 19:16:34.929720 containerd[1726]: 2025-06-20 19:16:34.881 [INFO][5069] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.7b7311e5ca078599a69b909a3c6cf1fc7756dafe45cc0a1996f7dc318e519728 Jun 20 19:16:34.929720 containerd[1726]: 2025-06-20 19:16:34.885 [INFO][5069] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.8.0/26 handle="k8s-pod-network.7b7311e5ca078599a69b909a3c6cf1fc7756dafe45cc0a1996f7dc318e519728" 
host="ci-4344.1.0-a-657d644de8" Jun 20 19:16:34.929720 containerd[1726]: 2025-06-20 19:16:34.895 [INFO][5069] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.8.6/26] block=192.168.8.0/26 handle="k8s-pod-network.7b7311e5ca078599a69b909a3c6cf1fc7756dafe45cc0a1996f7dc318e519728" host="ci-4344.1.0-a-657d644de8" Jun 20 19:16:34.929720 containerd[1726]: 2025-06-20 19:16:34.895 [INFO][5069] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.8.6/26] handle="k8s-pod-network.7b7311e5ca078599a69b909a3c6cf1fc7756dafe45cc0a1996f7dc318e519728" host="ci-4344.1.0-a-657d644de8" Jun 20 19:16:34.929720 containerd[1726]: 2025-06-20 19:16:34.895 [INFO][5069] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jun 20 19:16:34.929720 containerd[1726]: 2025-06-20 19:16:34.896 [INFO][5069] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.8.6/26] IPv6=[] ContainerID="7b7311e5ca078599a69b909a3c6cf1fc7756dafe45cc0a1996f7dc318e519728" HandleID="k8s-pod-network.7b7311e5ca078599a69b909a3c6cf1fc7756dafe45cc0a1996f7dc318e519728" Workload="ci--4344.1.0--a--657d644de8-k8s-calico--kube--controllers--7fcd5cccd8--wjrwq-eth0" Jun 20 19:16:34.929901 containerd[1726]: 2025-06-20 19:16:34.898 [INFO][5044] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7b7311e5ca078599a69b909a3c6cf1fc7756dafe45cc0a1996f7dc318e519728" Namespace="calico-system" Pod="calico-kube-controllers-7fcd5cccd8-wjrwq" WorkloadEndpoint="ci--4344.1.0--a--657d644de8-k8s-calico--kube--controllers--7fcd5cccd8--wjrwq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.0--a--657d644de8-k8s-calico--kube--controllers--7fcd5cccd8--wjrwq-eth0", GenerateName:"calico-kube-controllers-7fcd5cccd8-", Namespace:"calico-system", SelfLink:"", UID:"26571497-5c9d-4e20-967a-26a95bf40e1f", ResourceVersion:"888", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 15, 51, 0, time.Local), 
DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7fcd5cccd8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.0-a-657d644de8", ContainerID:"", Pod:"calico-kube-controllers-7fcd5cccd8-wjrwq", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.8.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib9292337a49", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:16:34.929976 containerd[1726]: 2025-06-20 19:16:34.899 [INFO][5044] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.8.6/32] ContainerID="7b7311e5ca078599a69b909a3c6cf1fc7756dafe45cc0a1996f7dc318e519728" Namespace="calico-system" Pod="calico-kube-controllers-7fcd5cccd8-wjrwq" WorkloadEndpoint="ci--4344.1.0--a--657d644de8-k8s-calico--kube--controllers--7fcd5cccd8--wjrwq-eth0" Jun 20 19:16:34.929976 containerd[1726]: 2025-06-20 19:16:34.899 [INFO][5044] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib9292337a49 ContainerID="7b7311e5ca078599a69b909a3c6cf1fc7756dafe45cc0a1996f7dc318e519728" Namespace="calico-system" Pod="calico-kube-controllers-7fcd5cccd8-wjrwq" WorkloadEndpoint="ci--4344.1.0--a--657d644de8-k8s-calico--kube--controllers--7fcd5cccd8--wjrwq-eth0" Jun 20 19:16:34.929976 containerd[1726]: 2025-06-20 19:16:34.906 [INFO][5044] cni-plugin/dataplane_linux.go 508: 
Disabling IPv4 forwarding ContainerID="7b7311e5ca078599a69b909a3c6cf1fc7756dafe45cc0a1996f7dc318e519728" Namespace="calico-system" Pod="calico-kube-controllers-7fcd5cccd8-wjrwq" WorkloadEndpoint="ci--4344.1.0--a--657d644de8-k8s-calico--kube--controllers--7fcd5cccd8--wjrwq-eth0" Jun 20 19:16:34.930052 containerd[1726]: 2025-06-20 19:16:34.907 [INFO][5044] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7b7311e5ca078599a69b909a3c6cf1fc7756dafe45cc0a1996f7dc318e519728" Namespace="calico-system" Pod="calico-kube-controllers-7fcd5cccd8-wjrwq" WorkloadEndpoint="ci--4344.1.0--a--657d644de8-k8s-calico--kube--controllers--7fcd5cccd8--wjrwq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.0--a--657d644de8-k8s-calico--kube--controllers--7fcd5cccd8--wjrwq-eth0", GenerateName:"calico-kube-controllers-7fcd5cccd8-", Namespace:"calico-system", SelfLink:"", UID:"26571497-5c9d-4e20-967a-26a95bf40e1f", ResourceVersion:"888", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 15, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7fcd5cccd8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.0-a-657d644de8", ContainerID:"7b7311e5ca078599a69b909a3c6cf1fc7756dafe45cc0a1996f7dc318e519728", Pod:"calico-kube-controllers-7fcd5cccd8-wjrwq", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", 
IPNetworks:[]string{"192.168.8.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib9292337a49", MAC:"46:01:d5:fe:e5:05", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:16:34.930111 containerd[1726]: 2025-06-20 19:16:34.926 [INFO][5044] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7b7311e5ca078599a69b909a3c6cf1fc7756dafe45cc0a1996f7dc318e519728" Namespace="calico-system" Pod="calico-kube-controllers-7fcd5cccd8-wjrwq" WorkloadEndpoint="ci--4344.1.0--a--657d644de8-k8s-calico--kube--controllers--7fcd5cccd8--wjrwq-eth0" Jun 20 19:16:34.969480 kubelet[3151]: I0620 19:16:34.969412 3151 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7fc66dd6d7-4s89b" podStartSLOduration=45.719844928 podStartE2EDuration="47.969380467s" podCreationTimestamp="2025-06-20 19:15:47 +0000 UTC" firstStartedPulling="2025-06-20 19:16:31.967706096 +0000 UTC m=+59.322605029" lastFinishedPulling="2025-06-20 19:16:34.217241637 +0000 UTC m=+61.572140568" observedRunningTime="2025-06-20 19:16:34.968954189 +0000 UTC m=+62.323853160" watchObservedRunningTime="2025-06-20 19:16:34.969380467 +0000 UTC m=+62.324279412" Jun 20 19:16:34.988646 systemd-networkd[1354]: cali819593f8cb0: Gained IPv6LL Jun 20 19:16:34.992903 containerd[1726]: time="2025-06-20T19:16:34.992859528Z" level=info msg="connecting to shim 7b7311e5ca078599a69b909a3c6cf1fc7756dafe45cc0a1996f7dc318e519728" address="unix:///run/containerd/s/b95ffbd6a4eee984bfb918341a202e48e6e4f0108e2444eb9312ce182e36cadc" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:16:35.029017 systemd[1]: Started cri-containerd-7b7311e5ca078599a69b909a3c6cf1fc7756dafe45cc0a1996f7dc318e519728.scope - libcontainer container 7b7311e5ca078599a69b909a3c6cf1fc7756dafe45cc0a1996f7dc318e519728. 
Jun 20 19:16:35.042813 systemd-networkd[1354]: cali76388997587: Link UP Jun 20 19:16:35.043142 systemd-networkd[1354]: cali76388997587: Gained carrier Jun 20 19:16:35.072083 containerd[1726]: 2025-06-20 19:16:34.810 [INFO][5054] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344.1.0--a--657d644de8-k8s-coredns--7c65d6cfc9--b5rj7-eth0 coredns-7c65d6cfc9- kube-system ea2fa56d-4ea5-4fae-a52f-930d8c9fbc53 880 0 2025-06-20 19:15:37 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4344.1.0-a-657d644de8 coredns-7c65d6cfc9-b5rj7 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali76388997587 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="c1aba247d47af410a06d83fcf161c7ab04bd1caaef06e68db595ab1801828ec3" Namespace="kube-system" Pod="coredns-7c65d6cfc9-b5rj7" WorkloadEndpoint="ci--4344.1.0--a--657d644de8-k8s-coredns--7c65d6cfc9--b5rj7-" Jun 20 19:16:35.072083 containerd[1726]: 2025-06-20 19:16:34.811 [INFO][5054] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c1aba247d47af410a06d83fcf161c7ab04bd1caaef06e68db595ab1801828ec3" Namespace="kube-system" Pod="coredns-7c65d6cfc9-b5rj7" WorkloadEndpoint="ci--4344.1.0--a--657d644de8-k8s-coredns--7c65d6cfc9--b5rj7-eth0" Jun 20 19:16:35.072083 containerd[1726]: 2025-06-20 19:16:34.857 [INFO][5071] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c1aba247d47af410a06d83fcf161c7ab04bd1caaef06e68db595ab1801828ec3" HandleID="k8s-pod-network.c1aba247d47af410a06d83fcf161c7ab04bd1caaef06e68db595ab1801828ec3" Workload="ci--4344.1.0--a--657d644de8-k8s-coredns--7c65d6cfc9--b5rj7-eth0" Jun 20 19:16:35.073047 containerd[1726]: 2025-06-20 19:16:34.857 [INFO][5071] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="c1aba247d47af410a06d83fcf161c7ab04bd1caaef06e68db595ab1801828ec3" HandleID="k8s-pod-network.c1aba247d47af410a06d83fcf161c7ab04bd1caaef06e68db595ab1801828ec3" Workload="ci--4344.1.0--a--657d644de8-k8s-coredns--7c65d6cfc9--b5rj7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5640), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4344.1.0-a-657d644de8", "pod":"coredns-7c65d6cfc9-b5rj7", "timestamp":"2025-06-20 19:16:34.857446242 +0000 UTC"}, Hostname:"ci-4344.1.0-a-657d644de8", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 20 19:16:35.073047 containerd[1726]: 2025-06-20 19:16:34.858 [INFO][5071] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 20 19:16:35.073047 containerd[1726]: 2025-06-20 19:16:34.896 [INFO][5071] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jun 20 19:16:35.073047 containerd[1726]: 2025-06-20 19:16:34.896 [INFO][5071] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344.1.0-a-657d644de8' Jun 20 19:16:35.073047 containerd[1726]: 2025-06-20 19:16:34.963 [INFO][5071] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c1aba247d47af410a06d83fcf161c7ab04bd1caaef06e68db595ab1801828ec3" host="ci-4344.1.0-a-657d644de8" Jun 20 19:16:35.073047 containerd[1726]: 2025-06-20 19:16:34.985 [INFO][5071] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344.1.0-a-657d644de8" Jun 20 19:16:35.073047 containerd[1726]: 2025-06-20 19:16:35.002 [INFO][5071] ipam/ipam.go 511: Trying affinity for 192.168.8.0/26 host="ci-4344.1.0-a-657d644de8" Jun 20 19:16:35.073047 containerd[1726]: 2025-06-20 19:16:35.008 [INFO][5071] ipam/ipam.go 158: Attempting to load block cidr=192.168.8.0/26 host="ci-4344.1.0-a-657d644de8" Jun 20 19:16:35.073047 containerd[1726]: 2025-06-20 19:16:35.015 [INFO][5071] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.8.0/26 host="ci-4344.1.0-a-657d644de8" Jun 20 19:16:35.073311 containerd[1726]: 2025-06-20 19:16:35.015 [INFO][5071] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.8.0/26 handle="k8s-pod-network.c1aba247d47af410a06d83fcf161c7ab04bd1caaef06e68db595ab1801828ec3" host="ci-4344.1.0-a-657d644de8" Jun 20 19:16:35.073311 containerd[1726]: 2025-06-20 19:16:35.017 [INFO][5071] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.c1aba247d47af410a06d83fcf161c7ab04bd1caaef06e68db595ab1801828ec3 Jun 20 19:16:35.073311 containerd[1726]: 2025-06-20 19:16:35.027 [INFO][5071] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.8.0/26 handle="k8s-pod-network.c1aba247d47af410a06d83fcf161c7ab04bd1caaef06e68db595ab1801828ec3" host="ci-4344.1.0-a-657d644de8" Jun 20 19:16:35.073311 containerd[1726]: 2025-06-20 19:16:35.036 [INFO][5071] ipam/ipam.go 1256: Successfully claimed 
IPs: [192.168.8.7/26] block=192.168.8.0/26 handle="k8s-pod-network.c1aba247d47af410a06d83fcf161c7ab04bd1caaef06e68db595ab1801828ec3" host="ci-4344.1.0-a-657d644de8" Jun 20 19:16:35.073311 containerd[1726]: 2025-06-20 19:16:35.036 [INFO][5071] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.8.7/26] handle="k8s-pod-network.c1aba247d47af410a06d83fcf161c7ab04bd1caaef06e68db595ab1801828ec3" host="ci-4344.1.0-a-657d644de8" Jun 20 19:16:35.073311 containerd[1726]: 2025-06-20 19:16:35.037 [INFO][5071] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jun 20 19:16:35.073311 containerd[1726]: 2025-06-20 19:16:35.037 [INFO][5071] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.8.7/26] IPv6=[] ContainerID="c1aba247d47af410a06d83fcf161c7ab04bd1caaef06e68db595ab1801828ec3" HandleID="k8s-pod-network.c1aba247d47af410a06d83fcf161c7ab04bd1caaef06e68db595ab1801828ec3" Workload="ci--4344.1.0--a--657d644de8-k8s-coredns--7c65d6cfc9--b5rj7-eth0" Jun 20 19:16:35.073493 containerd[1726]: 2025-06-20 19:16:35.039 [INFO][5054] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c1aba247d47af410a06d83fcf161c7ab04bd1caaef06e68db595ab1801828ec3" Namespace="kube-system" Pod="coredns-7c65d6cfc9-b5rj7" WorkloadEndpoint="ci--4344.1.0--a--657d644de8-k8s-coredns--7c65d6cfc9--b5rj7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.0--a--657d644de8-k8s-coredns--7c65d6cfc9--b5rj7-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"ea2fa56d-4ea5-4fae-a52f-930d8c9fbc53", ResourceVersion:"880", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 15, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.0-a-657d644de8", ContainerID:"", Pod:"coredns-7c65d6cfc9-b5rj7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.8.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali76388997587", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:16:35.073493 containerd[1726]: 2025-06-20 19:16:35.039 [INFO][5054] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.8.7/32] ContainerID="c1aba247d47af410a06d83fcf161c7ab04bd1caaef06e68db595ab1801828ec3" Namespace="kube-system" Pod="coredns-7c65d6cfc9-b5rj7" WorkloadEndpoint="ci--4344.1.0--a--657d644de8-k8s-coredns--7c65d6cfc9--b5rj7-eth0" Jun 20 19:16:35.073493 containerd[1726]: 2025-06-20 19:16:35.039 [INFO][5054] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali76388997587 ContainerID="c1aba247d47af410a06d83fcf161c7ab04bd1caaef06e68db595ab1801828ec3" Namespace="kube-system" Pod="coredns-7c65d6cfc9-b5rj7" WorkloadEndpoint="ci--4344.1.0--a--657d644de8-k8s-coredns--7c65d6cfc9--b5rj7-eth0" Jun 20 19:16:35.073493 containerd[1726]: 2025-06-20 19:16:35.042 [INFO][5054] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="c1aba247d47af410a06d83fcf161c7ab04bd1caaef06e68db595ab1801828ec3" Namespace="kube-system" Pod="coredns-7c65d6cfc9-b5rj7" WorkloadEndpoint="ci--4344.1.0--a--657d644de8-k8s-coredns--7c65d6cfc9--b5rj7-eth0" Jun 20 19:16:35.073493 containerd[1726]: 2025-06-20 19:16:35.043 [INFO][5054] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c1aba247d47af410a06d83fcf161c7ab04bd1caaef06e68db595ab1801828ec3" Namespace="kube-system" Pod="coredns-7c65d6cfc9-b5rj7" WorkloadEndpoint="ci--4344.1.0--a--657d644de8-k8s-coredns--7c65d6cfc9--b5rj7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.0--a--657d644de8-k8s-coredns--7c65d6cfc9--b5rj7-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"ea2fa56d-4ea5-4fae-a52f-930d8c9fbc53", ResourceVersion:"880", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 15, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.0-a-657d644de8", ContainerID:"c1aba247d47af410a06d83fcf161c7ab04bd1caaef06e68db595ab1801828ec3", Pod:"coredns-7c65d6cfc9-b5rj7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.8.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali76388997587", MAC:"5e:87:1a:f4:9c:78", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:16:35.073493 containerd[1726]: 2025-06-20 19:16:35.069 [INFO][5054] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c1aba247d47af410a06d83fcf161c7ab04bd1caaef06e68db595ab1801828ec3" Namespace="kube-system" Pod="coredns-7c65d6cfc9-b5rj7" WorkloadEndpoint="ci--4344.1.0--a--657d644de8-k8s-coredns--7c65d6cfc9--b5rj7-eth0" Jun 20 19:16:35.124938 containerd[1726]: time="2025-06-20T19:16:35.124884964Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7fcd5cccd8-wjrwq,Uid:26571497-5c9d-4e20-967a-26a95bf40e1f,Namespace:calico-system,Attempt:0,} returns sandbox id \"7b7311e5ca078599a69b909a3c6cf1fc7756dafe45cc0a1996f7dc318e519728\"" Jun 20 19:16:35.167195 containerd[1726]: time="2025-06-20T19:16:35.165956196Z" level=info msg="connecting to shim c1aba247d47af410a06d83fcf161c7ab04bd1caaef06e68db595ab1801828ec3" address="unix:///run/containerd/s/a8a00bc5e666503ab702dddaa1b7d62bbf06cb8b909d787f2aa7593173d30046" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:16:35.193040 systemd[1]: Started cri-containerd-c1aba247d47af410a06d83fcf161c7ab04bd1caaef06e68db595ab1801828ec3.scope - libcontainer container c1aba247d47af410a06d83fcf161c7ab04bd1caaef06e68db595ab1801828ec3. 
Jun 20 19:16:35.247637 containerd[1726]: time="2025-06-20T19:16:35.247494955Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-b5rj7,Uid:ea2fa56d-4ea5-4fae-a52f-930d8c9fbc53,Namespace:kube-system,Attempt:0,} returns sandbox id \"c1aba247d47af410a06d83fcf161c7ab04bd1caaef06e68db595ab1801828ec3\"" Jun 20 19:16:35.251754 containerd[1726]: time="2025-06-20T19:16:35.251716346Z" level=info msg="CreateContainer within sandbox \"c1aba247d47af410a06d83fcf161c7ab04bd1caaef06e68db595ab1801828ec3\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 20 19:16:35.273806 containerd[1726]: time="2025-06-20T19:16:35.273752689Z" level=info msg="Container d7f6c02e2ecba33f92b233021364c431667b9c8a91c78db0c65832e498148cc3: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:16:35.297549 containerd[1726]: time="2025-06-20T19:16:35.297277950Z" level=info msg="CreateContainer within sandbox \"c1aba247d47af410a06d83fcf161c7ab04bd1caaef06e68db595ab1801828ec3\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d7f6c02e2ecba33f92b233021364c431667b9c8a91c78db0c65832e498148cc3\"" Jun 20 19:16:35.299765 containerd[1726]: time="2025-06-20T19:16:35.298976980Z" level=info msg="StartContainer for \"d7f6c02e2ecba33f92b233021364c431667b9c8a91c78db0c65832e498148cc3\"" Jun 20 19:16:35.301521 containerd[1726]: time="2025-06-20T19:16:35.301492600Z" level=info msg="connecting to shim d7f6c02e2ecba33f92b233021364c431667b9c8a91c78db0c65832e498148cc3" address="unix:///run/containerd/s/a8a00bc5e666503ab702dddaa1b7d62bbf06cb8b909d787f2aa7593173d30046" protocol=ttrpc version=3 Jun 20 19:16:35.352553 systemd[1]: Started cri-containerd-d7f6c02e2ecba33f92b233021364c431667b9c8a91c78db0c65832e498148cc3.scope - libcontainer container d7f6c02e2ecba33f92b233021364c431667b9c8a91c78db0c65832e498148cc3. 
Jun 20 19:16:35.758707 containerd[1726]: time="2025-06-20T19:16:35.735239454Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-pzkgt,Uid:b2972c76-0691-4f97-ac20-8d5147d7f50a,Namespace:kube-system,Attempt:0,}" Jun 20 19:16:35.758707 containerd[1726]: time="2025-06-20T19:16:35.735244157Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fc66dd6d7-qdqzw,Uid:51986139-459e-4222-b31e-78a26677e7c0,Namespace:calico-apiserver,Attempt:0,}" Jun 20 19:16:35.764903 containerd[1726]: time="2025-06-20T19:16:35.764822514Z" level=info msg="StartContainer for \"d7f6c02e2ecba33f92b233021364c431667b9c8a91c78db0c65832e498148cc3\" returns successfully" Jun 20 19:16:35.957613 systemd-networkd[1354]: calia2d56ae4359: Link UP Jun 20 19:16:35.958756 systemd-networkd[1354]: calia2d56ae4359: Gained carrier Jun 20 19:16:35.990946 kubelet[3151]: I0620 19:16:35.990885 3151 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-b5rj7" podStartSLOduration=58.990862195 podStartE2EDuration="58.990862195s" podCreationTimestamp="2025-06-20 19:15:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 19:16:35.989552003 +0000 UTC m=+63.344450948" watchObservedRunningTime="2025-06-20 19:16:35.990862195 +0000 UTC m=+63.345761140" Jun 20 19:16:35.993537 containerd[1726]: 2025-06-20 19:16:35.854 [INFO][5242] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344.1.0--a--657d644de8-k8s-coredns--7c65d6cfc9--pzkgt-eth0 coredns-7c65d6cfc9- kube-system b2972c76-0691-4f97-ac20-8d5147d7f50a 886 0 2025-06-20 19:15:37 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4344.1.0-a-657d644de8 
coredns-7c65d6cfc9-pzkgt eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calia2d56ae4359 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="b1cd96087601bcc57fd8864aa7d7f19f9cbc9b066dc51742210a4ad1503af62d" Namespace="kube-system" Pod="coredns-7c65d6cfc9-pzkgt" WorkloadEndpoint="ci--4344.1.0--a--657d644de8-k8s-coredns--7c65d6cfc9--pzkgt-" Jun 20 19:16:35.993537 containerd[1726]: 2025-06-20 19:16:35.855 [INFO][5242] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b1cd96087601bcc57fd8864aa7d7f19f9cbc9b066dc51742210a4ad1503af62d" Namespace="kube-system" Pod="coredns-7c65d6cfc9-pzkgt" WorkloadEndpoint="ci--4344.1.0--a--657d644de8-k8s-coredns--7c65d6cfc9--pzkgt-eth0" Jun 20 19:16:35.993537 containerd[1726]: 2025-06-20 19:16:35.903 [INFO][5262] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b1cd96087601bcc57fd8864aa7d7f19f9cbc9b066dc51742210a4ad1503af62d" HandleID="k8s-pod-network.b1cd96087601bcc57fd8864aa7d7f19f9cbc9b066dc51742210a4ad1503af62d" Workload="ci--4344.1.0--a--657d644de8-k8s-coredns--7c65d6cfc9--pzkgt-eth0" Jun 20 19:16:35.993537 containerd[1726]: 2025-06-20 19:16:35.903 [INFO][5262] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b1cd96087601bcc57fd8864aa7d7f19f9cbc9b066dc51742210a4ad1503af62d" HandleID="k8s-pod-network.b1cd96087601bcc57fd8864aa7d7f19f9cbc9b066dc51742210a4ad1503af62d" Workload="ci--4344.1.0--a--657d644de8-k8s-coredns--7c65d6cfc9--pzkgt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003a8bc0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4344.1.0-a-657d644de8", "pod":"coredns-7c65d6cfc9-pzkgt", "timestamp":"2025-06-20 19:16:35.903519594 +0000 UTC"}, Hostname:"ci-4344.1.0-a-657d644de8", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 20 
19:16:35.993537 containerd[1726]: 2025-06-20 19:16:35.903 [INFO][5262] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 20 19:16:35.993537 containerd[1726]: 2025-06-20 19:16:35.903 [INFO][5262] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jun 20 19:16:35.993537 containerd[1726]: 2025-06-20 19:16:35.903 [INFO][5262] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344.1.0-a-657d644de8' Jun 20 19:16:35.993537 containerd[1726]: 2025-06-20 19:16:35.912 [INFO][5262] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b1cd96087601bcc57fd8864aa7d7f19f9cbc9b066dc51742210a4ad1503af62d" host="ci-4344.1.0-a-657d644de8" Jun 20 19:16:35.993537 containerd[1726]: 2025-06-20 19:16:35.916 [INFO][5262] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344.1.0-a-657d644de8" Jun 20 19:16:35.993537 containerd[1726]: 2025-06-20 19:16:35.921 [INFO][5262] ipam/ipam.go 511: Trying affinity for 192.168.8.0/26 host="ci-4344.1.0-a-657d644de8" Jun 20 19:16:35.993537 containerd[1726]: 2025-06-20 19:16:35.923 [INFO][5262] ipam/ipam.go 158: Attempting to load block cidr=192.168.8.0/26 host="ci-4344.1.0-a-657d644de8" Jun 20 19:16:35.993537 containerd[1726]: 2025-06-20 19:16:35.926 [INFO][5262] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.8.0/26 host="ci-4344.1.0-a-657d644de8" Jun 20 19:16:35.993537 containerd[1726]: 2025-06-20 19:16:35.926 [INFO][5262] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.8.0/26 handle="k8s-pod-network.b1cd96087601bcc57fd8864aa7d7f19f9cbc9b066dc51742210a4ad1503af62d" host="ci-4344.1.0-a-657d644de8" Jun 20 19:16:35.993537 containerd[1726]: 2025-06-20 19:16:35.928 [INFO][5262] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.b1cd96087601bcc57fd8864aa7d7f19f9cbc9b066dc51742210a4ad1503af62d Jun 20 19:16:35.993537 containerd[1726]: 2025-06-20 19:16:35.934 [INFO][5262] ipam/ipam.go 1243: Writing block in order to 
claim IPs block=192.168.8.0/26 handle="k8s-pod-network.b1cd96087601bcc57fd8864aa7d7f19f9cbc9b066dc51742210a4ad1503af62d" host="ci-4344.1.0-a-657d644de8" Jun 20 19:16:35.993537 containerd[1726]: 2025-06-20 19:16:35.945 [INFO][5262] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.8.8/26] block=192.168.8.0/26 handle="k8s-pod-network.b1cd96087601bcc57fd8864aa7d7f19f9cbc9b066dc51742210a4ad1503af62d" host="ci-4344.1.0-a-657d644de8" Jun 20 19:16:35.993537 containerd[1726]: 2025-06-20 19:16:35.945 [INFO][5262] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.8.8/26] handle="k8s-pod-network.b1cd96087601bcc57fd8864aa7d7f19f9cbc9b066dc51742210a4ad1503af62d" host="ci-4344.1.0-a-657d644de8" Jun 20 19:16:35.993537 containerd[1726]: 2025-06-20 19:16:35.945 [INFO][5262] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jun 20 19:16:35.993537 containerd[1726]: 2025-06-20 19:16:35.945 [INFO][5262] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.8.8/26] IPv6=[] ContainerID="b1cd96087601bcc57fd8864aa7d7f19f9cbc9b066dc51742210a4ad1503af62d" HandleID="k8s-pod-network.b1cd96087601bcc57fd8864aa7d7f19f9cbc9b066dc51742210a4ad1503af62d" Workload="ci--4344.1.0--a--657d644de8-k8s-coredns--7c65d6cfc9--pzkgt-eth0" Jun 20 19:16:35.994740 containerd[1726]: 2025-06-20 19:16:35.948 [INFO][5242] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b1cd96087601bcc57fd8864aa7d7f19f9cbc9b066dc51742210a4ad1503af62d" Namespace="kube-system" Pod="coredns-7c65d6cfc9-pzkgt" WorkloadEndpoint="ci--4344.1.0--a--657d644de8-k8s-coredns--7c65d6cfc9--pzkgt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.0--a--657d644de8-k8s-coredns--7c65d6cfc9--pzkgt-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"b2972c76-0691-4f97-ac20-8d5147d7f50a", ResourceVersion:"886", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 
19, 15, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.0-a-657d644de8", ContainerID:"", Pod:"coredns-7c65d6cfc9-pzkgt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.8.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia2d56ae4359", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:16:35.994740 containerd[1726]: 2025-06-20 19:16:35.948 [INFO][5242] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.8.8/32] ContainerID="b1cd96087601bcc57fd8864aa7d7f19f9cbc9b066dc51742210a4ad1503af62d" Namespace="kube-system" Pod="coredns-7c65d6cfc9-pzkgt" WorkloadEndpoint="ci--4344.1.0--a--657d644de8-k8s-coredns--7c65d6cfc9--pzkgt-eth0" Jun 20 19:16:35.994740 containerd[1726]: 2025-06-20 19:16:35.950 [INFO][5242] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia2d56ae4359 ContainerID="b1cd96087601bcc57fd8864aa7d7f19f9cbc9b066dc51742210a4ad1503af62d" 
Namespace="kube-system" Pod="coredns-7c65d6cfc9-pzkgt" WorkloadEndpoint="ci--4344.1.0--a--657d644de8-k8s-coredns--7c65d6cfc9--pzkgt-eth0" Jun 20 19:16:35.994740 containerd[1726]: 2025-06-20 19:16:35.959 [INFO][5242] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b1cd96087601bcc57fd8864aa7d7f19f9cbc9b066dc51742210a4ad1503af62d" Namespace="kube-system" Pod="coredns-7c65d6cfc9-pzkgt" WorkloadEndpoint="ci--4344.1.0--a--657d644de8-k8s-coredns--7c65d6cfc9--pzkgt-eth0" Jun 20 19:16:35.994740 containerd[1726]: 2025-06-20 19:16:35.962 [INFO][5242] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b1cd96087601bcc57fd8864aa7d7f19f9cbc9b066dc51742210a4ad1503af62d" Namespace="kube-system" Pod="coredns-7c65d6cfc9-pzkgt" WorkloadEndpoint="ci--4344.1.0--a--657d644de8-k8s-coredns--7c65d6cfc9--pzkgt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.0--a--657d644de8-k8s-coredns--7c65d6cfc9--pzkgt-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"b2972c76-0691-4f97-ac20-8d5147d7f50a", ResourceVersion:"886", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 15, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.0-a-657d644de8", ContainerID:"b1cd96087601bcc57fd8864aa7d7f19f9cbc9b066dc51742210a4ad1503af62d", Pod:"coredns-7c65d6cfc9-pzkgt", Endpoint:"eth0", ServiceAccountName:"coredns", 
IPNetworks:[]string{"192.168.8.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia2d56ae4359", MAC:"5e:53:0c:cc:06:fe", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:16:35.994740 containerd[1726]: 2025-06-20 19:16:35.987 [INFO][5242] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b1cd96087601bcc57fd8864aa7d7f19f9cbc9b066dc51742210a4ad1503af62d" Namespace="kube-system" Pod="coredns-7c65d6cfc9-pzkgt" WorkloadEndpoint="ci--4344.1.0--a--657d644de8-k8s-coredns--7c65d6cfc9--pzkgt-eth0" Jun 20 19:16:36.053673 containerd[1726]: time="2025-06-20T19:16:36.053518385Z" level=info msg="connecting to shim b1cd96087601bcc57fd8864aa7d7f19f9cbc9b066dc51742210a4ad1503af62d" address="unix:///run/containerd/s/9f5fa07664a80ac70207db75674e0d7cc4ee46e82c2661d13e1e2d8185cd3f1d" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:16:36.113016 systemd[1]: Started cri-containerd-b1cd96087601bcc57fd8864aa7d7f19f9cbc9b066dc51742210a4ad1503af62d.scope - libcontainer container b1cd96087601bcc57fd8864aa7d7f19f9cbc9b066dc51742210a4ad1503af62d. 
Jun 20 19:16:36.120143 systemd-networkd[1354]: cali4d3b5e7df91: Link UP Jun 20 19:16:36.125654 systemd-networkd[1354]: cali4d3b5e7df91: Gained carrier Jun 20 19:16:36.156127 containerd[1726]: 2025-06-20 19:16:35.851 [INFO][5233] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344.1.0--a--657d644de8-k8s-calico--apiserver--7fc66dd6d7--qdqzw-eth0 calico-apiserver-7fc66dd6d7- calico-apiserver 51986139-459e-4222-b31e-78a26677e7c0 889 0 2025-06-20 19:15:47 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7fc66dd6d7 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4344.1.0-a-657d644de8 calico-apiserver-7fc66dd6d7-qdqzw eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali4d3b5e7df91 [] [] }} ContainerID="9833bfa4eefd36213babf093ba48d9c1e1c3aff2c08ded6d6e6a9efd8909e1fb" Namespace="calico-apiserver" Pod="calico-apiserver-7fc66dd6d7-qdqzw" WorkloadEndpoint="ci--4344.1.0--a--657d644de8-k8s-calico--apiserver--7fc66dd6d7--qdqzw-" Jun 20 19:16:36.156127 containerd[1726]: 2025-06-20 19:16:35.851 [INFO][5233] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9833bfa4eefd36213babf093ba48d9c1e1c3aff2c08ded6d6e6a9efd8909e1fb" Namespace="calico-apiserver" Pod="calico-apiserver-7fc66dd6d7-qdqzw" WorkloadEndpoint="ci--4344.1.0--a--657d644de8-k8s-calico--apiserver--7fc66dd6d7--qdqzw-eth0" Jun 20 19:16:36.156127 containerd[1726]: 2025-06-20 19:16:35.905 [INFO][5260] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9833bfa4eefd36213babf093ba48d9c1e1c3aff2c08ded6d6e6a9efd8909e1fb" HandleID="k8s-pod-network.9833bfa4eefd36213babf093ba48d9c1e1c3aff2c08ded6d6e6a9efd8909e1fb" Workload="ci--4344.1.0--a--657d644de8-k8s-calico--apiserver--7fc66dd6d7--qdqzw-eth0" Jun 20 
19:16:36.156127 containerd[1726]: 2025-06-20 19:16:35.905 [INFO][5260] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9833bfa4eefd36213babf093ba48d9c1e1c3aff2c08ded6d6e6a9efd8909e1fb" HandleID="k8s-pod-network.9833bfa4eefd36213babf093ba48d9c1e1c3aff2c08ded6d6e6a9efd8909e1fb" Workload="ci--4344.1.0--a--657d644de8-k8s-calico--apiserver--7fc66dd6d7--qdqzw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f680), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4344.1.0-a-657d644de8", "pod":"calico-apiserver-7fc66dd6d7-qdqzw", "timestamp":"2025-06-20 19:16:35.905512829 +0000 UTC"}, Hostname:"ci-4344.1.0-a-657d644de8", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 20 19:16:36.156127 containerd[1726]: 2025-06-20 19:16:35.905 [INFO][5260] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 20 19:16:36.156127 containerd[1726]: 2025-06-20 19:16:35.945 [INFO][5260] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jun 20 19:16:36.156127 containerd[1726]: 2025-06-20 19:16:35.946 [INFO][5260] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344.1.0-a-657d644de8' Jun 20 19:16:36.156127 containerd[1726]: 2025-06-20 19:16:36.018 [INFO][5260] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9833bfa4eefd36213babf093ba48d9c1e1c3aff2c08ded6d6e6a9efd8909e1fb" host="ci-4344.1.0-a-657d644de8" Jun 20 19:16:36.156127 containerd[1726]: 2025-06-20 19:16:36.041 [INFO][5260] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344.1.0-a-657d644de8" Jun 20 19:16:36.156127 containerd[1726]: 2025-06-20 19:16:36.056 [INFO][5260] ipam/ipam.go 511: Trying affinity for 192.168.8.0/26 host="ci-4344.1.0-a-657d644de8" Jun 20 19:16:36.156127 containerd[1726]: 2025-06-20 19:16:36.066 [INFO][5260] ipam/ipam.go 158: Attempting to load block cidr=192.168.8.0/26 host="ci-4344.1.0-a-657d644de8" Jun 20 19:16:36.156127 containerd[1726]: 2025-06-20 19:16:36.071 [INFO][5260] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.8.0/26 host="ci-4344.1.0-a-657d644de8" Jun 20 19:16:36.156127 containerd[1726]: 2025-06-20 19:16:36.072 [INFO][5260] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.8.0/26 handle="k8s-pod-network.9833bfa4eefd36213babf093ba48d9c1e1c3aff2c08ded6d6e6a9efd8909e1fb" host="ci-4344.1.0-a-657d644de8" Jun 20 19:16:36.156127 containerd[1726]: 2025-06-20 19:16:36.077 [INFO][5260] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.9833bfa4eefd36213babf093ba48d9c1e1c3aff2c08ded6d6e6a9efd8909e1fb Jun 20 19:16:36.156127 containerd[1726]: 2025-06-20 19:16:36.084 [INFO][5260] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.8.0/26 handle="k8s-pod-network.9833bfa4eefd36213babf093ba48d9c1e1c3aff2c08ded6d6e6a9efd8909e1fb" host="ci-4344.1.0-a-657d644de8" Jun 20 19:16:36.156127 containerd[1726]: 2025-06-20 19:16:36.098 [INFO][5260] ipam/ipam.go 1256: Successfully claimed 
IPs: [192.168.8.9/26] block=192.168.8.0/26 handle="k8s-pod-network.9833bfa4eefd36213babf093ba48d9c1e1c3aff2c08ded6d6e6a9efd8909e1fb" host="ci-4344.1.0-a-657d644de8" Jun 20 19:16:36.156127 containerd[1726]: 2025-06-20 19:16:36.100 [INFO][5260] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.8.9/26] handle="k8s-pod-network.9833bfa4eefd36213babf093ba48d9c1e1c3aff2c08ded6d6e6a9efd8909e1fb" host="ci-4344.1.0-a-657d644de8" Jun 20 19:16:36.156127 containerd[1726]: 2025-06-20 19:16:36.100 [INFO][5260] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jun 20 19:16:36.156127 containerd[1726]: 2025-06-20 19:16:36.100 [INFO][5260] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.8.9/26] IPv6=[] ContainerID="9833bfa4eefd36213babf093ba48d9c1e1c3aff2c08ded6d6e6a9efd8909e1fb" HandleID="k8s-pod-network.9833bfa4eefd36213babf093ba48d9c1e1c3aff2c08ded6d6e6a9efd8909e1fb" Workload="ci--4344.1.0--a--657d644de8-k8s-calico--apiserver--7fc66dd6d7--qdqzw-eth0" Jun 20 19:16:36.156928 containerd[1726]: 2025-06-20 19:16:36.107 [INFO][5233] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9833bfa4eefd36213babf093ba48d9c1e1c3aff2c08ded6d6e6a9efd8909e1fb" Namespace="calico-apiserver" Pod="calico-apiserver-7fc66dd6d7-qdqzw" WorkloadEndpoint="ci--4344.1.0--a--657d644de8-k8s-calico--apiserver--7fc66dd6d7--qdqzw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.0--a--657d644de8-k8s-calico--apiserver--7fc66dd6d7--qdqzw-eth0", GenerateName:"calico-apiserver-7fc66dd6d7-", Namespace:"calico-apiserver", SelfLink:"", UID:"51986139-459e-4222-b31e-78a26677e7c0", ResourceVersion:"889", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 15, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", 
"pod-template-hash":"7fc66dd6d7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.0-a-657d644de8", ContainerID:"", Pod:"calico-apiserver-7fc66dd6d7-qdqzw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.8.9/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4d3b5e7df91", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:16:36.156928 containerd[1726]: 2025-06-20 19:16:36.109 [INFO][5233] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.8.9/32] ContainerID="9833bfa4eefd36213babf093ba48d9c1e1c3aff2c08ded6d6e6a9efd8909e1fb" Namespace="calico-apiserver" Pod="calico-apiserver-7fc66dd6d7-qdqzw" WorkloadEndpoint="ci--4344.1.0--a--657d644de8-k8s-calico--apiserver--7fc66dd6d7--qdqzw-eth0" Jun 20 19:16:36.156928 containerd[1726]: 2025-06-20 19:16:36.110 [INFO][5233] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4d3b5e7df91 ContainerID="9833bfa4eefd36213babf093ba48d9c1e1c3aff2c08ded6d6e6a9efd8909e1fb" Namespace="calico-apiserver" Pod="calico-apiserver-7fc66dd6d7-qdqzw" WorkloadEndpoint="ci--4344.1.0--a--657d644de8-k8s-calico--apiserver--7fc66dd6d7--qdqzw-eth0" Jun 20 19:16:36.156928 containerd[1726]: 2025-06-20 19:16:36.126 [INFO][5233] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9833bfa4eefd36213babf093ba48d9c1e1c3aff2c08ded6d6e6a9efd8909e1fb" Namespace="calico-apiserver" Pod="calico-apiserver-7fc66dd6d7-qdqzw" 
WorkloadEndpoint="ci--4344.1.0--a--657d644de8-k8s-calico--apiserver--7fc66dd6d7--qdqzw-eth0" Jun 20 19:16:36.156928 containerd[1726]: 2025-06-20 19:16:36.129 [INFO][5233] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9833bfa4eefd36213babf093ba48d9c1e1c3aff2c08ded6d6e6a9efd8909e1fb" Namespace="calico-apiserver" Pod="calico-apiserver-7fc66dd6d7-qdqzw" WorkloadEndpoint="ci--4344.1.0--a--657d644de8-k8s-calico--apiserver--7fc66dd6d7--qdqzw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.0--a--657d644de8-k8s-calico--apiserver--7fc66dd6d7--qdqzw-eth0", GenerateName:"calico-apiserver-7fc66dd6d7-", Namespace:"calico-apiserver", SelfLink:"", UID:"51986139-459e-4222-b31e-78a26677e7c0", ResourceVersion:"889", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 15, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7fc66dd6d7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.0-a-657d644de8", ContainerID:"9833bfa4eefd36213babf093ba48d9c1e1c3aff2c08ded6d6e6a9efd8909e1fb", Pod:"calico-apiserver-7fc66dd6d7-qdqzw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.8.9/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4d3b5e7df91", MAC:"96:22:37:6a:69:c7", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:16:36.156928 containerd[1726]: 2025-06-20 19:16:36.151 [INFO][5233] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9833bfa4eefd36213babf093ba48d9c1e1c3aff2c08ded6d6e6a9efd8909e1fb" Namespace="calico-apiserver" Pod="calico-apiserver-7fc66dd6d7-qdqzw" WorkloadEndpoint="ci--4344.1.0--a--657d644de8-k8s-calico--apiserver--7fc66dd6d7--qdqzw-eth0" Jun 20 19:16:36.213622 containerd[1726]: time="2025-06-20T19:16:36.213231435Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-pzkgt,Uid:b2972c76-0691-4f97-ac20-8d5147d7f50a,Namespace:kube-system,Attempt:0,} returns sandbox id \"b1cd96087601bcc57fd8864aa7d7f19f9cbc9b066dc51742210a4ad1503af62d\"" Jun 20 19:16:36.222377 containerd[1726]: time="2025-06-20T19:16:36.222124122Z" level=info msg="CreateContainer within sandbox \"b1cd96087601bcc57fd8864aa7d7f19f9cbc9b066dc51742210a4ad1503af62d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 20 19:16:36.224782 containerd[1726]: time="2025-06-20T19:16:36.224734781Z" level=info msg="connecting to shim 9833bfa4eefd36213babf093ba48d9c1e1c3aff2c08ded6d6e6a9efd8909e1fb" address="unix:///run/containerd/s/32dd0aadb1127e1af3d4f7dddc0c1900e28eb4e0cabcb55925322be9743ec7b8" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:16:36.258192 containerd[1726]: time="2025-06-20T19:16:36.257970129Z" level=info msg="Container f9137132175a17f4dbe06fc11286cf0928eb8bdae6ed261a25c608a7e4a8eb6d: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:16:36.276339 systemd[1]: Started cri-containerd-9833bfa4eefd36213babf093ba48d9c1e1c3aff2c08ded6d6e6a9efd8909e1fb.scope - libcontainer container 9833bfa4eefd36213babf093ba48d9c1e1c3aff2c08ded6d6e6a9efd8909e1fb. 
Jun 20 19:16:36.280676 containerd[1726]: time="2025-06-20T19:16:36.280301383Z" level=info msg="CreateContainer within sandbox \"b1cd96087601bcc57fd8864aa7d7f19f9cbc9b066dc51742210a4ad1503af62d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f9137132175a17f4dbe06fc11286cf0928eb8bdae6ed261a25c608a7e4a8eb6d\"" Jun 20 19:16:36.281845 containerd[1726]: time="2025-06-20T19:16:36.281562485Z" level=info msg="StartContainer for \"f9137132175a17f4dbe06fc11286cf0928eb8bdae6ed261a25c608a7e4a8eb6d\"" Jun 20 19:16:36.289678 containerd[1726]: time="2025-06-20T19:16:36.289648267Z" level=info msg="connecting to shim f9137132175a17f4dbe06fc11286cf0928eb8bdae6ed261a25c608a7e4a8eb6d" address="unix:///run/containerd/s/9f5fa07664a80ac70207db75674e0d7cc4ee46e82c2661d13e1e2d8185cd3f1d" protocol=ttrpc version=3 Jun 20 19:16:36.336075 systemd[1]: Started cri-containerd-f9137132175a17f4dbe06fc11286cf0928eb8bdae6ed261a25c608a7e4a8eb6d.scope - libcontainer container f9137132175a17f4dbe06fc11286cf0928eb8bdae6ed261a25c608a7e4a8eb6d. 
Jun 20 19:16:36.399508 containerd[1726]: time="2025-06-20T19:16:36.399457281Z" level=info msg="StartContainer for \"f9137132175a17f4dbe06fc11286cf0928eb8bdae6ed261a25c608a7e4a8eb6d\" returns successfully" Jun 20 19:16:36.404950 containerd[1726]: time="2025-06-20T19:16:36.404912119Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fc66dd6d7-qdqzw,Uid:51986139-459e-4222-b31e-78a26677e7c0,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"9833bfa4eefd36213babf093ba48d9c1e1c3aff2c08ded6d6e6a9efd8909e1fb\"" Jun 20 19:16:36.410990 containerd[1726]: time="2025-06-20T19:16:36.410953377Z" level=info msg="CreateContainer within sandbox \"9833bfa4eefd36213babf093ba48d9c1e1c3aff2c08ded6d6e6a9efd8909e1fb\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jun 20 19:16:36.430928 containerd[1726]: time="2025-06-20T19:16:36.430580017Z" level=info msg="Container 41b3cdaab2c23f62d8a90f51ec597d6b5527589dd3a5c7cc84796edd394bcd44: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:16:36.450860 containerd[1726]: time="2025-06-20T19:16:36.450684986Z" level=info msg="CreateContainer within sandbox \"9833bfa4eefd36213babf093ba48d9c1e1c3aff2c08ded6d6e6a9efd8909e1fb\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"41b3cdaab2c23f62d8a90f51ec597d6b5527589dd3a5c7cc84796edd394bcd44\"" Jun 20 19:16:36.451743 containerd[1726]: time="2025-06-20T19:16:36.451656739Z" level=info msg="StartContainer for \"41b3cdaab2c23f62d8a90f51ec597d6b5527589dd3a5c7cc84796edd394bcd44\"" Jun 20 19:16:36.454079 containerd[1726]: time="2025-06-20T19:16:36.454049828Z" level=info msg="connecting to shim 41b3cdaab2c23f62d8a90f51ec597d6b5527589dd3a5c7cc84796edd394bcd44" address="unix:///run/containerd/s/32dd0aadb1127e1af3d4f7dddc0c1900e28eb4e0cabcb55925322be9743ec7b8" protocol=ttrpc version=3 Jun 20 19:16:36.483001 systemd[1]: Started cri-containerd-41b3cdaab2c23f62d8a90f51ec597d6b5527589dd3a5c7cc84796edd394bcd44.scope - libcontainer 
container 41b3cdaab2c23f62d8a90f51ec597d6b5527589dd3a5c7cc84796edd394bcd44. Jun 20 19:16:36.523067 systemd-networkd[1354]: calib9292337a49: Gained IPv6LL Jun 20 19:16:36.558283 containerd[1726]: time="2025-06-20T19:16:36.558245339Z" level=info msg="StartContainer for \"41b3cdaab2c23f62d8a90f51ec597d6b5527589dd3a5c7cc84796edd394bcd44\" returns successfully" Jun 20 19:16:36.791963 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2038393639.mount: Deactivated successfully. Jun 20 19:16:36.843053 systemd-networkd[1354]: cali76388997587: Gained IPv6LL Jun 20 19:16:37.003189 kubelet[3151]: I0620 19:16:37.002460 3151 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7fc66dd6d7-qdqzw" podStartSLOduration=50.002440394 podStartE2EDuration="50.002440394s" podCreationTimestamp="2025-06-20 19:15:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 19:16:37.001523458 +0000 UTC m=+64.356422405" watchObservedRunningTime="2025-06-20 19:16:37.002440394 +0000 UTC m=+64.357339342" Jun 20 19:16:37.036018 systemd-networkd[1354]: calia2d56ae4359: Gained IPv6LL Jun 20 19:16:37.163050 systemd-networkd[1354]: cali4d3b5e7df91: Gained IPv6LL Jun 20 19:16:37.661092 containerd[1726]: time="2025-06-20T19:16:37.660463392Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:16:37.662769 containerd[1726]: time="2025-06-20T19:16:37.662740580Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.1: active requests=0, bytes read=66352249" Jun 20 19:16:37.665863 containerd[1726]: time="2025-06-20T19:16:37.665798509Z" level=info msg="ImageCreate event name:\"sha256:7ded2fef2b18e2077114599de13fa300df0e1437753deab5c59843a86d2dad82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:16:37.671487 
containerd[1726]: time="2025-06-20T19:16:37.671213192Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:173a10ef7a65a843f99fc366c7c860fa4068a8f52fda1b30ee589bc4ca43f45a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:16:37.672315 containerd[1726]: time="2025-06-20T19:16:37.672288515Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.1\" with image id \"sha256:7ded2fef2b18e2077114599de13fa300df0e1437753deab5c59843a86d2dad82\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:173a10ef7a65a843f99fc366c7c860fa4068a8f52fda1b30ee589bc4ca43f45a\", size \"66352095\" in 3.45407168s" Jun 20 19:16:37.672390 containerd[1726]: time="2025-06-20T19:16:37.672318880Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.1\" returns image reference \"sha256:7ded2fef2b18e2077114599de13fa300df0e1437753deab5c59843a86d2dad82\"" Jun 20 19:16:37.673238 containerd[1726]: time="2025-06-20T19:16:37.673216012Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.1\"" Jun 20 19:16:37.675008 containerd[1726]: time="2025-06-20T19:16:37.674907077Z" level=info msg="CreateContainer within sandbox \"b2e6823f3c7014a9c7ffd56ebdce07b4ab0e26eb5cbcab83f1fc09a7d2151393\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Jun 20 19:16:37.691408 containerd[1726]: time="2025-06-20T19:16:37.691382261Z" level=info msg="Container c8b62103f1f1b81c90ddbbcd1e4087b00341699251a8d2d37e0e8c66e97538af: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:16:37.708294 containerd[1726]: time="2025-06-20T19:16:37.708264241Z" level=info msg="CreateContainer within sandbox \"b2e6823f3c7014a9c7ffd56ebdce07b4ab0e26eb5cbcab83f1fc09a7d2151393\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"c8b62103f1f1b81c90ddbbcd1e4087b00341699251a8d2d37e0e8c66e97538af\"" Jun 20 19:16:37.709093 containerd[1726]: time="2025-06-20T19:16:37.708868937Z" 
level=info msg="StartContainer for \"c8b62103f1f1b81c90ddbbcd1e4087b00341699251a8d2d37e0e8c66e97538af\"" Jun 20 19:16:37.710431 containerd[1726]: time="2025-06-20T19:16:37.710396808Z" level=info msg="connecting to shim c8b62103f1f1b81c90ddbbcd1e4087b00341699251a8d2d37e0e8c66e97538af" address="unix:///run/containerd/s/a3ffb226458c5c876de47efe287db352593098c712c838033b34d9803f2bdf24" protocol=ttrpc version=3 Jun 20 19:16:37.731974 systemd[1]: Started cri-containerd-c8b62103f1f1b81c90ddbbcd1e4087b00341699251a8d2d37e0e8c66e97538af.scope - libcontainer container c8b62103f1f1b81c90ddbbcd1e4087b00341699251a8d2d37e0e8c66e97538af. Jun 20 19:16:37.782350 containerd[1726]: time="2025-06-20T19:16:37.782236897Z" level=info msg="StartContainer for \"c8b62103f1f1b81c90ddbbcd1e4087b00341699251a8d2d37e0e8c66e97538af\" returns successfully" Jun 20 19:16:37.879105 kubelet[3151]: I0620 19:16:37.879011 3151 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-pzkgt" podStartSLOduration=60.878970071 podStartE2EDuration="1m0.878970071s" podCreationTimestamp="2025-06-20 19:15:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 19:16:37.034168647 +0000 UTC m=+64.389067592" watchObservedRunningTime="2025-06-20 19:16:37.878970071 +0000 UTC m=+65.233869015" Jun 20 19:16:38.001824 kubelet[3151]: I0620 19:16:38.001588 3151 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-dc7b455cb-d49s2" podStartSLOduration=42.395170078 podStartE2EDuration="47.001566517s" podCreationTimestamp="2025-06-20 19:15:51 +0000 UTC" firstStartedPulling="2025-06-20 19:16:33.066686006 +0000 UTC m=+60.421584938" lastFinishedPulling="2025-06-20 19:16:37.673082442 +0000 UTC m=+65.027981377" observedRunningTime="2025-06-20 19:16:38.001260646 +0000 UTC m=+65.356159591" watchObservedRunningTime="2025-06-20 19:16:38.001566517 +0000 UTC 
m=+65.356465458" Jun 20 19:16:39.091891 containerd[1726]: time="2025-06-20T19:16:39.091844635Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c8b62103f1f1b81c90ddbbcd1e4087b00341699251a8d2d37e0e8c66e97538af\" id:\"b1a537d876ae6368d0750c4a754adbec633afd70e988c5453e28c3f5695b3b24\" pid:5523 exit_status:1 exited_at:{seconds:1750446999 nanos:91411886}" Jun 20 19:16:39.161145 containerd[1726]: time="2025-06-20T19:16:39.161098357Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:16:39.164073 containerd[1726]: time="2025-06-20T19:16:39.164034839Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.1: active requests=0, bytes read=8758389" Jun 20 19:16:39.170902 containerd[1726]: time="2025-06-20T19:16:39.170821145Z" level=info msg="ImageCreate event name:\"sha256:8a733c30ec1a8c9f3f51e2da387b425052ed4a9ca631da57c6b185183243e8e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:16:39.177334 containerd[1726]: time="2025-06-20T19:16:39.175931377Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:b2a5699992dd6c84cfab94ef60536b9aaf19ad8de648e8e0b92d3733f5f52d23\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:16:39.178041 containerd[1726]: time="2025-06-20T19:16:39.178005194Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.1\" with image id \"sha256:8a733c30ec1a8c9f3f51e2da387b425052ed4a9ca631da57c6b185183243e8e9\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:b2a5699992dd6c84cfab94ef60536b9aaf19ad8de648e8e0b92d3733f5f52d23\", size \"10251092\" in 1.504755586s" Jun 20 19:16:39.178548 containerd[1726]: time="2025-06-20T19:16:39.178051052Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.1\" returns image reference \"sha256:8a733c30ec1a8c9f3f51e2da387b425052ed4a9ca631da57c6b185183243e8e9\"" 
Jun 20 19:16:39.183264 containerd[1726]: time="2025-06-20T19:16:39.183231130Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.1\"" Jun 20 19:16:39.184483 containerd[1726]: time="2025-06-20T19:16:39.184201335Z" level=info msg="CreateContainer within sandbox \"d4d85abd47b401728e0245562f900bb3993fdf9b0ff85ef43d551ef890e91d89\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jun 20 19:16:39.207158 containerd[1726]: time="2025-06-20T19:16:39.207120642Z" level=info msg="Container 0ed9683c1dc7c8da870981281a0a0f45382864b5970c48d51dc0990c5ea9bc81: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:16:39.230356 containerd[1726]: time="2025-06-20T19:16:39.230321770Z" level=info msg="CreateContainer within sandbox \"d4d85abd47b401728e0245562f900bb3993fdf9b0ff85ef43d551ef890e91d89\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"0ed9683c1dc7c8da870981281a0a0f45382864b5970c48d51dc0990c5ea9bc81\"" Jun 20 19:16:39.231189 containerd[1726]: time="2025-06-20T19:16:39.231131869Z" level=info msg="StartContainer for \"0ed9683c1dc7c8da870981281a0a0f45382864b5970c48d51dc0990c5ea9bc81\"" Jun 20 19:16:39.234436 containerd[1726]: time="2025-06-20T19:16:39.234367207Z" level=info msg="connecting to shim 0ed9683c1dc7c8da870981281a0a0f45382864b5970c48d51dc0990c5ea9bc81" address="unix:///run/containerd/s/b57adc0681725d0113ffa69aad8b95f5a0c91a36cd64c7cabd897af17fa1244d" protocol=ttrpc version=3 Jun 20 19:16:39.263003 systemd[1]: Started cri-containerd-0ed9683c1dc7c8da870981281a0a0f45382864b5970c48d51dc0990c5ea9bc81.scope - libcontainer container 0ed9683c1dc7c8da870981281a0a0f45382864b5970c48d51dc0990c5ea9bc81. 
Jun 20 19:16:39.318257 containerd[1726]: time="2025-06-20T19:16:39.318214787Z" level=info msg="StartContainer for \"0ed9683c1dc7c8da870981281a0a0f45382864b5970c48d51dc0990c5ea9bc81\" returns successfully" Jun 20 19:16:39.579705 containerd[1726]: time="2025-06-20T19:16:39.579660346Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:16:39.581878 containerd[1726]: time="2025-06-20T19:16:39.581838998Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.1: active requests=0, bytes read=77" Jun 20 19:16:39.583311 containerd[1726]: time="2025-06-20T19:16:39.583268351Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.1\" with image id \"sha256:5d29e6e796e41d7383da7c5b73fc136f7e486d40c52f79a04098396b7f85106c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:f6439af8b6022a48d2c6c75d92ec31fe177e7b6a90c58c78ca3964db2b94e21b\", size \"48798372\" in 399.996301ms" Jun 20 19:16:39.583708 containerd[1726]: time="2025-06-20T19:16:39.583318200Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.1\" returns image reference \"sha256:5d29e6e796e41d7383da7c5b73fc136f7e486d40c52f79a04098396b7f85106c\"" Jun 20 19:16:39.585086 containerd[1726]: time="2025-06-20T19:16:39.585058510Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.1\"" Jun 20 19:16:39.587067 containerd[1726]: time="2025-06-20T19:16:39.587027688Z" level=info msg="CreateContainer within sandbox \"646ee236562b14e6be123c592f49ce1d5dc3994e1d9c8f2d82188fc4fb71c108\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jun 20 19:16:39.605916 containerd[1726]: time="2025-06-20T19:16:39.605883714Z" level=info msg="Container 0fdbc81a8f8f99d181a357074b7e4361c1dad8097ef6c846951760a071516a33: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:16:39.622720 
containerd[1726]: time="2025-06-20T19:16:39.622690629Z" level=info msg="CreateContainer within sandbox \"646ee236562b14e6be123c592f49ce1d5dc3994e1d9c8f2d82188fc4fb71c108\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"0fdbc81a8f8f99d181a357074b7e4361c1dad8097ef6c846951760a071516a33\"" Jun 20 19:16:39.624331 containerd[1726]: time="2025-06-20T19:16:39.623244035Z" level=info msg="StartContainer for \"0fdbc81a8f8f99d181a357074b7e4361c1dad8097ef6c846951760a071516a33\"" Jun 20 19:16:39.624420 containerd[1726]: time="2025-06-20T19:16:39.624382260Z" level=info msg="connecting to shim 0fdbc81a8f8f99d181a357074b7e4361c1dad8097ef6c846951760a071516a33" address="unix:///run/containerd/s/c789fea3ff3957b0fc96e5c8b60194b618aa968eccdd97e58208572c2038614e" protocol=ttrpc version=3 Jun 20 19:16:39.646995 systemd[1]: Started cri-containerd-0fdbc81a8f8f99d181a357074b7e4361c1dad8097ef6c846951760a071516a33.scope - libcontainer container 0fdbc81a8f8f99d181a357074b7e4361c1dad8097ef6c846951760a071516a33. 
Jun 20 19:16:39.695560 containerd[1726]: time="2025-06-20T19:16:39.695514925Z" level=info msg="StartContainer for \"0fdbc81a8f8f99d181a357074b7e4361c1dad8097ef6c846951760a071516a33\" returns successfully" Jun 20 19:16:40.022853 kubelet[3151]: I0620 19:16:40.022028 3151 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6675744cfd-8csx6" podStartSLOduration=46.515756756 podStartE2EDuration="52.022008698s" podCreationTimestamp="2025-06-20 19:15:48 +0000 UTC" firstStartedPulling="2025-06-20 19:16:34.077976574 +0000 UTC m=+61.432875510" lastFinishedPulling="2025-06-20 19:16:39.58422852 +0000 UTC m=+66.939127452" observedRunningTime="2025-06-20 19:16:40.020475958 +0000 UTC m=+67.375374907" watchObservedRunningTime="2025-06-20 19:16:40.022008698 +0000 UTC m=+67.376907644" Jun 20 19:16:40.131640 containerd[1726]: time="2025-06-20T19:16:40.131542500Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c8b62103f1f1b81c90ddbbcd1e4087b00341699251a8d2d37e0e8c66e97538af\" id:\"ab2df4868cd89c4a772edd8a0e991d6b5d90a10d7e4be5110f00c03a7a9d09eb\" pid:5610 exit_status:1 exited_at:{seconds:1750447000 nanos:131221085}" Jun 20 19:16:41.000469 kubelet[3151]: I0620 19:16:41.000437 3151 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 20 19:16:42.179336 containerd[1726]: time="2025-06-20T19:16:42.179286521Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:16:42.181621 containerd[1726]: time="2025-06-20T19:16:42.181590353Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.1: active requests=0, bytes read=51246233" Jun 20 19:16:42.184875 containerd[1726]: time="2025-06-20T19:16:42.184811124Z" level=info msg="ImageCreate event name:\"sha256:6df5d7da55b19142ea456ddaa7f49909709419c92a39991e84b0f6708f953d73\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" Jun 20 19:16:42.192710 containerd[1726]: time="2025-06-20T19:16:42.192634506Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5a988b0c09389a083a7f37e3f14e361659f0bcf538c01d50e9f785671a7d9b20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:16:42.193215 containerd[1726]: time="2025-06-20T19:16:42.193066303Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.1\" with image id \"sha256:6df5d7da55b19142ea456ddaa7f49909709419c92a39991e84b0f6708f953d73\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5a988b0c09389a083a7f37e3f14e361659f0bcf538c01d50e9f785671a7d9b20\", size \"52738904\" in 2.60797325s" Jun 20 19:16:42.193215 containerd[1726]: time="2025-06-20T19:16:42.193100052Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.1\" returns image reference \"sha256:6df5d7da55b19142ea456ddaa7f49909709419c92a39991e84b0f6708f953d73\"" Jun 20 19:16:42.194350 containerd[1726]: time="2025-06-20T19:16:42.194322981Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.1\"" Jun 20 19:16:42.208596 containerd[1726]: time="2025-06-20T19:16:42.207472890Z" level=info msg="CreateContainer within sandbox \"7b7311e5ca078599a69b909a3c6cf1fc7756dafe45cc0a1996f7dc318e519728\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jun 20 19:16:42.231630 containerd[1726]: time="2025-06-20T19:16:42.231593120Z" level=info msg="Container 4c2da2dd11d7280278f2e7dcf56ecead1249c212ec2012e3c2c234b3e8264a8a: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:16:42.235104 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1298230182.mount: Deactivated successfully. 
Jun 20 19:16:42.248887 containerd[1726]: time="2025-06-20T19:16:42.248853692Z" level=info msg="CreateContainer within sandbox \"7b7311e5ca078599a69b909a3c6cf1fc7756dafe45cc0a1996f7dc318e519728\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"4c2da2dd11d7280278f2e7dcf56ecead1249c212ec2012e3c2c234b3e8264a8a\""
Jun 20 19:16:42.249412 containerd[1726]: time="2025-06-20T19:16:42.249392904Z" level=info msg="StartContainer for \"4c2da2dd11d7280278f2e7dcf56ecead1249c212ec2012e3c2c234b3e8264a8a\""
Jun 20 19:16:42.250954 containerd[1726]: time="2025-06-20T19:16:42.250893584Z" level=info msg="connecting to shim 4c2da2dd11d7280278f2e7dcf56ecead1249c212ec2012e3c2c234b3e8264a8a" address="unix:///run/containerd/s/b95ffbd6a4eee984bfb918341a202e48e6e4f0108e2444eb9312ce182e36cadc" protocol=ttrpc version=3
Jun 20 19:16:42.274037 systemd[1]: Started cri-containerd-4c2da2dd11d7280278f2e7dcf56ecead1249c212ec2012e3c2c234b3e8264a8a.scope - libcontainer container 4c2da2dd11d7280278f2e7dcf56ecead1249c212ec2012e3c2c234b3e8264a8a.
Jun 20 19:16:42.321268 containerd[1726]: time="2025-06-20T19:16:42.321232873Z" level=info msg="StartContainer for \"4c2da2dd11d7280278f2e7dcf56ecead1249c212ec2012e3c2c234b3e8264a8a\" returns successfully"
Jun 20 19:16:43.018821 kubelet[3151]: I0620 19:16:43.018744 3151 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7fcd5cccd8-wjrwq" podStartSLOduration=44.952601208 podStartE2EDuration="52.018723285s" podCreationTimestamp="2025-06-20 19:15:51 +0000 UTC" firstStartedPulling="2025-06-20 19:16:35.127778694 +0000 UTC m=+62.482677632" lastFinishedPulling="2025-06-20 19:16:42.193900767 +0000 UTC m=+69.548799709" observedRunningTime="2025-06-20 19:16:43.018398283 +0000 UTC m=+70.373297229" watchObservedRunningTime="2025-06-20 19:16:43.018723285 +0000 UTC m=+70.373622257"
Jun 20 19:16:43.056724 containerd[1726]: time="2025-06-20T19:16:43.056678487Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4c2da2dd11d7280278f2e7dcf56ecead1249c212ec2012e3c2c234b3e8264a8a\" id:\"1b43415905d078b8c8493160f5ab3f5dff53488e9f6af9078a6f3c2278bd2b60\" pid:5688 exited_at:{seconds:1750447003 nanos:56341704}"
Jun 20 19:16:43.914743 containerd[1726]: time="2025-06-20T19:16:43.914696030Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:16:43.917373 containerd[1726]: time="2025-06-20T19:16:43.917331877Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.1: active requests=0, bytes read=14705633"
Jun 20 19:16:43.921136 containerd[1726]: time="2025-06-20T19:16:43.921111037Z" level=info msg="ImageCreate event name:\"sha256:dfc00385e8755bddd1053a2a482a3559ad6c93bd8b882371b9ed8b5c3dfe22b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:16:43.924648 containerd[1726]: time="2025-06-20T19:16:43.924582674Z" level=info msg="ImageCreate event
name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:1a882b6866dd22d783a39f1e041b87a154666ea4dd8b669fe98d0b0fac58d225\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:16:43.926253 containerd[1726]: time="2025-06-20T19:16:43.925573465Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.1\" with image id \"sha256:dfc00385e8755bddd1053a2a482a3559ad6c93bd8b882371b9ed8b5c3dfe22b5\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:1a882b6866dd22d783a39f1e041b87a154666ea4dd8b669fe98d0b0fac58d225\", size \"16198288\" in 1.731207656s"
Jun 20 19:16:43.926253 containerd[1726]: time="2025-06-20T19:16:43.925617726Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.1\" returns image reference \"sha256:dfc00385e8755bddd1053a2a482a3559ad6c93bd8b882371b9ed8b5c3dfe22b5\""
Jun 20 19:16:43.935186 containerd[1726]: time="2025-06-20T19:16:43.935147610Z" level=info msg="CreateContainer within sandbox \"d4d85abd47b401728e0245562f900bb3993fdf9b0ff85ef43d551ef890e91d89\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Jun 20 19:16:43.952847 containerd[1726]: time="2025-06-20T19:16:43.952738161Z" level=info msg="Container 9a7992f32a5b8491d4cb4e4ae034e9c9a4775a5787fd70d43177f6510a3cb3b8: CDI devices from CRI Config.CDIDevices: []"
Jun 20 19:16:43.971198 containerd[1726]: time="2025-06-20T19:16:43.971167347Z" level=info msg="CreateContainer within sandbox \"d4d85abd47b401728e0245562f900bb3993fdf9b0ff85ef43d551ef890e91d89\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"9a7992f32a5b8491d4cb4e4ae034e9c9a4775a5787fd70d43177f6510a3cb3b8\""
Jun 20 19:16:43.971769 containerd[1726]: time="2025-06-20T19:16:43.971706473Z" level=info msg="StartContainer for \"9a7992f32a5b8491d4cb4e4ae034e9c9a4775a5787fd70d43177f6510a3cb3b8\""
Jun 20 19:16:43.973292
containerd[1726]: time="2025-06-20T19:16:43.973254500Z" level=info msg="connecting to shim 9a7992f32a5b8491d4cb4e4ae034e9c9a4775a5787fd70d43177f6510a3cb3b8" address="unix:///run/containerd/s/b57adc0681725d0113ffa69aad8b95f5a0c91a36cd64c7cabd897af17fa1244d" protocol=ttrpc version=3
Jun 20 19:16:43.998067 systemd[1]: Started cri-containerd-9a7992f32a5b8491d4cb4e4ae034e9c9a4775a5787fd70d43177f6510a3cb3b8.scope - libcontainer container 9a7992f32a5b8491d4cb4e4ae034e9c9a4775a5787fd70d43177f6510a3cb3b8.
Jun 20 19:16:44.035179 containerd[1726]: time="2025-06-20T19:16:44.035138729Z" level=info msg="StartContainer for \"9a7992f32a5b8491d4cb4e4ae034e9c9a4775a5787fd70d43177f6510a3cb3b8\" returns successfully"
Jun 20 19:16:44.832841 kubelet[3151]: I0620 19:16:44.832792 3151 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Jun 20 19:16:44.832841 kubelet[3151]: I0620 19:16:44.832850 3151 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Jun 20 19:16:49.321026 containerd[1726]: time="2025-06-20T19:16:49.320920323Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4c2da2dd11d7280278f2e7dcf56ecead1249c212ec2012e3c2c234b3e8264a8a\" id:\"affed10927edb08b1049faf3f9315dd7d10d27dd06731dac4f9e1b063cb993c9\" pid:5754 exited_at:{seconds:1750447009 nanos:320429308}"
Jun 20 19:16:49.409352 containerd[1726]: time="2025-06-20T19:16:49.409168074Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c8b62103f1f1b81c90ddbbcd1e4087b00341699251a8d2d37e0e8c66e97538af\" id:\"219e39c311c6cfe830dd9ffb528de945f70b54fdb69b379dbd7737e8ed9f19b6\" pid:5775 exited_at:{seconds:1750447009 nanos:408699228}"
Jun 20 19:16:49.423461 kubelet[3151]: I0620 19:16:49.423077 3151 pod_startup_latency_tracker.go:104] "Observed pod startup duration"
pod="calico-system/csi-node-driver-4b5m5" podStartSLOduration=47.640935887 podStartE2EDuration="58.423056167s" podCreationTimestamp="2025-06-20 19:15:51 +0000 UTC" firstStartedPulling="2025-06-20 19:16:33.144862659 +0000 UTC m=+60.499761602" lastFinishedPulling="2025-06-20 19:16:43.926982941 +0000 UTC m=+71.281881882" observedRunningTime="2025-06-20 19:16:45.029740009 +0000 UTC m=+72.384638940" watchObservedRunningTime="2025-06-20 19:16:49.423056167 +0000 UTC m=+76.777955111"
Jun 20 19:16:56.125117 containerd[1726]: time="2025-06-20T19:16:56.125061373Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1d933b96dabc45903f8c9443835ccebc96daeec1e6159b99df5e62de2b9e981d\" id:\"af0bd66bfb5e985a992276b4fad6b2748d3f220b83b99392d4eab89a1c122136\" pid:5801 exited_at:{seconds:1750447016 nanos:124753806}"
Jun 20 19:17:07.445424 kubelet[3151]: I0620 19:17:07.445313 3151 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jun 20 19:17:07.516219 containerd[1726]: time="2025-06-20T19:17:07.516166047Z" level=info msg="StopContainer for \"58b8ec88757af4d8f39f43328452b07d2d251c3e2cd72160f7dce7db11e660a1\" with timeout 30 (s)"
Jun 20 19:17:07.517011 containerd[1726]: time="2025-06-20T19:17:07.516966682Z" level=info msg="Stop container \"58b8ec88757af4d8f39f43328452b07d2d251c3e2cd72160f7dce7db11e660a1\" with signal terminated"
Jun 20 19:17:07.562710 systemd[1]: Created slice kubepods-besteffort-pod6079fbb5_4c9b_4639_bf26_2e74c12a8bc4.slice - libcontainer container kubepods-besteffort-pod6079fbb5_4c9b_4639_bf26_2e74c12a8bc4.slice.
Jun 20 19:17:07.571233 systemd[1]: cri-containerd-58b8ec88757af4d8f39f43328452b07d2d251c3e2cd72160f7dce7db11e660a1.scope: Deactivated successfully.
Jun 20 19:17:07.576685 containerd[1726]: time="2025-06-20T19:17:07.575805719Z" level=info msg="received exit event container_id:\"58b8ec88757af4d8f39f43328452b07d2d251c3e2cd72160f7dce7db11e660a1\" id:\"58b8ec88757af4d8f39f43328452b07d2d251c3e2cd72160f7dce7db11e660a1\" pid:5022 exit_status:1 exited_at:{seconds:1750447027 nanos:575177147}"
Jun 20 19:17:07.578502 containerd[1726]: time="2025-06-20T19:17:07.578473048Z" level=info msg="TaskExit event in podsandbox handler container_id:\"58b8ec88757af4d8f39f43328452b07d2d251c3e2cd72160f7dce7db11e660a1\" id:\"58b8ec88757af4d8f39f43328452b07d2d251c3e2cd72160f7dce7db11e660a1\" pid:5022 exit_status:1 exited_at:{seconds:1750447027 nanos:575177147}"
Jun 20 19:17:07.627920 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-58b8ec88757af4d8f39f43328452b07d2d251c3e2cd72160f7dce7db11e660a1-rootfs.mount: Deactivated successfully.
Jun 20 19:17:07.742630 kubelet[3151]: I0620 19:17:07.742426 3151 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/6079fbb5-4c9b-4639-bf26-2e74c12a8bc4-calico-apiserver-certs\") pod \"calico-apiserver-6675744cfd-sp6q5\" (UID: \"6079fbb5-4c9b-4639-bf26-2e74c12a8bc4\") " pod="calico-apiserver/calico-apiserver-6675744cfd-sp6q5"
Jun 20 19:17:07.742630 kubelet[3151]: I0620 19:17:07.742476 3151 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9fx5c\" (UniqueName: \"kubernetes.io/projected/6079fbb5-4c9b-4639-bf26-2e74c12a8bc4-kube-api-access-9fx5c\") pod \"calico-apiserver-6675744cfd-sp6q5\" (UID: \"6079fbb5-4c9b-4639-bf26-2e74c12a8bc4\") " pod="calico-apiserver/calico-apiserver-6675744cfd-sp6q5"
Jun 20 19:17:07.870846 containerd[1726]: time="2025-06-20T19:17:07.869107457Z" level=info msg="RunPodSandbox for
&PodSandboxMetadata{Name:calico-apiserver-6675744cfd-sp6q5,Uid:6079fbb5-4c9b-4639-bf26-2e74c12a8bc4,Namespace:calico-apiserver,Attempt:0,}"
Jun 20 19:17:08.768369 systemd-networkd[1354]: cali084ab62a069: Link UP
Jun 20 19:17:08.768729 systemd-networkd[1354]: cali084ab62a069: Gained carrier
Jun 20 19:17:08.786540 containerd[1726]: 2025-06-20 19:17:08.671 [INFO][5852] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344.1.0--a--657d644de8-k8s-calico--apiserver--6675744cfd--sp6q5-eth0 calico-apiserver-6675744cfd- calico-apiserver 6079fbb5-4c9b-4639-bf26-2e74c12a8bc4 1215 0 2025-06-20 19:17:07 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6675744cfd projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4344.1.0-a-657d644de8 calico-apiserver-6675744cfd-sp6q5 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali084ab62a069 [] [] }} ContainerID="a70bd42ecdbe8558f45c486f921d4d4468cd9bfd0f099b92cf0e7b87faa6dc93" Namespace="calico-apiserver" Pod="calico-apiserver-6675744cfd-sp6q5" WorkloadEndpoint="ci--4344.1.0--a--657d644de8-k8s-calico--apiserver--6675744cfd--sp6q5-"
Jun 20 19:17:08.786540 containerd[1726]: 2025-06-20 19:17:08.671 [INFO][5852] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a70bd42ecdbe8558f45c486f921d4d4468cd9bfd0f099b92cf0e7b87faa6dc93" Namespace="calico-apiserver" Pod="calico-apiserver-6675744cfd-sp6q5" WorkloadEndpoint="ci--4344.1.0--a--657d644de8-k8s-calico--apiserver--6675744cfd--sp6q5-eth0"
Jun 20 19:17:08.786540 containerd[1726]: 2025-06-20 19:17:08.715 [INFO][5863] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a70bd42ecdbe8558f45c486f921d4d4468cd9bfd0f099b92cf0e7b87faa6dc93"
HandleID="k8s-pod-network.a70bd42ecdbe8558f45c486f921d4d4468cd9bfd0f099b92cf0e7b87faa6dc93" Workload="ci--4344.1.0--a--657d644de8-k8s-calico--apiserver--6675744cfd--sp6q5-eth0"
Jun 20 19:17:08.786540 containerd[1726]: 2025-06-20 19:17:08.715 [INFO][5863] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a70bd42ecdbe8558f45c486f921d4d4468cd9bfd0f099b92cf0e7b87faa6dc93" HandleID="k8s-pod-network.a70bd42ecdbe8558f45c486f921d4d4468cd9bfd0f099b92cf0e7b87faa6dc93" Workload="ci--4344.1.0--a--657d644de8-k8s-calico--apiserver--6675744cfd--sp6q5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002acf70), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4344.1.0-a-657d644de8", "pod":"calico-apiserver-6675744cfd-sp6q5", "timestamp":"2025-06-20 19:17:08.715279882 +0000 UTC"}, Hostname:"ci-4344.1.0-a-657d644de8", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jun 20 19:17:08.786540 containerd[1726]: 2025-06-20 19:17:08.715 [INFO][5863] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jun 20 19:17:08.786540 containerd[1726]: 2025-06-20 19:17:08.715 [INFO][5863] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jun 20 19:17:08.786540 containerd[1726]: 2025-06-20 19:17:08.715 [INFO][5863] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344.1.0-a-657d644de8'
Jun 20 19:17:08.786540 containerd[1726]: 2025-06-20 19:17:08.723 [INFO][5863] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a70bd42ecdbe8558f45c486f921d4d4468cd9bfd0f099b92cf0e7b87faa6dc93" host="ci-4344.1.0-a-657d644de8"
Jun 20 19:17:08.786540 containerd[1726]: 2025-06-20 19:17:08.730 [INFO][5863] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344.1.0-a-657d644de8"
Jun 20 19:17:08.786540 containerd[1726]: 2025-06-20 19:17:08.736 [INFO][5863] ipam/ipam.go 511: Trying affinity for 192.168.8.0/26 host="ci-4344.1.0-a-657d644de8"
Jun 20 19:17:08.786540 containerd[1726]: 2025-06-20 19:17:08.738 [INFO][5863] ipam/ipam.go 158: Attempting to load block cidr=192.168.8.0/26 host="ci-4344.1.0-a-657d644de8"
Jun 20 19:17:08.786540 containerd[1726]: 2025-06-20 19:17:08.740 [INFO][5863] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.8.0/26 host="ci-4344.1.0-a-657d644de8"
Jun 20 19:17:08.786540 containerd[1726]: 2025-06-20 19:17:08.740 [INFO][5863] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.8.0/26 handle="k8s-pod-network.a70bd42ecdbe8558f45c486f921d4d4468cd9bfd0f099b92cf0e7b87faa6dc93" host="ci-4344.1.0-a-657d644de8"
Jun 20 19:17:08.786540 containerd[1726]: 2025-06-20 19:17:08.741 [INFO][5863] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.a70bd42ecdbe8558f45c486f921d4d4468cd9bfd0f099b92cf0e7b87faa6dc93
Jun 20 19:17:08.786540 containerd[1726]: 2025-06-20 19:17:08.749 [INFO][5863] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.8.0/26 handle="k8s-pod-network.a70bd42ecdbe8558f45c486f921d4d4468cd9bfd0f099b92cf0e7b87faa6dc93" host="ci-4344.1.0-a-657d644de8"
Jun 20 19:17:08.786540 containerd[1726]: 2025-06-20 19:17:08.759 [INFO][5863] ipam/ipam.go 1256: Successfully claimed
IPs: [192.168.8.10/26] block=192.168.8.0/26 handle="k8s-pod-network.a70bd42ecdbe8558f45c486f921d4d4468cd9bfd0f099b92cf0e7b87faa6dc93" host="ci-4344.1.0-a-657d644de8"
Jun 20 19:17:08.786540 containerd[1726]: 2025-06-20 19:17:08.759 [INFO][5863] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.8.10/26] handle="k8s-pod-network.a70bd42ecdbe8558f45c486f921d4d4468cd9bfd0f099b92cf0e7b87faa6dc93" host="ci-4344.1.0-a-657d644de8"
Jun 20 19:17:08.786540 containerd[1726]: 2025-06-20 19:17:08.759 [INFO][5863] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jun 20 19:17:08.786540 containerd[1726]: 2025-06-20 19:17:08.759 [INFO][5863] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.8.10/26] IPv6=[] ContainerID="a70bd42ecdbe8558f45c486f921d4d4468cd9bfd0f099b92cf0e7b87faa6dc93" HandleID="k8s-pod-network.a70bd42ecdbe8558f45c486f921d4d4468cd9bfd0f099b92cf0e7b87faa6dc93" Workload="ci--4344.1.0--a--657d644de8-k8s-calico--apiserver--6675744cfd--sp6q5-eth0"
Jun 20 19:17:08.789394 containerd[1726]: 2025-06-20 19:17:08.764 [INFO][5852] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a70bd42ecdbe8558f45c486f921d4d4468cd9bfd0f099b92cf0e7b87faa6dc93" Namespace="calico-apiserver" Pod="calico-apiserver-6675744cfd-sp6q5" WorkloadEndpoint="ci--4344.1.0--a--657d644de8-k8s-calico--apiserver--6675744cfd--sp6q5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.0--a--657d644de8-k8s-calico--apiserver--6675744cfd--sp6q5-eth0", GenerateName:"calico-apiserver-6675744cfd-", Namespace:"calico-apiserver", SelfLink:"", UID:"6079fbb5-4c9b-4639-bf26-2e74c12a8bc4", ResourceVersion:"1215", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 17, 7, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver",
"pod-template-hash":"6675744cfd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.0-a-657d644de8", ContainerID:"", Pod:"calico-apiserver-6675744cfd-sp6q5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.8.10/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali084ab62a069", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jun 20 19:17:08.789394 containerd[1726]: 2025-06-20 19:17:08.764 [INFO][5852] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.8.10/32] ContainerID="a70bd42ecdbe8558f45c486f921d4d4468cd9bfd0f099b92cf0e7b87faa6dc93" Namespace="calico-apiserver" Pod="calico-apiserver-6675744cfd-sp6q5" WorkloadEndpoint="ci--4344.1.0--a--657d644de8-k8s-calico--apiserver--6675744cfd--sp6q5-eth0"
Jun 20 19:17:08.789394 containerd[1726]: 2025-06-20 19:17:08.764 [INFO][5852] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali084ab62a069 ContainerID="a70bd42ecdbe8558f45c486f921d4d4468cd9bfd0f099b92cf0e7b87faa6dc93" Namespace="calico-apiserver" Pod="calico-apiserver-6675744cfd-sp6q5" WorkloadEndpoint="ci--4344.1.0--a--657d644de8-k8s-calico--apiserver--6675744cfd--sp6q5-eth0"
Jun 20 19:17:08.789394 containerd[1726]: 2025-06-20 19:17:08.768 [INFO][5852] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a70bd42ecdbe8558f45c486f921d4d4468cd9bfd0f099b92cf0e7b87faa6dc93" Namespace="calico-apiserver" Pod="calico-apiserver-6675744cfd-sp6q5"
WorkloadEndpoint="ci--4344.1.0--a--657d644de8-k8s-calico--apiserver--6675744cfd--sp6q5-eth0"
Jun 20 19:17:08.789394 containerd[1726]: 2025-06-20 19:17:08.769 [INFO][5852] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a70bd42ecdbe8558f45c486f921d4d4468cd9bfd0f099b92cf0e7b87faa6dc93" Namespace="calico-apiserver" Pod="calico-apiserver-6675744cfd-sp6q5" WorkloadEndpoint="ci--4344.1.0--a--657d644de8-k8s-calico--apiserver--6675744cfd--sp6q5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.0--a--657d644de8-k8s-calico--apiserver--6675744cfd--sp6q5-eth0", GenerateName:"calico-apiserver-6675744cfd-", Namespace:"calico-apiserver", SelfLink:"", UID:"6079fbb5-4c9b-4639-bf26-2e74c12a8bc4", ResourceVersion:"1215", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 17, 7, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6675744cfd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.0-a-657d644de8", ContainerID:"a70bd42ecdbe8558f45c486f921d4d4468cd9bfd0f099b92cf0e7b87faa6dc93", Pod:"calico-apiserver-6675744cfd-sp6q5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.8.10/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali084ab62a069", MAC:"ee:8f:2c:a8:63:3e",
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jun 20 19:17:08.789394 containerd[1726]: 2025-06-20 19:17:08.782 [INFO][5852] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a70bd42ecdbe8558f45c486f921d4d4468cd9bfd0f099b92cf0e7b87faa6dc93" Namespace="calico-apiserver" Pod="calico-apiserver-6675744cfd-sp6q5" WorkloadEndpoint="ci--4344.1.0--a--657d644de8-k8s-calico--apiserver--6675744cfd--sp6q5-eth0"
Jun 20 19:17:10.379084 systemd-networkd[1354]: cali084ab62a069: Gained IPv6LL
Jun 20 19:17:11.067878 containerd[1726]: time="2025-06-20T19:17:11.067819005Z" level=info msg="StopContainer for \"58b8ec88757af4d8f39f43328452b07d2d251c3e2cd72160f7dce7db11e660a1\" returns successfully"
Jun 20 19:17:11.069841 containerd[1726]: time="2025-06-20T19:17:11.069475394Z" level=info msg="StopPodSandbox for \"3a0eeca5b0b826687e2f5e1847f9382c5f92ad03f7a1ac3acb5911f2013bf656\""
Jun 20 19:17:11.069841 containerd[1726]: time="2025-06-20T19:17:11.069555414Z" level=info msg="Container to stop \"58b8ec88757af4d8f39f43328452b07d2d251c3e2cd72160f7dce7db11e660a1\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jun 20 19:17:11.071361 containerd[1726]: time="2025-06-20T19:17:11.071325764Z" level=info msg="connecting to shim a70bd42ecdbe8558f45c486f921d4d4468cd9bfd0f099b92cf0e7b87faa6dc93" address="unix:///run/containerd/s/3651f2390388ada6a15b071860d91be72fe9692bd71b4cb020591ad4427beff1" namespace=k8s.io protocol=ttrpc version=3
Jun 20 19:17:11.094386 systemd[1]: cri-containerd-3a0eeca5b0b826687e2f5e1847f9382c5f92ad03f7a1ac3acb5911f2013bf656.scope: Deactivated successfully.
Jun 20 19:17:11.105857 containerd[1726]: time="2025-06-20T19:17:11.105453640Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3a0eeca5b0b826687e2f5e1847f9382c5f92ad03f7a1ac3acb5911f2013bf656\" id:\"3a0eeca5b0b826687e2f5e1847f9382c5f92ad03f7a1ac3acb5911f2013bf656\" pid:4756 exit_status:137 exited_at:{seconds:1750447031 nanos:105063830}"
Jun 20 19:17:11.113087 systemd[1]: Started cri-containerd-a70bd42ecdbe8558f45c486f921d4d4468cd9bfd0f099b92cf0e7b87faa6dc93.scope - libcontainer container a70bd42ecdbe8558f45c486f921d4d4468cd9bfd0f099b92cf0e7b87faa6dc93.
Jun 20 19:17:11.176254 containerd[1726]: time="2025-06-20T19:17:11.174898893Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6675744cfd-sp6q5,Uid:6079fbb5-4c9b-4639-bf26-2e74c12a8bc4,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"a70bd42ecdbe8558f45c486f921d4d4468cd9bfd0f099b92cf0e7b87faa6dc93\""
Jun 20 19:17:11.179854 containerd[1726]: time="2025-06-20T19:17:11.179775997Z" level=info msg="CreateContainer within sandbox \"a70bd42ecdbe8558f45c486f921d4d4468cd9bfd0f099b92cf0e7b87faa6dc93\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Jun 20 19:17:11.187249 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3a0eeca5b0b826687e2f5e1847f9382c5f92ad03f7a1ac3acb5911f2013bf656-rootfs.mount: Deactivated successfully.
Jun 20 19:17:11.195853 containerd[1726]: time="2025-06-20T19:17:11.195789151Z" level=info msg="shim disconnected" id=3a0eeca5b0b826687e2f5e1847f9382c5f92ad03f7a1ac3acb5911f2013bf656 namespace=k8s.io
Jun 20 19:17:11.195853 containerd[1726]: time="2025-06-20T19:17:11.195846755Z" level=warning msg="cleaning up after shim disconnected" id=3a0eeca5b0b826687e2f5e1847f9382c5f92ad03f7a1ac3acb5911f2013bf656 namespace=k8s.io
Jun 20 19:17:11.195980 containerd[1726]: time="2025-06-20T19:17:11.195855659Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jun 20 19:17:11.203854 containerd[1726]: time="2025-06-20T19:17:11.202989575Z" level=info msg="Container 0e83eb5a6f71ae9d84a96be92f0532db1bf37a57b75ce1cae80e054a0a349ac0: CDI devices from CRI Config.CDIDevices: []"
Jun 20 19:17:11.222114 containerd[1726]: time="2025-06-20T19:17:11.222073526Z" level=info msg="CreateContainer within sandbox \"a70bd42ecdbe8558f45c486f921d4d4468cd9bfd0f099b92cf0e7b87faa6dc93\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"0e83eb5a6f71ae9d84a96be92f0532db1bf37a57b75ce1cae80e054a0a349ac0\""
Jun 20 19:17:11.223070 containerd[1726]: time="2025-06-20T19:17:11.222628699Z" level=info msg="StartContainer for \"0e83eb5a6f71ae9d84a96be92f0532db1bf37a57b75ce1cae80e054a0a349ac0\""
Jun 20 19:17:11.224567 containerd[1726]: time="2025-06-20T19:17:11.223772416Z" level=info msg="connecting to shim 0e83eb5a6f71ae9d84a96be92f0532db1bf37a57b75ce1cae80e054a0a349ac0" address="unix:///run/containerd/s/3651f2390388ada6a15b071860d91be72fe9692bd71b4cb020591ad4427beff1" protocol=ttrpc version=3
Jun 20 19:17:11.251157 systemd[1]: Started cri-containerd-0e83eb5a6f71ae9d84a96be92f0532db1bf37a57b75ce1cae80e054a0a349ac0.scope - libcontainer container 0e83eb5a6f71ae9d84a96be92f0532db1bf37a57b75ce1cae80e054a0a349ac0.
Jun 20 19:17:11.255756 containerd[1726]: time="2025-06-20T19:17:11.255717271Z" level=info msg="received exit event sandbox_id:\"3a0eeca5b0b826687e2f5e1847f9382c5f92ad03f7a1ac3acb5911f2013bf656\" exit_status:137 exited_at:{seconds:1750447031 nanos:105063830}"
Jun 20 19:17:11.337960 systemd-networkd[1354]: cali424a608b863: Link DOWN
Jun 20 19:17:11.339126 systemd-networkd[1354]: cali424a608b863: Lost carrier
Jun 20 19:17:11.418366 containerd[1726]: time="2025-06-20T19:17:11.418310769Z" level=info msg="StartContainer for \"0e83eb5a6f71ae9d84a96be92f0532db1bf37a57b75ce1cae80e054a0a349ac0\" returns successfully"
Jun 20 19:17:11.440101 containerd[1726]: 2025-06-20 19:17:11.333 [INFO][5996] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3a0eeca5b0b826687e2f5e1847f9382c5f92ad03f7a1ac3acb5911f2013bf656"
Jun 20 19:17:11.440101 containerd[1726]: 2025-06-20 19:17:11.336 [INFO][5996] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="3a0eeca5b0b826687e2f5e1847f9382c5f92ad03f7a1ac3acb5911f2013bf656" iface="eth0" netns="/var/run/netns/cni-ecb5ea7c-b342-0785-32f3-c62db93db10e"
Jun 20 19:17:11.440101 containerd[1726]: 2025-06-20 19:17:11.336 [INFO][5996] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3a0eeca5b0b826687e2f5e1847f9382c5f92ad03f7a1ac3acb5911f2013bf656" iface="eth0" netns="/var/run/netns/cni-ecb5ea7c-b342-0785-32f3-c62db93db10e"
Jun 20 19:17:11.440101 containerd[1726]: 2025-06-20 19:17:11.345 [INFO][5996] cni-plugin/dataplane_linux.go 604: Deleted device in netns.
ContainerID="3a0eeca5b0b826687e2f5e1847f9382c5f92ad03f7a1ac3acb5911f2013bf656" after=8.826895ms iface="eth0" netns="/var/run/netns/cni-ecb5ea7c-b342-0785-32f3-c62db93db10e"
Jun 20 19:17:11.440101 containerd[1726]: 2025-06-20 19:17:11.345 [INFO][5996] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3a0eeca5b0b826687e2f5e1847f9382c5f92ad03f7a1ac3acb5911f2013bf656"
Jun 20 19:17:11.440101 containerd[1726]: 2025-06-20 19:17:11.345 [INFO][5996] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3a0eeca5b0b826687e2f5e1847f9382c5f92ad03f7a1ac3acb5911f2013bf656"
Jun 20 19:17:11.440101 containerd[1726]: 2025-06-20 19:17:11.392 [INFO][6003] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3a0eeca5b0b826687e2f5e1847f9382c5f92ad03f7a1ac3acb5911f2013bf656" HandleID="k8s-pod-network.3a0eeca5b0b826687e2f5e1847f9382c5f92ad03f7a1ac3acb5911f2013bf656" Workload="ci--4344.1.0--a--657d644de8-k8s-calico--apiserver--7fc66dd6d7--4s89b-eth0"
Jun 20 19:17:11.440101 containerd[1726]: 2025-06-20 19:17:11.392 [INFO][6003] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jun 20 19:17:11.440101 containerd[1726]: 2025-06-20 19:17:11.392 [INFO][6003] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jun 20 19:17:11.440101 containerd[1726]: 2025-06-20 19:17:11.435 [INFO][6003] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="3a0eeca5b0b826687e2f5e1847f9382c5f92ad03f7a1ac3acb5911f2013bf656" HandleID="k8s-pod-network.3a0eeca5b0b826687e2f5e1847f9382c5f92ad03f7a1ac3acb5911f2013bf656" Workload="ci--4344.1.0--a--657d644de8-k8s-calico--apiserver--7fc66dd6d7--4s89b-eth0"
Jun 20 19:17:11.440101 containerd[1726]: 2025-06-20 19:17:11.435 [INFO][6003] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3a0eeca5b0b826687e2f5e1847f9382c5f92ad03f7a1ac3acb5911f2013bf656" HandleID="k8s-pod-network.3a0eeca5b0b826687e2f5e1847f9382c5f92ad03f7a1ac3acb5911f2013bf656" Workload="ci--4344.1.0--a--657d644de8-k8s-calico--apiserver--7fc66dd6d7--4s89b-eth0"
Jun 20 19:17:11.440101 containerd[1726]: 2025-06-20 19:17:11.436 [INFO][6003] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jun 20 19:17:11.440101 containerd[1726]: 2025-06-20 19:17:11.438 [INFO][5996] cni-plugin/k8s.go 653: Teardown processing complete.
ContainerID="3a0eeca5b0b826687e2f5e1847f9382c5f92ad03f7a1ac3acb5911f2013bf656"
Jun 20 19:17:11.441225 containerd[1726]: time="2025-06-20T19:17:11.440331948Z" level=info msg="TearDown network for sandbox \"3a0eeca5b0b826687e2f5e1847f9382c5f92ad03f7a1ac3acb5911f2013bf656\" successfully"
Jun 20 19:17:11.441225 containerd[1726]: time="2025-06-20T19:17:11.440362047Z" level=info msg="StopPodSandbox for \"3a0eeca5b0b826687e2f5e1847f9382c5f92ad03f7a1ac3acb5911f2013bf656\" returns successfully"
Jun 20 19:17:11.573043 kubelet[3151]: I0620 19:17:11.573001 3151 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/16f3dad5-c3e2-404c-8ef5-6e879cea8e8b-calico-apiserver-certs\") pod \"16f3dad5-c3e2-404c-8ef5-6e879cea8e8b\" (UID: \"16f3dad5-c3e2-404c-8ef5-6e879cea8e8b\") "
Jun 20 19:17:11.573689 kubelet[3151]: I0620 19:17:11.573058 3151 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hphx8\" (UniqueName: \"kubernetes.io/projected/16f3dad5-c3e2-404c-8ef5-6e879cea8e8b-kube-api-access-hphx8\") pod \"16f3dad5-c3e2-404c-8ef5-6e879cea8e8b\" (UID: \"16f3dad5-c3e2-404c-8ef5-6e879cea8e8b\") "
Jun 20 19:17:11.577043 kubelet[3151]: I0620 19:17:11.576974 3151 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16f3dad5-c3e2-404c-8ef5-6e879cea8e8b-kube-api-access-hphx8" (OuterVolumeSpecName: "kube-api-access-hphx8") pod "16f3dad5-c3e2-404c-8ef5-6e879cea8e8b" (UID: "16f3dad5-c3e2-404c-8ef5-6e879cea8e8b"). InnerVolumeSpecName "kube-api-access-hphx8".
PluginName "kubernetes.io/projected", VolumeGidValue "" Jun 20 19:17:11.578989 kubelet[3151]: I0620 19:17:11.578952 3151 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16f3dad5-c3e2-404c-8ef5-6e879cea8e8b-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "16f3dad5-c3e2-404c-8ef5-6e879cea8e8b" (UID: "16f3dad5-c3e2-404c-8ef5-6e879cea8e8b"). InnerVolumeSpecName "calico-apiserver-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jun 20 19:17:11.673894 kubelet[3151]: I0620 19:17:11.673656 3151 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hphx8\" (UniqueName: \"kubernetes.io/projected/16f3dad5-c3e2-404c-8ef5-6e879cea8e8b-kube-api-access-hphx8\") on node \"ci-4344.1.0-a-657d644de8\" DevicePath \"\"" Jun 20 19:17:11.673894 kubelet[3151]: I0620 19:17:11.673867 3151 reconciler_common.go:293] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/16f3dad5-c3e2-404c-8ef5-6e879cea8e8b-calico-apiserver-certs\") on node \"ci-4344.1.0-a-657d644de8\" DevicePath \"\"" Jun 20 19:17:12.057932 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3a0eeca5b0b826687e2f5e1847f9382c5f92ad03f7a1ac3acb5911f2013bf656-shm.mount: Deactivated successfully. Jun 20 19:17:12.058138 systemd[1]: run-netns-cni\x2decb5ea7c\x2db342\x2d0785\x2d32f3\x2dc62db93db10e.mount: Deactivated successfully. Jun 20 19:17:12.058212 systemd[1]: var-lib-kubelet-pods-16f3dad5\x2dc3e2\x2d404c\x2d8ef5\x2d6e879cea8e8b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dhphx8.mount: Deactivated successfully. Jun 20 19:17:12.058275 systemd[1]: var-lib-kubelet-pods-16f3dad5\x2dc3e2\x2d404c\x2d8ef5\x2d6e879cea8e8b-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully. 
Jun 20 19:17:12.086355 kubelet[3151]: I0620 19:17:12.085863 3151 scope.go:117] "RemoveContainer" containerID="58b8ec88757af4d8f39f43328452b07d2d251c3e2cd72160f7dce7db11e660a1" Jun 20 19:17:12.090379 containerd[1726]: time="2025-06-20T19:17:12.089883355Z" level=info msg="RemoveContainer for \"58b8ec88757af4d8f39f43328452b07d2d251c3e2cd72160f7dce7db11e660a1\"" Jun 20 19:17:12.094726 systemd[1]: Removed slice kubepods-besteffort-pod16f3dad5_c3e2_404c_8ef5_6e879cea8e8b.slice - libcontainer container kubepods-besteffort-pod16f3dad5_c3e2_404c_8ef5_6e879cea8e8b.slice. Jun 20 19:17:12.105673 containerd[1726]: time="2025-06-20T19:17:12.105601360Z" level=info msg="RemoveContainer for \"58b8ec88757af4d8f39f43328452b07d2d251c3e2cd72160f7dce7db11e660a1\" returns successfully" Jun 20 19:17:12.126914 kubelet[3151]: I0620 19:17:12.126861 3151 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6675744cfd-sp6q5" podStartSLOduration=5.126821297 podStartE2EDuration="5.126821297s" podCreationTimestamp="2025-06-20 19:17:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 19:17:12.109761222 +0000 UTC m=+99.464660166" watchObservedRunningTime="2025-06-20 19:17:12.126821297 +0000 UTC m=+99.481720243" Jun 20 19:17:12.739874 kubelet[3151]: I0620 19:17:12.738078 3151 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16f3dad5-c3e2-404c-8ef5-6e879cea8e8b" path="/var/lib/kubelet/pods/16f3dad5-c3e2-404c-8ef5-6e879cea8e8b/volumes" Jun 20 19:17:12.773410 containerd[1726]: time="2025-06-20T19:17:12.773109789Z" level=info msg="StopContainer for \"41b3cdaab2c23f62d8a90f51ec597d6b5527589dd3a5c7cc84796edd394bcd44\" with timeout 30 (s)" Jun 20 19:17:12.774354 containerd[1726]: time="2025-06-20T19:17:12.774276216Z" level=info msg="Stop container \"41b3cdaab2c23f62d8a90f51ec597d6b5527589dd3a5c7cc84796edd394bcd44\" with signal 
terminated" Jun 20 19:17:12.821121 systemd[1]: cri-containerd-41b3cdaab2c23f62d8a90f51ec597d6b5527589dd3a5c7cc84796edd394bcd44.scope: Deactivated successfully. Jun 20 19:17:12.827537 containerd[1726]: time="2025-06-20T19:17:12.827417781Z" level=info msg="received exit event container_id:\"41b3cdaab2c23f62d8a90f51ec597d6b5527589dd3a5c7cc84796edd394bcd44\" id:\"41b3cdaab2c23f62d8a90f51ec597d6b5527589dd3a5c7cc84796edd394bcd44\" pid:5434 exit_status:1 exited_at:{seconds:1750447032 nanos:827079992}" Jun 20 19:17:12.828772 containerd[1726]: time="2025-06-20T19:17:12.828662881Z" level=info msg="TaskExit event in podsandbox handler container_id:\"41b3cdaab2c23f62d8a90f51ec597d6b5527589dd3a5c7cc84796edd394bcd44\" id:\"41b3cdaab2c23f62d8a90f51ec597d6b5527589dd3a5c7cc84796edd394bcd44\" pid:5434 exit_status:1 exited_at:{seconds:1750447032 nanos:827079992}" Jun 20 19:17:12.866500 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-41b3cdaab2c23f62d8a90f51ec597d6b5527589dd3a5c7cc84796edd394bcd44-rootfs.mount: Deactivated successfully. 
Jun 20 19:17:12.913920 containerd[1726]: time="2025-06-20T19:17:12.913823136Z" level=info msg="StopContainer for \"41b3cdaab2c23f62d8a90f51ec597d6b5527589dd3a5c7cc84796edd394bcd44\" returns successfully" Jun 20 19:17:12.914687 containerd[1726]: time="2025-06-20T19:17:12.914574070Z" level=info msg="StopPodSandbox for \"9833bfa4eefd36213babf093ba48d9c1e1c3aff2c08ded6d6e6a9efd8909e1fb\"" Jun 20 19:17:12.914837 containerd[1726]: time="2025-06-20T19:17:12.914765723Z" level=info msg="Container to stop \"41b3cdaab2c23f62d8a90f51ec597d6b5527589dd3a5c7cc84796edd394bcd44\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 20 19:17:12.926469 containerd[1726]: time="2025-06-20T19:17:12.926435125Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9833bfa4eefd36213babf093ba48d9c1e1c3aff2c08ded6d6e6a9efd8909e1fb\" id:\"9833bfa4eefd36213babf093ba48d9c1e1c3aff2c08ded6d6e6a9efd8909e1fb\" pid:5376 exit_status:137 exited_at:{seconds:1750447032 nanos:926034786}" Jun 20 19:17:12.927018 systemd[1]: cri-containerd-9833bfa4eefd36213babf093ba48d9c1e1c3aff2c08ded6d6e6a9efd8909e1fb.scope: Deactivated successfully. Jun 20 19:17:12.962962 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9833bfa4eefd36213babf093ba48d9c1e1c3aff2c08ded6d6e6a9efd8909e1fb-rootfs.mount: Deactivated successfully. 
Jun 20 19:17:12.963373 containerd[1726]: time="2025-06-20T19:17:12.963182583Z" level=info msg="shim disconnected" id=9833bfa4eefd36213babf093ba48d9c1e1c3aff2c08ded6d6e6a9efd8909e1fb namespace=k8s.io Jun 20 19:17:12.963373 containerd[1726]: time="2025-06-20T19:17:12.963218532Z" level=warning msg="cleaning up after shim disconnected" id=9833bfa4eefd36213babf093ba48d9c1e1c3aff2c08ded6d6e6a9efd8909e1fb namespace=k8s.io Jun 20 19:17:12.963373 containerd[1726]: time="2025-06-20T19:17:12.963227182Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 20 19:17:12.964364 containerd[1726]: time="2025-06-20T19:17:12.963944545Z" level=info msg="received exit event sandbox_id:\"9833bfa4eefd36213babf093ba48d9c1e1c3aff2c08ded6d6e6a9efd8909e1fb\" exit_status:137 exited_at:{seconds:1750447032 nanos:926034786}" Jun 20 19:17:12.971875 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9833bfa4eefd36213babf093ba48d9c1e1c3aff2c08ded6d6e6a9efd8909e1fb-shm.mount: Deactivated successfully. Jun 20 19:17:13.038791 systemd-networkd[1354]: cali4d3b5e7df91: Link DOWN Jun 20 19:17:13.038799 systemd-networkd[1354]: cali4d3b5e7df91: Lost carrier Jun 20 19:17:13.104592 kubelet[3151]: I0620 19:17:13.104038 3151 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9833bfa4eefd36213babf093ba48d9c1e1c3aff2c08ded6d6e6a9efd8909e1fb" Jun 20 19:17:13.165081 containerd[1726]: 2025-06-20 19:17:13.036 [INFO][6102] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9833bfa4eefd36213babf093ba48d9c1e1c3aff2c08ded6d6e6a9efd8909e1fb" Jun 20 19:17:13.165081 containerd[1726]: 2025-06-20 19:17:13.036 [INFO][6102] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="9833bfa4eefd36213babf093ba48d9c1e1c3aff2c08ded6d6e6a9efd8909e1fb" iface="eth0" netns="/var/run/netns/cni-d86f5481-aa17-ee3c-a766-41e1fb262780" Jun 20 19:17:13.165081 containerd[1726]: 2025-06-20 19:17:13.036 [INFO][6102] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9833bfa4eefd36213babf093ba48d9c1e1c3aff2c08ded6d6e6a9efd8909e1fb" iface="eth0" netns="/var/run/netns/cni-d86f5481-aa17-ee3c-a766-41e1fb262780" Jun 20 19:17:13.165081 containerd[1726]: 2025-06-20 19:17:13.049 [INFO][6102] cni-plugin/dataplane_linux.go 604: Deleted device in netns. ContainerID="9833bfa4eefd36213babf093ba48d9c1e1c3aff2c08ded6d6e6a9efd8909e1fb" after=13.320521ms iface="eth0" netns="/var/run/netns/cni-d86f5481-aa17-ee3c-a766-41e1fb262780" Jun 20 19:17:13.165081 containerd[1726]: 2025-06-20 19:17:13.050 [INFO][6102] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9833bfa4eefd36213babf093ba48d9c1e1c3aff2c08ded6d6e6a9efd8909e1fb" Jun 20 19:17:13.165081 containerd[1726]: 2025-06-20 19:17:13.050 [INFO][6102] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9833bfa4eefd36213babf093ba48d9c1e1c3aff2c08ded6d6e6a9efd8909e1fb" Jun 20 19:17:13.165081 containerd[1726]: 2025-06-20 19:17:13.123 [INFO][6116] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9833bfa4eefd36213babf093ba48d9c1e1c3aff2c08ded6d6e6a9efd8909e1fb" HandleID="k8s-pod-network.9833bfa4eefd36213babf093ba48d9c1e1c3aff2c08ded6d6e6a9efd8909e1fb" Workload="ci--4344.1.0--a--657d644de8-k8s-calico--apiserver--7fc66dd6d7--qdqzw-eth0" Jun 20 19:17:13.165081 containerd[1726]: 2025-06-20 19:17:13.123 [INFO][6116] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 20 19:17:13.165081 containerd[1726]: 2025-06-20 19:17:13.124 [INFO][6116] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jun 20 19:17:13.165081 containerd[1726]: 2025-06-20 19:17:13.161 [INFO][6116] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="9833bfa4eefd36213babf093ba48d9c1e1c3aff2c08ded6d6e6a9efd8909e1fb" HandleID="k8s-pod-network.9833bfa4eefd36213babf093ba48d9c1e1c3aff2c08ded6d6e6a9efd8909e1fb" Workload="ci--4344.1.0--a--657d644de8-k8s-calico--apiserver--7fc66dd6d7--qdqzw-eth0" Jun 20 19:17:13.165081 containerd[1726]: 2025-06-20 19:17:13.161 [INFO][6116] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9833bfa4eefd36213babf093ba48d9c1e1c3aff2c08ded6d6e6a9efd8909e1fb" HandleID="k8s-pod-network.9833bfa4eefd36213babf093ba48d9c1e1c3aff2c08ded6d6e6a9efd8909e1fb" Workload="ci--4344.1.0--a--657d644de8-k8s-calico--apiserver--7fc66dd6d7--qdqzw-eth0" Jun 20 19:17:13.165081 containerd[1726]: 2025-06-20 19:17:13.162 [INFO][6116] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jun 20 19:17:13.165081 containerd[1726]: 2025-06-20 19:17:13.163 [INFO][6102] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9833bfa4eefd36213babf093ba48d9c1e1c3aff2c08ded6d6e6a9efd8909e1fb" Jun 20 19:17:13.168732 systemd[1]: run-netns-cni\x2dd86f5481\x2daa17\x2dee3c\x2da766\x2d41e1fb262780.mount: Deactivated successfully. 
Jun 20 19:17:13.170759 containerd[1726]: time="2025-06-20T19:17:13.169936500Z" level=info msg="TearDown network for sandbox \"9833bfa4eefd36213babf093ba48d9c1e1c3aff2c08ded6d6e6a9efd8909e1fb\" successfully" Jun 20 19:17:13.170759 containerd[1726]: time="2025-06-20T19:17:13.169971706Z" level=info msg="StopPodSandbox for \"9833bfa4eefd36213babf093ba48d9c1e1c3aff2c08ded6d6e6a9efd8909e1fb\" returns successfully" Jun 20 19:17:13.283844 kubelet[3151]: I0620 19:17:13.283789 3151 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tzxb\" (UniqueName: \"kubernetes.io/projected/51986139-459e-4222-b31e-78a26677e7c0-kube-api-access-8tzxb\") pod \"51986139-459e-4222-b31e-78a26677e7c0\" (UID: \"51986139-459e-4222-b31e-78a26677e7c0\") " Jun 20 19:17:13.284849 kubelet[3151]: I0620 19:17:13.284045 3151 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/51986139-459e-4222-b31e-78a26677e7c0-calico-apiserver-certs\") pod \"51986139-459e-4222-b31e-78a26677e7c0\" (UID: \"51986139-459e-4222-b31e-78a26677e7c0\") " Jun 20 19:17:13.291916 kubelet[3151]: I0620 19:17:13.289966 3151 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51986139-459e-4222-b31e-78a26677e7c0-kube-api-access-8tzxb" (OuterVolumeSpecName: "kube-api-access-8tzxb") pod "51986139-459e-4222-b31e-78a26677e7c0" (UID: "51986139-459e-4222-b31e-78a26677e7c0"). InnerVolumeSpecName "kube-api-access-8tzxb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jun 20 19:17:13.295488 systemd[1]: var-lib-kubelet-pods-51986139\x2d459e\x2d4222\x2db31e\x2d78a26677e7c0-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d8tzxb.mount: Deactivated successfully. 
Jun 20 19:17:13.296865 kubelet[3151]: I0620 19:17:13.296107 3151 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51986139-459e-4222-b31e-78a26677e7c0-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "51986139-459e-4222-b31e-78a26677e7c0" (UID: "51986139-459e-4222-b31e-78a26677e7c0"). InnerVolumeSpecName "calico-apiserver-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jun 20 19:17:13.304697 systemd[1]: var-lib-kubelet-pods-51986139\x2d459e\x2d4222\x2db31e\x2d78a26677e7c0-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully. Jun 20 19:17:13.385745 kubelet[3151]: I0620 19:17:13.385707 3151 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tzxb\" (UniqueName: \"kubernetes.io/projected/51986139-459e-4222-b31e-78a26677e7c0-kube-api-access-8tzxb\") on node \"ci-4344.1.0-a-657d644de8\" DevicePath \"\"" Jun 20 19:17:13.385745 kubelet[3151]: I0620 19:17:13.385741 3151 reconciler_common.go:293] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/51986139-459e-4222-b31e-78a26677e7c0-calico-apiserver-certs\") on node \"ci-4344.1.0-a-657d644de8\" DevicePath \"\"" Jun 20 19:17:14.114234 systemd[1]: Removed slice kubepods-besteffort-pod51986139_459e_4222_b31e_78a26677e7c0.slice - libcontainer container kubepods-besteffort-pod51986139_459e_4222_b31e_78a26677e7c0.slice. 
Jun 20 19:17:14.737089 kubelet[3151]: I0620 19:17:14.736918 3151 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="51986139-459e-4222-b31e-78a26677e7c0" path="/var/lib/kubelet/pods/51986139-459e-4222-b31e-78a26677e7c0/volumes" Jun 20 19:17:19.335783 containerd[1726]: time="2025-06-20T19:17:19.335730339Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4c2da2dd11d7280278f2e7dcf56ecead1249c212ec2012e3c2c234b3e8264a8a\" id:\"91836d992a8b36fb0d42662c1b8a1e11beec39d13c76504f08c1aa1633907873\" pid:6145 exited_at:{seconds:1750447039 nanos:335252928}" Jun 20 19:17:19.470476 containerd[1726]: time="2025-06-20T19:17:19.470427697Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c8b62103f1f1b81c90ddbbcd1e4087b00341699251a8d2d37e0e8c66e97538af\" id:\"c3699efc6b249162eb4f7166bd7a15e62904fe3f4ded08217b6ca84408c15175\" pid:6167 exited_at:{seconds:1750447039 nanos:470023493}" Jun 20 19:17:22.161101 containerd[1726]: time="2025-06-20T19:17:22.161051307Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c8b62103f1f1b81c90ddbbcd1e4087b00341699251a8d2d37e0e8c66e97538af\" id:\"81a462059fe269dbd5226c0fa777c23b03babcc4fb210dd1261608b6e8576921\" pid:6190 exited_at:{seconds:1750447042 nanos:158971033}" Jun 20 19:17:22.832980 systemd[1]: Started sshd@7-10.200.4.5:22-10.200.16.10:51432.service - OpenSSH per-connection server daemon (10.200.16.10:51432). Jun 20 19:17:23.429199 sshd[6206]: Accepted publickey for core from 10.200.16.10 port 51432 ssh2: RSA SHA256:xD0kfKmJ7EC4AAoCWFs/jHoVnPZ/qqmZ1Ve/vcfGzM8 Jun 20 19:17:23.431053 sshd-session[6206]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:17:23.437198 systemd-logind[1695]: New session 10 of user core. Jun 20 19:17:23.444028 systemd[1]: Started session-10.scope - Session 10 of User core. 
Jun 20 19:17:23.960206 sshd[6210]: Connection closed by 10.200.16.10 port 51432 Jun 20 19:17:23.961027 sshd-session[6206]: pam_unix(sshd:session): session closed for user core Jun 20 19:17:23.969318 systemd[1]: sshd@7-10.200.4.5:22-10.200.16.10:51432.service: Deactivated successfully. Jun 20 19:17:23.970481 systemd-logind[1695]: Session 10 logged out. Waiting for processes to exit. Jun 20 19:17:23.974161 systemd[1]: session-10.scope: Deactivated successfully. Jun 20 19:17:23.977591 systemd-logind[1695]: Removed session 10. Jun 20 19:17:24.933776 containerd[1726]: time="2025-06-20T19:17:24.933716530Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4c2da2dd11d7280278f2e7dcf56ecead1249c212ec2012e3c2c234b3e8264a8a\" id:\"f5fb164b84da95d4eace7ee2bb0b06cbe4a34d0fb79986eead88db74e8b7e1f9\" pid:6236 exited_at:{seconds:1750447044 nanos:933443220}" Jun 20 19:17:26.125474 containerd[1726]: time="2025-06-20T19:17:26.125418932Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1d933b96dabc45903f8c9443835ccebc96daeec1e6159b99df5e62de2b9e981d\" id:\"fc12b62b2011a40c2c617e8a545a1acf47303b3e5b4afae1e1f78fb9ede4192b\" pid:6258 exited_at:{seconds:1750447046 nanos:125075281}" Jun 20 19:17:29.070123 systemd[1]: Started sshd@8-10.200.4.5:22-10.200.16.10:41300.service - OpenSSH per-connection server daemon (10.200.16.10:41300). Jun 20 19:17:29.673577 sshd[6270]: Accepted publickey for core from 10.200.16.10 port 41300 ssh2: RSA SHA256:xD0kfKmJ7EC4AAoCWFs/jHoVnPZ/qqmZ1Ve/vcfGzM8 Jun 20 19:17:29.674775 sshd-session[6270]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:17:29.678886 systemd-logind[1695]: New session 11 of user core. Jun 20 19:17:29.685981 systemd[1]: Started session-11.scope - Session 11 of User core. 
Jun 20 19:17:30.206257 sshd[6272]: Connection closed by 10.200.16.10 port 41300 Jun 20 19:17:30.206954 sshd-session[6270]: pam_unix(sshd:session): session closed for user core Jun 20 19:17:30.211883 systemd-logind[1695]: Session 11 logged out. Waiting for processes to exit. Jun 20 19:17:30.212144 systemd[1]: sshd@8-10.200.4.5:22-10.200.16.10:41300.service: Deactivated successfully. Jun 20 19:17:30.214769 systemd[1]: session-11.scope: Deactivated successfully. Jun 20 19:17:30.217494 systemd-logind[1695]: Removed session 11. Jun 20 19:17:32.730482 kubelet[3151]: I0620 19:17:32.730445 3151 scope.go:117] "RemoveContainer" containerID="41b3cdaab2c23f62d8a90f51ec597d6b5527589dd3a5c7cc84796edd394bcd44" Jun 20 19:17:32.732441 containerd[1726]: time="2025-06-20T19:17:32.732404145Z" level=info msg="RemoveContainer for \"41b3cdaab2c23f62d8a90f51ec597d6b5527589dd3a5c7cc84796edd394bcd44\"" Jun 20 19:17:32.740941 containerd[1726]: time="2025-06-20T19:17:32.740880998Z" level=info msg="RemoveContainer for \"41b3cdaab2c23f62d8a90f51ec597d6b5527589dd3a5c7cc84796edd394bcd44\" returns successfully" Jun 20 19:17:32.742277 containerd[1726]: time="2025-06-20T19:17:32.742228145Z" level=info msg="StopPodSandbox for \"3a0eeca5b0b826687e2f5e1847f9382c5f92ad03f7a1ac3acb5911f2013bf656\"" Jun 20 19:17:32.802760 containerd[1726]: 2025-06-20 19:17:32.775 [WARNING][6293] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="3a0eeca5b0b826687e2f5e1847f9382c5f92ad03f7a1ac3acb5911f2013bf656" WorkloadEndpoint="ci--4344.1.0--a--657d644de8-k8s-calico--apiserver--7fc66dd6d7--4s89b-eth0" Jun 20 19:17:32.802760 containerd[1726]: 2025-06-20 19:17:32.775 [INFO][6293] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3a0eeca5b0b826687e2f5e1847f9382c5f92ad03f7a1ac3acb5911f2013bf656" Jun 20 19:17:32.802760 containerd[1726]: 2025-06-20 19:17:32.775 [INFO][6293] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, 
ignoring. ContainerID="3a0eeca5b0b826687e2f5e1847f9382c5f92ad03f7a1ac3acb5911f2013bf656" iface="eth0" netns="" Jun 20 19:17:32.802760 containerd[1726]: 2025-06-20 19:17:32.775 [INFO][6293] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3a0eeca5b0b826687e2f5e1847f9382c5f92ad03f7a1ac3acb5911f2013bf656" Jun 20 19:17:32.802760 containerd[1726]: 2025-06-20 19:17:32.775 [INFO][6293] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3a0eeca5b0b826687e2f5e1847f9382c5f92ad03f7a1ac3acb5911f2013bf656" Jun 20 19:17:32.802760 containerd[1726]: 2025-06-20 19:17:32.795 [INFO][6301] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3a0eeca5b0b826687e2f5e1847f9382c5f92ad03f7a1ac3acb5911f2013bf656" HandleID="k8s-pod-network.3a0eeca5b0b826687e2f5e1847f9382c5f92ad03f7a1ac3acb5911f2013bf656" Workload="ci--4344.1.0--a--657d644de8-k8s-calico--apiserver--7fc66dd6d7--4s89b-eth0" Jun 20 19:17:32.802760 containerd[1726]: 2025-06-20 19:17:32.795 [INFO][6301] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 20 19:17:32.802760 containerd[1726]: 2025-06-20 19:17:32.795 [INFO][6301] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jun 20 19:17:32.802760 containerd[1726]: 2025-06-20 19:17:32.799 [WARNING][6301] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3a0eeca5b0b826687e2f5e1847f9382c5f92ad03f7a1ac3acb5911f2013bf656" HandleID="k8s-pod-network.3a0eeca5b0b826687e2f5e1847f9382c5f92ad03f7a1ac3acb5911f2013bf656" Workload="ci--4344.1.0--a--657d644de8-k8s-calico--apiserver--7fc66dd6d7--4s89b-eth0" Jun 20 19:17:32.802760 containerd[1726]: 2025-06-20 19:17:32.799 [INFO][6301] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3a0eeca5b0b826687e2f5e1847f9382c5f92ad03f7a1ac3acb5911f2013bf656" HandleID="k8s-pod-network.3a0eeca5b0b826687e2f5e1847f9382c5f92ad03f7a1ac3acb5911f2013bf656" Workload="ci--4344.1.0--a--657d644de8-k8s-calico--apiserver--7fc66dd6d7--4s89b-eth0" Jun 20 19:17:32.802760 containerd[1726]: 2025-06-20 19:17:32.800 [INFO][6301] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jun 20 19:17:32.802760 containerd[1726]: 2025-06-20 19:17:32.801 [INFO][6293] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3a0eeca5b0b826687e2f5e1847f9382c5f92ad03f7a1ac3acb5911f2013bf656" Jun 20 19:17:32.803389 containerd[1726]: time="2025-06-20T19:17:32.802814605Z" level=info msg="TearDown network for sandbox \"3a0eeca5b0b826687e2f5e1847f9382c5f92ad03f7a1ac3acb5911f2013bf656\" successfully" Jun 20 19:17:32.803389 containerd[1726]: time="2025-06-20T19:17:32.802860984Z" level=info msg="StopPodSandbox for \"3a0eeca5b0b826687e2f5e1847f9382c5f92ad03f7a1ac3acb5911f2013bf656\" returns successfully" Jun 20 19:17:32.803564 containerd[1726]: time="2025-06-20T19:17:32.803539916Z" level=info msg="RemovePodSandbox for \"3a0eeca5b0b826687e2f5e1847f9382c5f92ad03f7a1ac3acb5911f2013bf656\"" Jun 20 19:17:32.803597 containerd[1726]: time="2025-06-20T19:17:32.803586800Z" level=info msg="Forcibly stopping sandbox \"3a0eeca5b0b826687e2f5e1847f9382c5f92ad03f7a1ac3acb5911f2013bf656\"" Jun 20 19:17:32.872508 containerd[1726]: 2025-06-20 19:17:32.842 [WARNING][6316] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up 
ContainerID="3a0eeca5b0b826687e2f5e1847f9382c5f92ad03f7a1ac3acb5911f2013bf656" WorkloadEndpoint="ci--4344.1.0--a--657d644de8-k8s-calico--apiserver--7fc66dd6d7--4s89b-eth0" Jun 20 19:17:32.872508 containerd[1726]: 2025-06-20 19:17:32.842 [INFO][6316] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3a0eeca5b0b826687e2f5e1847f9382c5f92ad03f7a1ac3acb5911f2013bf656" Jun 20 19:17:32.872508 containerd[1726]: 2025-06-20 19:17:32.842 [INFO][6316] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3a0eeca5b0b826687e2f5e1847f9382c5f92ad03f7a1ac3acb5911f2013bf656" iface="eth0" netns="" Jun 20 19:17:32.872508 containerd[1726]: 2025-06-20 19:17:32.842 [INFO][6316] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3a0eeca5b0b826687e2f5e1847f9382c5f92ad03f7a1ac3acb5911f2013bf656" Jun 20 19:17:32.872508 containerd[1726]: 2025-06-20 19:17:32.842 [INFO][6316] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3a0eeca5b0b826687e2f5e1847f9382c5f92ad03f7a1ac3acb5911f2013bf656" Jun 20 19:17:32.872508 containerd[1726]: 2025-06-20 19:17:32.862 [INFO][6323] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3a0eeca5b0b826687e2f5e1847f9382c5f92ad03f7a1ac3acb5911f2013bf656" HandleID="k8s-pod-network.3a0eeca5b0b826687e2f5e1847f9382c5f92ad03f7a1ac3acb5911f2013bf656" Workload="ci--4344.1.0--a--657d644de8-k8s-calico--apiserver--7fc66dd6d7--4s89b-eth0" Jun 20 19:17:32.872508 containerd[1726]: 2025-06-20 19:17:32.862 [INFO][6323] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 20 19:17:32.872508 containerd[1726]: 2025-06-20 19:17:32.863 [INFO][6323] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jun 20 19:17:32.872508 containerd[1726]: 2025-06-20 19:17:32.868 [WARNING][6323] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3a0eeca5b0b826687e2f5e1847f9382c5f92ad03f7a1ac3acb5911f2013bf656" HandleID="k8s-pod-network.3a0eeca5b0b826687e2f5e1847f9382c5f92ad03f7a1ac3acb5911f2013bf656" Workload="ci--4344.1.0--a--657d644de8-k8s-calico--apiserver--7fc66dd6d7--4s89b-eth0" Jun 20 19:17:32.872508 containerd[1726]: 2025-06-20 19:17:32.868 [INFO][6323] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3a0eeca5b0b826687e2f5e1847f9382c5f92ad03f7a1ac3acb5911f2013bf656" HandleID="k8s-pod-network.3a0eeca5b0b826687e2f5e1847f9382c5f92ad03f7a1ac3acb5911f2013bf656" Workload="ci--4344.1.0--a--657d644de8-k8s-calico--apiserver--7fc66dd6d7--4s89b-eth0" Jun 20 19:17:32.872508 containerd[1726]: 2025-06-20 19:17:32.869 [INFO][6323] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jun 20 19:17:32.872508 containerd[1726]: 2025-06-20 19:17:32.871 [INFO][6316] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3a0eeca5b0b826687e2f5e1847f9382c5f92ad03f7a1ac3acb5911f2013bf656" Jun 20 19:17:32.872508 containerd[1726]: time="2025-06-20T19:17:32.872415142Z" level=info msg="TearDown network for sandbox \"3a0eeca5b0b826687e2f5e1847f9382c5f92ad03f7a1ac3acb5911f2013bf656\" successfully" Jun 20 19:17:32.875415 containerd[1726]: time="2025-06-20T19:17:32.875144300Z" level=info msg="Ensure that sandbox 3a0eeca5b0b826687e2f5e1847f9382c5f92ad03f7a1ac3acb5911f2013bf656 in task-service has been cleanup successfully" Jun 20 19:17:32.884245 containerd[1726]: time="2025-06-20T19:17:32.884211964Z" level=info msg="RemovePodSandbox \"3a0eeca5b0b826687e2f5e1847f9382c5f92ad03f7a1ac3acb5911f2013bf656\" returns successfully" Jun 20 19:17:32.884766 containerd[1726]: time="2025-06-20T19:17:32.884745809Z" level=info msg="StopPodSandbox for \"9833bfa4eefd36213babf093ba48d9c1e1c3aff2c08ded6d6e6a9efd8909e1fb\"" Jun 20 19:17:32.946507 containerd[1726]: 2025-06-20 19:17:32.921 [WARNING][6338] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with 
the clean up ContainerID="9833bfa4eefd36213babf093ba48d9c1e1c3aff2c08ded6d6e6a9efd8909e1fb" WorkloadEndpoint="ci--4344.1.0--a--657d644de8-k8s-calico--apiserver--7fc66dd6d7--qdqzw-eth0" Jun 20 19:17:32.946507 containerd[1726]: 2025-06-20 19:17:32.921 [INFO][6338] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9833bfa4eefd36213babf093ba48d9c1e1c3aff2c08ded6d6e6a9efd8909e1fb" Jun 20 19:17:32.946507 containerd[1726]: 2025-06-20 19:17:32.921 [INFO][6338] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9833bfa4eefd36213babf093ba48d9c1e1c3aff2c08ded6d6e6a9efd8909e1fb" iface="eth0" netns="" Jun 20 19:17:32.946507 containerd[1726]: 2025-06-20 19:17:32.921 [INFO][6338] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9833bfa4eefd36213babf093ba48d9c1e1c3aff2c08ded6d6e6a9efd8909e1fb" Jun 20 19:17:32.946507 containerd[1726]: 2025-06-20 19:17:32.921 [INFO][6338] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9833bfa4eefd36213babf093ba48d9c1e1c3aff2c08ded6d6e6a9efd8909e1fb" Jun 20 19:17:32.946507 containerd[1726]: 2025-06-20 19:17:32.938 [INFO][6345] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9833bfa4eefd36213babf093ba48d9c1e1c3aff2c08ded6d6e6a9efd8909e1fb" HandleID="k8s-pod-network.9833bfa4eefd36213babf093ba48d9c1e1c3aff2c08ded6d6e6a9efd8909e1fb" Workload="ci--4344.1.0--a--657d644de8-k8s-calico--apiserver--7fc66dd6d7--qdqzw-eth0" Jun 20 19:17:32.946507 containerd[1726]: 2025-06-20 19:17:32.938 [INFO][6345] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 20 19:17:32.946507 containerd[1726]: 2025-06-20 19:17:32.938 [INFO][6345] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jun 20 19:17:32.946507 containerd[1726]: 2025-06-20 19:17:32.943 [WARNING][6345] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9833bfa4eefd36213babf093ba48d9c1e1c3aff2c08ded6d6e6a9efd8909e1fb" HandleID="k8s-pod-network.9833bfa4eefd36213babf093ba48d9c1e1c3aff2c08ded6d6e6a9efd8909e1fb" Workload="ci--4344.1.0--a--657d644de8-k8s-calico--apiserver--7fc66dd6d7--qdqzw-eth0"
Jun 20 19:17:32.946507 containerd[1726]: 2025-06-20 19:17:32.943 [INFO][6345] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9833bfa4eefd36213babf093ba48d9c1e1c3aff2c08ded6d6e6a9efd8909e1fb" HandleID="k8s-pod-network.9833bfa4eefd36213babf093ba48d9c1e1c3aff2c08ded6d6e6a9efd8909e1fb" Workload="ci--4344.1.0--a--657d644de8-k8s-calico--apiserver--7fc66dd6d7--qdqzw-eth0"
Jun 20 19:17:32.946507 containerd[1726]: 2025-06-20 19:17:32.944 [INFO][6345] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jun 20 19:17:32.946507 containerd[1726]: 2025-06-20 19:17:32.945 [INFO][6338] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9833bfa4eefd36213babf093ba48d9c1e1c3aff2c08ded6d6e6a9efd8909e1fb"
Jun 20 19:17:32.946969 containerd[1726]: time="2025-06-20T19:17:32.946571847Z" level=info msg="TearDown network for sandbox \"9833bfa4eefd36213babf093ba48d9c1e1c3aff2c08ded6d6e6a9efd8909e1fb\" successfully"
Jun 20 19:17:32.946969 containerd[1726]: time="2025-06-20T19:17:32.946605106Z" level=info msg="StopPodSandbox for \"9833bfa4eefd36213babf093ba48d9c1e1c3aff2c08ded6d6e6a9efd8909e1fb\" returns successfully"
Jun 20 19:17:32.947206 containerd[1726]: time="2025-06-20T19:17:32.947182358Z" level=info msg="RemovePodSandbox for \"9833bfa4eefd36213babf093ba48d9c1e1c3aff2c08ded6d6e6a9efd8909e1fb\""
Jun 20 19:17:32.947266 containerd[1726]: time="2025-06-20T19:17:32.947214705Z" level=info msg="Forcibly stopping sandbox \"9833bfa4eefd36213babf093ba48d9c1e1c3aff2c08ded6d6e6a9efd8909e1fb\""
Jun 20 19:17:33.001947 containerd[1726]: 2025-06-20 19:17:32.977 [WARNING][6360] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="9833bfa4eefd36213babf093ba48d9c1e1c3aff2c08ded6d6e6a9efd8909e1fb" WorkloadEndpoint="ci--4344.1.0--a--657d644de8-k8s-calico--apiserver--7fc66dd6d7--qdqzw-eth0"
Jun 20 19:17:33.001947 containerd[1726]: 2025-06-20 19:17:32.977 [INFO][6360] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9833bfa4eefd36213babf093ba48d9c1e1c3aff2c08ded6d6e6a9efd8909e1fb"
Jun 20 19:17:33.001947 containerd[1726]: 2025-06-20 19:17:32.977 [INFO][6360] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9833bfa4eefd36213babf093ba48d9c1e1c3aff2c08ded6d6e6a9efd8909e1fb" iface="eth0" netns=""
Jun 20 19:17:33.001947 containerd[1726]: 2025-06-20 19:17:32.977 [INFO][6360] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9833bfa4eefd36213babf093ba48d9c1e1c3aff2c08ded6d6e6a9efd8909e1fb"
Jun 20 19:17:33.001947 containerd[1726]: 2025-06-20 19:17:32.977 [INFO][6360] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9833bfa4eefd36213babf093ba48d9c1e1c3aff2c08ded6d6e6a9efd8909e1fb"
Jun 20 19:17:33.001947 containerd[1726]: 2025-06-20 19:17:32.993 [INFO][6367] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9833bfa4eefd36213babf093ba48d9c1e1c3aff2c08ded6d6e6a9efd8909e1fb" HandleID="k8s-pod-network.9833bfa4eefd36213babf093ba48d9c1e1c3aff2c08ded6d6e6a9efd8909e1fb" Workload="ci--4344.1.0--a--657d644de8-k8s-calico--apiserver--7fc66dd6d7--qdqzw-eth0"
Jun 20 19:17:33.001947 containerd[1726]: 2025-06-20 19:17:32.993 [INFO][6367] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jun 20 19:17:33.001947 containerd[1726]: 2025-06-20 19:17:32.993 [INFO][6367] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jun 20 19:17:33.001947 containerd[1726]: 2025-06-20 19:17:32.998 [WARNING][6367] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9833bfa4eefd36213babf093ba48d9c1e1c3aff2c08ded6d6e6a9efd8909e1fb" HandleID="k8s-pod-network.9833bfa4eefd36213babf093ba48d9c1e1c3aff2c08ded6d6e6a9efd8909e1fb" Workload="ci--4344.1.0--a--657d644de8-k8s-calico--apiserver--7fc66dd6d7--qdqzw-eth0"
Jun 20 19:17:33.001947 containerd[1726]: 2025-06-20 19:17:32.998 [INFO][6367] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9833bfa4eefd36213babf093ba48d9c1e1c3aff2c08ded6d6e6a9efd8909e1fb" HandleID="k8s-pod-network.9833bfa4eefd36213babf093ba48d9c1e1c3aff2c08ded6d6e6a9efd8909e1fb" Workload="ci--4344.1.0--a--657d644de8-k8s-calico--apiserver--7fc66dd6d7--qdqzw-eth0"
Jun 20 19:17:33.001947 containerd[1726]: 2025-06-20 19:17:32.999 [INFO][6367] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jun 20 19:17:33.001947 containerd[1726]: 2025-06-20 19:17:33.000 [INFO][6360] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9833bfa4eefd36213babf093ba48d9c1e1c3aff2c08ded6d6e6a9efd8909e1fb"
Jun 20 19:17:33.001947 containerd[1726]: time="2025-06-20T19:17:33.001112671Z" level=info msg="TearDown network for sandbox \"9833bfa4eefd36213babf093ba48d9c1e1c3aff2c08ded6d6e6a9efd8909e1fb\" successfully"
Jun 20 19:17:33.003070 containerd[1726]: time="2025-06-20T19:17:33.003040517Z" level=info msg="Ensure that sandbox 9833bfa4eefd36213babf093ba48d9c1e1c3aff2c08ded6d6e6a9efd8909e1fb in task-service has been cleanup successfully"
Jun 20 19:17:33.020902 containerd[1726]: time="2025-06-20T19:17:33.020858018Z" level=info msg="RemovePodSandbox \"9833bfa4eefd36213babf093ba48d9c1e1c3aff2c08ded6d6e6a9efd8909e1fb\" returns successfully"
Jun 20 19:17:35.319122 systemd[1]: Started sshd@9-10.200.4.5:22-10.200.16.10:41302.service - OpenSSH per-connection server daemon (10.200.16.10:41302).
Jun 20 19:17:35.915318 sshd[6377]: Accepted publickey for core from 10.200.16.10 port 41302 ssh2: RSA SHA256:xD0kfKmJ7EC4AAoCWFs/jHoVnPZ/qqmZ1Ve/vcfGzM8
Jun 20 19:17:35.916557 sshd-session[6377]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:17:35.920586 systemd-logind[1695]: New session 12 of user core.
Jun 20 19:17:35.927951 systemd[1]: Started session-12.scope - Session 12 of User core.
Jun 20 19:17:36.415847 sshd[6379]: Connection closed by 10.200.16.10 port 41302
Jun 20 19:17:36.416205 sshd-session[6377]: pam_unix(sshd:session): session closed for user core
Jun 20 19:17:36.421982 systemd[1]: sshd@9-10.200.4.5:22-10.200.16.10:41302.service: Deactivated successfully.
Jun 20 19:17:36.426170 systemd[1]: session-12.scope: Deactivated successfully.
Jun 20 19:17:36.427898 systemd-logind[1695]: Session 12 logged out. Waiting for processes to exit.
Jun 20 19:17:36.430311 systemd-logind[1695]: Removed session 12.
Jun 20 19:17:36.535122 systemd[1]: Started sshd@10-10.200.4.5:22-10.200.16.10:41314.service - OpenSSH per-connection server daemon (10.200.16.10:41314).
Jun 20 19:17:37.149402 sshd[6392]: Accepted publickey for core from 10.200.16.10 port 41314 ssh2: RSA SHA256:xD0kfKmJ7EC4AAoCWFs/jHoVnPZ/qqmZ1Ve/vcfGzM8
Jun 20 19:17:37.150606 sshd-session[6392]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:17:37.155294 systemd-logind[1695]: New session 13 of user core.
Jun 20 19:17:37.159200 systemd[1]: Started session-13.scope - Session 13 of User core.
Jun 20 19:17:37.654680 sshd[6394]: Connection closed by 10.200.16.10 port 41314
Jun 20 19:17:37.655716 sshd-session[6392]: pam_unix(sshd:session): session closed for user core
Jun 20 19:17:37.661566 systemd[1]: sshd@10-10.200.4.5:22-10.200.16.10:41314.service: Deactivated successfully.
Jun 20 19:17:37.664565 systemd[1]: session-13.scope: Deactivated successfully.
Jun 20 19:17:37.665776 systemd-logind[1695]: Session 13 logged out. Waiting for processes to exit.
Jun 20 19:17:37.668469 systemd-logind[1695]: Removed session 13.
Jun 20 19:17:37.766210 systemd[1]: Started sshd@11-10.200.4.5:22-10.200.16.10:41320.service - OpenSSH per-connection server daemon (10.200.16.10:41320).
Jun 20 19:17:38.376172 sshd[6406]: Accepted publickey for core from 10.200.16.10 port 41320 ssh2: RSA SHA256:xD0kfKmJ7EC4AAoCWFs/jHoVnPZ/qqmZ1Ve/vcfGzM8
Jun 20 19:17:38.377729 sshd-session[6406]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:17:38.384754 systemd-logind[1695]: New session 14 of user core.
Jun 20 19:17:38.389999 systemd[1]: Started session-14.scope - Session 14 of User core.
Jun 20 19:17:38.844656 sshd[6408]: Connection closed by 10.200.16.10 port 41320
Jun 20 19:17:38.847212 sshd-session[6406]: pam_unix(sshd:session): session closed for user core
Jun 20 19:17:38.852897 systemd-logind[1695]: Session 14 logged out. Waiting for processes to exit.
Jun 20 19:17:38.854591 systemd[1]: sshd@11-10.200.4.5:22-10.200.16.10:41320.service: Deactivated successfully.
Jun 20 19:17:38.859036 systemd[1]: session-14.scope: Deactivated successfully.
Jun 20 19:17:38.863671 systemd-logind[1695]: Removed session 14.
Jun 20 19:17:43.964391 systemd[1]: Started sshd@12-10.200.4.5:22-10.200.16.10:57580.service - OpenSSH per-connection server daemon (10.200.16.10:57580).
Jun 20 19:17:44.553376 sshd[6424]: Accepted publickey for core from 10.200.16.10 port 57580 ssh2: RSA SHA256:xD0kfKmJ7EC4AAoCWFs/jHoVnPZ/qqmZ1Ve/vcfGzM8
Jun 20 19:17:44.554612 sshd-session[6424]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:17:44.559523 systemd-logind[1695]: New session 15 of user core.
Jun 20 19:17:44.566994 systemd[1]: Started session-15.scope - Session 15 of User core.
Jun 20 19:17:45.023909 sshd[6426]: Connection closed by 10.200.16.10 port 57580
Jun 20 19:17:45.024516 sshd-session[6424]: pam_unix(sshd:session): session closed for user core
Jun 20 19:17:45.027467 systemd[1]: sshd@12-10.200.4.5:22-10.200.16.10:57580.service: Deactivated successfully.
Jun 20 19:17:45.029437 systemd[1]: session-15.scope: Deactivated successfully.
Jun 20 19:17:45.031005 systemd-logind[1695]: Session 15 logged out. Waiting for processes to exit.
Jun 20 19:17:45.032216 systemd-logind[1695]: Removed session 15.
Jun 20 19:17:49.322606 containerd[1726]: time="2025-06-20T19:17:49.322511699Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4c2da2dd11d7280278f2e7dcf56ecead1249c212ec2012e3c2c234b3e8264a8a\" id:\"c761865106a907cdadf3c714c5b3c3c2863284b2b6ddf83942ce3f44a51a845a\" pid:6456 exited_at:{seconds:1750447069 nanos:322266018}"
Jun 20 19:17:49.405816 containerd[1726]: time="2025-06-20T19:17:49.405771823Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c8b62103f1f1b81c90ddbbcd1e4087b00341699251a8d2d37e0e8c66e97538af\" id:\"1afbd64326139c07dd4f2fcfc7ba0a21638e2eda553109584e3ec930d20d7fb6\" pid:6479 exited_at:{seconds:1750447069 nanos:405387971}"
Jun 20 19:17:50.137629 systemd[1]: Started sshd@13-10.200.4.5:22-10.200.16.10:45998.service - OpenSSH per-connection server daemon (10.200.16.10:45998).
Jun 20 19:17:50.729469 sshd[6491]: Accepted publickey for core from 10.200.16.10 port 45998 ssh2: RSA SHA256:xD0kfKmJ7EC4AAoCWFs/jHoVnPZ/qqmZ1Ve/vcfGzM8
Jun 20 19:17:50.730730 sshd-session[6491]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:17:50.737237 systemd-logind[1695]: New session 16 of user core.
Jun 20 19:17:50.740036 systemd[1]: Started session-16.scope - Session 16 of User core.
Jun 20 19:17:51.202705 sshd[6493]: Connection closed by 10.200.16.10 port 45998
Jun 20 19:17:51.203319 sshd-session[6491]: pam_unix(sshd:session): session closed for user core
Jun 20 19:17:51.206891 systemd[1]: sshd@13-10.200.4.5:22-10.200.16.10:45998.service: Deactivated successfully.
Jun 20 19:17:51.208688 systemd[1]: session-16.scope: Deactivated successfully.
Jun 20 19:17:51.209582 systemd-logind[1695]: Session 16 logged out. Waiting for processes to exit.
Jun 20 19:17:51.211210 systemd-logind[1695]: Removed session 16.
Jun 20 19:17:56.142060 containerd[1726]: time="2025-06-20T19:17:56.142014636Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1d933b96dabc45903f8c9443835ccebc96daeec1e6159b99df5e62de2b9e981d\" id:\"a770d56a1b62af1b3d1344e0a4ed54e15362d3dafe3c13d5d1db87ee73b37de0\" pid:6519 exited_at:{seconds:1750447076 nanos:141459675}"
Jun 20 19:17:56.313087 systemd[1]: Started sshd@14-10.200.4.5:22-10.200.16.10:46012.service - OpenSSH per-connection server daemon (10.200.16.10:46012).
Jun 20 19:17:56.914002 sshd[6531]: Accepted publickey for core from 10.200.16.10 port 46012 ssh2: RSA SHA256:xD0kfKmJ7EC4AAoCWFs/jHoVnPZ/qqmZ1Ve/vcfGzM8
Jun 20 19:17:56.915877 sshd-session[6531]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:17:56.921146 systemd-logind[1695]: New session 17 of user core.
Jun 20 19:17:56.929415 systemd[1]: Started session-17.scope - Session 17 of User core.
Jun 20 19:17:57.390805 sshd[6533]: Connection closed by 10.200.16.10 port 46012
Jun 20 19:17:57.391643 sshd-session[6531]: pam_unix(sshd:session): session closed for user core
Jun 20 19:17:57.395212 systemd[1]: sshd@14-10.200.4.5:22-10.200.16.10:46012.service: Deactivated successfully.
Jun 20 19:17:57.397132 systemd[1]: session-17.scope: Deactivated successfully.
Jun 20 19:17:57.398346 systemd-logind[1695]: Session 17 logged out. Waiting for processes to exit.
Jun 20 19:17:57.399641 systemd-logind[1695]: Removed session 17.
Jun 20 19:17:57.497638 systemd[1]: Started sshd@15-10.200.4.5:22-10.200.16.10:46016.service - OpenSSH per-connection server daemon (10.200.16.10:46016).
Jun 20 19:17:58.098653 sshd[6545]: Accepted publickey for core from 10.200.16.10 port 46016 ssh2: RSA SHA256:xD0kfKmJ7EC4AAoCWFs/jHoVnPZ/qqmZ1Ve/vcfGzM8
Jun 20 19:17:58.100887 sshd-session[6545]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:17:58.106962 systemd-logind[1695]: New session 18 of user core.
Jun 20 19:17:58.114707 systemd[1]: Started session-18.scope - Session 18 of User core.
Jun 20 19:17:58.714941 sshd[6547]: Connection closed by 10.200.16.10 port 46016
Jun 20 19:17:58.715266 sshd-session[6545]: pam_unix(sshd:session): session closed for user core
Jun 20 19:17:58.721837 systemd[1]: sshd@15-10.200.4.5:22-10.200.16.10:46016.service: Deactivated successfully.
Jun 20 19:17:58.722897 systemd-logind[1695]: Session 18 logged out. Waiting for processes to exit.
Jun 20 19:17:58.725727 systemd[1]: session-18.scope: Deactivated successfully.
Jun 20 19:17:58.731419 systemd-logind[1695]: Removed session 18.
Jun 20 19:17:58.820406 systemd[1]: Started sshd@16-10.200.4.5:22-10.200.16.10:47188.service - OpenSSH per-connection server daemon (10.200.16.10:47188).
Jun 20 19:17:59.421366 sshd[6557]: Accepted publickey for core from 10.200.16.10 port 47188 ssh2: RSA SHA256:xD0kfKmJ7EC4AAoCWFs/jHoVnPZ/qqmZ1Ve/vcfGzM8
Jun 20 19:17:59.422564 sshd-session[6557]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:17:59.427247 systemd-logind[1695]: New session 19 of user core.
Jun 20 19:17:59.431029 systemd[1]: Started session-19.scope - Session 19 of User core.
Jun 20 19:18:02.417696 sshd[6559]: Connection closed by 10.200.16.10 port 47188
Jun 20 19:18:02.417162 sshd-session[6557]: pam_unix(sshd:session): session closed for user core
Jun 20 19:18:02.421071 systemd[1]: sshd@16-10.200.4.5:22-10.200.16.10:47188.service: Deactivated successfully.
Jun 20 19:18:02.425145 systemd[1]: session-19.scope: Deactivated successfully.
Jun 20 19:18:02.425510 systemd[1]: session-19.scope: Consumed 583ms CPU time, 91.6M memory peak.
Jun 20 19:18:02.427567 systemd-logind[1695]: Session 19 logged out. Waiting for processes to exit.
Jun 20 19:18:02.429873 systemd-logind[1695]: Removed session 19.
Jun 20 19:18:02.532098 systemd[1]: Started sshd@17-10.200.4.5:22-10.200.16.10:47200.service - OpenSSH per-connection server daemon (10.200.16.10:47200).
Jun 20 19:18:03.148983 sshd[6576]: Accepted publickey for core from 10.200.16.10 port 47200 ssh2: RSA SHA256:xD0kfKmJ7EC4AAoCWFs/jHoVnPZ/qqmZ1Ve/vcfGzM8
Jun 20 19:18:03.150887 sshd-session[6576]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:18:03.160695 systemd-logind[1695]: New session 20 of user core.
Jun 20 19:18:03.167262 systemd[1]: Started session-20.scope - Session 20 of User core.
Jun 20 19:18:03.791695 sshd[6586]: Connection closed by 10.200.16.10 port 47200
Jun 20 19:18:03.793541 sshd-session[6576]: pam_unix(sshd:session): session closed for user core
Jun 20 19:18:03.797480 systemd-logind[1695]: Session 20 logged out. Waiting for processes to exit.
Jun 20 19:18:03.799551 systemd[1]: sshd@17-10.200.4.5:22-10.200.16.10:47200.service: Deactivated successfully.
Jun 20 19:18:03.802623 systemd[1]: session-20.scope: Deactivated successfully.
Jun 20 19:18:03.804610 systemd-logind[1695]: Removed session 20.
Jun 20 19:18:03.901347 systemd[1]: Started sshd@18-10.200.4.5:22-10.200.16.10:47202.service - OpenSSH per-connection server daemon (10.200.16.10:47202).
Jun 20 19:18:04.497362 sshd[6596]: Accepted publickey for core from 10.200.16.10 port 47202 ssh2: RSA SHA256:xD0kfKmJ7EC4AAoCWFs/jHoVnPZ/qqmZ1Ve/vcfGzM8
Jun 20 19:18:04.499231 sshd-session[6596]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:18:04.504404 systemd-logind[1695]: New session 21 of user core.
Jun 20 19:18:04.509046 systemd[1]: Started session-21.scope - Session 21 of User core.
Jun 20 19:18:05.002278 sshd[6598]: Connection closed by 10.200.16.10 port 47202
Jun 20 19:18:05.004049 sshd-session[6596]: pam_unix(sshd:session): session closed for user core
Jun 20 19:18:05.008525 systemd[1]: sshd@18-10.200.4.5:22-10.200.16.10:47202.service: Deactivated successfully.
Jun 20 19:18:05.008817 systemd-logind[1695]: Session 21 logged out. Waiting for processes to exit.
Jun 20 19:18:05.013660 systemd[1]: session-21.scope: Deactivated successfully.
Jun 20 19:18:05.017209 systemd-logind[1695]: Removed session 21.
Jun 20 19:18:10.109099 systemd[1]: Started sshd@19-10.200.4.5:22-10.200.16.10:39444.service - OpenSSH per-connection server daemon (10.200.16.10:39444).
Jun 20 19:18:10.745536 sshd[6629]: Accepted publickey for core from 10.200.16.10 port 39444 ssh2: RSA SHA256:xD0kfKmJ7EC4AAoCWFs/jHoVnPZ/qqmZ1Ve/vcfGzM8
Jun 20 19:18:10.746990 sshd-session[6629]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:18:10.751159 systemd-logind[1695]: New session 22 of user core.
Jun 20 19:18:10.757016 systemd[1]: Started session-22.scope - Session 22 of User core.
Jun 20 19:18:11.291032 sshd[6631]: Connection closed by 10.200.16.10 port 39444
Jun 20 19:18:11.293954 sshd-session[6629]: pam_unix(sshd:session): session closed for user core
Jun 20 19:18:11.297513 systemd-logind[1695]: Session 22 logged out. Waiting for processes to exit.
Jun 20 19:18:11.300206 systemd[1]: sshd@19-10.200.4.5:22-10.200.16.10:39444.service: Deactivated successfully.
Jun 20 19:18:11.303240 systemd[1]: session-22.scope: Deactivated successfully.
Jun 20 19:18:11.309310 systemd-logind[1695]: Removed session 22.
Jun 20 19:18:16.395213 systemd[1]: Started sshd@20-10.200.4.5:22-10.200.16.10:39454.service - OpenSSH per-connection server daemon (10.200.16.10:39454).
Jun 20 19:18:16.988278 sshd[6642]: Accepted publickey for core from 10.200.16.10 port 39454 ssh2: RSA SHA256:xD0kfKmJ7EC4AAoCWFs/jHoVnPZ/qqmZ1Ve/vcfGzM8
Jun 20 19:18:16.989609 sshd-session[6642]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:18:16.993813 systemd-logind[1695]: New session 23 of user core.
Jun 20 19:18:16.998997 systemd[1]: Started session-23.scope - Session 23 of User core.
Jun 20 19:18:17.895348 sshd[6644]: Connection closed by 10.200.16.10 port 39454
Jun 20 19:18:17.895972 sshd-session[6642]: pam_unix(sshd:session): session closed for user core
Jun 20 19:18:17.899428 systemd[1]: sshd@20-10.200.4.5:22-10.200.16.10:39454.service: Deactivated successfully.
Jun 20 19:18:17.901306 systemd[1]: session-23.scope: Deactivated successfully.
Jun 20 19:18:17.902127 systemd-logind[1695]: Session 23 logged out. Waiting for processes to exit.
Jun 20 19:18:17.903390 systemd-logind[1695]: Removed session 23.
Jun 20 19:18:19.320347 containerd[1726]: time="2025-06-20T19:18:19.320289860Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4c2da2dd11d7280278f2e7dcf56ecead1249c212ec2012e3c2c234b3e8264a8a\" id:\"dee9710313ff9352dcf732ec66d57fee4f7207aa37e64d4400b65a6630f95801\" pid:6667 exited_at:{seconds:1750447099 nanos:319948475}"
Jun 20 19:18:19.413067 containerd[1726]: time="2025-06-20T19:18:19.413006072Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c8b62103f1f1b81c90ddbbcd1e4087b00341699251a8d2d37e0e8c66e97538af\" id:\"bc423af0f126ff653b501bde272bdc1e7e906cc3b3d6245f7d2278d4b6e27bd8\" pid:6689 exited_at:{seconds:1750447099 nanos:412644736}"
Jun 20 19:18:21.970170 containerd[1726]: time="2025-06-20T19:18:21.970094247Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c8b62103f1f1b81c90ddbbcd1e4087b00341699251a8d2d37e0e8c66e97538af\" id:\"588ff36d59a220239a47a3c278a6e17563461d9ce867d96339f0126a8a5e30e8\" pid:6713 exited_at:{seconds:1750447101 nanos:969678433}"
Jun 20 19:18:23.005178 systemd[1]: Started sshd@21-10.200.4.5:22-10.200.16.10:38160.service - OpenSSH per-connection server daemon (10.200.16.10:38160).
Jun 20 19:18:23.600648 sshd[6724]: Accepted publickey for core from 10.200.16.10 port 38160 ssh2: RSA SHA256:xD0kfKmJ7EC4AAoCWFs/jHoVnPZ/qqmZ1Ve/vcfGzM8
Jun 20 19:18:23.601872 sshd-session[6724]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:18:23.606704 systemd-logind[1695]: New session 24 of user core.
Jun 20 19:18:23.611988 systemd[1]: Started session-24.scope - Session 24 of User core.
Jun 20 19:18:24.067328 sshd[6726]: Connection closed by 10.200.16.10 port 38160
Jun 20 19:18:24.067996 sshd-session[6724]: pam_unix(sshd:session): session closed for user core
Jun 20 19:18:24.071449 systemd[1]: sshd@21-10.200.4.5:22-10.200.16.10:38160.service: Deactivated successfully.
Jun 20 19:18:24.073471 systemd[1]: session-24.scope: Deactivated successfully.
Jun 20 19:18:24.074326 systemd-logind[1695]: Session 24 logged out. Waiting for processes to exit.
Jun 20 19:18:24.076000 systemd-logind[1695]: Removed session 24.
Jun 20 19:18:24.929542 containerd[1726]: time="2025-06-20T19:18:24.929448374Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4c2da2dd11d7280278f2e7dcf56ecead1249c212ec2012e3c2c234b3e8264a8a\" id:\"2c4f29f6d9a02e3494c3723a71846dfcca073664505e4cf2c765d8dba20772f8\" pid:6749 exited_at:{seconds:1750447104 nanos:929143302}"
Jun 20 19:18:26.142067 containerd[1726]: time="2025-06-20T19:18:26.141958783Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1d933b96dabc45903f8c9443835ccebc96daeec1e6159b99df5e62de2b9e981d\" id:\"1151607e18bc62576115eccbd9d6c23e3bca4b16e8b3a95d551399ae3acaf289\" pid:6771 exited_at:{seconds:1750447106 nanos:139887585}"
Jun 20 19:18:29.176221 systemd[1]: Started sshd@22-10.200.4.5:22-10.200.16.10:46930.service - OpenSSH per-connection server daemon (10.200.16.10:46930).
Jun 20 19:18:29.766555 sshd[6784]: Accepted publickey for core from 10.200.16.10 port 46930 ssh2: RSA SHA256:xD0kfKmJ7EC4AAoCWFs/jHoVnPZ/qqmZ1Ve/vcfGzM8
Jun 20 19:18:29.767867 sshd-session[6784]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:18:29.772557 systemd-logind[1695]: New session 25 of user core.
Jun 20 19:18:29.778036 systemd[1]: Started session-25.scope - Session 25 of User core.
Jun 20 19:18:30.239207 sshd[6786]: Connection closed by 10.200.16.10 port 46930
Jun 20 19:18:30.239818 sshd-session[6784]: pam_unix(sshd:session): session closed for user core
Jun 20 19:18:30.243538 systemd[1]: sshd@22-10.200.4.5:22-10.200.16.10:46930.service: Deactivated successfully.
Jun 20 19:18:30.245410 systemd[1]: session-25.scope: Deactivated successfully.
Jun 20 19:18:30.246297 systemd-logind[1695]: Session 25 logged out. Waiting for processes to exit.
Jun 20 19:18:30.247728 systemd-logind[1695]: Removed session 25.
Jun 20 19:18:35.346237 systemd[1]: Started sshd@23-10.200.4.5:22-10.200.16.10:46944.service - OpenSSH per-connection server daemon (10.200.16.10:46944).
Jun 20 19:18:35.945047 sshd[6803]: Accepted publickey for core from 10.200.16.10 port 46944 ssh2: RSA SHA256:xD0kfKmJ7EC4AAoCWFs/jHoVnPZ/qqmZ1Ve/vcfGzM8
Jun 20 19:18:35.946484 sshd-session[6803]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:18:35.950404 systemd-logind[1695]: New session 26 of user core.
Jun 20 19:18:35.957985 systemd[1]: Started session-26.scope - Session 26 of User core.
Jun 20 19:18:36.413868 sshd[6806]: Connection closed by 10.200.16.10 port 46944
Jun 20 19:18:36.414458 sshd-session[6803]: pam_unix(sshd:session): session closed for user core
Jun 20 19:18:36.417417 systemd[1]: sshd@23-10.200.4.5:22-10.200.16.10:46944.service: Deactivated successfully.
Jun 20 19:18:36.419456 systemd[1]: session-26.scope: Deactivated successfully.
Jun 20 19:18:36.420398 systemd-logind[1695]: Session 26 logged out. Waiting for processes to exit.
Jun 20 19:18:36.422542 systemd-logind[1695]: Removed session 26.