Oct 13 05:50:55.993663 kernel: Linux version 6.12.51-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Sun Oct 12 22:37:12 -00 2025
Oct 13 05:50:55.993692 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=a48d469b0deb49c328e6faf6cf366b11952d47f2d24963c866a0ea8221fb0039
Oct 13 05:50:55.993704 kernel: BIOS-provided physical RAM map:
Oct 13 05:50:55.993711 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Oct 13 05:50:55.993718 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved
Oct 13 05:50:55.993726 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000044fdfff] usable
Oct 13 05:50:55.993734 kernel: BIOS-e820: [mem 0x00000000044fe000-0x00000000048fdfff] reserved
Oct 13 05:50:55.993743 kernel: BIOS-e820: [mem 0x00000000048fe000-0x000000003ff1efff] usable
Oct 13 05:50:55.993750 kernel: BIOS-e820: [mem 0x000000003ff1f000-0x000000003ffc8fff] reserved
Oct 13 05:50:55.993757 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data
Oct 13 05:50:55.993764 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS
Oct 13 05:50:55.993771 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable
Oct 13 05:50:55.993778 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable
Oct 13 05:50:55.993786 kernel: printk: legacy bootconsole [earlyser0] enabled
Oct 13 05:50:55.993797 kernel: NX (Execute Disable) protection: active
Oct 13 05:50:55.993805 kernel: APIC: Static calls initialized
Oct 13 05:50:55.993812 kernel: efi: EFI v2.7 by Microsoft
Oct 13 05:50:55.993821 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff88000 SMBIOS 3.0=0x3ff86000 MEMATTR=0x3ead5518 RNG=0x3ffd2018
Oct 13 05:50:55.993828 kernel: random: crng init done
Oct 13 05:50:55.993837 kernel: secureboot: Secure boot disabled
Oct 13 05:50:55.993844 kernel: SMBIOS 3.1.0 present.
Oct 13 05:50:55.993852 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 01/28/2025
Oct 13 05:50:55.993860 kernel: DMI: Memory slots populated: 2/2
Oct 13 05:50:55.993869 kernel: Hypervisor detected: Microsoft Hyper-V
Oct 13 05:50:55.993913 kernel: Hyper-V: privilege flags low 0xae7f, high 0x3b8030, hints 0x9e4e24, misc 0xe0bed7b2
Oct 13 05:50:55.993921 kernel: Hyper-V: Nested features: 0x3e0101
Oct 13 05:50:55.993928 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40
Oct 13 05:50:55.993936 kernel: Hyper-V: Using hypercall for remote TLB flush
Oct 13 05:50:55.993944 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Oct 13 05:50:55.993951 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Oct 13 05:50:55.993959 kernel: tsc: Detected 2299.998 MHz processor
Oct 13 05:50:55.993966 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Oct 13 05:50:55.993975 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Oct 13 05:50:55.993983 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x10000000000
Oct 13 05:50:55.993994 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Oct 13 05:50:55.994002 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Oct 13 05:50:55.994010 kernel: e820: update [mem 0x48000000-0xffffffff] usable ==> reserved
Oct 13 05:50:55.994018 kernel: last_pfn = 0x40000 max_arch_pfn = 0x10000000000
Oct 13 05:50:55.994025 kernel: Using GB pages for direct mapping
Oct 13 05:50:55.994033 kernel: ACPI: Early table checksum verification disabled
Oct 13 05:50:55.994045 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL)
Oct 13 05:50:55.994054 kernel: ACPI: XSDT 0x000000003FFF90E8 00005C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Oct 13 05:50:55.994062 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Oct 13 05:50:55.994070 kernel: ACPI: DSDT 0x000000003FFD6000 01E27A (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Oct 13 05:50:55.994078 kernel: ACPI: FACS 0x000000003FFFE000 000040
Oct 13 05:50:55.994086 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Oct 13 05:50:55.994094 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Oct 13 05:50:55.994105 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Oct 13 05:50:55.994112 kernel: ACPI: APIC 0x000000003FFD5000 000052 (v05 HVLITE HVLITETB 00000000 MSHV 00000000)
Oct 13 05:50:55.994120 kernel: ACPI: SRAT 0x000000003FFD4000 0000A0 (v03 HVLITE HVLITETB 00000000 MSHV 00000000)
Oct 13 05:50:55.994128 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Oct 13 05:50:55.994136 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113]
Oct 13 05:50:55.994143 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4279]
Oct 13 05:50:55.994151 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f]
Oct 13 05:50:55.994159 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063]
Oct 13 05:50:55.994168 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f]
Oct 13 05:50:55.994177 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027]
Oct 13 05:50:55.994185 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5051]
Oct 13 05:50:55.994193 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd409f]
Oct 13 05:50:55.994200 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037]
Oct 13 05:50:55.994208 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff]
Oct 13 05:50:55.994216 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff]
Oct 13 05:50:55.994225 kernel: NUMA: Node 0 [mem 0x00001000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00001000-0x2bfffffff]
Oct 13 05:50:55.994233 kernel: NODE_DATA(0) allocated [mem 0x2bfff8dc0-0x2bfffffff]
Oct 13 05:50:55.994241 kernel: Zone ranges:
Oct 13 05:50:55.994251 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Oct 13 05:50:55.994259 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Oct 13 05:50:55.994267 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff]
Oct 13 05:50:55.994275 kernel: Device empty
Oct 13 05:50:55.994283 kernel: Movable zone start for each node
Oct 13 05:50:55.994291 kernel: Early memory node ranges
Oct 13 05:50:55.994300 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Oct 13 05:50:55.994307 kernel: node 0: [mem 0x0000000000100000-0x00000000044fdfff]
Oct 13 05:50:55.994316 kernel: node 0: [mem 0x00000000048fe000-0x000000003ff1efff]
Oct 13 05:50:55.994325 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff]
Oct 13 05:50:55.994333 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff]
Oct 13 05:50:55.994341 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff]
Oct 13 05:50:55.994349 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Oct 13 05:50:55.994357 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Oct 13 05:50:55.994366 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges
Oct 13 05:50:55.994374 kernel: On node 0, zone DMA32: 224 pages in unavailable ranges
Oct 13 05:50:55.994382 kernel: ACPI: PM-Timer IO Port: 0x408
Oct 13 05:50:55.994391 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Oct 13 05:50:55.994401 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Oct 13 05:50:55.994409 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Oct 13 05:50:55.994417 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200
Oct 13 05:50:55.994432 kernel: TSC deadline timer available
Oct 13 05:50:55.994440 kernel: CPU topo: Max. logical packages: 1
Oct 13 05:50:55.994449 kernel: CPU topo: Max. logical dies: 1
Oct 13 05:50:55.994457 kernel: CPU topo: Max. dies per package: 1
Oct 13 05:50:55.994465 kernel: CPU topo: Max. threads per core: 2
Oct 13 05:50:55.994473 kernel: CPU topo: Num. cores per package: 1
Oct 13 05:50:55.994483 kernel: CPU topo: Num. threads per package: 2
Oct 13 05:50:55.994492 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
Oct 13 05:50:55.994500 kernel: [mem 0x40000000-0xffffffff] available for PCI devices
Oct 13 05:50:55.994508 kernel: Booting paravirtualized kernel on Hyper-V
Oct 13 05:50:55.994517 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Oct 13 05:50:55.994526 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Oct 13 05:50:55.994534 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
Oct 13 05:50:55.994543 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
Oct 13 05:50:55.994551 kernel: pcpu-alloc: [0] 0 1
Oct 13 05:50:55.994561 kernel: Hyper-V: PV spinlocks enabled
Oct 13 05:50:55.994570 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Oct 13 05:50:55.994580 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=a48d469b0deb49c328e6faf6cf366b11952d47f2d24963c866a0ea8221fb0039
Oct 13 05:50:55.994589 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Oct 13 05:50:55.994598 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Oct 13 05:50:55.994606 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Oct 13 05:50:55.994615 kernel: Fallback order for Node 0: 0
Oct 13 05:50:55.994623 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2095807
Oct 13 05:50:55.994633 kernel: Policy zone: Normal
Oct 13 05:50:55.994641 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Oct 13 05:50:55.994649 kernel: software IO TLB: area num 2.
Oct 13 05:50:55.994658 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Oct 13 05:50:55.994666 kernel: ftrace: allocating 40139 entries in 157 pages
Oct 13 05:50:55.994675 kernel: ftrace: allocated 157 pages with 5 groups
Oct 13 05:50:55.994683 kernel: Dynamic Preempt: voluntary
Oct 13 05:50:55.994691 kernel: rcu: Preemptible hierarchical RCU implementation.
Oct 13 05:50:55.994701 kernel: rcu: RCU event tracing is enabled.
Oct 13 05:50:55.994718 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Oct 13 05:50:55.994727 kernel: Trampoline variant of Tasks RCU enabled.
Oct 13 05:50:55.994736 kernel: Rude variant of Tasks RCU enabled.
Oct 13 05:50:55.994747 kernel: Tracing variant of Tasks RCU enabled.
Oct 13 05:50:55.994756 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Oct 13 05:50:55.994765 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Oct 13 05:50:55.994774 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Oct 13 05:50:55.994783 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Oct 13 05:50:55.994792 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Oct 13 05:50:55.994801 kernel: Using NULL legacy PIC
Oct 13 05:50:55.994812 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0
Oct 13 05:50:55.994821 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Oct 13 05:50:55.994830 kernel: Console: colour dummy device 80x25
Oct 13 05:50:55.994839 kernel: printk: legacy console [tty1] enabled
Oct 13 05:50:55.994848 kernel: printk: legacy console [ttyS0] enabled
Oct 13 05:50:55.994857 kernel: printk: legacy bootconsole [earlyser0] disabled
Oct 13 05:50:55.994865 kernel: ACPI: Core revision 20240827
Oct 13 05:50:55.994992 kernel: Failed to register legacy timer interrupt
Oct 13 05:50:55.995006 kernel: APIC: Switch to symmetric I/O mode setup
Oct 13 05:50:55.995014 kernel: x2apic enabled
Oct 13 05:50:55.995022 kernel: APIC: Switched APIC routing to: physical x2apic
Oct 13 05:50:55.995030 kernel: Hyper-V: Host Build 10.0.26100.1381-1-0
Oct 13 05:50:55.995039 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Oct 13 05:50:55.995047 kernel: Hyper-V: Disabling IBT because of Hyper-V bug
Oct 13 05:50:55.995056 kernel: Hyper-V: Using IPI hypercalls
Oct 13 05:50:55.995065 kernel: APIC: send_IPI() replaced with hv_send_ipi()
Oct 13 05:50:55.995075 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask()
Oct 13 05:50:55.995084 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself()
Oct 13 05:50:55.995092 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself()
Oct 13 05:50:55.995100 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all()
Oct 13 05:50:55.995108 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self()
Oct 13 05:50:55.995117 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Oct 13 05:50:55.995125 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 4599.99 BogoMIPS (lpj=2299998)
Oct 13 05:50:55.995134 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Oct 13 05:50:55.995142 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Oct 13 05:50:55.995152 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Oct 13 05:50:55.995160 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Oct 13 05:50:55.995168 kernel: Spectre V2 : Mitigation: Retpolines
Oct 13 05:50:55.995176 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Oct 13 05:50:55.995185 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Oct 13 05:50:55.995194 kernel: RETBleed: Vulnerable
Oct 13 05:50:55.995202 kernel: Speculative Store Bypass: Vulnerable
Oct 13 05:50:55.995210 kernel: active return thunk: its_return_thunk
Oct 13 05:50:55.995218 kernel: ITS: Mitigation: Aligned branch/return thunks
Oct 13 05:50:55.995227 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Oct 13 05:50:55.995234 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Oct 13 05:50:55.995244 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Oct 13 05:50:55.995253 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Oct 13 05:50:55.995260 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Oct 13 05:50:55.995269 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Oct 13 05:50:55.995276 kernel: x86/fpu: Supporting XSAVE feature 0x800: 'Control-flow User registers'
Oct 13 05:50:55.995285 kernel: x86/fpu: Supporting XSAVE feature 0x20000: 'AMX Tile config'
Oct 13 05:50:55.995294 kernel: x86/fpu: Supporting XSAVE feature 0x40000: 'AMX Tile data'
Oct 13 05:50:55.995302 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Oct 13 05:50:55.995311 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Oct 13 05:50:55.995319 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Oct 13 05:50:55.995329 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Oct 13 05:50:55.995363 kernel: x86/fpu: xstate_offset[11]: 2432, xstate_sizes[11]: 16
Oct 13 05:50:55.995372 kernel: x86/fpu: xstate_offset[17]: 2496, xstate_sizes[17]: 64
Oct 13 05:50:55.995381 kernel: x86/fpu: xstate_offset[18]: 2560, xstate_sizes[18]: 8192
Oct 13 05:50:55.995389 kernel: x86/fpu: Enabled xstate features 0x608e7, context size is 10752 bytes, using 'compacted' format.
Oct 13 05:50:55.995398 kernel: Freeing SMP alternatives memory: 32K
Oct 13 05:50:55.995407 kernel: pid_max: default: 32768 minimum: 301
Oct 13 05:50:55.995415 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Oct 13 05:50:55.995423 kernel: landlock: Up and running.
Oct 13 05:50:55.995432 kernel: SELinux: Initializing.
Oct 13 05:50:55.995440 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Oct 13 05:50:55.995449 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Oct 13 05:50:55.995460 kernel: smpboot: CPU0: Intel INTEL(R) XEON(R) PLATINUM 8573C (family: 0x6, model: 0xcf, stepping: 0x2)
Oct 13 05:50:55.995469 kernel: Performance Events: unsupported p6 CPU model 207 no PMU driver, software events only.
Oct 13 05:50:55.995477 kernel: signal: max sigframe size: 11952
Oct 13 05:50:55.995486 kernel: rcu: Hierarchical SRCU implementation.
Oct 13 05:50:55.995496 kernel: rcu: Max phase no-delay instances is 400.
Oct 13 05:50:55.995505 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Oct 13 05:50:55.995514 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Oct 13 05:50:55.995523 kernel: smp: Bringing up secondary CPUs ...
Oct 13 05:50:55.995532 kernel: smpboot: x86: Booting SMP configuration:
Oct 13 05:50:55.995543 kernel: .... node #0, CPUs: #1
Oct 13 05:50:55.995552 kernel: smp: Brought up 1 node, 2 CPUs
Oct 13 05:50:55.995560 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS)
Oct 13 05:50:55.995569 kernel: Memory: 8077024K/8383228K available (14336K kernel code, 2443K rwdata, 10000K rodata, 54096K init, 2852K bss, 299988K reserved, 0K cma-reserved)
Oct 13 05:50:55.995577 kernel: devtmpfs: initialized
Oct 13 05:50:55.995586 kernel: x86/mm: Memory block size: 128MB
Oct 13 05:50:55.995594 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes)
Oct 13 05:50:55.995603 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Oct 13 05:50:55.995611 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Oct 13 05:50:55.995622 kernel: pinctrl core: initialized pinctrl subsystem
Oct 13 05:50:55.995631 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Oct 13 05:50:55.995639 kernel: audit: initializing netlink subsys (disabled)
Oct 13 05:50:55.995648 kernel: audit: type=2000 audit(1760334652.030:1): state=initialized audit_enabled=0 res=1
Oct 13 05:50:55.995657 kernel: thermal_sys: Registered thermal governor 'step_wise'
Oct 13 05:50:55.995666 kernel: thermal_sys: Registered thermal governor 'user_space'
Oct 13 05:50:55.995674 kernel: cpuidle: using governor menu
Oct 13 05:50:55.995683 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Oct 13 05:50:55.995692 kernel: dca service started, version 1.12.1
Oct 13 05:50:55.995703 kernel: e820: reserve RAM buffer [mem 0x044fe000-0x07ffffff]
Oct 13 05:50:55.995711 kernel: e820: reserve RAM buffer [mem 0x3ff1f000-0x3fffffff]
Oct 13 05:50:55.995720 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Oct 13 05:50:55.995728 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Oct 13 05:50:55.995738 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Oct 13 05:50:55.995746 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Oct 13 05:50:55.995754 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Oct 13 05:50:55.995763 kernel: ACPI: Added _OSI(Module Device)
Oct 13 05:50:55.995772 kernel: ACPI: Added _OSI(Processor Device)
Oct 13 05:50:55.995783 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Oct 13 05:50:55.995792 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Oct 13 05:50:55.995800 kernel: ACPI: Interpreter enabled
Oct 13 05:50:55.995809 kernel: ACPI: PM: (supports S0 S5)
Oct 13 05:50:55.995817 kernel: ACPI: Using IOAPIC for interrupt routing
Oct 13 05:50:55.995825 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Oct 13 05:50:55.995833 kernel: PCI: Ignoring E820 reservations for host bridge windows
Oct 13 05:50:55.995841 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F
Oct 13 05:50:55.995849 kernel: iommu: Default domain type: Translated
Oct 13 05:50:55.995859 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Oct 13 05:50:55.995867 kernel: efivars: Registered efivars operations
Oct 13 05:50:55.995875 kernel: PCI: Using ACPI for IRQ routing
Oct 13 05:50:55.997368 kernel: PCI: System does not support PCI
Oct 13 05:50:55.997384 kernel: vgaarb: loaded
Oct 13 05:50:55.997394 kernel: clocksource: Switched to clocksource tsc-early
Oct 13 05:50:55.997404 kernel: VFS: Disk quotas dquot_6.6.0
Oct 13 05:50:55.997414 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Oct 13 05:50:55.997424 kernel: pnp: PnP ACPI init
Oct 13 05:50:55.997436 kernel: pnp: PnP ACPI: found 3 devices
Oct 13 05:50:55.997446 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Oct 13 05:50:55.997455 kernel: NET: Registered PF_INET protocol family
Oct 13 05:50:55.997465 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Oct 13 05:50:55.997474 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Oct 13 05:50:55.997483 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Oct 13 05:50:55.997493 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Oct 13 05:50:55.997502 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Oct 13 05:50:55.997511 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Oct 13 05:50:55.997522 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Oct 13 05:50:55.997532 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Oct 13 05:50:55.997541 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Oct 13 05:50:55.997551 kernel: NET: Registered PF_XDP protocol family
Oct 13 05:50:55.997560 kernel: PCI: CLS 0 bytes, default 64
Oct 13 05:50:55.997569 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Oct 13 05:50:55.997578 kernel: software IO TLB: mapped [mem 0x000000003a9d3000-0x000000003e9d3000] (64MB)
Oct 13 05:50:55.997588 kernel: RAPL PMU: API unit is 2^-32 Joules, 1 fixed counters, 10737418240 ms ovfl timer
Oct 13 05:50:55.997597 kernel: RAPL PMU: hw unit of domain psys 2^-0 Joules
Oct 13 05:50:55.997608 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Oct 13 05:50:55.997618 kernel: clocksource: Switched to clocksource tsc
Oct 13 05:50:55.997627 kernel: Initialise system trusted keyrings
Oct 13 05:50:55.997636 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Oct 13 05:50:55.997646 kernel: Key type asymmetric registered
Oct 13 05:50:55.997655 kernel: Asymmetric key parser 'x509' registered
Oct 13 05:50:55.997664 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Oct 13 05:50:55.997673 kernel: io scheduler mq-deadline registered
Oct 13 05:50:55.997682 kernel: io scheduler kyber registered
Oct 13 05:50:55.997693 kernel: io scheduler bfq registered
Oct 13 05:50:55.997702 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Oct 13 05:50:55.997712 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Oct 13 05:50:55.997721 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Oct 13 05:50:55.997730 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Oct 13 05:50:55.997740 kernel: serial8250: ttyS2 at I/O 0x3e8 (irq = 4, base_baud = 115200) is a 16550A
Oct 13 05:50:55.997749 kernel: i8042: PNP: No PS/2 controller found.
Oct 13 05:50:55.997907 kernel: rtc_cmos 00:02: registered as rtc0
Oct 13 05:50:55.997995 kernel: rtc_cmos 00:02: setting system clock to 2025-10-13T05:50:55 UTC (1760334655)
Oct 13 05:50:55.998069 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram
Oct 13 05:50:55.998081 kernel: intel_pstate: Intel P-state driver initializing
Oct 13 05:50:55.998091 kernel: efifb: probing for efifb
Oct 13 05:50:55.998100 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Oct 13 05:50:55.998110 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Oct 13 05:50:55.998119 kernel: efifb: scrolling: redraw
Oct 13 05:50:55.998128 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Oct 13 05:50:55.998139 kernel: Console: switching to colour frame buffer device 128x48
Oct 13 05:50:55.998149 kernel: fb0: EFI VGA frame buffer device
Oct 13 05:50:55.998158 kernel: pstore: Using crash dump compression: deflate
Oct 13 05:50:55.998168 kernel: pstore: Registered efi_pstore as persistent store backend
Oct 13 05:50:55.998177 kernel: NET: Registered PF_INET6 protocol family
Oct 13 05:50:55.998187 kernel: Segment Routing with IPv6
Oct 13 05:50:55.998195 kernel: In-situ OAM (IOAM) with IPv6
Oct 13 05:50:55.998205 kernel: NET: Registered PF_PACKET protocol family
Oct 13 05:50:55.998214 kernel: Key type dns_resolver registered
Oct 13 05:50:55.998223 kernel: IPI shorthand broadcast: enabled
Oct 13 05:50:55.998234 kernel: sched_clock: Marking stable (3127004537, 100507652)->(3585845684, -358333495)
Oct 13 05:50:55.998243 kernel: registered taskstats version 1
Oct 13 05:50:55.998252 kernel: Loading compiled-in X.509 certificates
Oct 13 05:50:55.998262 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.51-flatcar: d8dbf4abead15098249886d373d42a3af4f50ccd'
Oct 13 05:50:55.998271 kernel: Demotion targets for Node 0: null
Oct 13 05:50:55.998280 kernel: Key type .fscrypt registered
Oct 13 05:50:55.998290 kernel: Key type fscrypt-provisioning registered
Oct 13 05:50:55.998300 kernel: ima: No TPM chip found, activating TPM-bypass!
Oct 13 05:50:55.998311 kernel: ima: Allocated hash algorithm: sha1
Oct 13 05:50:55.998320 kernel: ima: No architecture policies found
Oct 13 05:50:55.998329 kernel: clk: Disabling unused clocks
Oct 13 05:50:55.998339 kernel: Warning: unable to open an initial console.
Oct 13 05:50:55.998348 kernel: Freeing unused kernel image (initmem) memory: 54096K
Oct 13 05:50:55.998358 kernel: Write protecting the kernel read-only data: 24576k
Oct 13 05:50:55.998367 kernel: Freeing unused kernel image (rodata/data gap) memory: 240K
Oct 13 05:50:55.998376 kernel: Run /init as init process
Oct 13 05:50:55.998385 kernel: with arguments:
Oct 13 05:50:55.998396 kernel: /init
Oct 13 05:50:55.998405 kernel: with environment:
Oct 13 05:50:55.998414 kernel: HOME=/
Oct 13 05:50:55.998423 kernel: TERM=linux
Oct 13 05:50:55.998432 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Oct 13 05:50:55.998443 systemd[1]: Successfully made /usr/ read-only.
Oct 13 05:50:55.998456 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Oct 13 05:50:55.998467 systemd[1]: Detected virtualization microsoft.
Oct 13 05:50:55.998478 systemd[1]: Detected architecture x86-64.
Oct 13 05:50:55.998488 systemd[1]: Running in initrd.
Oct 13 05:50:55.998498 systemd[1]: No hostname configured, using default hostname.
Oct 13 05:50:55.998508 systemd[1]: Hostname set to .
Oct 13 05:50:55.998517 systemd[1]: Initializing machine ID from random generator.
Oct 13 05:50:55.998527 systemd[1]: Queued start job for default target initrd.target.
Oct 13 05:50:55.998537 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 13 05:50:55.998547 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 13 05:50:55.998560 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Oct 13 05:50:55.998570 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Oct 13 05:50:55.998579 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Oct 13 05:50:55.998590 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Oct 13 05:50:55.998601 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Oct 13 05:50:55.998611 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Oct 13 05:50:55.998621 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 13 05:50:55.998633 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Oct 13 05:50:55.998643 systemd[1]: Reached target paths.target - Path Units.
Oct 13 05:50:55.998653 systemd[1]: Reached target slices.target - Slice Units.
Oct 13 05:50:55.998662 systemd[1]: Reached target swap.target - Swaps.
Oct 13 05:50:55.998672 systemd[1]: Reached target timers.target - Timer Units.
Oct 13 05:50:55.998682 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Oct 13 05:50:55.998691 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Oct 13 05:50:55.998701 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Oct 13 05:50:55.998713 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Oct 13 05:50:55.998723 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Oct 13 05:50:55.998733 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Oct 13 05:50:55.998743 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 13 05:50:55.998753 systemd[1]: Reached target sockets.target - Socket Units.
Oct 13 05:50:55.998762 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Oct 13 05:50:55.998772 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Oct 13 05:50:55.998782 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Oct 13 05:50:55.998792 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Oct 13 05:50:55.998804 systemd[1]: Starting systemd-fsck-usr.service...
Oct 13 05:50:55.998814 systemd[1]: Starting systemd-journald.service - Journal Service...
Oct 13 05:50:55.998834 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Oct 13 05:50:55.998846 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 13 05:50:55.998871 systemd-journald[204]: Collecting audit messages is disabled.
Oct 13 05:50:55.998931 systemd-journald[204]: Journal started
Oct 13 05:50:55.998959 systemd-journald[204]: Runtime Journal (/run/log/journal/6be33ebec6a040a2916f616ae40aab9a) is 8M, max 158.9M, 150.9M free.
Oct 13 05:50:56.002895 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Oct 13 05:50:56.006435 systemd[1]: Started systemd-journald.service - Journal Service.
Oct 13 05:50:56.008330 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 13 05:50:56.012025 systemd[1]: Finished systemd-fsck-usr.service.
Oct 13 05:50:56.015100 systemd-modules-load[206]: Inserted module 'overlay'
Oct 13 05:50:56.021647 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Oct 13 05:50:56.027647 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Oct 13 05:50:56.044147 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 13 05:50:56.049592 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Oct 13 05:50:56.055506 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Oct 13 05:50:56.059409 systemd-tmpfiles[218]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Oct 13 05:50:56.060043 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Oct 13 05:50:56.065838 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Oct 13 05:50:56.074116 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 13 05:50:56.084975 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 13 05:50:56.087152 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Oct 13 05:50:56.092506 systemd-modules-load[206]: Inserted module 'br_netfilter'
Oct 13 05:50:56.094809 kernel: Bridge firewalling registered
Oct 13 05:50:56.093551 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Oct 13 05:50:56.098584 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Oct 13 05:50:56.100988 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Oct 13 05:50:56.113078 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Oct 13 05:50:56.119995 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Oct 13 05:50:56.126715 dracut-cmdline[239]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=a48d469b0deb49c328e6faf6cf366b11952d47f2d24963c866a0ea8221fb0039
Oct 13 05:50:56.169750 systemd-resolved[252]: Positive Trust Anchors:
Oct 13 05:50:56.171494 systemd-resolved[252]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Oct 13 05:50:56.171596 systemd-resolved[252]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Oct 13 05:50:56.190686 systemd-resolved[252]: Defaulting to hostname 'linux'.
Oct 13 05:50:56.191513 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Oct 13 05:50:56.193718 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Oct 13 05:50:56.209896 kernel: SCSI subsystem initialized
Oct 13 05:50:56.218920 kernel: Loading iSCSI transport class v2.0-870.
Oct 13 05:50:56.227905 kernel: iscsi: registered transport (tcp)
Oct 13 05:50:56.245113 kernel: iscsi: registered transport (qla4xxx)
Oct 13 05:50:56.245161 kernel: QLogic iSCSI HBA Driver
Oct 13 05:50:56.258872 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Oct 13 05:50:56.275271 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Oct 13 05:50:56.276200 systemd[1]: Reached target network-pre.target - Preparation for Network.
Oct 13 05:50:56.309117 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Oct 13 05:50:56.312996 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Oct 13 05:50:56.356899 kernel: raid6: avx512x4 gen() 42149 MB/s Oct 13 05:50:56.374892 kernel: raid6: avx512x2 gen() 41951 MB/s Oct 13 05:50:56.391890 kernel: raid6: avx512x1 gen() 25163 MB/s Oct 13 05:50:56.409890 kernel: raid6: avx2x4 gen() 36046 MB/s Oct 13 05:50:56.426891 kernel: raid6: avx2x2 gen() 37983 MB/s Oct 13 05:50:56.445039 kernel: raid6: avx2x1 gen() 30851 MB/s Oct 13 05:50:56.445055 kernel: raid6: using algorithm avx512x4 gen() 42149 MB/s Oct 13 05:50:56.464321 kernel: raid6: .... xor() 7713 MB/s, rmw enabled Oct 13 05:50:56.464344 kernel: raid6: using avx512x2 recovery algorithm Oct 13 05:50:56.481896 kernel: xor: automatically using best checksumming function avx Oct 13 05:50:56.603902 kernel: Btrfs loaded, zoned=no, fsverity=no Oct 13 05:50:56.609349 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Oct 13 05:50:56.613346 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 13 05:50:56.635157 systemd-udevd[454]: Using default interface naming scheme 'v255'. Oct 13 05:50:56.639474 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 13 05:50:56.647367 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Oct 13 05:50:56.664234 dracut-pre-trigger[464]: rd.md=0: removing MD RAID activation Oct 13 05:50:56.683197 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Oct 13 05:50:56.686003 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Oct 13 05:50:56.724455 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Oct 13 05:50:56.732324 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Oct 13 05:50:56.782927 kernel: cryptd: max_cpu_qlen set to 1000 Oct 13 05:50:56.796874 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Oct 13 05:50:56.799982 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 13 05:50:56.807563 kernel: AES CTR mode by8 optimization enabled Oct 13 05:50:56.809345 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Oct 13 05:50:56.815191 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 13 05:50:56.822223 kernel: hv_vmbus: Vmbus version:5.3 Oct 13 05:50:56.835012 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 13 05:50:56.835102 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 13 05:50:56.839642 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 13 05:50:56.861319 kernel: pps_core: LinuxPPS API ver. 1 registered Oct 13 05:50:56.861358 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Oct 13 05:50:56.862348 kernel: hv_vmbus: registering driver hv_pci Oct 13 05:50:56.872904 kernel: hv_pci 7ad35d50-c05b-47ab-b3a0-56a9a845852b: PCI VMBus probing: Using version 0x10004 Oct 13 05:50:56.876905 kernel: PTP clock support registered Oct 13 05:50:56.879934 kernel: hv_vmbus: registering driver hv_storvsc Oct 13 05:50:56.888036 kernel: scsi host0: storvsc_host_t Oct 13 05:50:56.888224 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5 Oct 13 05:50:56.889623 kernel: hv_vmbus: registering driver hyperv_keyboard Oct 13 05:50:56.895897 kernel: hv_pci 7ad35d50-c05b-47ab-b3a0-56a9a845852b: PCI host bridge to bus c05b:00 Oct 13 05:50:56.896574 kernel: pci_bus c05b:00: root bus resource [mem 0xfc0000000-0xfc007ffff window] Oct 13 05:50:56.897040 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Oct 13 05:50:56.907513 kernel: hid: raw HID events driver (C) Jiri Kosina Oct 13 05:50:56.907544 kernel: hv_utils: Registering HyperV Utility Driver Oct 13 05:50:56.907560 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Oct 13 05:50:56.907571 kernel: hv_vmbus: registering driver hv_utils Oct 13 05:50:56.915688 kernel: pci_bus c05b:00: No busn resource found for root bus, will use [bus 00-ff] Oct 13 05:50:56.915860 kernel: hv_vmbus: registering driver hid_hyperv Oct 13 05:50:56.918919 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Oct 13 05:50:56.922008 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Oct 13 05:50:56.922153 kernel: hv_utils: Shutdown IC version 3.2 Oct 13 05:50:56.925089 kernel: pci c05b:00:00.0: [1414:00a9] type 00 class 0x010802 PCIe Endpoint Oct 13 05:50:56.925204 kernel: hv_utils: Heartbeat IC version 3.0 Oct 13 05:50:56.925224 kernel: hv_utils: TimeSync IC version 4.0 Oct 13 05:50:56.932897 kernel: pci c05b:00:00.0: BAR 0 [mem 0xfc0000000-0xfc007ffff 64bit] Oct 13 05:50:56.650607 systemd-resolved[252]: Clock change detected. Flushing caches. Oct 13 05:50:56.657690 kernel: hv_vmbus: registering driver hv_netvsc Oct 13 05:50:56.657705 systemd-journald[204]: Time jumped backwards, rotating. 
Oct 13 05:50:56.666109 kernel: pci_bus c05b:00: busn_res: [bus 00-ff] end is updated to 00 Oct 13 05:50:56.666320 kernel: pci c05b:00:00.0: BAR 0 [mem 0xfc0000000-0xfc007ffff 64bit]: assigned Oct 13 05:50:56.669755 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Oct 13 05:50:56.669922 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Oct 13 05:50:56.672195 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Oct 13 05:50:56.678196 kernel: hv_netvsc f8615163-0000-1000-2000-002248409b6b (unnamed net_device) (uninitialized): VF slot 1 added Oct 13 05:50:56.697189 kernel: nvme nvme0: pci function c05b:00:00.0 Oct 13 05:50:56.697385 kernel: nvme c05b:00:00.0: enabling device (0000 -> 0002) Oct 13 05:50:56.702264 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#136 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Oct 13 05:50:56.723207 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#170 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Oct 13 05:50:56.858328 kernel: nvme nvme0: 2/0/0 default/read/poll queues Oct 13 05:50:56.864187 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Oct 13 05:50:57.110199 kernel: nvme nvme0: using unchecked data buffer Oct 13 05:50:57.258829 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - MSFT NVMe Accelerator v1.0 EFI-SYSTEM. Oct 13 05:50:57.305450 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - MSFT NVMe Accelerator v1.0 USR-A. Oct 13 05:50:57.308200 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - MSFT NVMe Accelerator v1.0 USR-A. Oct 13 05:50:57.315515 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Oct 13 05:50:57.336878 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - MSFT NVMe Accelerator v1.0 OEM. Oct 13 05:50:57.347008 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - MSFT NVMe Accelerator v1.0 ROOT. 
Oct 13 05:50:57.347983 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Oct 13 05:50:57.352456 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 13 05:50:57.354304 systemd[1]: Reached target remote-fs.target - Remote File Systems. Oct 13 05:50:57.359889 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Oct 13 05:50:57.372293 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Oct 13 05:50:57.384190 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Oct 13 05:50:57.387700 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Oct 13 05:50:57.393194 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Oct 13 05:50:57.697285 kernel: hv_pci 00000001-7870-47b5-b203-907d12ca697e: PCI VMBus probing: Using version 0x10004 Oct 13 05:50:57.697510 kernel: hv_pci 00000001-7870-47b5-b203-907d12ca697e: PCI host bridge to bus 7870:00 Oct 13 05:50:57.700364 kernel: pci_bus 7870:00: root bus resource [mem 0xfc2000000-0xfc4007fff window] Oct 13 05:50:57.702070 kernel: pci_bus 7870:00: No busn resource found for root bus, will use [bus 00-ff] Oct 13 05:50:57.707343 kernel: pci 7870:00:00.0: [1414:00ba] type 00 class 0x020000 PCIe Endpoint Oct 13 05:50:57.711236 kernel: pci 7870:00:00.0: BAR 0 [mem 0xfc2000000-0xfc3ffffff 64bit pref] Oct 13 05:50:57.716634 kernel: pci 7870:00:00.0: BAR 4 [mem 0xfc4000000-0xfc4007fff 64bit pref] Oct 13 05:50:57.716659 kernel: pci 7870:00:00.0: enabling Extended Tags Oct 13 05:50:57.733387 kernel: pci_bus 7870:00: busn_res: [bus 00-ff] end is updated to 00 Oct 13 05:50:57.733606 kernel: pci 7870:00:00.0: BAR 0 [mem 0xfc2000000-0xfc3ffffff 64bit pref]: assigned Oct 13 05:50:57.736682 kernel: pci 7870:00:00.0: BAR 4 [mem 0xfc4000000-0xfc4007fff 64bit pref]: assigned Oct 13 05:50:57.741677 kernel: mana 7870:00:00.0: enabling device (0000 -> 0002) Oct 13 05:50:57.751188 kernel: mana 7870:00:00.0: Microsoft Azure Network Adapter protocol 
version: 0.1.1 Oct 13 05:50:57.754518 kernel: hv_netvsc f8615163-0000-1000-2000-002248409b6b eth0: VF registering: eth1 Oct 13 05:50:57.754695 kernel: mana 7870:00:00.0 eth1: joined to eth0 Oct 13 05:50:57.759192 kernel: mana 7870:00:00.0 enP30832s1: renamed from eth1 Oct 13 05:50:58.399590 disk-uuid[670]: The operation has completed successfully. Oct 13 05:50:58.401878 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Oct 13 05:50:58.457357 systemd[1]: disk-uuid.service: Deactivated successfully. Oct 13 05:50:58.457452 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Oct 13 05:50:58.493275 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Oct 13 05:50:58.504364 sh[711]: Success Oct 13 05:50:58.530430 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Oct 13 05:50:58.530494 kernel: device-mapper: uevent: version 1.0.3 Oct 13 05:50:58.530926 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Oct 13 05:50:58.541185 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Oct 13 05:50:58.732946 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Oct 13 05:50:58.737747 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Oct 13 05:50:58.752468 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Oct 13 05:50:58.768208 kernel: BTRFS: device fsid c8746500-26f5-4ec1-9da8-aef51ec7db92 devid 1 transid 41 /dev/mapper/usr (254:0) scanned by mount (724) Oct 13 05:50:58.775190 kernel: BTRFS info (device dm-0): first mount of filesystem c8746500-26f5-4ec1-9da8-aef51ec7db92 Oct 13 05:50:58.775223 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Oct 13 05:50:59.029795 kernel: BTRFS info (device dm-0): enabling ssd optimizations Oct 13 05:50:59.029909 kernel: BTRFS info (device dm-0): disabling log replay at mount time Oct 13 05:50:59.031341 kernel: BTRFS info (device dm-0): enabling free space tree Oct 13 05:50:59.055390 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Oct 13 05:50:59.058066 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Oct 13 05:50:59.061053 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Oct 13 05:50:59.061707 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Oct 13 05:50:59.070422 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Oct 13 05:50:59.089196 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:5) scanned by mount (747) Oct 13 05:50:59.092189 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 1cd10441-4b32-40b7-b370-b928e4bc90dd Oct 13 05:50:59.095201 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Oct 13 05:50:59.131738 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Oct 13 05:50:59.131775 kernel: BTRFS info (device nvme0n1p6): turning on async discard Oct 13 05:50:59.134198 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Oct 13 05:50:59.141191 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 1cd10441-4b32-40b7-b370-b928e4bc90dd Oct 13 05:50:59.142275 systemd[1]: Finished ignition-setup.service - Ignition (setup). Oct 13 05:50:59.148340 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Oct 13 05:50:59.162388 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Oct 13 05:50:59.166942 systemd[1]: Starting systemd-networkd.service - Network Configuration... Oct 13 05:50:59.200592 systemd-networkd[893]: lo: Link UP Oct 13 05:50:59.200600 systemd-networkd[893]: lo: Gained carrier Oct 13 05:50:59.201600 systemd-networkd[893]: Enumeration completed Oct 13 05:50:59.207714 kernel: mana 7870:00:00.0 enP30832s1: Configured vPort 0 PD 18 DB 16 Oct 13 05:50:59.201975 systemd-networkd[893]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 13 05:50:59.216573 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64 Oct 13 05:50:59.216715 kernel: hv_netvsc f8615163-0000-1000-2000-002248409b6b eth0: Data path switched to VF: enP30832s1 Oct 13 05:50:59.201978 systemd-networkd[893]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Oct 13 05:50:59.203253 systemd[1]: Started systemd-networkd.service - Network Configuration. Oct 13 05:50:59.209604 systemd[1]: Reached target network.target - Network. Oct 13 05:50:59.214423 systemd-networkd[893]: enP30832s1: Link UP Oct 13 05:50:59.214496 systemd-networkd[893]: eth0: Link UP Oct 13 05:50:59.214582 systemd-networkd[893]: eth0: Gained carrier Oct 13 05:50:59.214594 systemd-networkd[893]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 13 05:50:59.220317 systemd-networkd[893]: enP30832s1: Gained carrier Oct 13 05:50:59.231218 systemd-networkd[893]: eth0: DHCPv4 address 10.200.4.24/24, gateway 10.200.4.1 acquired from 168.63.129.16 Oct 13 05:50:59.914614 ignition[876]: Ignition 2.22.0 Oct 13 05:50:59.914626 ignition[876]: Stage: fetch-offline Oct 13 05:50:59.916423 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Oct 13 05:50:59.914738 ignition[876]: no configs at "/usr/lib/ignition/base.d" Oct 13 05:50:59.920327 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Oct 13 05:50:59.914745 ignition[876]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Oct 13 05:50:59.914839 ignition[876]: parsed url from cmdline: "" Oct 13 05:50:59.914841 ignition[876]: no config URL provided Oct 13 05:50:59.914847 ignition[876]: reading system config file "/usr/lib/ignition/user.ign" Oct 13 05:50:59.914853 ignition[876]: no config at "/usr/lib/ignition/user.ign" Oct 13 05:50:59.914857 ignition[876]: failed to fetch config: resource requires networking Oct 13 05:50:59.915181 ignition[876]: Ignition finished successfully Oct 13 05:50:59.948616 ignition[903]: Ignition 2.22.0 Oct 13 05:50:59.948626 ignition[903]: Stage: fetch Oct 13 05:50:59.948832 ignition[903]: no configs at "/usr/lib/ignition/base.d" Oct 13 05:50:59.948839 ignition[903]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Oct 13 05:50:59.948909 ignition[903]: parsed url from cmdline: "" Oct 13 05:50:59.948912 ignition[903]: no config URL provided Oct 13 05:50:59.948917 ignition[903]: reading system config file "/usr/lib/ignition/user.ign" Oct 13 05:50:59.948923 ignition[903]: no config at "/usr/lib/ignition/user.ign" Oct 13 05:50:59.948943 ignition[903]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Oct 13 05:50:59.999390 ignition[903]: GET result: OK Oct 13 05:50:59.999489 ignition[903]: config has been read from IMDS userdata Oct 13 05:50:59.999533 ignition[903]: parsing config with SHA512: 4681160df9ea083df316d32c990adf7573c9e9b95efdf6d8ec550a3c22a4f948e20da1db869792cf44c7df65aafdb21ad00fdda01ebc86806ed1554535aed3ca Oct 13 05:51:00.006380 unknown[903]: fetched base config from "system" Oct 13 05:51:00.006389 unknown[903]: fetched base config from "system" Oct 13 05:51:00.006731 ignition[903]: fetch: fetch complete Oct 13 05:51:00.006394 unknown[903]: fetched user config from "azure" Oct 13 05:51:00.006736 ignition[903]: fetch: fetch passed Oct 13 05:51:00.009115 systemd[1]: Finished 
ignition-fetch.service - Ignition (fetch). Oct 13 05:51:00.006777 ignition[903]: Ignition finished successfully Oct 13 05:51:00.013140 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Oct 13 05:51:00.046128 ignition[910]: Ignition 2.22.0 Oct 13 05:51:00.046140 ignition[910]: Stage: kargs Oct 13 05:51:00.046375 ignition[910]: no configs at "/usr/lib/ignition/base.d" Oct 13 05:51:00.049104 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Oct 13 05:51:00.046384 ignition[910]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Oct 13 05:51:00.053595 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Oct 13 05:51:00.047222 ignition[910]: kargs: kargs passed Oct 13 05:51:00.047260 ignition[910]: Ignition finished successfully Oct 13 05:51:00.096042 ignition[917]: Ignition 2.22.0 Oct 13 05:51:00.096053 ignition[917]: Stage: disks Oct 13 05:51:00.098459 systemd[1]: Finished ignition-disks.service - Ignition (disks). Oct 13 05:51:00.096301 ignition[917]: no configs at "/usr/lib/ignition/base.d" Oct 13 05:51:00.096309 ignition[917]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Oct 13 05:51:00.097160 ignition[917]: disks: disks passed Oct 13 05:51:00.097212 ignition[917]: Ignition finished successfully Oct 13 05:51:00.109443 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Oct 13 05:51:00.111200 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Oct 13 05:51:00.116219 systemd[1]: Reached target local-fs.target - Local File Systems. Oct 13 05:51:00.116453 systemd[1]: Reached target sysinit.target - System Initialization. Oct 13 05:51:00.116473 systemd[1]: Reached target basic.target - Basic System. Oct 13 05:51:00.122241 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
Oct 13 05:51:00.189351 systemd-fsck[925]: ROOT: clean, 15/7326000 files, 477845/7359488 blocks Oct 13 05:51:00.192984 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Oct 13 05:51:00.198358 systemd[1]: Mounting sysroot.mount - /sysroot... Oct 13 05:51:01.181396 systemd-networkd[893]: eth0: Gained IPv6LL Oct 13 05:51:01.841184 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 8b520359-9763-45f3-b7f7-db1e9fbc640d r/w with ordered data mode. Quota mode: none. Oct 13 05:51:01.841573 systemd[1]: Mounted sysroot.mount - /sysroot. Oct 13 05:51:01.845729 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Oct 13 05:51:01.872159 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Oct 13 05:51:01.889374 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Oct 13 05:51:01.899182 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:5) scanned by mount (934) Oct 13 05:51:01.902186 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 1cd10441-4b32-40b7-b370-b928e4bc90dd Oct 13 05:51:01.902224 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Oct 13 05:51:01.902415 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Oct 13 05:51:01.910183 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Oct 13 05:51:01.910211 kernel: BTRFS info (device nvme0n1p6): turning on async discard Oct 13 05:51:01.910222 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Oct 13 05:51:01.914300 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Oct 13 05:51:01.914336 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Oct 13 05:51:01.923730 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Oct 13 05:51:01.926319 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Oct 13 05:51:01.930817 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Oct 13 05:51:02.406412 coreos-metadata[936]: Oct 13 05:51:02.406 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Oct 13 05:51:02.418064 coreos-metadata[936]: Oct 13 05:51:02.417 INFO Fetch successful Oct 13 05:51:02.419446 coreos-metadata[936]: Oct 13 05:51:02.418 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Oct 13 05:51:02.424823 coreos-metadata[936]: Oct 13 05:51:02.424 INFO Fetch successful Oct 13 05:51:02.449223 coreos-metadata[936]: Oct 13 05:51:02.449 INFO wrote hostname ci-4459.1.0-a-4938a72943 to /sysroot/etc/hostname Oct 13 05:51:02.451079 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Oct 13 05:51:02.780773 initrd-setup-root[965]: cut: /sysroot/etc/passwd: No such file or directory Oct 13 05:51:02.807585 initrd-setup-root[972]: cut: /sysroot/etc/group: No such file or directory Oct 13 05:51:02.837415 initrd-setup-root[979]: cut: /sysroot/etc/shadow: No such file or directory Oct 13 05:51:02.852836 initrd-setup-root[986]: cut: /sysroot/etc/gshadow: No such file or directory Oct 13 05:51:03.808683 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Oct 13 05:51:03.811818 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Oct 13 05:51:03.826397 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Oct 13 05:51:03.837478 systemd[1]: sysroot-oem.mount: Deactivated successfully. Oct 13 05:51:03.840276 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 1cd10441-4b32-40b7-b370-b928e4bc90dd Oct 13 05:51:03.861903 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Oct 13 05:51:03.869825 ignition[1053]: INFO : Ignition 2.22.0 Oct 13 05:51:03.869825 ignition[1053]: INFO : Stage: mount Oct 13 05:51:03.874810 ignition[1053]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 13 05:51:03.874810 ignition[1053]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Oct 13 05:51:03.874810 ignition[1053]: INFO : mount: mount passed Oct 13 05:51:03.874810 ignition[1053]: INFO : Ignition finished successfully Oct 13 05:51:03.872329 systemd[1]: Finished ignition-mount.service - Ignition (mount). Oct 13 05:51:03.874567 systemd[1]: Starting ignition-files.service - Ignition (files)... Oct 13 05:51:03.887045 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Oct 13 05:51:03.914186 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:5) scanned by mount (1065) Oct 13 05:51:03.914220 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 1cd10441-4b32-40b7-b370-b928e4bc90dd Oct 13 05:51:03.916221 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Oct 13 05:51:03.921438 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Oct 13 05:51:03.921546 kernel: BTRFS info (device nvme0n1p6): turning on async discard Oct 13 05:51:03.922656 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Oct 13 05:51:03.924428 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Oct 13 05:51:03.952094 ignition[1082]: INFO : Ignition 2.22.0 Oct 13 05:51:03.952094 ignition[1082]: INFO : Stage: files Oct 13 05:51:03.957225 ignition[1082]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 13 05:51:03.957225 ignition[1082]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Oct 13 05:51:03.957225 ignition[1082]: DEBUG : files: compiled without relabeling support, skipping Oct 13 05:51:03.967259 ignition[1082]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Oct 13 05:51:03.967259 ignition[1082]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Oct 13 05:51:04.009210 ignition[1082]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Oct 13 05:51:04.013235 ignition[1082]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Oct 13 05:51:04.013235 ignition[1082]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Oct 13 05:51:04.009588 unknown[1082]: wrote ssh authorized keys file for user: core Oct 13 05:51:04.054379 ignition[1082]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Oct 13 05:51:04.057561 ignition[1082]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Oct 13 05:51:04.098650 ignition[1082]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Oct 13 05:51:04.195002 ignition[1082]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Oct 13 05:51:04.200264 ignition[1082]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Oct 13 05:51:04.200264 ignition[1082]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" 
Oct 13 05:51:04.200264 ignition[1082]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Oct 13 05:51:04.200264 ignition[1082]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Oct 13 05:51:04.200264 ignition[1082]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Oct 13 05:51:04.200264 ignition[1082]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Oct 13 05:51:04.200264 ignition[1082]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Oct 13 05:51:04.200264 ignition[1082]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Oct 13 05:51:04.230222 ignition[1082]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Oct 13 05:51:04.230222 ignition[1082]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Oct 13 05:51:04.230222 ignition[1082]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Oct 13 05:51:04.230222 ignition[1082]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Oct 13 05:51:04.230222 ignition[1082]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Oct 13 05:51:04.230222 ignition[1082]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET 
https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Oct 13 05:51:04.453567 ignition[1082]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Oct 13 05:51:04.644334 ignition[1082]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Oct 13 05:51:04.644334 ignition[1082]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Oct 13 05:51:04.675693 ignition[1082]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Oct 13 05:51:04.687523 ignition[1082]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Oct 13 05:51:04.687523 ignition[1082]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Oct 13 05:51:04.687523 ignition[1082]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Oct 13 05:51:04.687523 ignition[1082]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Oct 13 05:51:04.703765 ignition[1082]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Oct 13 05:51:04.703765 ignition[1082]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Oct 13 05:51:04.703765 ignition[1082]: INFO : files: files passed Oct 13 05:51:04.703765 ignition[1082]: INFO : Ignition finished successfully Oct 13 05:51:04.691870 systemd[1]: Finished ignition-files.service - Ignition (files). Oct 13 05:51:04.698770 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Oct 13 05:51:04.711199 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... 
Oct 13 05:51:04.723537 systemd[1]: ignition-quench.service: Deactivated successfully.
Oct 13 05:51:04.740711 initrd-setup-root-after-ignition[1110]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Oct 13 05:51:04.740711 initrd-setup-root-after-ignition[1110]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Oct 13 05:51:04.723635 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Oct 13 05:51:04.745186 initrd-setup-root-after-ignition[1114]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Oct 13 05:51:04.729382 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Oct 13 05:51:04.732714 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Oct 13 05:51:04.737275 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Oct 13 05:51:04.793475 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Oct 13 05:51:04.793578 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Oct 13 05:51:04.798596 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Oct 13 05:51:04.804585 systemd[1]: Reached target initrd.target - Initrd Default Target.
Oct 13 05:51:04.807697 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Oct 13 05:51:04.810012 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Oct 13 05:51:04.825778 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Oct 13 05:51:04.830430 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Oct 13 05:51:04.846025 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Oct 13 05:51:04.846612 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 13 05:51:04.847551 systemd[1]: Stopped target timers.target - Timer Units.
Oct 13 05:51:04.847926 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Oct 13 05:51:04.848027 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Oct 13 05:51:04.848653 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Oct 13 05:51:04.848997 systemd[1]: Stopped target basic.target - Basic System.
Oct 13 05:51:04.858340 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Oct 13 05:51:04.863322 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Oct 13 05:51:04.866538 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Oct 13 05:51:04.871327 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Oct 13 05:51:04.872120 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Oct 13 05:51:04.872506 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Oct 13 05:51:04.873138 systemd[1]: Stopped target sysinit.target - System Initialization.
Oct 13 05:51:04.873523 systemd[1]: Stopped target local-fs.target - Local File Systems.
Oct 13 05:51:04.873901 systemd[1]: Stopped target swap.target - Swaps.
Oct 13 05:51:04.874208 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Oct 13 05:51:04.874331 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Oct 13 05:51:04.885612 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Oct 13 05:51:04.886019 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 13 05:51:04.886573 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Oct 13 05:51:04.887251 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 13 05:51:04.899051 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Oct 13 05:51:04.899161 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Oct 13 05:51:04.923276 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Oct 13 05:51:04.923883 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Oct 13 05:51:04.927732 systemd[1]: ignition-files.service: Deactivated successfully.
Oct 13 05:51:04.927862 systemd[1]: Stopped ignition-files.service - Ignition (files).
Oct 13 05:51:04.935317 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Oct 13 05:51:04.935458 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Oct 13 05:51:04.941569 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Oct 13 05:51:04.946475 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Oct 13 05:51:04.949535 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Oct 13 05:51:04.949728 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 13 05:51:04.949878 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Oct 13 05:51:04.949961 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Oct 13 05:51:04.956245 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Oct 13 05:51:04.968480 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Oct 13 05:51:04.988098 ignition[1135]: INFO : Ignition 2.22.0
Oct 13 05:51:04.988098 ignition[1135]: INFO : Stage: umount
Oct 13 05:51:05.000393 ignition[1135]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 13 05:51:05.000393 ignition[1135]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Oct 13 05:51:05.000393 ignition[1135]: INFO : umount: umount passed
Oct 13 05:51:05.000393 ignition[1135]: INFO : Ignition finished successfully
Oct 13 05:51:04.990584 systemd[1]: ignition-mount.service: Deactivated successfully.
Oct 13 05:51:04.990680 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Oct 13 05:51:04.991548 systemd[1]: ignition-disks.service: Deactivated successfully.
Oct 13 05:51:04.991627 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Oct 13 05:51:04.992426 systemd[1]: ignition-kargs.service: Deactivated successfully.
Oct 13 05:51:04.992461 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Oct 13 05:51:04.992589 systemd[1]: ignition-fetch.service: Deactivated successfully.
Oct 13 05:51:04.992615 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Oct 13 05:51:04.992649 systemd[1]: Stopped target network.target - Network.
Oct 13 05:51:04.992891 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Oct 13 05:51:04.992920 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Oct 13 05:51:04.992957 systemd[1]: Stopped target paths.target - Path Units.
Oct 13 05:51:04.993153 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Oct 13 05:51:04.994416 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 13 05:51:05.004123 systemd[1]: Stopped target slices.target - Slice Units.
Oct 13 05:51:05.022240 systemd[1]: Stopped target sockets.target - Socket Units.
Oct 13 05:51:05.024758 systemd[1]: iscsid.socket: Deactivated successfully.
Oct 13 05:51:05.024799 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Oct 13 05:51:05.029245 systemd[1]: iscsiuio.socket: Deactivated successfully.
Oct 13 05:51:05.029272 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Oct 13 05:51:05.032628 systemd[1]: ignition-setup.service: Deactivated successfully.
Oct 13 05:51:05.032679 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Oct 13 05:51:05.036318 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Oct 13 05:51:05.036366 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Oct 13 05:51:05.038320 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Oct 13 05:51:05.042289 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Oct 13 05:51:05.044215 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Oct 13 05:51:05.044728 systemd[1]: sysroot-boot.service: Deactivated successfully.
Oct 13 05:51:05.044808 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Oct 13 05:51:05.052755 systemd[1]: systemd-resolved.service: Deactivated successfully.
Oct 13 05:51:05.052857 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Oct 13 05:51:05.058089 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Oct 13 05:51:05.058296 systemd[1]: systemd-networkd.service: Deactivated successfully.
Oct 13 05:51:05.058383 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Oct 13 05:51:05.061973 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Oct 13 05:51:05.063318 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Oct 13 05:51:05.064829 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Oct 13 05:51:05.130245 kernel: hv_netvsc f8615163-0000-1000-2000-002248409b6b eth0: Data path switched from VF: enP30832s1
Oct 13 05:51:05.130422 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64
Oct 13 05:51:05.064864 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Oct 13 05:51:05.068354 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Oct 13 05:51:05.068403 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Oct 13 05:51:05.073522 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Oct 13 05:51:05.080218 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Oct 13 05:51:05.080268 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Oct 13 05:51:05.084293 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Oct 13 05:51:05.084341 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Oct 13 05:51:05.089432 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Oct 13 05:51:05.089472 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Oct 13 05:51:05.090009 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Oct 13 05:51:05.090043 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Oct 13 05:51:05.090656 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 13 05:51:05.103410 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Oct 13 05:51:05.103469 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Oct 13 05:51:05.109517 systemd[1]: systemd-udevd.service: Deactivated successfully.
Oct 13 05:51:05.115358 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 13 05:51:05.118551 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Oct 13 05:51:05.118613 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Oct 13 05:51:05.122273 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Oct 13 05:51:05.122308 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 13 05:51:05.128063 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Oct 13 05:51:05.128191 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Oct 13 05:51:05.136507 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Oct 13 05:51:05.136553 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Oct 13 05:51:05.136818 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Oct 13 05:51:05.136853 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 13 05:51:05.139293 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Oct 13 05:51:05.139380 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Oct 13 05:51:05.139427 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Oct 13 05:51:05.147538 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Oct 13 05:51:05.147579 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 13 05:51:05.158194 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Oct 13 05:51:05.158239 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Oct 13 05:51:05.165063 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Oct 13 05:51:05.165102 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 13 05:51:05.168236 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 13 05:51:05.168274 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 13 05:51:05.173890 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Oct 13 05:51:05.173941 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully.
Oct 13 05:51:05.173971 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Oct 13 05:51:05.174002 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Oct 13 05:51:05.174336 systemd[1]: network-cleanup.service: Deactivated successfully.
Oct 13 05:51:05.174436 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Oct 13 05:51:05.177393 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Oct 13 05:51:05.177457 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Oct 13 05:51:05.180761 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Oct 13 05:51:05.187281 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Oct 13 05:51:05.278891 systemd[1]: Switching root.
Oct 13 05:51:05.391529 systemd-journald[204]: Journal stopped
Oct 13 05:51:11.256888 systemd-journald[204]: Received SIGTERM from PID 1 (systemd).
Oct 13 05:51:11.256923 kernel: SELinux: policy capability network_peer_controls=1
Oct 13 05:51:11.256936 kernel: SELinux: policy capability open_perms=1
Oct 13 05:51:11.256946 kernel: SELinux: policy capability extended_socket_class=1
Oct 13 05:51:11.256955 kernel: SELinux: policy capability always_check_network=0
Oct 13 05:51:11.256964 kernel: SELinux: policy capability cgroup_seclabel=1
Oct 13 05:51:11.256974 kernel: SELinux: policy capability nnp_nosuid_transition=1
Oct 13 05:51:11.256985 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Oct 13 05:51:11.256994 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Oct 13 05:51:11.257005 kernel: SELinux: policy capability userspace_initial_context=0
Oct 13 05:51:11.257015 kernel: audit: type=1403 audit(1760334666.512:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Oct 13 05:51:11.257026 systemd[1]: Successfully loaded SELinux policy in 170.006ms.
Oct 13 05:51:11.257037 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 4.233ms.
Oct 13 05:51:11.257049 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Oct 13 05:51:11.257061 systemd[1]: Detected virtualization microsoft.
Oct 13 05:51:11.257072 systemd[1]: Detected architecture x86-64.
Oct 13 05:51:11.257081 systemd[1]: Detected first boot.
Oct 13 05:51:11.257092 systemd[1]: Hostname set to .
Oct 13 05:51:11.257104 systemd[1]: Initializing machine ID from random generator.
Oct 13 05:51:11.257115 zram_generator::config[1177]: No configuration found.
Oct 13 05:51:11.257126 kernel: Guest personality initialized and is inactive
Oct 13 05:51:11.257135 kernel: VMCI host device registered (name=vmci, major=10, minor=124)
Oct 13 05:51:11.257145 kernel: Initialized host personality
Oct 13 05:51:11.257154 kernel: NET: Registered PF_VSOCK protocol family
Oct 13 05:51:11.257164 systemd[1]: Populated /etc with preset unit settings.
Oct 13 05:51:11.257759 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Oct 13 05:51:11.257772 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Oct 13 05:51:11.257782 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Oct 13 05:51:11.257792 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Oct 13 05:51:11.257802 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Oct 13 05:51:11.257813 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Oct 13 05:51:11.257822 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Oct 13 05:51:11.257831 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Oct 13 05:51:11.257842 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Oct 13 05:51:11.257852 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Oct 13 05:51:11.257861 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Oct 13 05:51:11.257870 systemd[1]: Created slice user.slice - User and Session Slice.
Oct 13 05:51:11.257879 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 13 05:51:11.257890 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 13 05:51:11.257901 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Oct 13 05:51:11.257914 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Oct 13 05:51:11.257926 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Oct 13 05:51:11.257937 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Oct 13 05:51:11.257947 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Oct 13 05:51:11.257956 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 13 05:51:11.257967 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Oct 13 05:51:11.257977 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Oct 13 05:51:11.257987 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Oct 13 05:51:11.257999 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Oct 13 05:51:11.258009 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Oct 13 05:51:11.258019 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 13 05:51:11.258029 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Oct 13 05:51:11.258039 systemd[1]: Reached target slices.target - Slice Units.
Oct 13 05:51:11.258050 systemd[1]: Reached target swap.target - Swaps.
Oct 13 05:51:11.258060 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Oct 13 05:51:11.258072 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Oct 13 05:51:11.258085 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Oct 13 05:51:11.258097 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Oct 13 05:51:11.258108 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Oct 13 05:51:11.258121 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 13 05:51:11.258131 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Oct 13 05:51:11.258143 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Oct 13 05:51:11.258155 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Oct 13 05:51:11.258165 systemd[1]: Mounting media.mount - External Media Directory...
Oct 13 05:51:11.258257 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 13 05:51:11.258268 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Oct 13 05:51:11.258278 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Oct 13 05:51:11.258287 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Oct 13 05:51:11.258298 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Oct 13 05:51:11.258309 systemd[1]: Reached target machines.target - Containers.
Oct 13 05:51:11.258322 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Oct 13 05:51:11.258332 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 13 05:51:11.258343 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Oct 13 05:51:11.258353 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Oct 13 05:51:11.258363 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 13 05:51:11.258373 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Oct 13 05:51:11.258383 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 13 05:51:11.258393 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Oct 13 05:51:11.258406 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 13 05:51:11.258434 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Oct 13 05:51:11.258444 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Oct 13 05:51:11.258454 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Oct 13 05:51:11.258464 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Oct 13 05:51:11.258473 systemd[1]: Stopped systemd-fsck-usr.service.
Oct 13 05:51:11.258484 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Oct 13 05:51:11.258494 systemd[1]: Starting systemd-journald.service - Journal Service...
Oct 13 05:51:11.258507 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Oct 13 05:51:11.258517 kernel: loop: module loaded
Oct 13 05:51:11.258528 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Oct 13 05:51:11.258562 systemd-journald[1270]: Collecting audit messages is disabled.
Oct 13 05:51:11.258591 systemd-journald[1270]: Journal started
Oct 13 05:51:11.258616 systemd-journald[1270]: Runtime Journal (/run/log/journal/612eaf1575434e60bfeb99d0b680d81f) is 8M, max 158.9M, 150.9M free.
Oct 13 05:51:10.756317 systemd[1]: Queued start job for default target multi-user.target.
Oct 13 05:51:10.764785 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Oct 13 05:51:10.765207 systemd[1]: systemd-journald.service: Deactivated successfully.
Oct 13 05:51:11.273285 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Oct 13 05:51:11.273340 kernel: fuse: init (API version 7.41)
Oct 13 05:51:11.273356 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Oct 13 05:51:11.284111 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Oct 13 05:51:11.286187 systemd[1]: verity-setup.service: Deactivated successfully.
Oct 13 05:51:11.289255 systemd[1]: Stopped verity-setup.service.
Oct 13 05:51:11.295192 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 13 05:51:11.301186 systemd[1]: Started systemd-journald.service - Journal Service.
Oct 13 05:51:11.304274 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Oct 13 05:51:11.305962 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Oct 13 05:51:11.309400 systemd[1]: Mounted media.mount - External Media Directory.
Oct 13 05:51:11.311975 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Oct 13 05:51:11.315306 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Oct 13 05:51:11.316805 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Oct 13 05:51:11.319399 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Oct 13 05:51:11.322488 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 13 05:51:11.324749 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct 13 05:51:11.324899 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Oct 13 05:51:11.326995 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 13 05:51:11.327146 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 13 05:51:11.329080 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 13 05:51:11.329236 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 13 05:51:11.330925 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Oct 13 05:51:11.331069 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Oct 13 05:51:11.334473 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 13 05:51:11.334630 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 13 05:51:11.337371 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Oct 13 05:51:11.340343 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Oct 13 05:51:11.346326 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Oct 13 05:51:11.360981 systemd[1]: Reached target network-pre.target - Preparation for Network.
Oct 13 05:51:11.368305 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Oct 13 05:51:11.375401 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Oct 13 05:51:11.376195 kernel: ACPI: bus type drm_connector registered
Oct 13 05:51:11.379159 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Oct 13 05:51:11.379208 systemd[1]: Reached target local-fs.target - Local File Systems.
Oct 13 05:51:11.381665 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Oct 13 05:51:11.388233 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Oct 13 05:51:11.404301 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 13 05:51:11.406060 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Oct 13 05:51:11.418275 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Oct 13 05:51:11.420672 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 13 05:51:11.423567 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Oct 13 05:51:11.427068 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Oct 13 05:51:11.427905 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Oct 13 05:51:11.434321 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Oct 13 05:51:11.439294 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Oct 13 05:51:11.443819 systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct 13 05:51:11.445228 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Oct 13 05:51:11.448254 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Oct 13 05:51:11.453395 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Oct 13 05:51:11.456286 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Oct 13 05:51:11.463322 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 13 05:51:11.480405 systemd-journald[1270]: Time spent on flushing to /var/log/journal/612eaf1575434e60bfeb99d0b680d81f is 51.439ms for 996 entries.
Oct 13 05:51:11.480405 systemd-journald[1270]: System Journal (/var/log/journal/612eaf1575434e60bfeb99d0b680d81f) is 11.9M, max 2.6G, 2.6G free.
Oct 13 05:51:11.638294 systemd-journald[1270]: Received client request to flush runtime journal.
Oct 13 05:51:11.638338 systemd-journald[1270]: /var/log/journal/612eaf1575434e60bfeb99d0b680d81f/system.journal: Realtime clock jumped backwards relative to last journal entry, rotating.
Oct 13 05:51:11.638368 kernel: loop0: detected capacity change from 0 to 128016
Oct 13 05:51:11.638387 systemd-journald[1270]: Rotating system journal.
Oct 13 05:51:11.537704 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Oct 13 05:51:11.540415 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Oct 13 05:51:11.546346 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Oct 13 05:51:11.567325 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Oct 13 05:51:11.608010 systemd-tmpfiles[1317]: ACLs are not supported, ignoring.
Oct 13 05:51:11.608024 systemd-tmpfiles[1317]: ACLs are not supported, ignoring.
Oct 13 05:51:11.611599 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Oct 13 05:51:11.620100 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Oct 13 05:51:11.639899 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Oct 13 05:51:11.704613 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Oct 13 05:51:11.766640 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Oct 13 05:51:11.965195 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Oct 13 05:51:12.031199 kernel: loop1: detected capacity change from 0 to 110984
Oct 13 05:51:12.153625 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Oct 13 05:51:12.273324 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Oct 13 05:51:12.277322 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Oct 13 05:51:12.296536 systemd-tmpfiles[1340]: ACLs are not supported, ignoring.
Oct 13 05:51:12.296555 systemd-tmpfiles[1340]: ACLs are not supported, ignoring.
Oct 13 05:51:12.299254 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 13 05:51:12.302393 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 13 05:51:12.328796 systemd-udevd[1343]: Using default interface naming scheme 'v255'.
Oct 13 05:51:12.471197 kernel: loop2: detected capacity change from 0 to 27936
Oct 13 05:51:12.770267 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 13 05:51:12.776316 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Oct 13 05:51:12.821322 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Oct 13 05:51:12.850280 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Oct 13 05:51:12.878196 kernel: loop3: detected capacity change from 0 to 229808
Oct 13 05:51:12.917187 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#16 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Oct 13 05:51:12.945191 kernel: loop4: detected capacity change from 0 to 128016
Oct 13 05:51:12.962193 kernel: loop5: detected capacity change from 0 to 110984
Oct 13 05:51:12.978204 kernel: loop6: detected capacity change from 0 to 27936
Oct 13 05:51:12.987582 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Oct 13 05:51:12.993854 kernel: hv_vmbus: registering driver hv_balloon
Oct 13 05:51:12.994013 kernel: hv_vmbus: registering driver hyperv_fb
Oct 13 05:51:13.000213 kernel: loop7: detected capacity change from 0 to 229808
Oct 13 05:51:13.006834 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
Oct 13 05:51:13.006883 kernel: mousedev: PS/2 mouse device common for all mice
Oct 13 05:51:13.033940 kernel: hyperv_fb: Synthvid Version major 3, minor 5
Oct 13 05:51:13.034001 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
Oct 13 05:51:13.037116 kernel: Console: switching to colour dummy device 80x25
Oct 13 05:51:13.037238 (sd-merge)[1392]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'.
Oct 13 05:51:13.039958 (sd-merge)[1392]: Merged extensions into '/usr'.
Oct 13 05:51:13.043961 kernel: Console: switching to colour frame buffer device 128x48
Oct 13 05:51:13.051112 systemd[1]: Reload requested from client PID 1316 ('systemd-sysext') (unit systemd-sysext.service)...
Oct 13 05:51:13.051251 systemd[1]: Reloading...
Oct 13 05:51:13.167197 zram_generator::config[1444]: No configuration found.
Oct 13 05:51:13.243377 systemd-networkd[1352]: lo: Link UP
Oct 13 05:51:13.243389 systemd-networkd[1352]: lo: Gained carrier
Oct 13 05:51:13.244641 systemd-networkd[1352]: Enumeration completed
Oct 13 05:51:13.244965 systemd-networkd[1352]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 13 05:51:13.244976 systemd-networkd[1352]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Oct 13 05:51:13.250207 kernel: mana 7870:00:00.0 enP30832s1: Configured vPort 0 PD 18 DB 16
Oct 13 05:51:13.255209 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64
Oct 13 05:51:13.256921 kernel: hv_netvsc f8615163-0000-1000-2000-002248409b6b eth0: Data path switched to VF: enP30832s1
Oct 13 05:51:13.259994 systemd-networkd[1352]: enP30832s1: Link UP
Oct 13 05:51:13.260076 systemd-networkd[1352]: eth0: Link UP
Oct 13 05:51:13.260080 systemd-networkd[1352]: eth0: Gained carrier
Oct 13 05:51:13.260097 systemd-networkd[1352]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 13 05:51:13.266383 systemd-networkd[1352]: enP30832s1: Gained carrier
Oct 13 05:51:13.276267 systemd-networkd[1352]: eth0: DHCPv4 address 10.200.4.24/24, gateway 10.200.4.1 acquired from 168.63.129.16
Oct 13 05:51:13.408190 kernel: kvm_intel: Using Hyper-V Enlightened VMCS
Oct 13 05:51:13.502825 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - MSFT NVMe Accelerator v1.0 OEM.
Oct 13 05:51:13.504909 systemd[1]: Reloading finished in 452 ms.
Oct 13 05:51:13.524093 systemd[1]: Started systemd-networkd.service - Network Configuration.
Oct 13 05:51:13.527493 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Oct 13 05:51:13.564997 systemd[1]: Starting ensure-sysext.service...
Oct 13 05:51:13.569419 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Oct 13 05:51:13.575496 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Oct 13 05:51:13.580896 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Oct 13 05:51:13.585117 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Oct 13 05:51:13.591312 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 13 05:51:13.613457 systemd[1]: Reload requested from client PID 1516 ('systemctl') (unit ensure-sysext.service)...
Oct 13 05:51:13.613528 systemd[1]: Reloading...
Oct 13 05:51:13.634900 systemd-tmpfiles[1520]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Oct 13 05:51:13.634925 systemd-tmpfiles[1520]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Oct 13 05:51:13.636040 systemd-tmpfiles[1520]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Oct 13 05:51:13.636472 systemd-tmpfiles[1520]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Oct 13 05:51:13.638509 systemd-tmpfiles[1520]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Oct 13 05:51:13.638835 systemd-tmpfiles[1520]: ACLs are not supported, ignoring.
Oct 13 05:51:13.638926 systemd-tmpfiles[1520]: ACLs are not supported, ignoring.
Oct 13 05:51:13.668249 zram_generator::config[1553]: No configuration found.
Oct 13 05:51:13.690012 systemd-tmpfiles[1520]: Detected autofs mount point /boot during canonicalization of boot.
Oct 13 05:51:13.690122 systemd-tmpfiles[1520]: Skipping /boot
Oct 13 05:51:13.696780 systemd-tmpfiles[1520]: Detected autofs mount point /boot during canonicalization of boot.
Oct 13 05:51:13.696790 systemd-tmpfiles[1520]: Skipping /boot
Oct 13 05:51:13.865545 systemd[1]: Reloading finished in 251 ms.
Oct 13 05:51:13.885952 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Oct 13 05:51:13.888751 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Oct 13 05:51:13.890236 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Oct 13 05:51:13.898465 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Oct 13 05:51:13.919308 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Oct 13 05:51:13.924395 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Oct 13 05:51:13.930906 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Oct 13 05:51:13.934324 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Oct 13 05:51:13.939743 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 13 05:51:13.940310 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 13 05:51:13.941747 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 13 05:51:13.945266 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 13 05:51:13.949386 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 13 05:51:13.951865 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 13 05:51:13.951996 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Oct 13 05:51:13.952103 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 13 05:51:13.954068 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 13 05:51:13.954453 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 13 05:51:13.957918 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 13 05:51:13.965419 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 13 05:51:13.967886 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 13 05:51:13.968101 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 13 05:51:13.974250 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 13 05:51:13.974422 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 13 05:51:13.977376 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 13 05:51:13.983391 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 13 05:51:13.989555 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 13 05:51:13.993530 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 13 05:51:13.993658 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Oct 13 05:51:13.993752 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 13 05:51:13.996487 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 13 05:51:13.996643 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 13 05:51:14.006993 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Oct 13 05:51:14.012830 systemd[1]: Finished ensure-sysext.service.
Oct 13 05:51:14.018051 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 13 05:51:14.018225 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 13 05:51:14.025916 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 13 05:51:14.026132 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 13 05:51:14.027981 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 13 05:51:14.034331 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Oct 13 05:51:14.036881 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 13 05:51:14.036926 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Oct 13 05:51:14.036987 systemd[1]: Reached target time-set.target - System Time Set.
Oct 13 05:51:14.039546 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 13 05:51:14.039823 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 13 05:51:14.039975 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 13 05:51:14.044888 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 13 05:51:14.049773 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 13 05:51:14.050223 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 13 05:51:14.052818 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Oct 13 05:51:14.059906 systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct 13 05:51:14.060071 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Oct 13 05:51:14.070561 systemd-resolved[1622]: Positive Trust Anchors:
Oct 13 05:51:14.070578 systemd-resolved[1622]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Oct 13 05:51:14.070610 systemd-resolved[1622]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Oct 13 05:51:14.085328 systemd-resolved[1622]: Using system hostname 'ci-4459.1.0-a-4938a72943'.
Oct 13 05:51:14.086415 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Oct 13 05:51:14.088273 systemd[1]: Reached target network.target - Network.
Oct 13 05:51:14.088670 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Oct 13 05:51:14.137928 augenrules[1662]: No rules
Oct 13 05:51:14.138776 systemd[1]: audit-rules.service: Deactivated successfully.
Oct 13 05:51:14.138974 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Oct 13 05:51:14.169728 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Oct 13 05:51:14.366325 systemd-networkd[1352]: eth0: Gained IPv6LL
Oct 13 05:51:14.368435 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Oct 13 05:51:14.368995 systemd[1]: Reached target network-online.target - Network is Online.
Oct 13 05:51:14.638705 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 13 05:51:16.062251 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Oct 13 05:51:16.066451 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Oct 13 05:51:17.907068 ldconfig[1311]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Oct 13 05:51:17.916466 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Oct 13 05:51:17.919366 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Oct 13 05:51:17.949705 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Oct 13 05:51:17.951393 systemd[1]: Reached target sysinit.target - System Initialization.
Oct 13 05:51:17.954387 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Oct 13 05:51:17.956081 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Oct 13 05:51:17.958048 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Oct 13 05:51:17.960079 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Oct 13 05:51:17.963323 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Oct 13 05:51:17.966236 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Oct 13 05:51:17.969222 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Oct 13 05:51:17.969261 systemd[1]: Reached target paths.target - Path Units.
Oct 13 05:51:17.970495 systemd[1]: Reached target timers.target - Timer Units.
Oct 13 05:51:18.003633 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Oct 13 05:51:18.008296 systemd[1]: Starting docker.socket - Docker Socket for the API...
Oct 13 05:51:18.012758 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Oct 13 05:51:18.014657 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Oct 13 05:51:18.019259 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Oct 13 05:51:18.040648 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Oct 13 05:51:18.043503 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Oct 13 05:51:18.047832 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Oct 13 05:51:18.051944 systemd[1]: Reached target sockets.target - Socket Units.
Oct 13 05:51:18.053419 systemd[1]: Reached target basic.target - Basic System.
Oct 13 05:51:18.056273 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Oct 13 05:51:18.056303 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Oct 13 05:51:18.070055 systemd[1]: Starting chronyd.service - NTP client/server...
Oct 13 05:51:18.074668 systemd[1]: Starting containerd.service - containerd container runtime...
Oct 13 05:51:18.080292 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Oct 13 05:51:18.083646 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Oct 13 05:51:18.089130 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Oct 13 05:51:18.096092 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Oct 13 05:51:18.101359 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Oct 13 05:51:18.103722 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Oct 13 05:51:18.107330 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Oct 13 05:51:18.110315 systemd[1]: hv_fcopy_uio_daemon.service - Hyper-V FCOPY UIO daemon was skipped because of an unmet condition check (ConditionPathExists=/sys/bus/vmbus/devices/eb765408-105f-49b6-b4aa-c123b64d17d4/uio).
Oct 13 05:51:18.113336 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon.
Oct 13 05:51:18.116013 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss).
Oct 13 05:51:18.118315 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 13 05:51:18.124824 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Oct 13 05:51:18.129555 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Oct 13 05:51:18.133689 jq[1684]: false
Oct 13 05:51:18.138019 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Oct 13 05:51:18.143487 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Oct 13 05:51:18.150112 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Oct 13 05:51:18.156691 systemd[1]: Starting systemd-logind.service - User Login Management...
Oct 13 05:51:18.161636 KVP[1690]: KVP starting; pid is:1690
Oct 13 05:51:18.162453 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Oct 13 05:51:18.163207 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Oct 13 05:51:18.168370 systemd[1]: Starting update-engine.service - Update Engine...
Oct 13 05:51:18.174203 kernel: hv_utils: KVP IC version 4.0
Oct 13 05:51:18.174256 extend-filesystems[1688]: Found /dev/nvme0n1p6
Oct 13 05:51:18.172062 KVP[1690]: KVP LIC Version: 3.1
Oct 13 05:51:18.176789 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Oct 13 05:51:18.183283 google_oslogin_nss_cache[1689]: oslogin_cache_refresh[1689]: Refreshing passwd entry cache
Oct 13 05:51:18.183100 oslogin_cache_refresh[1689]: Refreshing passwd entry cache
Oct 13 05:51:18.182609 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Oct 13 05:51:18.185367 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Oct 13 05:51:18.185579 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Oct 13 05:51:18.189079 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Oct 13 05:51:18.189298 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Oct 13 05:51:18.198188 extend-filesystems[1688]: Found /dev/nvme0n1p9
Oct 13 05:51:18.213290 extend-filesystems[1688]: Checking size of /dev/nvme0n1p9
Oct 13 05:51:18.218819 jq[1703]: true
Oct 13 05:51:18.225554 oslogin_cache_refresh[1689]: Failure getting users, quitting
Oct 13 05:51:18.227693 google_oslogin_nss_cache[1689]: oslogin_cache_refresh[1689]: Failure getting users, quitting
Oct 13 05:51:18.227693 google_oslogin_nss_cache[1689]: oslogin_cache_refresh[1689]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Oct 13 05:51:18.227693 google_oslogin_nss_cache[1689]: oslogin_cache_refresh[1689]: Refreshing group entry cache
Oct 13 05:51:18.225572 oslogin_cache_refresh[1689]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Oct 13 05:51:18.225614 oslogin_cache_refresh[1689]: Refreshing group entry cache
Oct 13 05:51:18.230005 chronyd[1679]: chronyd version 4.7 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +NTS +SECHASH +IPV6 -DEBUG)
Oct 13 05:51:18.231330 systemd[1]: motdgen.service: Deactivated successfully.
Oct 13 05:51:18.231545 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Oct 13 05:51:18.235362 google_oslogin_nss_cache[1689]: oslogin_cache_refresh[1689]: Failure getting groups, quitting
Oct 13 05:51:18.235414 google_oslogin_nss_cache[1689]: oslogin_cache_refresh[1689]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Oct 13 05:51:18.235360 oslogin_cache_refresh[1689]: Failure getting groups, quitting
Oct 13 05:51:18.235371 oslogin_cache_refresh[1689]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Oct 13 05:51:18.238724 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Oct 13 05:51:18.238964 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Oct 13 05:51:18.241805 (ntainerd)[1726]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Oct 13 05:51:18.249097 jq[1729]: true
Oct 13 05:51:18.268279 chronyd[1679]: Timezone right/UTC failed leap second check, ignoring
Oct 13 05:51:18.268526 systemd[1]: Started chronyd.service - NTP client/server.
Oct 13 05:51:18.268440 chronyd[1679]: Loaded seccomp filter (level 2)
Oct 13 05:51:18.275314 extend-filesystems[1688]: Old size kept for /dev/nvme0n1p9
Oct 13 05:51:18.273961 systemd[1]: extend-filesystems.service: Deactivated successfully.
Oct 13 05:51:18.274742 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Oct 13 05:51:18.288849 update_engine[1701]: I20251013 05:51:18.287520 1701 main.cc:92] Flatcar Update Engine starting
Oct 13 05:51:18.295055 tar[1709]: linux-amd64/LICENSE
Oct 13 05:51:18.368332 tar[1709]: linux-amd64/helm
Oct 13 05:51:18.404780 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Oct 13 05:51:18.418531 bash[1756]: Updated "/home/core/.ssh/authorized_keys"
Oct 13 05:51:18.421245 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Oct 13 05:51:18.424547 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Oct 13 05:51:18.431802 systemd-logind[1700]: New seat seat0.
Oct 13 05:51:18.434920 systemd-logind[1700]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Oct 13 05:51:18.435644 systemd[1]: Started systemd-logind.service - User Login Management.
Oct 13 05:51:18.549096 dbus-daemon[1682]: [system] SELinux support is enabled
Oct 13 05:51:18.550094 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Oct 13 05:51:18.555511 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Oct 13 05:51:18.555546 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Oct 13 05:51:18.563337 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Oct 13 05:51:18.563361 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Oct 13 05:51:18.568926 update_engine[1701]: I20251013 05:51:18.568194 1701 update_check_scheduler.cc:74] Next update check in 7m20s
Oct 13 05:51:18.569284 systemd[1]: Started update-engine.service - Update Engine.
Oct 13 05:51:18.573525 dbus-daemon[1682]: [system] Successfully activated service 'org.freedesktop.systemd1'
Oct 13 05:51:18.576408 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Oct 13 05:51:18.651660 coreos-metadata[1681]: Oct 13 05:51:18.651 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Oct 13 05:51:18.654298 coreos-metadata[1681]: Oct 13 05:51:18.654 INFO Fetch successful
Oct 13 05:51:18.654529 coreos-metadata[1681]: Oct 13 05:51:18.654 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1
Oct 13 05:51:18.658255 coreos-metadata[1681]: Oct 13 05:51:18.658 INFO Fetch successful
Oct 13 05:51:18.658658 coreos-metadata[1681]: Oct 13 05:51:18.658 INFO Fetching http://168.63.129.16/machine/9e764b7f-747d-441b-a35f-52b1c501d045/0f4a5587%2D122e%2D4275%2Dbd0d%2Db7d45c10d781.%5Fci%2D4459.1.0%2Da%2D4938a72943?comp=config&type=sharedConfig&incarnation=1: Attempt #1
Oct 13 05:51:18.660136 coreos-metadata[1681]: Oct 13 05:51:18.660 INFO Fetch successful
Oct 13 05:51:18.663188 coreos-metadata[1681]: Oct 13 05:51:18.662 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1
Oct 13 05:51:18.670205 coreos-metadata[1681]: Oct 13 05:51:18.670 INFO Fetch successful
Oct 13 05:51:18.686306 sshd_keygen[1730]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Oct 13 05:51:18.727212 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Oct 13 05:51:18.736443 systemd[1]: Starting issuegen.service - Generate /run/issue...
Oct 13 05:51:18.741672 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent...
Oct 13 05:51:18.746380 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Oct 13 05:51:18.758664 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Oct 13 05:51:18.782825 systemd[1]: issuegen.service: Deactivated successfully.
Oct 13 05:51:18.783039 systemd[1]: Finished issuegen.service - Generate /run/issue.
Oct 13 05:51:18.792629 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Oct 13 05:51:18.813543 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent.
Oct 13 05:51:18.835402 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Oct 13 05:51:18.842255 systemd[1]: Started getty@tty1.service - Getty on tty1.
Oct 13 05:51:18.845591 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Oct 13 05:51:18.848870 systemd[1]: Reached target getty.target - Login Prompts.
Oct 13 05:51:18.869191 locksmithd[1785]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Oct 13 05:51:19.032677 tar[1709]: linux-amd64/README.md
Oct 13 05:51:19.052088 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Oct 13 05:51:19.242025 containerd[1726]: time="2025-10-13T05:51:19Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Oct 13 05:51:19.243566 containerd[1726]: time="2025-10-13T05:51:19.242835774Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5
Oct 13 05:51:19.252254 containerd[1726]: time="2025-10-13T05:51:19.252217333Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="8.087µs"
Oct 13 05:51:19.252254 containerd[1726]: time="2025-10-13T05:51:19.252244179Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Oct 13 05:51:19.252368 containerd[1726]: time="2025-10-13T05:51:19.252262484Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Oct 13 05:51:19.252399 containerd[1726]: time="2025-10-13T05:51:19.252390426Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Oct 13 05:51:19.252420 containerd[1726]: time="2025-10-13T05:51:19.252404062Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Oct 13 05:51:19.252442 containerd[1726]: time="2025-10-13T05:51:19.252426196Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Oct 13 05:51:19.253873 containerd[1726]: time="2025-10-13T05:51:19.252472815Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Oct 13 05:51:19.253873 containerd[1726]: time="2025-10-13T05:51:19.252485645Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Oct 13 05:51:19.253873 containerd[1726]: time="2025-10-13T05:51:19.252700336Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Oct 13 05:51:19.253873 containerd[1726]: time="2025-10-13T05:51:19.252713984Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Oct 13 05:51:19.253873 containerd[1726]: time="2025-10-13T05:51:19.252725246Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Oct 13 05:51:19.253873 containerd[1726]: time="2025-10-13T05:51:19.252733454Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Oct 13 05:51:19.253873 containerd[1726]: time="2025-10-13T05:51:19.252797530Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Oct 13 05:51:19.253873 containerd[1726]: time="2025-10-13T05:51:19.252954982Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Oct 13 05:51:19.253873 containerd[1726]: time="2025-10-13T05:51:19.252976671Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Oct 13 05:51:19.253873 containerd[1726]: time="2025-10-13T05:51:19.252986129Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Oct 13 05:51:19.253873 containerd[1726]: time="2025-10-13T05:51:19.253011926Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Oct 13 05:51:19.254076 containerd[1726]: time="2025-10-13T05:51:19.253246557Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Oct 13 05:51:19.254076 containerd[1726]: time="2025-10-13T05:51:19.253287509Z" level=info msg="metadata content store policy set" policy=shared
Oct 13 05:51:19.265449 containerd[1726]: time="2025-10-13T05:51:19.265415371Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Oct 13 05:51:19.265520 containerd[1726]: time="2025-10-13T05:51:19.265470814Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Oct 13 05:51:19.265520 containerd[1726]: time="2025-10-13T05:51:19.265493400Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Oct 13 05:51:19.265520 containerd[1726]: time="2025-10-13T05:51:19.265507020Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Oct 13 05:51:19.265595 containerd[1726]: time="2025-10-13T05:51:19.265519468Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Oct 13 05:51:19.265595 containerd[1726]: time="2025-10-13T05:51:19.265531580Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Oct 13 05:51:19.265595 containerd[1726]: time="2025-10-13T05:51:19.265548337Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Oct 13 05:51:19.265595 containerd[1726]: time="2025-10-13T05:51:19.265561097Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Oct 13 05:51:19.265595 containerd[1726]: time="2025-10-13T05:51:19.265573572Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Oct 13 05:51:19.265595 containerd[1726]: time="2025-10-13T05:51:19.265589994Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Oct 13 05:51:19.265713 containerd[1726]: time="2025-10-13T05:51:19.265600986Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Oct 13 05:51:19.265713 containerd[1726]: time="2025-10-13T05:51:19.265614154Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Oct 13 05:51:19.265752 containerd[1726]: time="2025-10-13T05:51:19.265722834Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Oct 13 05:51:19.265752 containerd[1726]: time="2025-10-13T05:51:19.265740153Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Oct 13 05:51:19.265790 containerd[1726]: time="2025-10-13T05:51:19.265754844Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Oct 13 05:51:19.265790 containerd[1726]: time="2025-10-13T05:51:19.265766669Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Oct 13 05:51:19.265790 containerd[1726]: time="2025-10-13T05:51:19.265778073Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Oct 13 05:51:19.265847 containerd[1726]: time="2025-10-13T05:51:19.265788795Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Oct 13 05:51:19.265847 containerd[1726]: time="2025-10-13T05:51:19.265800389Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Oct 13 05:51:19.265847 containerd[1726]: time="2025-10-13T05:51:19.265811462Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Oct 13 05:51:19.265847 containerd[1726]: time="2025-10-13T05:51:19.265822846Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Oct 13 05:51:19.265847 containerd[1726]: time="2025-10-13T05:51:19.265833340Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Oct 13 05:51:19.265847 containerd[1726]: time="2025-10-13T05:51:19.265843429Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Oct 13 05:51:19.265967 containerd[1726]: time="2025-10-13T05:51:19.265908489Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Oct 13 05:51:19.265967 containerd[1726]: time="2025-10-13T05:51:19.265925700Z" level=info msg="Start snapshots syncer"
Oct 13 05:51:19.265967 containerd[1726]: time="2025-10-13T05:51:19.265948921Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Oct 13 05:51:19.266688 containerd[1726]: time="2025-10-13T05:51:19.266637321Z" level=info msg="starting cri plugin"
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Oct 13 05:51:19.266833 containerd[1726]: time="2025-10-13T05:51:19.266716707Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Oct 13 05:51:19.267020 containerd[1726]: time="2025-10-13T05:51:19.266991416Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Oct 13 05:51:19.267146 containerd[1726]: time="2025-10-13T05:51:19.267126467Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Oct 13 05:51:19.267196 containerd[1726]: time="2025-10-13T05:51:19.267181205Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Oct 13 05:51:19.267219 containerd[1726]: time="2025-10-13T05:51:19.267202353Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Oct 13 05:51:19.267241 containerd[1726]: time="2025-10-13T05:51:19.267220320Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Oct 13 05:51:19.267261 containerd[1726]: time="2025-10-13T05:51:19.267248727Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Oct 13 05:51:19.268973 containerd[1726]: time="2025-10-13T05:51:19.267262261Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Oct 13 05:51:19.268973 containerd[1726]: time="2025-10-13T05:51:19.267279108Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Oct 13 05:51:19.268973 containerd[1726]: time="2025-10-13T05:51:19.267319199Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Oct 13 05:51:19.268973 containerd[1726]: time="2025-10-13T05:51:19.267334866Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Oct 13 05:51:19.268973 containerd[1726]: time="2025-10-13T05:51:19.267350400Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Oct 13 05:51:19.268973 containerd[1726]: time="2025-10-13T05:51:19.267399390Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Oct 13 05:51:19.268973 containerd[1726]: time="2025-10-13T05:51:19.267419730Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Oct 13 05:51:19.268973 containerd[1726]: time="2025-10-13T05:51:19.267433785Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Oct 13 05:51:19.268973 containerd[1726]: time="2025-10-13T05:51:19.267448374Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Oct 13 05:51:19.268973 containerd[1726]: time="2025-10-13T05:51:19.267457863Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Oct 13 05:51:19.268973 containerd[1726]: time="2025-10-13T05:51:19.267481760Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Oct 13 05:51:19.268973 containerd[1726]: time="2025-10-13T05:51:19.267496546Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Oct 13 05:51:19.268973 containerd[1726]: time="2025-10-13T05:51:19.267514250Z" level=info msg="runtime interface created" Oct 13 05:51:19.268973 containerd[1726]: time="2025-10-13T05:51:19.267523567Z" level=info msg="created NRI interface" Oct 13 05:51:19.268973 containerd[1726]: time="2025-10-13T05:51:19.267532462Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Oct 13 05:51:19.269346 containerd[1726]: time="2025-10-13T05:51:19.267562675Z" level=info msg="Connect containerd service" Oct 13 05:51:19.269346 containerd[1726]: time="2025-10-13T05:51:19.267596179Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Oct 13 05:51:19.270227 
containerd[1726]: time="2025-10-13T05:51:19.269591318Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 13 05:51:19.528206 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 13 05:51:19.532117 (kubelet)[1841]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 13 05:51:20.038226 containerd[1726]: time="2025-10-13T05:51:20.038184233Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Oct 13 05:51:20.038363 containerd[1726]: time="2025-10-13T05:51:20.038246714Z" level=info msg=serving... address=/run/containerd/containerd.sock Oct 13 05:51:20.038363 containerd[1726]: time="2025-10-13T05:51:20.038263021Z" level=info msg="Start subscribing containerd event" Oct 13 05:51:20.038363 containerd[1726]: time="2025-10-13T05:51:20.038290845Z" level=info msg="Start recovering state" Oct 13 05:51:20.038427 containerd[1726]: time="2025-10-13T05:51:20.038382940Z" level=info msg="Start event monitor" Oct 13 05:51:20.038427 containerd[1726]: time="2025-10-13T05:51:20.038395137Z" level=info msg="Start cni network conf syncer for default" Oct 13 05:51:20.038427 containerd[1726]: time="2025-10-13T05:51:20.038402668Z" level=info msg="Start streaming server" Oct 13 05:51:20.038427 containerd[1726]: time="2025-10-13T05:51:20.038414632Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Oct 13 05:51:20.038427 containerd[1726]: time="2025-10-13T05:51:20.038422208Z" level=info msg="runtime interface starting up..." Oct 13 05:51:20.038530 containerd[1726]: time="2025-10-13T05:51:20.038428642Z" level=info msg="starting plugins..." 
Oct 13 05:51:20.038530 containerd[1726]: time="2025-10-13T05:51:20.038441429Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Oct 13 05:51:20.038678 systemd[1]: Started containerd.service - containerd container runtime. Oct 13 05:51:20.041154 containerd[1726]: time="2025-10-13T05:51:20.041120023Z" level=info msg="containerd successfully booted in 0.799577s" Oct 13 05:51:20.041511 systemd[1]: Reached target multi-user.target - Multi-User System. Oct 13 05:51:20.046243 systemd[1]: Startup finished in 3.271s (kernel) + 10.906s (initrd) + 13.702s (userspace) = 27.880s. Oct 13 05:51:20.125588 kubelet[1841]: E1013 05:51:20.125534 1841 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 13 05:51:20.127464 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 13 05:51:20.127740 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 13 05:51:20.128111 systemd[1]: kubelet.service: Consumed 962ms CPU time, 265.5M memory peak. Oct 13 05:51:20.562736 login[1819]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Oct 13 05:51:20.565797 login[1820]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Oct 13 05:51:20.577709 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Oct 13 05:51:20.580732 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Oct 13 05:51:20.587691 systemd-logind[1700]: New session 2 of user core. Oct 13 05:51:20.591134 systemd-logind[1700]: New session 1 of user core. Oct 13 05:51:20.612588 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. 
Oct 13 05:51:20.618407 systemd[1]: Starting user@500.service - User Manager for UID 500... Oct 13 05:51:20.633333 waagent[1815]: 2025-10-13T05:51:20.632537Z INFO Daemon Daemon Azure Linux Agent Version: 2.12.0.4 Oct 13 05:51:20.633333 waagent[1815]: 2025-10-13T05:51:20.632949Z INFO Daemon Daemon OS: flatcar 4459.1.0 Oct 13 05:51:20.633333 waagent[1815]: 2025-10-13T05:51:20.633032Z INFO Daemon Daemon Python: 3.11.13 Oct 13 05:51:20.633932 waagent[1815]: 2025-10-13T05:51:20.633896Z INFO Daemon Daemon Run daemon Oct 13 05:51:20.634212 waagent[1815]: 2025-10-13T05:51:20.634189Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4459.1.0' Oct 13 05:51:20.634511 waagent[1815]: 2025-10-13T05:51:20.634492Z INFO Daemon Daemon Using waagent for provisioning Oct 13 05:51:20.634951 waagent[1815]: 2025-10-13T05:51:20.634931Z INFO Daemon Daemon Activate resource disk Oct 13 05:51:20.635525 waagent[1815]: 2025-10-13T05:51:20.635506Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Oct 13 05:51:20.637382 waagent[1815]: 2025-10-13T05:51:20.637350Z INFO Daemon Daemon Found device: None Oct 13 05:51:20.637579 waagent[1815]: 2025-10-13T05:51:20.637560Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Oct 13 05:51:20.638496 waagent[1815]: 2025-10-13T05:51:20.638477Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Oct 13 05:51:20.639055 waagent[1815]: 2025-10-13T05:51:20.639026Z INFO Daemon Daemon Clean protocol and wireserver endpoint Oct 13 05:51:20.639381 waagent[1815]: 2025-10-13T05:51:20.639360Z INFO Daemon Daemon Running default provisioning handler Oct 13 05:51:20.644558 (systemd)[1862]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Oct 13 05:51:20.650779 waagent[1815]: 2025-10-13T05:51:20.650416Z INFO 
Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Oct 13 05:51:20.651120 waagent[1815]: 2025-10-13T05:51:20.651090Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Oct 13 05:51:20.651388 waagent[1815]: 2025-10-13T05:51:20.651366Z INFO Daemon Daemon cloud-init is enabled: False Oct 13 05:51:20.651706 waagent[1815]: 2025-10-13T05:51:20.651688Z INFO Daemon Daemon Copying ovf-env.xml Oct 13 05:51:20.669744 systemd-logind[1700]: New session c1 of user core. Oct 13 05:51:20.772198 waagent[1815]: 2025-10-13T05:51:20.771311Z INFO Daemon Daemon Successfully mounted dvd Oct 13 05:51:20.794330 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Oct 13 05:51:20.795019 waagent[1815]: 2025-10-13T05:51:20.794978Z INFO Daemon Daemon Detect protocol endpoint Oct 13 05:51:20.795709 waagent[1815]: 2025-10-13T05:51:20.795670Z INFO Daemon Daemon Clean protocol and wireserver endpoint Oct 13 05:51:20.796009 waagent[1815]: 2025-10-13T05:51:20.795988Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Oct 13 05:51:20.796251 waagent[1815]: 2025-10-13T05:51:20.796236Z INFO Daemon Daemon Test for route to 168.63.129.16 Oct 13 05:51:20.796618 waagent[1815]: 2025-10-13T05:51:20.796598Z INFO Daemon Daemon Route to 168.63.129.16 exists Oct 13 05:51:20.796805 waagent[1815]: 2025-10-13T05:51:20.796788Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Oct 13 05:51:20.807089 waagent[1815]: 2025-10-13T05:51:20.807049Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Oct 13 05:51:20.808834 waagent[1815]: 2025-10-13T05:51:20.807804Z INFO Daemon Daemon Wire protocol version:2012-11-30 Oct 13 05:51:20.808834 waagent[1815]: 2025-10-13T05:51:20.808017Z INFO Daemon Daemon Server preferred version:2015-04-05 Oct 13 05:51:20.928303 waagent[1815]: 2025-10-13T05:51:20.927832Z INFO Daemon Daemon Initializing goal state during protocol detection Oct 13 05:51:20.932799 waagent[1815]: 2025-10-13T05:51:20.930479Z INFO Daemon Daemon Forcing an update of the goal state. Oct 13 05:51:20.935792 waagent[1815]: 2025-10-13T05:51:20.935756Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Oct 13 05:51:20.954627 systemd[1862]: Queued start job for default target default.target. Oct 13 05:51:20.955381 waagent[1815]: 2025-10-13T05:51:20.955353Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.177 Oct 13 05:51:20.959682 waagent[1815]: 2025-10-13T05:51:20.956365Z INFO Daemon Oct 13 05:51:20.959682 waagent[1815]: 2025-10-13T05:51:20.956986Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 01ec6ba9-a80f-4a24-b38a-c017440b1c1a eTag: 7236404445775138233 source: Fabric] Oct 13 05:51:20.959682 waagent[1815]: 2025-10-13T05:51:20.957527Z INFO Daemon The vmSettings originated via Fabric; will ignore them. 
Oct 13 05:51:20.959682 waagent[1815]: 2025-10-13T05:51:20.957913Z INFO Daemon Oct 13 05:51:20.959682 waagent[1815]: 2025-10-13T05:51:20.958139Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Oct 13 05:51:20.962713 systemd[1862]: Created slice app.slice - User Application Slice. Oct 13 05:51:20.962932 systemd[1862]: Reached target paths.target - Paths. Oct 13 05:51:20.962974 systemd[1862]: Reached target timers.target - Timers. Oct 13 05:51:20.966274 systemd[1862]: Starting dbus.socket - D-Bus User Message Bus Socket... Oct 13 05:51:20.969188 waagent[1815]: 2025-10-13T05:51:20.968228Z INFO Daemon Daemon Downloading artifacts profile blob Oct 13 05:51:20.977261 systemd[1862]: Listening on dbus.socket - D-Bus User Message Bus Socket. Oct 13 05:51:20.977353 systemd[1862]: Reached target sockets.target - Sockets. Oct 13 05:51:20.977387 systemd[1862]: Reached target basic.target - Basic System. Oct 13 05:51:20.977460 systemd[1862]: Reached target default.target - Main User Target. Oct 13 05:51:20.977484 systemd[1862]: Startup finished in 302ms. Oct 13 05:51:20.977503 systemd[1]: Started user@500.service - User Manager for UID 500. Oct 13 05:51:20.981297 systemd[1]: Started session-1.scope - Session 1 of User core. Oct 13 05:51:20.982019 systemd[1]: Started session-2.scope - Session 2 of User core. Oct 13 05:51:21.037412 waagent[1815]: 2025-10-13T05:51:21.036793Z INFO Daemon Downloaded certificate {'thumbprint': '08EE4DDBC483C784F04044607E080EE0240DD0E7', 'hasPrivateKey': True} Oct 13 05:51:21.037412 waagent[1815]: 2025-10-13T05:51:21.037378Z INFO Daemon Fetch goal state completed Oct 13 05:51:21.043789 waagent[1815]: 2025-10-13T05:51:21.043743Z INFO Daemon Daemon Starting provisioning Oct 13 05:51:21.044538 waagent[1815]: 2025-10-13T05:51:21.044285Z INFO Daemon Daemon Handle ovf-env.xml. 
Oct 13 05:51:21.046162 waagent[1815]: 2025-10-13T05:51:21.044548Z INFO Daemon Daemon Set hostname [ci-4459.1.0-a-4938a72943] Oct 13 05:51:21.081980 waagent[1815]: 2025-10-13T05:51:21.081939Z INFO Daemon Daemon Publish hostname [ci-4459.1.0-a-4938a72943] Oct 13 05:51:21.088823 waagent[1815]: 2025-10-13T05:51:21.082672Z INFO Daemon Daemon Examine /proc/net/route for primary interface Oct 13 05:51:21.088823 waagent[1815]: 2025-10-13T05:51:21.083138Z INFO Daemon Daemon Primary interface is [eth0] Oct 13 05:51:21.091485 systemd-networkd[1352]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 13 05:51:21.091493 systemd-networkd[1352]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 13 05:51:21.091519 systemd-networkd[1352]: eth0: DHCP lease lost Oct 13 05:51:21.092328 waagent[1815]: 2025-10-13T05:51:21.092280Z INFO Daemon Daemon Create user account if not exists Oct 13 05:51:21.094337 waagent[1815]: 2025-10-13T05:51:21.093553Z INFO Daemon Daemon User core already exists, skip useradd Oct 13 05:51:21.094337 waagent[1815]: 2025-10-13T05:51:21.093793Z INFO Daemon Daemon Configure sudoer Oct 13 05:51:21.098632 waagent[1815]: 2025-10-13T05:51:21.098596Z INFO Daemon Daemon Configure sshd Oct 13 05:51:21.102229 systemd-networkd[1352]: eth0: DHCPv4 address 10.200.4.24/24, gateway 10.200.4.1 acquired from 168.63.129.16 Oct 13 05:51:21.102594 waagent[1815]: 2025-10-13T05:51:21.102555Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Oct 13 05:51:21.108199 waagent[1815]: 2025-10-13T05:51:21.103088Z INFO Daemon Daemon Deploy ssh public key. 
Oct 13 05:51:22.208888 waagent[1815]: 2025-10-13T05:51:22.208836Z INFO Daemon Daemon Provisioning complete Oct 13 05:51:22.217445 waagent[1815]: 2025-10-13T05:51:22.217411Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Oct 13 05:51:22.223082 waagent[1815]: 2025-10-13T05:51:22.218059Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Oct 13 05:51:22.223082 waagent[1815]: 2025-10-13T05:51:22.218277Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.12.0.4 is the most current agent Oct 13 05:51:22.321701 waagent[1910]: 2025-10-13T05:51:22.321631Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.12.0.4) Oct 13 05:51:22.322005 waagent[1910]: 2025-10-13T05:51:22.321738Z INFO ExtHandler ExtHandler OS: flatcar 4459.1.0 Oct 13 05:51:22.322005 waagent[1910]: 2025-10-13T05:51:22.321777Z INFO ExtHandler ExtHandler Python: 3.11.13 Oct 13 05:51:22.322005 waagent[1910]: 2025-10-13T05:51:22.321815Z INFO ExtHandler ExtHandler CPU Arch: x86_64 Oct 13 05:51:22.374146 waagent[1910]: 2025-10-13T05:51:22.374087Z INFO ExtHandler ExtHandler Distro: flatcar-4459.1.0; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.13; Arch: x86_64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.22.0; Oct 13 05:51:22.374321 waagent[1910]: 2025-10-13T05:51:22.374277Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Oct 13 05:51:22.374378 waagent[1910]: 2025-10-13T05:51:22.374351Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Oct 13 05:51:22.378964 waagent[1910]: 2025-10-13T05:51:22.378909Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Oct 13 05:51:22.385396 waagent[1910]: 2025-10-13T05:51:22.385363Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.177 Oct 13 05:51:22.385720 waagent[1910]: 2025-10-13T05:51:22.385692Z INFO ExtHandler Oct 13 05:51:22.385764 waagent[1910]: 
2025-10-13T05:51:22.385746Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 6165c2a1-587f-4fc0-b7c9-6a262a698c98 eTag: 7236404445775138233 source: Fabric] Oct 13 05:51:22.385966 waagent[1910]: 2025-10-13T05:51:22.385943Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. Oct 13 05:51:22.386318 waagent[1910]: 2025-10-13T05:51:22.386289Z INFO ExtHandler Oct 13 05:51:22.386356 waagent[1910]: 2025-10-13T05:51:22.386333Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Oct 13 05:51:22.390850 waagent[1910]: 2025-10-13T05:51:22.390818Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Oct 13 05:51:22.446062 waagent[1910]: 2025-10-13T05:51:22.446009Z INFO ExtHandler Downloaded certificate {'thumbprint': '08EE4DDBC483C784F04044607E080EE0240DD0E7', 'hasPrivateKey': True} Oct 13 05:51:22.446456 waagent[1910]: 2025-10-13T05:51:22.446426Z INFO ExtHandler Fetch goal state completed Oct 13 05:51:22.456859 waagent[1910]: 2025-10-13T05:51:22.456812Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.4.2 1 Jul 2025 (Library: OpenSSL 3.4.2 1 Jul 2025) Oct 13 05:51:22.460933 waagent[1910]: 2025-10-13T05:51:22.460848Z INFO ExtHandler ExtHandler WALinuxAgent-2.12.0.4 running as process 1910 Oct 13 05:51:22.461006 waagent[1910]: 2025-10-13T05:51:22.460978Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Oct 13 05:51:22.461291 waagent[1910]: 2025-10-13T05:51:22.461267Z INFO ExtHandler ExtHandler ******** AutoUpdate.UpdateToLatestVersion is set to False, not processing the operation ******** Oct 13 05:51:22.462353 waagent[1910]: 2025-10-13T05:51:22.462323Z INFO ExtHandler ExtHandler [CGI] Cgroup monitoring is not supported on ['flatcar', '4459.1.0', '', 'Flatcar Container Linux by Kinvolk'] Oct 13 05:51:22.462659 waagent[1910]: 2025-10-13T05:51:22.462633Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case distro: ['flatcar', 
'4459.1.0', '', 'Flatcar Container Linux by Kinvolk'] went from supported to unsupported Oct 13 05:51:22.462761 waagent[1910]: 2025-10-13T05:51:22.462740Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False Oct 13 05:51:22.463133 waagent[1910]: 2025-10-13T05:51:22.463110Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Oct 13 05:51:22.500500 waagent[1910]: 2025-10-13T05:51:22.500470Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Oct 13 05:51:22.500642 waagent[1910]: 2025-10-13T05:51:22.500620Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Oct 13 05:51:22.506094 waagent[1910]: 2025-10-13T05:51:22.505971Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Oct 13 05:51:22.511145 systemd[1]: Reload requested from client PID 1925 ('systemctl') (unit waagent.service)... Oct 13 05:51:22.511192 systemd[1]: Reloading... Oct 13 05:51:22.589207 zram_generator::config[1964]: No configuration found. Oct 13 05:51:22.766096 systemd[1]: Reloading finished in 254 ms. Oct 13 05:51:22.790453 waagent[1910]: 2025-10-13T05:51:22.790386Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Oct 13 05:51:22.790641 waagent[1910]: 2025-10-13T05:51:22.790620Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Oct 13 05:51:22.819241 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#181 cmd 0x4a status: scsi 0x0 srb 0x20 hv 0xc0000001 Oct 13 05:51:23.115057 waagent[1910]: 2025-10-13T05:51:23.114934Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Oct 13 05:51:23.115326 waagent[1910]: 2025-10-13T05:51:23.115296Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. 
All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True] Oct 13 05:51:23.115971 waagent[1910]: 2025-10-13T05:51:23.115937Z INFO ExtHandler ExtHandler Starting env monitor service. Oct 13 05:51:23.116307 waagent[1910]: 2025-10-13T05:51:23.116280Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Oct 13 05:51:23.116365 waagent[1910]: 2025-10-13T05:51:23.116334Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Oct 13 05:51:23.116528 waagent[1910]: 2025-10-13T05:51:23.116508Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Oct 13 05:51:23.116703 waagent[1910]: 2025-10-13T05:51:23.116678Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Oct 13 05:51:23.116779 waagent[1910]: 2025-10-13T05:51:23.116750Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Oct 13 05:51:23.116995 waagent[1910]: 2025-10-13T05:51:23.116972Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Oct 13 05:51:23.117117 waagent[1910]: 2025-10-13T05:51:23.117092Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Oct 13 05:51:23.117258 waagent[1910]: 2025-10-13T05:51:23.117232Z INFO EnvHandler ExtHandler Configure routes Oct 13 05:51:23.117312 waagent[1910]: 2025-10-13T05:51:23.117294Z INFO EnvHandler ExtHandler Gateway:None Oct 13 05:51:23.117354 waagent[1910]: 2025-10-13T05:51:23.117335Z INFO EnvHandler ExtHandler Routes:None Oct 13 05:51:23.117504 waagent[1910]: 2025-10-13T05:51:23.117485Z INFO ExtHandler ExtHandler Start Extension Telemetry service. 
Oct 13 05:51:23.117764 waagent[1910]: 2025-10-13T05:51:23.117705Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Oct 13 05:51:23.118030 waagent[1910]: 2025-10-13T05:51:23.117996Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Oct 13 05:51:23.118313 waagent[1910]: 2025-10-13T05:51:23.118261Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Oct 13 05:51:23.119488 waagent[1910]: 2025-10-13T05:51:23.119457Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Oct 13 05:51:23.119488 waagent[1910]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Oct 13 05:51:23.119488 waagent[1910]: eth0 00000000 0104C80A 0003 0 0 1024 00000000 0 0 0 Oct 13 05:51:23.119488 waagent[1910]: eth0 0004C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Oct 13 05:51:23.119488 waagent[1910]: eth0 0104C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Oct 13 05:51:23.119488 waagent[1910]: eth0 10813FA8 0104C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Oct 13 05:51:23.119488 waagent[1910]: eth0 FEA9FEA9 0104C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Oct 13 05:51:23.125620 waagent[1910]: 2025-10-13T05:51:23.125579Z INFO ExtHandler ExtHandler Oct 13 05:51:23.125684 waagent[1910]: 2025-10-13T05:51:23.125648Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 9e5004c2-5101-4172-9535-af6fcc43ab18 correlation 094834b4-bef0-46dc-8de5-c150d805ba73 created: 2025-10-13T05:50:18.974297Z] Oct 13 05:51:23.125981 waagent[1910]: 2025-10-13T05:51:23.125953Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. 
Oct 13 05:51:23.126540 waagent[1910]: 2025-10-13T05:51:23.126513Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 0 ms] Oct 13 05:51:23.182921 waagent[1910]: 2025-10-13T05:51:23.182874Z WARNING ExtHandler ExtHandler Failed to get firewall packets: 'iptables -w -t security -L OUTPUT --zero OUTPUT -nxv' failed: 2 (iptables v1.8.11 (nf_tables): Illegal option `--numeric' with this command Oct 13 05:51:23.182921 waagent[1910]: Try `iptables -h' or 'iptables --help' for more information.) Oct 13 05:51:23.183838 waagent[1910]: 2025-10-13T05:51:23.183493Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.12.0.4 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 5CEB979B-245C-47F3-AB61-C94242FC34BC;DroppedPackets: -1;UpdateGSErrors: 0;AutoUpdate: 0;UpdateMode: SelfUpdate;] Oct 13 05:51:23.227897 waagent[1910]: 2025-10-13T05:51:23.227843Z INFO MonitorHandler ExtHandler Network interfaces: Oct 13 05:51:23.227897 waagent[1910]: Executing ['ip', '-a', '-o', 'link']: Oct 13 05:51:23.227897 waagent[1910]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Oct 13 05:51:23.227897 waagent[1910]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:40:9b:6b brd ff:ff:ff:ff:ff:ff\ alias Network Device Oct 13 05:51:23.227897 waagent[1910]: 3: enP30832s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:40:9b:6b brd ff:ff:ff:ff:ff:ff\ altname enP30832p0s0 Oct 13 05:51:23.227897 waagent[1910]: Executing ['ip', '-4', '-a', '-o', 'address']: Oct 13 05:51:23.227897 waagent[1910]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Oct 13 05:51:23.227897 waagent[1910]: 2: eth0 inet 10.200.4.24/24 metric 1024 brd 10.200.4.255 scope global eth0\ valid_lft forever preferred_lft forever Oct 13 05:51:23.227897 waagent[1910]: Executing 
['ip', '-6', '-a', '-o', 'address']: Oct 13 05:51:23.227897 waagent[1910]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Oct 13 05:51:23.227897 waagent[1910]: 2: eth0 inet6 fe80::222:48ff:fe40:9b6b/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Oct 13 05:51:23.274291 waagent[1910]: 2025-10-13T05:51:23.274240Z INFO EnvHandler ExtHandler Created firewall rules for the Azure Fabric: Oct 13 05:51:23.274291 waagent[1910]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Oct 13 05:51:23.274291 waagent[1910]: pkts bytes target prot opt in out source destination Oct 13 05:51:23.274291 waagent[1910]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Oct 13 05:51:23.274291 waagent[1910]: pkts bytes target prot opt in out source destination Oct 13 05:51:23.274291 waagent[1910]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Oct 13 05:51:23.274291 waagent[1910]: pkts bytes target prot opt in out source destination Oct 13 05:51:23.274291 waagent[1910]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Oct 13 05:51:23.274291 waagent[1910]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Oct 13 05:51:23.274291 waagent[1910]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Oct 13 05:51:23.277257 waagent[1910]: 2025-10-13T05:51:23.277207Z INFO EnvHandler ExtHandler Current Firewall rules: Oct 13 05:51:23.277257 waagent[1910]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Oct 13 05:51:23.277257 waagent[1910]: pkts bytes target prot opt in out source destination Oct 13 05:51:23.277257 waagent[1910]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Oct 13 05:51:23.277257 waagent[1910]: pkts bytes target prot opt in out source destination Oct 13 05:51:23.277257 waagent[1910]: Chain OUTPUT (policy ACCEPT 2 packets, 104 bytes) Oct 13 05:51:23.277257 waagent[1910]: pkts bytes target prot opt in out source destination Oct 13 05:51:23.277257 waagent[1910]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 
168.63.129.16 tcp dpt:53 Oct 13 05:51:23.277257 waagent[1910]: 4 595 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Oct 13 05:51:23.277257 waagent[1910]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Oct 13 05:51:30.193531 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Oct 13 05:51:30.194907 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 13 05:51:30.715120 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 13 05:51:30.718355 (kubelet)[2062]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 13 05:51:30.756293 kubelet[2062]: E1013 05:51:30.756256 2062 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 13 05:51:30.759497 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 13 05:51:30.759627 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 13 05:51:30.759996 systemd[1]: kubelet.service: Consumed 139ms CPU time, 108.9M memory peak. Oct 13 05:51:35.160750 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Oct 13 05:51:35.161823 systemd[1]: Started sshd@0-10.200.4.24:22-10.200.16.10:36588.service - OpenSSH per-connection server daemon (10.200.16.10:36588). Oct 13 05:51:35.903361 sshd[2070]: Accepted publickey for core from 10.200.16.10 port 36588 ssh2: RSA SHA256:M+0CUiaS6X9LgLQwLsgIf7RWJ99AxYshiQ2j8fMZKDQ Oct 13 05:51:35.904486 sshd-session[2070]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:51:35.908901 systemd-logind[1700]: New session 3 of user core. 
Oct 13 05:51:35.914324 systemd[1]: Started session-3.scope - Session 3 of User core. Oct 13 05:51:36.430993 systemd[1]: Started sshd@1-10.200.4.24:22-10.200.16.10:36594.service - OpenSSH per-connection server daemon (10.200.16.10:36594). Oct 13 05:51:37.032023 sshd[2076]: Accepted publickey for core from 10.200.16.10 port 36594 ssh2: RSA SHA256:M+0CUiaS6X9LgLQwLsgIf7RWJ99AxYshiQ2j8fMZKDQ Oct 13 05:51:37.033185 sshd-session[2076]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:51:37.037414 systemd-logind[1700]: New session 4 of user core. Oct 13 05:51:37.041323 systemd[1]: Started session-4.scope - Session 4 of User core. Oct 13 05:51:37.461653 sshd[2079]: Connection closed by 10.200.16.10 port 36594 Oct 13 05:51:37.462199 sshd-session[2076]: pam_unix(sshd:session): session closed for user core Oct 13 05:51:37.465291 systemd[1]: sshd@1-10.200.4.24:22-10.200.16.10:36594.service: Deactivated successfully. Oct 13 05:51:37.466788 systemd[1]: session-4.scope: Deactivated successfully. Oct 13 05:51:37.467488 systemd-logind[1700]: Session 4 logged out. Waiting for processes to exit. Oct 13 05:51:37.468585 systemd-logind[1700]: Removed session 4. Oct 13 05:51:37.566897 systemd[1]: Started sshd@2-10.200.4.24:22-10.200.16.10:36606.service - OpenSSH per-connection server daemon (10.200.16.10:36606). Oct 13 05:51:38.168295 sshd[2085]: Accepted publickey for core from 10.200.16.10 port 36606 ssh2: RSA SHA256:M+0CUiaS6X9LgLQwLsgIf7RWJ99AxYshiQ2j8fMZKDQ Oct 13 05:51:38.169441 sshd-session[2085]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:51:38.173241 systemd-logind[1700]: New session 5 of user core. Oct 13 05:51:38.183292 systemd[1]: Started session-5.scope - Session 5 of User core. 
Oct 13 05:51:38.590012 sshd[2088]: Connection closed by 10.200.16.10 port 36606 Oct 13 05:51:38.590575 sshd-session[2085]: pam_unix(sshd:session): session closed for user core Oct 13 05:51:38.593880 systemd[1]: sshd@2-10.200.4.24:22-10.200.16.10:36606.service: Deactivated successfully. Oct 13 05:51:38.595433 systemd[1]: session-5.scope: Deactivated successfully. Oct 13 05:51:38.596115 systemd-logind[1700]: Session 5 logged out. Waiting for processes to exit. Oct 13 05:51:38.597294 systemd-logind[1700]: Removed session 5. Oct 13 05:51:38.699686 systemd[1]: Started sshd@3-10.200.4.24:22-10.200.16.10:36608.service - OpenSSH per-connection server daemon (10.200.16.10:36608). Oct 13 05:51:39.298568 sshd[2094]: Accepted publickey for core from 10.200.16.10 port 36608 ssh2: RSA SHA256:M+0CUiaS6X9LgLQwLsgIf7RWJ99AxYshiQ2j8fMZKDQ Oct 13 05:51:39.299678 sshd-session[2094]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:51:39.303225 systemd-logind[1700]: New session 6 of user core. Oct 13 05:51:39.312296 systemd[1]: Started session-6.scope - Session 6 of User core. Oct 13 05:51:39.722046 sshd[2097]: Connection closed by 10.200.16.10 port 36608 Oct 13 05:51:39.722550 sshd-session[2094]: pam_unix(sshd:session): session closed for user core Oct 13 05:51:39.725245 systemd[1]: sshd@3-10.200.4.24:22-10.200.16.10:36608.service: Deactivated successfully. Oct 13 05:51:39.726794 systemd[1]: session-6.scope: Deactivated successfully. Oct 13 05:51:39.728563 systemd-logind[1700]: Session 6 logged out. Waiting for processes to exit. Oct 13 05:51:39.729352 systemd-logind[1700]: Removed session 6. Oct 13 05:51:39.832777 systemd[1]: Started sshd@4-10.200.4.24:22-10.200.16.10:36624.service - OpenSSH per-connection server daemon (10.200.16.10:36624). 
Oct 13 05:51:40.435111 sshd[2103]: Accepted publickey for core from 10.200.16.10 port 36624 ssh2: RSA SHA256:M+0CUiaS6X9LgLQwLsgIf7RWJ99AxYshiQ2j8fMZKDQ Oct 13 05:51:40.436234 sshd-session[2103]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:51:40.440776 systemd-logind[1700]: New session 7 of user core. Oct 13 05:51:40.451330 systemd[1]: Started session-7.scope - Session 7 of User core. Oct 13 05:51:40.943522 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Oct 13 05:51:40.944916 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 13 05:51:41.333634 sudo[2107]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Oct 13 05:51:41.333875 sudo[2107]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 13 05:51:41.385026 sudo[2107]: pam_unix(sudo:session): session closed for user root Oct 13 05:51:41.490332 sshd[2106]: Connection closed by 10.200.16.10 port 36624 Oct 13 05:51:41.491039 sshd-session[2103]: pam_unix(sshd:session): session closed for user core Oct 13 05:51:41.494355 systemd[1]: sshd@4-10.200.4.24:22-10.200.16.10:36624.service: Deactivated successfully. Oct 13 05:51:41.495886 systemd[1]: session-7.scope: Deactivated successfully. Oct 13 05:51:41.497086 systemd-logind[1700]: Session 7 logged out. Waiting for processes to exit. Oct 13 05:51:41.498491 systemd-logind[1700]: Removed session 7. Oct 13 05:51:41.605958 systemd[1]: Started sshd@5-10.200.4.24:22-10.200.16.10:41580.service - OpenSSH per-connection server daemon (10.200.16.10:41580). 
Oct 13 05:51:42.052476 chronyd[1679]: Selected source PHC0 Oct 13 05:51:42.212838 sshd[2116]: Accepted publickey for core from 10.200.16.10 port 41580 ssh2: RSA SHA256:M+0CUiaS6X9LgLQwLsgIf7RWJ99AxYshiQ2j8fMZKDQ Oct 13 05:51:42.213962 sshd-session[2116]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:51:42.218222 systemd-logind[1700]: New session 8 of user core. Oct 13 05:51:42.224311 systemd[1]: Started session-8.scope - Session 8 of User core. Oct 13 05:51:42.544273 sudo[2121]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Oct 13 05:51:42.544502 sudo[2121]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 13 05:51:44.229340 sudo[2121]: pam_unix(sudo:session): session closed for user root Oct 13 05:51:44.233874 sudo[2120]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Oct 13 05:51:44.234104 sudo[2120]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 13 05:51:44.242501 systemd[1]: Starting audit-rules.service - Load Audit Rules... Oct 13 05:51:44.862365 augenrules[2143]: No rules Oct 13 05:51:44.864229 systemd[1]: audit-rules.service: Deactivated successfully. Oct 13 05:51:44.864612 systemd[1]: Finished audit-rules.service - Load Audit Rules. Oct 13 05:51:44.867257 sudo[2120]: pam_unix(sudo:session): session closed for user root Oct 13 05:51:44.980653 sshd[2119]: Connection closed by 10.200.16.10 port 41580 Oct 13 05:51:44.981196 sshd-session[2116]: pam_unix(sshd:session): session closed for user core Oct 13 05:51:44.984240 systemd[1]: sshd@5-10.200.4.24:22-10.200.16.10:41580.service: Deactivated successfully. Oct 13 05:51:44.986313 systemd-logind[1700]: Session 8 logged out. Waiting for processes to exit. Oct 13 05:51:44.986415 systemd[1]: session-8.scope: Deactivated successfully. 
Oct 13 05:51:44.987882 systemd-logind[1700]: Removed session 8. Oct 13 05:51:45.086787 systemd[1]: Started sshd@6-10.200.4.24:22-10.200.16.10:41590.service - OpenSSH per-connection server daemon (10.200.16.10:41590). Oct 13 05:51:45.691254 sshd[2152]: Accepted publickey for core from 10.200.16.10 port 41590 ssh2: RSA SHA256:M+0CUiaS6X9LgLQwLsgIf7RWJ99AxYshiQ2j8fMZKDQ Oct 13 05:51:45.692375 sshd-session[2152]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:51:45.696916 systemd-logind[1700]: New session 9 of user core. Oct 13 05:51:45.702337 systemd[1]: Started session-9.scope - Session 9 of User core. Oct 13 05:51:46.022533 sudo[2156]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Oct 13 05:51:46.022753 sudo[2156]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 13 05:51:47.882423 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 13 05:51:47.889418 (kubelet)[2170]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 13 05:51:47.926654 kubelet[2170]: E1013 05:51:47.926621 2170 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 13 05:51:47.930463 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 13 05:51:47.930587 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 13 05:51:47.930936 systemd[1]: kubelet.service: Consumed 133ms CPU time, 111.2M memory peak. Oct 13 05:51:49.941267 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Oct 13 05:51:49.957465 (dockerd)[2186]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Oct 13 05:51:52.433433 dockerd[2186]: time="2025-10-13T05:51:52.433243283Z" level=info msg="Starting up" Oct 13 05:51:52.436129 dockerd[2186]: time="2025-10-13T05:51:52.436096044Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Oct 13 05:51:52.445574 dockerd[2186]: time="2025-10-13T05:51:52.445536764Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Oct 13 05:51:53.672935 dockerd[2186]: time="2025-10-13T05:51:53.672891383Z" level=info msg="Loading containers: start." Oct 13 05:51:53.723196 kernel: Initializing XFRM netlink socket Oct 13 05:51:54.110108 systemd-networkd[1352]: docker0: Link UP Oct 13 05:51:54.124186 dockerd[2186]: time="2025-10-13T05:51:54.124131783Z" level=info msg="Loading containers: done." 
Oct 13 05:51:54.204143 dockerd[2186]: time="2025-10-13T05:51:54.204085940Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Oct 13 05:51:54.204357 dockerd[2186]: time="2025-10-13T05:51:54.204202939Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Oct 13 05:51:54.204357 dockerd[2186]: time="2025-10-13T05:51:54.204287528Z" level=info msg="Initializing buildkit" Oct 13 05:51:54.249808 dockerd[2186]: time="2025-10-13T05:51:54.249753421Z" level=info msg="Completed buildkit initialization" Oct 13 05:51:54.255899 dockerd[2186]: time="2025-10-13T05:51:54.255866565Z" level=info msg="Daemon has completed initialization" Oct 13 05:51:54.256227 dockerd[2186]: time="2025-10-13T05:51:54.256001562Z" level=info msg="API listen on /run/docker.sock" Oct 13 05:51:54.256114 systemd[1]: Started docker.service - Docker Application Container Engine. Oct 13 05:51:55.263313 containerd[1726]: time="2025-10-13T05:51:55.263255958Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\"" Oct 13 05:51:56.141064 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3974036180.mount: Deactivated successfully. Oct 13 05:51:57.943505 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Oct 13 05:51:57.945251 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 13 05:52:01.124451 kernel: hv_balloon: Max. dynamic memory size: 8192 MB Oct 13 05:52:03.256302 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Oct 13 05:52:03.265425 (kubelet)[2439]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 13 05:52:03.297160 kubelet[2439]: E1013 05:52:03.297124 2439 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 13 05:52:03.298748 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 13 05:52:03.298882 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 13 05:52:03.299243 systemd[1]: kubelet.service: Consumed 124ms CPU time, 110.2M memory peak. Oct 13 05:52:04.176752 update_engine[1701]: I20251013 05:52:04.176667 1701 update_attempter.cc:509] Updating boot flags... Oct 13 05:52:08.426553 containerd[1726]: time="2025-10-13T05:52:08.426475467Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:52:08.429202 containerd[1726]: time="2025-10-13T05:52:08.429011352Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.5: active requests=0, bytes read=30114901" Oct 13 05:52:08.477228 containerd[1726]: time="2025-10-13T05:52:08.477154117Z" level=info msg="ImageCreate event name:\"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:52:08.482145 containerd[1726]: time="2025-10-13T05:52:08.481310305Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:52:08.482145 containerd[1726]: time="2025-10-13T05:52:08.481925864Z" 
level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.5\" with image id \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\", size \"30111492\" in 13.218614846s" Oct 13 05:52:08.482145 containerd[1726]: time="2025-10-13T05:52:08.481956002Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\" returns image reference \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\"" Oct 13 05:52:08.482691 containerd[1726]: time="2025-10-13T05:52:08.482661652Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\"" Oct 13 05:52:13.353661 containerd[1726]: time="2025-10-13T05:52:13.353613719Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:52:13.358969 containerd[1726]: time="2025-10-13T05:52:13.358926688Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.5: active requests=0, bytes read=26020852" Oct 13 05:52:13.364189 containerd[1726]: time="2025-10-13T05:52:13.363429911Z" level=info msg="ImageCreate event name:\"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:52:13.368105 containerd[1726]: time="2025-10-13T05:52:13.368061832Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:52:13.368818 containerd[1726]: time="2025-10-13T05:52:13.368701725Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.5\" with image id 
\"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\", size \"27681301\" in 4.886011388s" Oct 13 05:52:13.368818 containerd[1726]: time="2025-10-13T05:52:13.368731266Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\" returns image reference \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\"" Oct 13 05:52:13.369392 containerd[1726]: time="2025-10-13T05:52:13.369372309Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\"" Oct 13 05:52:13.443441 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Oct 13 05:52:13.444848 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 13 05:52:14.014612 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 13 05:52:14.017746 (kubelet)[2523]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 13 05:52:14.055205 kubelet[2523]: E1013 05:52:14.055150 2523 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 13 05:52:14.056781 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 13 05:52:14.056916 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 13 05:52:14.057268 systemd[1]: kubelet.service: Consumed 127ms CPU time, 107.7M memory peak. 
Oct 13 05:52:15.095716 containerd[1726]: time="2025-10-13T05:52:15.095667754Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:52:15.100712 containerd[1726]: time="2025-10-13T05:52:15.100670990Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.5: active requests=0, bytes read=20155576" Oct 13 05:52:15.104193 containerd[1726]: time="2025-10-13T05:52:15.103426088Z" level=info msg="ImageCreate event name:\"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:52:15.115974 containerd[1726]: time="2025-10-13T05:52:15.115927880Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:52:15.116746 containerd[1726]: time="2025-10-13T05:52:15.116631809Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.5\" with image id \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\", size \"21816043\" in 1.74723466s" Oct 13 05:52:15.116746 containerd[1726]: time="2025-10-13T05:52:15.116662727Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\" returns image reference \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\"" Oct 13 05:52:15.117193 containerd[1726]: time="2025-10-13T05:52:15.117161230Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\"" Oct 13 05:52:16.250932 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount135052731.mount: Deactivated successfully. 
Oct 13 05:52:16.633665 containerd[1726]: time="2025-10-13T05:52:16.633430395Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:52:16.634690 containerd[1726]: time="2025-10-13T05:52:16.634660775Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.5: active requests=0, bytes read=31929477" Oct 13 05:52:16.638687 containerd[1726]: time="2025-10-13T05:52:16.638636473Z" level=info msg="ImageCreate event name:\"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:52:16.644159 containerd[1726]: time="2025-10-13T05:52:16.644118869Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:52:16.644659 containerd[1726]: time="2025-10-13T05:52:16.644449628Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.5\" with image id \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\", repo tag \"registry.k8s.io/kube-proxy:v1.33.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\", size \"31928488\" in 1.527245461s" Oct 13 05:52:16.644659 containerd[1726]: time="2025-10-13T05:52:16.644480812Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\" returns image reference \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\"" Oct 13 05:52:16.644916 containerd[1726]: time="2025-10-13T05:52:16.644900850Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Oct 13 05:52:17.285105 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1506771190.mount: Deactivated successfully. 
Oct 13 05:52:18.362895 containerd[1726]: time="2025-10-13T05:52:18.362847071Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:52:18.365353 containerd[1726]: time="2025-10-13T05:52:18.365185508Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942246" Oct 13 05:52:18.367774 containerd[1726]: time="2025-10-13T05:52:18.367737505Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:52:18.372152 containerd[1726]: time="2025-10-13T05:52:18.372123759Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:52:18.373080 containerd[1726]: time="2025-10-13T05:52:18.372728138Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.727757442s" Oct 13 05:52:18.373080 containerd[1726]: time="2025-10-13T05:52:18.372761335Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Oct 13 05:52:18.373465 containerd[1726]: time="2025-10-13T05:52:18.373444135Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Oct 13 05:52:18.934647 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3850877566.mount: Deactivated successfully. 
Oct 13 05:52:18.954583 containerd[1726]: time="2025-10-13T05:52:18.954540935Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 13 05:52:18.956977 containerd[1726]: time="2025-10-13T05:52:18.956944720Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146" Oct 13 05:52:18.959891 containerd[1726]: time="2025-10-13T05:52:18.959844046Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 13 05:52:18.963468 containerd[1726]: time="2025-10-13T05:52:18.963424705Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 13 05:52:18.964007 containerd[1726]: time="2025-10-13T05:52:18.963862960Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 590.390578ms" Oct 13 05:52:18.964007 containerd[1726]: time="2025-10-13T05:52:18.963886620Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Oct 13 05:52:18.964443 containerd[1726]: time="2025-10-13T05:52:18.964417308Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Oct 13 05:52:19.600114 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount22259529.mount: Deactivated 
successfully. Oct 13 05:52:22.113643 containerd[1726]: time="2025-10-13T05:52:22.113590468Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:52:22.178715 containerd[1726]: time="2025-10-13T05:52:22.178664808Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58378441" Oct 13 05:52:22.184921 containerd[1726]: time="2025-10-13T05:52:22.184875908Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:52:22.226389 containerd[1726]: time="2025-10-13T05:52:22.226315452Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:52:22.227571 containerd[1726]: time="2025-10-13T05:52:22.227249983Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 3.262804357s" Oct 13 05:52:22.227571 containerd[1726]: time="2025-10-13T05:52:22.227282084Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Oct 13 05:52:24.193581 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Oct 13 05:52:24.197375 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 13 05:52:25.280463 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Oct 13 05:52:25.280566 systemd[1]: kubelet.service: Failed with result 'signal'. 
Oct 13 05:52:25.281047 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 13 05:52:25.283497 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 13 05:52:25.312795 systemd[1]: Reload requested from client PID 2687 ('systemctl') (unit session-9.scope)... Oct 13 05:52:25.312815 systemd[1]: Reloading... Oct 13 05:52:25.396196 zram_generator::config[2741]: No configuration found. Oct 13 05:52:26.363151 systemd[1]: Reloading finished in 1050 ms. Oct 13 05:52:27.179038 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Oct 13 05:52:27.179139 systemd[1]: kubelet.service: Failed with result 'signal'. Oct 13 05:52:27.179507 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 13 05:52:27.181434 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 13 05:52:32.778332 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 13 05:52:32.786987 (kubelet)[2801]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 13 05:52:32.824336 kubelet[2801]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 13 05:52:32.824336 kubelet[2801]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Oct 13 05:52:32.824336 kubelet[2801]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Oct 13 05:52:32.824596 kubelet[2801]: I1013 05:52:32.824400 2801 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 13 05:52:33.502720 kubelet[2801]: I1013 05:52:33.502685 2801 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Oct 13 05:52:33.502720 kubelet[2801]: I1013 05:52:33.502711 2801 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 13 05:52:33.502966 kubelet[2801]: I1013 05:52:33.502953 2801 server.go:956] "Client rotation is on, will bootstrap in background" Oct 13 05:52:33.529199 kubelet[2801]: E1013 05:52:33.529012 2801 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.4.24:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.4.24:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Oct 13 05:52:33.530243 kubelet[2801]: I1013 05:52:33.530127 2801 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 13 05:52:33.539090 kubelet[2801]: I1013 05:52:33.539055 2801 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Oct 13 05:52:33.543299 kubelet[2801]: I1013 05:52:33.543276 2801 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Oct 13 05:52:33.543484 kubelet[2801]: I1013 05:52:33.543460 2801 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 13 05:52:33.543629 kubelet[2801]: I1013 05:52:33.543480 2801 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459.1.0-a-4938a72943","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Oct 13 05:52:33.543747 kubelet[2801]: I1013 05:52:33.543629 2801 topology_manager.go:138] "Creating topology manager with none policy" Oct 13 
05:52:33.543747 kubelet[2801]: I1013 05:52:33.543639 2801 container_manager_linux.go:303] "Creating device plugin manager" Oct 13 05:52:33.543747 kubelet[2801]: I1013 05:52:33.543743 2801 state_mem.go:36] "Initialized new in-memory state store" Oct 13 05:52:33.545970 kubelet[2801]: I1013 05:52:33.545954 2801 kubelet.go:480] "Attempting to sync node with API server" Oct 13 05:52:33.545970 kubelet[2801]: I1013 05:52:33.545971 2801 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 13 05:52:33.547120 kubelet[2801]: I1013 05:52:33.547097 2801 kubelet.go:386] "Adding apiserver pod source" Oct 13 05:52:33.547182 kubelet[2801]: I1013 05:52:33.547123 2801 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 13 05:52:33.553627 kubelet[2801]: E1013 05:52:33.553096 2801 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.4.24:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459.1.0-a-4938a72943&limit=500&resourceVersion=0\": dial tcp 10.200.4.24:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Oct 13 05:52:33.553627 kubelet[2801]: I1013 05:52:33.553214 2801 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Oct 13 05:52:33.553627 kubelet[2801]: I1013 05:52:33.553587 2801 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Oct 13 05:52:33.554771 kubelet[2801]: W1013 05:52:33.554761 2801 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Oct 13 05:52:33.557370 kubelet[2801]: I1013 05:52:33.557318 2801 watchdog_linux.go:99] "Systemd watchdog is not enabled" Oct 13 05:52:33.557370 kubelet[2801]: I1013 05:52:33.557367 2801 server.go:1289] "Started kubelet" Oct 13 05:52:33.560463 kubelet[2801]: E1013 05:52:33.560436 2801 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.4.24:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.4.24:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Oct 13 05:52:33.560643 kubelet[2801]: I1013 05:52:33.560613 2801 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Oct 13 05:52:33.562011 kubelet[2801]: I1013 05:52:33.561496 2801 server.go:317] "Adding debug handlers to kubelet server" Oct 13 05:52:33.562721 kubelet[2801]: I1013 05:52:33.562161 2801 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 13 05:52:33.562721 kubelet[2801]: I1013 05:52:33.562480 2801 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 13 05:52:33.565509 kubelet[2801]: E1013 05:52:33.562588 2801 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.4.24:6443/api/v1/namespaces/default/events\": dial tcp 10.200.4.24:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4459.1.0-a-4938a72943.186df721b78f5a52 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4459.1.0-a-4938a72943,UID:ci-4459.1.0-a-4938a72943,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4459.1.0-a-4938a72943,},FirstTimestamp:2025-10-13 05:52:33.557338706 +0000 UTC m=+0.766913393,LastTimestamp:2025-10-13 05:52:33.557338706 +0000 UTC 
m=+0.766913393,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459.1.0-a-4938a72943,}" Oct 13 05:52:33.565509 kubelet[2801]: I1013 05:52:33.565319 2801 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 13 05:52:33.567207 kubelet[2801]: I1013 05:52:33.566539 2801 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 13 05:52:33.569123 kubelet[2801]: E1013 05:52:33.568860 2801 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.1.0-a-4938a72943\" not found" Oct 13 05:52:33.569123 kubelet[2801]: I1013 05:52:33.568890 2801 volume_manager.go:297] "Starting Kubelet Volume Manager" Oct 13 05:52:33.569123 kubelet[2801]: I1013 05:52:33.569043 2801 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Oct 13 05:52:33.569123 kubelet[2801]: I1013 05:52:33.569089 2801 reconciler.go:26] "Reconciler: start to sync state" Oct 13 05:52:33.569730 kubelet[2801]: E1013 05:52:33.569701 2801 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.4.24:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.4.24:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Oct 13 05:52:33.571075 kubelet[2801]: E1013 05:52:33.571029 2801 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.4.24:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.1.0-a-4938a72943?timeout=10s\": dial tcp 10.200.4.24:6443: connect: connection refused" interval="200ms" Oct 13 05:52:33.571151 kubelet[2801]: E1013 05:52:33.571120 2801 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 13 05:52:33.571310 kubelet[2801]: I1013 05:52:33.571294 2801 factory.go:223] Registration of the systemd container factory successfully Oct 13 05:52:33.571362 kubelet[2801]: I1013 05:52:33.571349 2801 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 13 05:52:33.573457 kubelet[2801]: I1013 05:52:33.573098 2801 factory.go:223] Registration of the containerd container factory successfully Oct 13 05:52:33.583401 kubelet[2801]: E1013 05:52:33.583294 2801 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.4.24:6443/api/v1/namespaces/default/events\": dial tcp 10.200.4.24:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4459.1.0-a-4938a72943.186df721b78f5a52 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4459.1.0-a-4938a72943,UID:ci-4459.1.0-a-4938a72943,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4459.1.0-a-4938a72943,},FirstTimestamp:2025-10-13 05:52:33.557338706 +0000 UTC m=+0.766913393,LastTimestamp:2025-10-13 05:52:33.557338706 +0000 UTC m=+0.766913393,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459.1.0-a-4938a72943,}" Oct 13 05:52:33.589416 kubelet[2801]: I1013 05:52:33.589400 2801 cpu_manager.go:221] "Starting CPU manager" policy="none" Oct 13 05:52:33.589416 kubelet[2801]: I1013 05:52:33.589411 2801 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Oct 13 05:52:33.589519 kubelet[2801]: I1013 05:52:33.589474 2801 state_mem.go:36] "Initialized new in-memory state store" Oct 13 05:52:33.669720 kubelet[2801]: E1013 
05:52:33.669686 2801 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.1.0-a-4938a72943\" not found" Oct 13 05:52:33.712503 kubelet[2801]: I1013 05:52:33.712458 2801 policy_none.go:49] "None policy: Start" Oct 13 05:52:33.712503 kubelet[2801]: I1013 05:52:33.712510 2801 memory_manager.go:186] "Starting memorymanager" policy="None" Oct 13 05:52:33.712665 kubelet[2801]: I1013 05:52:33.712525 2801 state_mem.go:35] "Initializing new in-memory state store" Oct 13 05:52:33.770276 kubelet[2801]: E1013 05:52:33.770133 2801 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.1.0-a-4938a72943\" not found" Oct 13 05:52:33.771608 kubelet[2801]: E1013 05:52:33.771577 2801 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.4.24:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.1.0-a-4938a72943?timeout=10s\": dial tcp 10.200.4.24:6443: connect: connection refused" interval="400ms" Oct 13 05:52:33.870365 kubelet[2801]: E1013 05:52:33.870306 2801 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.1.0-a-4938a72943\" not found" Oct 13 05:52:33.970756 kubelet[2801]: E1013 05:52:33.970695 2801 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.1.0-a-4938a72943\" not found" Oct 13 05:52:34.071797 kubelet[2801]: E1013 05:52:34.071662 2801 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.1.0-a-4938a72943\" not found" Oct 13 05:52:34.172639 kubelet[2801]: E1013 05:52:34.172208 2801 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.1.0-a-4938a72943\" not found" Oct 13 05:52:34.172639 kubelet[2801]: E1013 05:52:34.172602 2801 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.200.4.24:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.1.0-a-4938a72943?timeout=10s\": dial tcp 10.200.4.24:6443: connect: connection refused" interval="800ms" Oct 13 05:52:34.176111 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Oct 13 05:52:34.185198 kubelet[2801]: I1013 05:52:34.184956 2801 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Oct 13 05:52:34.187556 kubelet[2801]: I1013 05:52:34.187296 2801 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Oct 13 05:52:34.187556 kubelet[2801]: I1013 05:52:34.187318 2801 status_manager.go:230] "Starting to sync pod status with apiserver" Oct 13 05:52:34.187556 kubelet[2801]: I1013 05:52:34.187339 2801 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Oct 13 05:52:34.187556 kubelet[2801]: I1013 05:52:34.187347 2801 kubelet.go:2436] "Starting kubelet main sync loop" Oct 13 05:52:34.187556 kubelet[2801]: E1013 05:52:34.187380 2801 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 13 05:52:34.187899 kubelet[2801]: E1013 05:52:34.187873 2801 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.4.24:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.4.24:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Oct 13 05:52:34.189495 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Oct 13 05:52:34.194962 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Oct 13 05:52:34.201844 kubelet[2801]: E1013 05:52:34.201790 2801 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Oct 13 05:52:34.201974 kubelet[2801]: I1013 05:52:34.201963 2801 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 13 05:52:34.202098 kubelet[2801]: I1013 05:52:34.201978 2801 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 13 05:52:34.202293 kubelet[2801]: I1013 05:52:34.202187 2801 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 13 05:52:34.203508 kubelet[2801]: E1013 05:52:34.203480 2801 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Oct 13 05:52:34.203588 kubelet[2801]: E1013 05:52:34.203533 2801 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4459.1.0-a-4938a72943\" not found" Oct 13 05:52:34.304266 kubelet[2801]: I1013 05:52:34.304224 2801 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.1.0-a-4938a72943" Oct 13 05:52:34.304614 kubelet[2801]: E1013 05:52:34.304594 2801 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.4.24:6443/api/v1/nodes\": dial tcp 10.200.4.24:6443: connect: connection refused" node="ci-4459.1.0-a-4938a72943" Oct 13 05:52:34.373297 kubelet[2801]: I1013 05:52:34.373178 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/584781b50b03596067a542804e04a5ad-ca-certs\") pod \"kube-apiserver-ci-4459.1.0-a-4938a72943\" (UID: \"584781b50b03596067a542804e04a5ad\") " pod="kube-system/kube-apiserver-ci-4459.1.0-a-4938a72943" Oct 13 05:52:34.373297 kubelet[2801]: I1013 05:52:34.373218 2801 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/584781b50b03596067a542804e04a5ad-k8s-certs\") pod \"kube-apiserver-ci-4459.1.0-a-4938a72943\" (UID: \"584781b50b03596067a542804e04a5ad\") " pod="kube-system/kube-apiserver-ci-4459.1.0-a-4938a72943" Oct 13 05:52:34.373297 kubelet[2801]: I1013 05:52:34.373236 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/584781b50b03596067a542804e04a5ad-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459.1.0-a-4938a72943\" (UID: \"584781b50b03596067a542804e04a5ad\") " pod="kube-system/kube-apiserver-ci-4459.1.0-a-4938a72943" Oct 13 05:52:34.382413 systemd[1]: Created slice kubepods-burstable-pod584781b50b03596067a542804e04a5ad.slice - libcontainer container kubepods-burstable-pod584781b50b03596067a542804e04a5ad.slice. Oct 13 05:52:34.390707 kubelet[2801]: E1013 05:52:34.390669 2801 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.1.0-a-4938a72943\" not found" node="ci-4459.1.0-a-4938a72943" Oct 13 05:52:34.473472 kubelet[2801]: I1013 05:52:34.473435 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/0bb80dcfa6fff60c3478bd44e978fd9a-flexvolume-dir\") pod \"kube-controller-manager-ci-4459.1.0-a-4938a72943\" (UID: \"0bb80dcfa6fff60c3478bd44e978fd9a\") " pod="kube-system/kube-controller-manager-ci-4459.1.0-a-4938a72943" Oct 13 05:52:34.473612 kubelet[2801]: I1013 05:52:34.473584 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0bb80dcfa6fff60c3478bd44e978fd9a-k8s-certs\") pod \"kube-controller-manager-ci-4459.1.0-a-4938a72943\" (UID: 
\"0bb80dcfa6fff60c3478bd44e978fd9a\") " pod="kube-system/kube-controller-manager-ci-4459.1.0-a-4938a72943" Oct 13 05:52:34.473612 kubelet[2801]: I1013 05:52:34.473604 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0bb80dcfa6fff60c3478bd44e978fd9a-kubeconfig\") pod \"kube-controller-manager-ci-4459.1.0-a-4938a72943\" (UID: \"0bb80dcfa6fff60c3478bd44e978fd9a\") " pod="kube-system/kube-controller-manager-ci-4459.1.0-a-4938a72943" Oct 13 05:52:34.473685 kubelet[2801]: I1013 05:52:34.473623 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0bb80dcfa6fff60c3478bd44e978fd9a-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459.1.0-a-4938a72943\" (UID: \"0bb80dcfa6fff60c3478bd44e978fd9a\") " pod="kube-system/kube-controller-manager-ci-4459.1.0-a-4938a72943" Oct 13 05:52:34.473723 kubelet[2801]: I1013 05:52:34.473689 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0bb80dcfa6fff60c3478bd44e978fd9a-ca-certs\") pod \"kube-controller-manager-ci-4459.1.0-a-4938a72943\" (UID: \"0bb80dcfa6fff60c3478bd44e978fd9a\") " pod="kube-system/kube-controller-manager-ci-4459.1.0-a-4938a72943" Oct 13 05:52:34.477336 systemd[1]: Created slice kubepods-burstable-pod0bb80dcfa6fff60c3478bd44e978fd9a.slice - libcontainer container kubepods-burstable-pod0bb80dcfa6fff60c3478bd44e978fd9a.slice. 
Oct 13 05:52:34.478890 kubelet[2801]: E1013 05:52:34.478856 2801 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.1.0-a-4938a72943\" not found" node="ci-4459.1.0-a-4938a72943" Oct 13 05:52:34.506314 kubelet[2801]: I1013 05:52:34.506296 2801 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.1.0-a-4938a72943" Oct 13 05:52:34.506615 kubelet[2801]: E1013 05:52:34.506596 2801 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.4.24:6443/api/v1/nodes\": dial tcp 10.200.4.24:6443: connect: connection refused" node="ci-4459.1.0-a-4938a72943" Oct 13 05:52:34.573824 kubelet[2801]: I1013 05:52:34.573786 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/dcdfc224828253f066c0f50b3cf71fcc-kubeconfig\") pod \"kube-scheduler-ci-4459.1.0-a-4938a72943\" (UID: \"dcdfc224828253f066c0f50b3cf71fcc\") " pod="kube-system/kube-scheduler-ci-4459.1.0-a-4938a72943" Oct 13 05:52:34.692315 containerd[1726]: time="2025-10-13T05:52:34.692198308Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459.1.0-a-4938a72943,Uid:584781b50b03596067a542804e04a5ad,Namespace:kube-system,Attempt:0,}" Oct 13 05:52:34.740494 systemd[1]: Created slice kubepods-burstable-poddcdfc224828253f066c0f50b3cf71fcc.slice - libcontainer container kubepods-burstable-poddcdfc224828253f066c0f50b3cf71fcc.slice. 
Oct 13 05:52:34.741909 kubelet[2801]: E1013 05:52:34.741874 2801 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.1.0-a-4938a72943\" not found" node="ci-4459.1.0-a-4938a72943" Oct 13 05:52:34.742577 containerd[1726]: time="2025-10-13T05:52:34.742546498Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459.1.0-a-4938a72943,Uid:dcdfc224828253f066c0f50b3cf71fcc,Namespace:kube-system,Attempt:0,}" Oct 13 05:52:34.770201 kubelet[2801]: E1013 05:52:34.770146 2801 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.4.24:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459.1.0-a-4938a72943&limit=500&resourceVersion=0\": dial tcp 10.200.4.24:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Oct 13 05:52:34.780142 containerd[1726]: time="2025-10-13T05:52:34.780109187Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459.1.0-a-4938a72943,Uid:0bb80dcfa6fff60c3478bd44e978fd9a,Namespace:kube-system,Attempt:0,}" Oct 13 05:52:34.908243 kubelet[2801]: I1013 05:52:34.908216 2801 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.1.0-a-4938a72943" Oct 13 05:52:34.908725 kubelet[2801]: E1013 05:52:34.908703 2801 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.4.24:6443/api/v1/nodes\": dial tcp 10.200.4.24:6443: connect: connection refused" node="ci-4459.1.0-a-4938a72943" Oct 13 05:52:34.973362 kubelet[2801]: E1013 05:52:34.973327 2801 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.4.24:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.1.0-a-4938a72943?timeout=10s\": dial tcp 10.200.4.24:6443: connect: connection refused" interval="1.6s" Oct 13 05:52:35.057163 kubelet[2801]: E1013 05:52:35.057125 2801 
reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.4.24:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.4.24:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Oct 13 05:52:35.070834 kubelet[2801]: E1013 05:52:35.070800 2801 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.4.24:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.4.24:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Oct 13 05:52:35.626151 kubelet[2801]: E1013 05:52:35.626108 2801 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.4.24:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.4.24:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Oct 13 05:52:35.693283 kubelet[2801]: E1013 05:52:35.693190 2801 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.4.24:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.4.24:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Oct 13 05:52:35.710792 kubelet[2801]: I1013 05:52:35.710765 2801 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.1.0-a-4938a72943" Oct 13 05:52:35.711107 kubelet[2801]: E1013 05:52:35.711086 2801 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.4.24:6443/api/v1/nodes\": dial tcp 10.200.4.24:6443: connect: connection refused" node="ci-4459.1.0-a-4938a72943" Oct 13 05:52:36.574403 
kubelet[2801]: E1013 05:52:36.574364 2801 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.4.24:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.1.0-a-4938a72943?timeout=10s\": dial tcp 10.200.4.24:6443: connect: connection refused" interval="3.2s" Oct 13 05:52:36.667348 kubelet[2801]: E1013 05:52:36.667312 2801 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.4.24:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.4.24:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Oct 13 05:52:37.312983 kubelet[2801]: I1013 05:52:37.312665 2801 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.1.0-a-4938a72943" Oct 13 05:52:37.312983 kubelet[2801]: E1013 05:52:37.312937 2801 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.4.24:6443/api/v1/nodes\": dial tcp 10.200.4.24:6443: connect: connection refused" node="ci-4459.1.0-a-4938a72943" Oct 13 05:52:37.504077 containerd[1726]: time="2025-10-13T05:52:37.504032363Z" level=info msg="connecting to shim 3ed9b4db2d932de43665b04ff7448d9bfe41d76f9652c8d72d450ec5c7c8502b" address="unix:///run/containerd/s/dbe72dd817aa4e080d60af76cec97f2707b9ff2155367add23b208e3b977179b" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:52:37.506131 containerd[1726]: time="2025-10-13T05:52:37.506070093Z" level=info msg="connecting to shim 4dc4736afd0796ff28c7164f03e800c2ecd98a0211e8f346a261d3e527e4b462" address="unix:///run/containerd/s/e6adaf823167935ef9246217e578de5022284bb09e446253761cee89bbe49db4" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:52:37.524236 containerd[1726]: time="2025-10-13T05:52:37.523289113Z" level=info msg="connecting to shim 9c1f60fae21c1cb5f42d75cac558399aaf657296210e152f27d12d2730edbc9b" 
address="unix:///run/containerd/s/5f814674c261199d8da7cf8b5e217b51dd874abb7d85a345ca4d869641728c33" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:52:37.549446 systemd[1]: Started cri-containerd-3ed9b4db2d932de43665b04ff7448d9bfe41d76f9652c8d72d450ec5c7c8502b.scope - libcontainer container 3ed9b4db2d932de43665b04ff7448d9bfe41d76f9652c8d72d450ec5c7c8502b. Oct 13 05:52:37.551153 systemd[1]: Started cri-containerd-4dc4736afd0796ff28c7164f03e800c2ecd98a0211e8f346a261d3e527e4b462.scope - libcontainer container 4dc4736afd0796ff28c7164f03e800c2ecd98a0211e8f346a261d3e527e4b462. Oct 13 05:52:37.557667 systemd[1]: Started cri-containerd-9c1f60fae21c1cb5f42d75cac558399aaf657296210e152f27d12d2730edbc9b.scope - libcontainer container 9c1f60fae21c1cb5f42d75cac558399aaf657296210e152f27d12d2730edbc9b. Oct 13 05:52:37.637063 containerd[1726]: time="2025-10-13T05:52:37.636972596Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459.1.0-a-4938a72943,Uid:0bb80dcfa6fff60c3478bd44e978fd9a,Namespace:kube-system,Attempt:0,} returns sandbox id \"9c1f60fae21c1cb5f42d75cac558399aaf657296210e152f27d12d2730edbc9b\"" Oct 13 05:52:37.639968 containerd[1726]: time="2025-10-13T05:52:37.639945043Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459.1.0-a-4938a72943,Uid:584781b50b03596067a542804e04a5ad,Namespace:kube-system,Attempt:0,} returns sandbox id \"3ed9b4db2d932de43665b04ff7448d9bfe41d76f9652c8d72d450ec5c7c8502b\"" Oct 13 05:52:37.644732 containerd[1726]: time="2025-10-13T05:52:37.644438194Z" level=info msg="CreateContainer within sandbox \"9c1f60fae21c1cb5f42d75cac558399aaf657296210e152f27d12d2730edbc9b\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Oct 13 05:52:37.644732 containerd[1726]: time="2025-10-13T05:52:37.644671058Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-ci-4459.1.0-a-4938a72943,Uid:dcdfc224828253f066c0f50b3cf71fcc,Namespace:kube-system,Attempt:0,} returns sandbox id \"4dc4736afd0796ff28c7164f03e800c2ecd98a0211e8f346a261d3e527e4b462\"" Oct 13 05:52:37.649574 containerd[1726]: time="2025-10-13T05:52:37.649547815Z" level=info msg="CreateContainer within sandbox \"3ed9b4db2d932de43665b04ff7448d9bfe41d76f9652c8d72d450ec5c7c8502b\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Oct 13 05:52:37.654781 containerd[1726]: time="2025-10-13T05:52:37.654375203Z" level=info msg="CreateContainer within sandbox \"4dc4736afd0796ff28c7164f03e800c2ecd98a0211e8f346a261d3e527e4b462\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Oct 13 05:52:37.674806 kubelet[2801]: E1013 05:52:37.674776 2801 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.4.24:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459.1.0-a-4938a72943&limit=500&resourceVersion=0\": dial tcp 10.200.4.24:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Oct 13 05:52:37.677708 containerd[1726]: time="2025-10-13T05:52:37.677683212Z" level=info msg="Container ea7763dc422e857b888f842ae3780c7bfbe9b503c4649f039bb892f8d9e79c77: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:52:37.692486 containerd[1726]: time="2025-10-13T05:52:37.692460300Z" level=info msg="Container 18746c85d33ebb5e427c6d856e90fb3e0a3118712285ec50e4c15830d91e6d47: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:52:37.705867 containerd[1726]: time="2025-10-13T05:52:37.705839749Z" level=info msg="Container 696abf0095a26768c6c35c089bad74cbbe2a8831ff1782b7025901118f6a7959: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:52:37.727934 containerd[1726]: time="2025-10-13T05:52:37.727905002Z" level=info msg="CreateContainer within sandbox \"9c1f60fae21c1cb5f42d75cac558399aaf657296210e152f27d12d2730edbc9b\" 
for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"ea7763dc422e857b888f842ae3780c7bfbe9b503c4649f039bb892f8d9e79c77\"" Oct 13 05:52:37.728396 containerd[1726]: time="2025-10-13T05:52:37.728372689Z" level=info msg="StartContainer for \"ea7763dc422e857b888f842ae3780c7bfbe9b503c4649f039bb892f8d9e79c77\"" Oct 13 05:52:37.729118 containerd[1726]: time="2025-10-13T05:52:37.729088644Z" level=info msg="connecting to shim ea7763dc422e857b888f842ae3780c7bfbe9b503c4649f039bb892f8d9e79c77" address="unix:///run/containerd/s/5f814674c261199d8da7cf8b5e217b51dd874abb7d85a345ca4d869641728c33" protocol=ttrpc version=3 Oct 13 05:52:37.749855 containerd[1726]: time="2025-10-13T05:52:37.749766357Z" level=info msg="CreateContainer within sandbox \"4dc4736afd0796ff28c7164f03e800c2ecd98a0211e8f346a261d3e527e4b462\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"696abf0095a26768c6c35c089bad74cbbe2a8831ff1782b7025901118f6a7959\"" Oct 13 05:52:37.750269 containerd[1726]: time="2025-10-13T05:52:37.750239449Z" level=info msg="StartContainer for \"696abf0095a26768c6c35c089bad74cbbe2a8831ff1782b7025901118f6a7959\"" Oct 13 05:52:37.750333 systemd[1]: Started cri-containerd-ea7763dc422e857b888f842ae3780c7bfbe9b503c4649f039bb892f8d9e79c77.scope - libcontainer container ea7763dc422e857b888f842ae3780c7bfbe9b503c4649f039bb892f8d9e79c77. 
Oct 13 05:52:37.754833 containerd[1726]: time="2025-10-13T05:52:37.754573548Z" level=info msg="connecting to shim 696abf0095a26768c6c35c089bad74cbbe2a8831ff1782b7025901118f6a7959" address="unix:///run/containerd/s/e6adaf823167935ef9246217e578de5022284bb09e446253761cee89bbe49db4" protocol=ttrpc version=3 Oct 13 05:52:37.756774 containerd[1726]: time="2025-10-13T05:52:37.756488279Z" level=info msg="CreateContainer within sandbox \"3ed9b4db2d932de43665b04ff7448d9bfe41d76f9652c8d72d450ec5c7c8502b\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"18746c85d33ebb5e427c6d856e90fb3e0a3118712285ec50e4c15830d91e6d47\"" Oct 13 05:52:37.757023 containerd[1726]: time="2025-10-13T05:52:37.756909980Z" level=info msg="StartContainer for \"18746c85d33ebb5e427c6d856e90fb3e0a3118712285ec50e4c15830d91e6d47\"" Oct 13 05:52:37.758477 containerd[1726]: time="2025-10-13T05:52:37.758445171Z" level=info msg="connecting to shim 18746c85d33ebb5e427c6d856e90fb3e0a3118712285ec50e4c15830d91e6d47" address="unix:///run/containerd/s/dbe72dd817aa4e080d60af76cec97f2707b9ff2155367add23b208e3b977179b" protocol=ttrpc version=3 Oct 13 05:52:37.786310 systemd[1]: Started cri-containerd-696abf0095a26768c6c35c089bad74cbbe2a8831ff1782b7025901118f6a7959.scope - libcontainer container 696abf0095a26768c6c35c089bad74cbbe2a8831ff1782b7025901118f6a7959. Oct 13 05:52:37.790350 systemd[1]: Started cri-containerd-18746c85d33ebb5e427c6d856e90fb3e0a3118712285ec50e4c15830d91e6d47.scope - libcontainer container 18746c85d33ebb5e427c6d856e90fb3e0a3118712285ec50e4c15830d91e6d47. 
Oct 13 05:52:37.817252 containerd[1726]: time="2025-10-13T05:52:37.817091296Z" level=info msg="StartContainer for \"ea7763dc422e857b888f842ae3780c7bfbe9b503c4649f039bb892f8d9e79c77\" returns successfully" Oct 13 05:52:37.823297 kubelet[2801]: E1013 05:52:37.822532 2801 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.4.24:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.4.24:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Oct 13 05:52:37.851259 containerd[1726]: time="2025-10-13T05:52:37.851234422Z" level=info msg="StartContainer for \"18746c85d33ebb5e427c6d856e90fb3e0a3118712285ec50e4c15830d91e6d47\" returns successfully" Oct 13 05:52:37.890088 containerd[1726]: time="2025-10-13T05:52:37.889982288Z" level=info msg="StartContainer for \"696abf0095a26768c6c35c089bad74cbbe2a8831ff1782b7025901118f6a7959\" returns successfully" Oct 13 05:52:38.204284 kubelet[2801]: E1013 05:52:38.203879 2801 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.1.0-a-4938a72943\" not found" node="ci-4459.1.0-a-4938a72943" Oct 13 05:52:38.207188 kubelet[2801]: E1013 05:52:38.205749 2801 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.1.0-a-4938a72943\" not found" node="ci-4459.1.0-a-4938a72943" Oct 13 05:52:38.209008 kubelet[2801]: E1013 05:52:38.208857 2801 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.1.0-a-4938a72943\" not found" node="ci-4459.1.0-a-4938a72943" Oct 13 05:52:39.211434 kubelet[2801]: E1013 05:52:39.211409 2801 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.1.0-a-4938a72943\" not found" node="ci-4459.1.0-a-4938a72943" Oct 13 
05:52:39.212972 kubelet[2801]: E1013 05:52:39.211924 2801 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.1.0-a-4938a72943\" not found" node="ci-4459.1.0-a-4938a72943" Oct 13 05:52:39.829746 kubelet[2801]: E1013 05:52:39.829693 2801 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4459.1.0-a-4938a72943\" not found" node="ci-4459.1.0-a-4938a72943" Oct 13 05:52:40.168033 kubelet[2801]: E1013 05:52:40.167930 2801 csi_plugin.go:397] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ci-4459.1.0-a-4938a72943" not found Oct 13 05:52:40.514871 kubelet[2801]: I1013 05:52:40.514804 2801 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.1.0-a-4938a72943" Oct 13 05:52:40.516164 kubelet[2801]: E1013 05:52:40.516135 2801 csi_plugin.go:397] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ci-4459.1.0-a-4938a72943" not found Oct 13 05:52:40.526514 kubelet[2801]: I1013 05:52:40.526481 2801 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459.1.0-a-4938a72943" Oct 13 05:52:40.526514 kubelet[2801]: E1013 05:52:40.526512 2801 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4459.1.0-a-4938a72943\": node \"ci-4459.1.0-a-4938a72943\" not found" Oct 13 05:52:40.541547 kubelet[2801]: E1013 05:52:40.541523 2801 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.1.0-a-4938a72943\" not found" Oct 13 05:52:40.642594 kubelet[2801]: E1013 05:52:40.642550 2801 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.1.0-a-4938a72943\" not found" Oct 13 05:52:40.743550 kubelet[2801]: E1013 05:52:40.743503 2801 kubelet_node_status.go:466] "Error getting the current node from 
lister" err="node \"ci-4459.1.0-a-4938a72943\" not found" Oct 13 05:52:40.756783 kubelet[2801]: E1013 05:52:40.756751 2801 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.1.0-a-4938a72943\" not found" node="ci-4459.1.0-a-4938a72943" Oct 13 05:52:40.844470 kubelet[2801]: E1013 05:52:40.844351 2801 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.1.0-a-4938a72943\" not found" Oct 13 05:52:40.944903 kubelet[2801]: E1013 05:52:40.944870 2801 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.1.0-a-4938a72943\" not found" Oct 13 05:52:41.045511 kubelet[2801]: E1013 05:52:41.045477 2801 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.1.0-a-4938a72943\" not found" Oct 13 05:52:41.146528 kubelet[2801]: E1013 05:52:41.146424 2801 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.1.0-a-4938a72943\" not found" Oct 13 05:52:41.247258 kubelet[2801]: E1013 05:52:41.247221 2801 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.1.0-a-4938a72943\" not found" Oct 13 05:52:41.347738 kubelet[2801]: E1013 05:52:41.347641 2801 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.1.0-a-4938a72943\" not found" Oct 13 05:52:41.448171 kubelet[2801]: E1013 05:52:41.448146 2801 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.1.0-a-4938a72943\" not found" Oct 13 05:52:41.548552 kubelet[2801]: E1013 05:52:41.548513 2801 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.1.0-a-4938a72943\" not found" Oct 13 05:52:41.649433 kubelet[2801]: E1013 05:52:41.649394 2801 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.1.0-a-4938a72943\" not found" Oct 
13 05:52:41.733539 systemd[1]: Reload requested from client PID 3078 ('systemctl') (unit session-9.scope)... Oct 13 05:52:41.733554 systemd[1]: Reloading... Oct 13 05:52:41.751209 kubelet[2801]: E1013 05:52:41.750377 2801 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.1.0-a-4938a72943\" not found" Oct 13 05:52:41.808196 zram_generator::config[3128]: No configuration found. Oct 13 05:52:41.850508 kubelet[2801]: E1013 05:52:41.850468 2801 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.1.0-a-4938a72943\" not found" Oct 13 05:52:41.951030 kubelet[2801]: E1013 05:52:41.950988 2801 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.1.0-a-4938a72943\" not found" Oct 13 05:52:42.025871 systemd[1]: Reloading finished in 292 ms. Oct 13 05:52:42.051337 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Oct 13 05:52:42.074017 systemd[1]: kubelet.service: Deactivated successfully. Oct 13 05:52:42.074479 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 13 05:52:42.074536 systemd[1]: kubelet.service: Consumed 1.062s CPU time, 132.3M memory peak. Oct 13 05:52:42.076014 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 13 05:52:42.503316 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 13 05:52:42.510499 (kubelet)[3192]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 13 05:52:42.549402 kubelet[3192]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 13 05:52:42.549402 kubelet[3192]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. 
Image garbage collector will get sandbox image information from CRI. Oct 13 05:52:42.549402 kubelet[3192]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 13 05:52:42.549680 kubelet[3192]: I1013 05:52:42.549446 3192 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 13 05:52:42.554812 kubelet[3192]: I1013 05:52:42.554788 3192 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Oct 13 05:52:42.554812 kubelet[3192]: I1013 05:52:42.554806 3192 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 13 05:52:42.554996 kubelet[3192]: I1013 05:52:42.554983 3192 server.go:956] "Client rotation is on, will bootstrap in background" Oct 13 05:52:42.558189 kubelet[3192]: I1013 05:52:42.556248 3192 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Oct 13 05:52:42.560889 kubelet[3192]: I1013 05:52:42.560866 3192 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 13 05:52:42.564668 kubelet[3192]: I1013 05:52:42.564651 3192 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Oct 13 05:52:42.569558 kubelet[3192]: I1013 05:52:42.569534 3192 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Oct 13 05:52:42.569987 kubelet[3192]: I1013 05:52:42.569954 3192 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 13 05:52:42.570136 kubelet[3192]: I1013 05:52:42.569987 3192 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459.1.0-a-4938a72943","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Oct 13 05:52:42.570288 kubelet[3192]: I1013 05:52:42.570144 3192 topology_manager.go:138] "Creating topology manager with none policy" Oct 13 
05:52:42.570288 kubelet[3192]: I1013 05:52:42.570155 3192 container_manager_linux.go:303] "Creating device plugin manager" Oct 13 05:52:42.570397 kubelet[3192]: I1013 05:52:42.570382 3192 state_mem.go:36] "Initialized new in-memory state store" Oct 13 05:52:42.570635 kubelet[3192]: I1013 05:52:42.570622 3192 kubelet.go:480] "Attempting to sync node with API server" Oct 13 05:52:42.570669 kubelet[3192]: I1013 05:52:42.570642 3192 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 13 05:52:42.571238 kubelet[3192]: I1013 05:52:42.571222 3192 kubelet.go:386] "Adding apiserver pod source" Oct 13 05:52:42.571297 kubelet[3192]: I1013 05:52:42.571246 3192 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 13 05:52:42.573575 kubelet[3192]: I1013 05:52:42.573558 3192 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Oct 13 05:52:42.574036 kubelet[3192]: I1013 05:52:42.574018 3192 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Oct 13 05:52:42.578990 kubelet[3192]: I1013 05:52:42.578964 3192 watchdog_linux.go:99] "Systemd watchdog is not enabled" Oct 13 05:52:42.579065 kubelet[3192]: I1013 05:52:42.579012 3192 server.go:1289] "Started kubelet" Oct 13 05:52:42.583547 kubelet[3192]: I1013 05:52:42.583525 3192 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 13 05:52:42.585440 kubelet[3192]: I1013 05:52:42.585230 3192 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Oct 13 05:52:42.589178 kubelet[3192]: I1013 05:52:42.589145 3192 server.go:317] "Adding debug handlers to kubelet server" Oct 13 05:52:42.595219 kubelet[3192]: I1013 05:52:42.594795 3192 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 13 05:52:42.595219 kubelet[3192]: I1013 05:52:42.595003 3192 
server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 13 05:52:42.596873 kubelet[3192]: I1013 05:52:42.596855 3192 volume_manager.go:297] "Starting Kubelet Volume Manager" Oct 13 05:52:42.597060 kubelet[3192]: E1013 05:52:42.597047 3192 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.1.0-a-4938a72943\" not found" Oct 13 05:52:42.597991 kubelet[3192]: I1013 05:52:42.597974 3192 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Oct 13 05:52:42.598100 kubelet[3192]: I1013 05:52:42.598081 3192 reconciler.go:26] "Reconciler: start to sync state" Oct 13 05:52:42.599371 kubelet[3192]: I1013 05:52:42.599348 3192 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 13 05:52:42.601843 kubelet[3192]: I1013 05:52:42.601819 3192 factory.go:223] Registration of the systemd container factory successfully Oct 13 05:52:42.601925 kubelet[3192]: I1013 05:52:42.601905 3192 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 13 05:52:42.608832 kubelet[3192]: I1013 05:52:42.608814 3192 factory.go:223] Registration of the containerd container factory successfully Oct 13 05:52:42.608961 kubelet[3192]: I1013 05:52:42.608934 3192 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Oct 13 05:52:42.609984 kubelet[3192]: I1013 05:52:42.609967 3192 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Oct 13 05:52:42.610301 kubelet[3192]: I1013 05:52:42.610064 3192 status_manager.go:230] "Starting to sync pod status with apiserver" Oct 13 05:52:42.610301 kubelet[3192]: I1013 05:52:42.610083 3192 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Oct 13 05:52:42.610301 kubelet[3192]: I1013 05:52:42.610090 3192 kubelet.go:2436] "Starting kubelet main sync loop" Oct 13 05:52:42.610301 kubelet[3192]: E1013 05:52:42.610120 3192 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 13 05:52:42.656854 kubelet[3192]: I1013 05:52:42.656835 3192 cpu_manager.go:221] "Starting CPU manager" policy="none" Oct 13 05:52:42.656854 kubelet[3192]: I1013 05:52:42.656847 3192 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Oct 13 05:52:42.656950 kubelet[3192]: I1013 05:52:42.656864 3192 state_mem.go:36] "Initialized new in-memory state store" Oct 13 05:52:42.656986 kubelet[3192]: I1013 05:52:42.656974 3192 state_mem.go:88] "Updated default CPUSet" cpuSet="" Oct 13 05:52:42.657012 kubelet[3192]: I1013 05:52:42.656986 3192 state_mem.go:96] "Updated CPUSet assignments" assignments={} Oct 13 05:52:42.657012 kubelet[3192]: I1013 05:52:42.657002 3192 policy_none.go:49] "None policy: Start" Oct 13 05:52:42.657012 kubelet[3192]: I1013 05:52:42.657011 3192 memory_manager.go:186] "Starting memorymanager" policy="None" Oct 13 05:52:42.657082 kubelet[3192]: I1013 05:52:42.657020 3192 state_mem.go:35] "Initializing new in-memory state store" Oct 13 05:52:42.657115 kubelet[3192]: I1013 05:52:42.657106 3192 state_mem.go:75] "Updated machine memory state" Oct 13 05:52:42.660078 kubelet[3192]: E1013 05:52:42.660059 3192 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Oct 13 05:52:42.660208 kubelet[3192]: I1013 
05:52:42.660198 3192 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 13 05:52:42.660243 kubelet[3192]: I1013 05:52:42.660212 3192 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 13 05:52:42.660955 kubelet[3192]: I1013 05:52:42.660793 3192 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 13 05:52:42.663322 kubelet[3192]: E1013 05:52:42.663308 3192 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Oct 13 05:52:42.711199 kubelet[3192]: I1013 05:52:42.711067 3192 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.1.0-a-4938a72943" Oct 13 05:52:42.711199 kubelet[3192]: I1013 05:52:42.711151 3192 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459.1.0-a-4938a72943" Oct 13 05:52:42.712249 kubelet[3192]: I1013 05:52:42.711079 3192 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459.1.0-a-4938a72943" Oct 13 05:52:42.719972 kubelet[3192]: I1013 05:52:42.719847 3192 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Oct 13 05:52:42.725197 kubelet[3192]: I1013 05:52:42.724694 3192 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Oct 13 05:52:42.725462 kubelet[3192]: I1013 05:52:42.725352 3192 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Oct 13 05:52:42.762327 kubelet[3192]: I1013 05:52:42.762022 3192 kubelet_node_status.go:75] "Attempting to register node" 
node="ci-4459.1.0-a-4938a72943" Oct 13 05:52:42.772572 kubelet[3192]: I1013 05:52:42.772540 3192 kubelet_node_status.go:124] "Node was previously registered" node="ci-4459.1.0-a-4938a72943" Oct 13 05:52:42.772645 kubelet[3192]: I1013 05:52:42.772599 3192 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459.1.0-a-4938a72943" Oct 13 05:52:42.899143 kubelet[3192]: I1013 05:52:42.899114 3192 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/584781b50b03596067a542804e04a5ad-ca-certs\") pod \"kube-apiserver-ci-4459.1.0-a-4938a72943\" (UID: \"584781b50b03596067a542804e04a5ad\") " pod="kube-system/kube-apiserver-ci-4459.1.0-a-4938a72943" Oct 13 05:52:42.899278 kubelet[3192]: I1013 05:52:42.899148 3192 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0bb80dcfa6fff60c3478bd44e978fd9a-ca-certs\") pod \"kube-controller-manager-ci-4459.1.0-a-4938a72943\" (UID: \"0bb80dcfa6fff60c3478bd44e978fd9a\") " pod="kube-system/kube-controller-manager-ci-4459.1.0-a-4938a72943" Oct 13 05:52:42.899278 kubelet[3192]: I1013 05:52:42.899193 3192 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/0bb80dcfa6fff60c3478bd44e978fd9a-flexvolume-dir\") pod \"kube-controller-manager-ci-4459.1.0-a-4938a72943\" (UID: \"0bb80dcfa6fff60c3478bd44e978fd9a\") " pod="kube-system/kube-controller-manager-ci-4459.1.0-a-4938a72943" Oct 13 05:52:42.899278 kubelet[3192]: I1013 05:52:42.899211 3192 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0bb80dcfa6fff60c3478bd44e978fd9a-kubeconfig\") pod \"kube-controller-manager-ci-4459.1.0-a-4938a72943\" (UID: \"0bb80dcfa6fff60c3478bd44e978fd9a\") " 
pod="kube-system/kube-controller-manager-ci-4459.1.0-a-4938a72943" Oct 13 05:52:42.899278 kubelet[3192]: I1013 05:52:42.899233 3192 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0bb80dcfa6fff60c3478bd44e978fd9a-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459.1.0-a-4938a72943\" (UID: \"0bb80dcfa6fff60c3478bd44e978fd9a\") " pod="kube-system/kube-controller-manager-ci-4459.1.0-a-4938a72943" Oct 13 05:52:42.899278 kubelet[3192]: I1013 05:52:42.899249 3192 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/584781b50b03596067a542804e04a5ad-k8s-certs\") pod \"kube-apiserver-ci-4459.1.0-a-4938a72943\" (UID: \"584781b50b03596067a542804e04a5ad\") " pod="kube-system/kube-apiserver-ci-4459.1.0-a-4938a72943" Oct 13 05:52:42.899425 kubelet[3192]: I1013 05:52:42.899275 3192 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/584781b50b03596067a542804e04a5ad-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459.1.0-a-4938a72943\" (UID: \"584781b50b03596067a542804e04a5ad\") " pod="kube-system/kube-apiserver-ci-4459.1.0-a-4938a72943" Oct 13 05:52:42.899425 kubelet[3192]: I1013 05:52:42.899294 3192 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0bb80dcfa6fff60c3478bd44e978fd9a-k8s-certs\") pod \"kube-controller-manager-ci-4459.1.0-a-4938a72943\" (UID: \"0bb80dcfa6fff60c3478bd44e978fd9a\") " pod="kube-system/kube-controller-manager-ci-4459.1.0-a-4938a72943" Oct 13 05:52:42.899425 kubelet[3192]: I1013 05:52:42.899312 3192 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/dcdfc224828253f066c0f50b3cf71fcc-kubeconfig\") pod \"kube-scheduler-ci-4459.1.0-a-4938a72943\" (UID: \"dcdfc224828253f066c0f50b3cf71fcc\") " pod="kube-system/kube-scheduler-ci-4459.1.0-a-4938a72943" Oct 13 05:52:43.572743 kubelet[3192]: I1013 05:52:43.572705 3192 apiserver.go:52] "Watching apiserver" Oct 13 05:52:43.599005 kubelet[3192]: I1013 05:52:43.598969 3192 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Oct 13 05:52:43.644403 kubelet[3192]: I1013 05:52:43.644372 3192 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.1.0-a-4938a72943" Oct 13 05:52:43.644926 kubelet[3192]: I1013 05:52:43.644907 3192 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459.1.0-a-4938a72943" Oct 13 05:52:43.655500 kubelet[3192]: I1013 05:52:43.655465 3192 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Oct 13 05:52:43.655600 kubelet[3192]: E1013 05:52:43.655536 3192 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459.1.0-a-4938a72943\" already exists" pod="kube-system/kube-scheduler-ci-4459.1.0-a-4938a72943" Oct 13 05:52:43.656251 kubelet[3192]: I1013 05:52:43.656229 3192 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Oct 13 05:52:43.656332 kubelet[3192]: E1013 05:52:43.656276 3192 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459.1.0-a-4938a72943\" already exists" pod="kube-system/kube-apiserver-ci-4459.1.0-a-4938a72943" Oct 13 05:52:43.671513 kubelet[3192]: I1013 05:52:43.671391 3192 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4459.1.0-a-4938a72943" 
podStartSLOduration=1.671376419 podStartE2EDuration="1.671376419s" podCreationTimestamp="2025-10-13 05:52:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 05:52:43.662616256 +0000 UTC m=+1.149386775" watchObservedRunningTime="2025-10-13 05:52:43.671376419 +0000 UTC m=+1.158146933" Oct 13 05:52:43.689071 kubelet[3192]: I1013 05:52:43.689005 3192 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4459.1.0-a-4938a72943" podStartSLOduration=1.688990897 podStartE2EDuration="1.688990897s" podCreationTimestamp="2025-10-13 05:52:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 05:52:43.67215195 +0000 UTC m=+1.158922471" watchObservedRunningTime="2025-10-13 05:52:43.688990897 +0000 UTC m=+1.175761420" Oct 13 05:52:43.702572 kubelet[3192]: I1013 05:52:43.702384 3192 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4459.1.0-a-4938a72943" podStartSLOduration=1.702369443 podStartE2EDuration="1.702369443s" podCreationTimestamp="2025-10-13 05:52:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 05:52:43.689160204 +0000 UTC m=+1.175930719" watchObservedRunningTime="2025-10-13 05:52:43.702369443 +0000 UTC m=+1.189139966" Oct 13 05:52:46.688248 kubelet[3192]: I1013 05:52:46.688214 3192 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Oct 13 05:52:46.688637 containerd[1726]: time="2025-10-13T05:52:46.688605458Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Oct 13 05:52:46.688937 kubelet[3192]: I1013 05:52:46.688909 3192 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Oct 13 05:52:47.753785 systemd[1]: Created slice kubepods-besteffort-poda853a74c_bf1b_471c_b67c_1404ebfdbd7d.slice - libcontainer container kubepods-besteffort-poda853a74c_bf1b_471c_b67c_1404ebfdbd7d.slice. Oct 13 05:52:47.823095 kubelet[3192]: I1013 05:52:47.823003 3192 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a853a74c-bf1b-471c-b67c-1404ebfdbd7d-lib-modules\") pod \"kube-proxy-hpcft\" (UID: \"a853a74c-bf1b-471c-b67c-1404ebfdbd7d\") " pod="kube-system/kube-proxy-hpcft" Oct 13 05:52:47.823095 kubelet[3192]: I1013 05:52:47.823058 3192 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7fp9r\" (UniqueName: \"kubernetes.io/projected/a853a74c-bf1b-471c-b67c-1404ebfdbd7d-kube-api-access-7fp9r\") pod \"kube-proxy-hpcft\" (UID: \"a853a74c-bf1b-471c-b67c-1404ebfdbd7d\") " pod="kube-system/kube-proxy-hpcft" Oct 13 05:52:47.823421 kubelet[3192]: I1013 05:52:47.823179 3192 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a853a74c-bf1b-471c-b67c-1404ebfdbd7d-kube-proxy\") pod \"kube-proxy-hpcft\" (UID: \"a853a74c-bf1b-471c-b67c-1404ebfdbd7d\") " pod="kube-system/kube-proxy-hpcft" Oct 13 05:52:47.823421 kubelet[3192]: I1013 05:52:47.823199 3192 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a853a74c-bf1b-471c-b67c-1404ebfdbd7d-xtables-lock\") pod \"kube-proxy-hpcft\" (UID: \"a853a74c-bf1b-471c-b67c-1404ebfdbd7d\") " pod="kube-system/kube-proxy-hpcft" Oct 13 05:52:47.933534 systemd[1]: Created slice 
kubepods-besteffort-poda5b9869a_e409_46e7_9f4e_2145be664453.slice - libcontainer container kubepods-besteffort-poda5b9869a_e409_46e7_9f4e_2145be664453.slice. Oct 13 05:52:48.024137 kubelet[3192]: I1013 05:52:48.024026 3192 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/a5b9869a-e409-46e7-9f4e-2145be664453-var-lib-calico\") pod \"tigera-operator-755d956888-c5lxr\" (UID: \"a5b9869a-e409-46e7-9f4e-2145be664453\") " pod="tigera-operator/tigera-operator-755d956888-c5lxr" Oct 13 05:52:48.024137 kubelet[3192]: I1013 05:52:48.024069 3192 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmscz\" (UniqueName: \"kubernetes.io/projected/a5b9869a-e409-46e7-9f4e-2145be664453-kube-api-access-bmscz\") pod \"tigera-operator-755d956888-c5lxr\" (UID: \"a5b9869a-e409-46e7-9f4e-2145be664453\") " pod="tigera-operator/tigera-operator-755d956888-c5lxr" Oct 13 05:52:48.063095 containerd[1726]: time="2025-10-13T05:52:48.063047081Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hpcft,Uid:a853a74c-bf1b-471c-b67c-1404ebfdbd7d,Namespace:kube-system,Attempt:0,}" Oct 13 05:52:48.106214 containerd[1726]: time="2025-10-13T05:52:48.105380531Z" level=info msg="connecting to shim bb0bf4c29c666b5eae50efcbefe5cf622b5a333b0a3c4b96e13baedbf5a49870" address="unix:///run/containerd/s/aa8a4264dafaf5cf7b1a1a6805cbc9d7bf98b431ac9ed9a72f59c4f033c498be" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:52:48.133358 systemd[1]: Started cri-containerd-bb0bf4c29c666b5eae50efcbefe5cf622b5a333b0a3c4b96e13baedbf5a49870.scope - libcontainer container bb0bf4c29c666b5eae50efcbefe5cf622b5a333b0a3c4b96e13baedbf5a49870. 
Oct 13 05:52:48.159379 containerd[1726]: time="2025-10-13T05:52:48.159344179Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hpcft,Uid:a853a74c-bf1b-471c-b67c-1404ebfdbd7d,Namespace:kube-system,Attempt:0,} returns sandbox id \"bb0bf4c29c666b5eae50efcbefe5cf622b5a333b0a3c4b96e13baedbf5a49870\"" Oct 13 05:52:48.168292 containerd[1726]: time="2025-10-13T05:52:48.168261749Z" level=info msg="CreateContainer within sandbox \"bb0bf4c29c666b5eae50efcbefe5cf622b5a333b0a3c4b96e13baedbf5a49870\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Oct 13 05:52:48.192986 containerd[1726]: time="2025-10-13T05:52:48.192130551Z" level=info msg="Container 4a7826d84b034de7b292ec8ccd6df09d872bdaee4d1dcf8c5ec87d91424b2751: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:52:48.213726 containerd[1726]: time="2025-10-13T05:52:48.213695927Z" level=info msg="CreateContainer within sandbox \"bb0bf4c29c666b5eae50efcbefe5cf622b5a333b0a3c4b96e13baedbf5a49870\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"4a7826d84b034de7b292ec8ccd6df09d872bdaee4d1dcf8c5ec87d91424b2751\"" Oct 13 05:52:48.214461 containerd[1726]: time="2025-10-13T05:52:48.214433023Z" level=info msg="StartContainer for \"4a7826d84b034de7b292ec8ccd6df09d872bdaee4d1dcf8c5ec87d91424b2751\"" Oct 13 05:52:48.216018 containerd[1726]: time="2025-10-13T05:52:48.215991474Z" level=info msg="connecting to shim 4a7826d84b034de7b292ec8ccd6df09d872bdaee4d1dcf8c5ec87d91424b2751" address="unix:///run/containerd/s/aa8a4264dafaf5cf7b1a1a6805cbc9d7bf98b431ac9ed9a72f59c4f033c498be" protocol=ttrpc version=3 Oct 13 05:52:48.235351 systemd[1]: Started cri-containerd-4a7826d84b034de7b292ec8ccd6df09d872bdaee4d1dcf8c5ec87d91424b2751.scope - libcontainer container 4a7826d84b034de7b292ec8ccd6df09d872bdaee4d1dcf8c5ec87d91424b2751. 
Oct 13 05:52:48.238998 containerd[1726]: time="2025-10-13T05:52:48.238871461Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-c5lxr,Uid:a5b9869a-e409-46e7-9f4e-2145be664453,Namespace:tigera-operator,Attempt:0,}" Oct 13 05:52:48.271235 containerd[1726]: time="2025-10-13T05:52:48.271207068Z" level=info msg="StartContainer for \"4a7826d84b034de7b292ec8ccd6df09d872bdaee4d1dcf8c5ec87d91424b2751\" returns successfully" Oct 13 05:52:48.280644 containerd[1726]: time="2025-10-13T05:52:48.280531517Z" level=info msg="connecting to shim 0f7b50f01aaef3feb47979bb1db7c2467e527e8ae80d1a83cd6c016245244d09" address="unix:///run/containerd/s/b08fba1000c1cb2a4f81d38c6cc000bc366c2054d671417bba2144db8b49ebcb" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:52:48.302331 systemd[1]: Started cri-containerd-0f7b50f01aaef3feb47979bb1db7c2467e527e8ae80d1a83cd6c016245244d09.scope - libcontainer container 0f7b50f01aaef3feb47979bb1db7c2467e527e8ae80d1a83cd6c016245244d09. Oct 13 05:52:48.350576 containerd[1726]: time="2025-10-13T05:52:48.350537144Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-c5lxr,Uid:a5b9869a-e409-46e7-9f4e-2145be664453,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"0f7b50f01aaef3feb47979bb1db7c2467e527e8ae80d1a83cd6c016245244d09\"" Oct 13 05:52:48.354050 containerd[1726]: time="2025-10-13T05:52:48.354021987Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\"" Oct 13 05:52:49.763285 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount302544520.mount: Deactivated successfully. 
Oct 13 05:52:50.325887 containerd[1726]: time="2025-10-13T05:52:50.325842816Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:52:50.331506 containerd[1726]: time="2025-10-13T05:52:50.331466446Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=25062609" Oct 13 05:52:50.333854 containerd[1726]: time="2025-10-13T05:52:50.333815792Z" level=info msg="ImageCreate event name:\"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:52:50.337878 containerd[1726]: time="2025-10-13T05:52:50.337834087Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:52:50.338251 containerd[1726]: time="2025-10-13T05:52:50.338228523Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"25058604\" in 1.984090894s" Oct 13 05:52:50.338303 containerd[1726]: time="2025-10-13T05:52:50.338259026Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\"" Oct 13 05:52:50.343689 containerd[1726]: time="2025-10-13T05:52:50.343655905Z" level=info msg="CreateContainer within sandbox \"0f7b50f01aaef3feb47979bb1db7c2467e527e8ae80d1a83cd6c016245244d09\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Oct 13 05:52:50.363536 containerd[1726]: time="2025-10-13T05:52:50.363505199Z" level=info msg="Container 
01089a8eca24ed167befc5d2d3a8fceada7c9c72d31ba5964b2183b420afe3b1: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:52:50.375008 containerd[1726]: time="2025-10-13T05:52:50.374980117Z" level=info msg="CreateContainer within sandbox \"0f7b50f01aaef3feb47979bb1db7c2467e527e8ae80d1a83cd6c016245244d09\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"01089a8eca24ed167befc5d2d3a8fceada7c9c72d31ba5964b2183b420afe3b1\"" Oct 13 05:52:50.375905 containerd[1726]: time="2025-10-13T05:52:50.375386603Z" level=info msg="StartContainer for \"01089a8eca24ed167befc5d2d3a8fceada7c9c72d31ba5964b2183b420afe3b1\"" Oct 13 05:52:50.376436 containerd[1726]: time="2025-10-13T05:52:50.376380863Z" level=info msg="connecting to shim 01089a8eca24ed167befc5d2d3a8fceada7c9c72d31ba5964b2183b420afe3b1" address="unix:///run/containerd/s/b08fba1000c1cb2a4f81d38c6cc000bc366c2054d671417bba2144db8b49ebcb" protocol=ttrpc version=3 Oct 13 05:52:50.397316 systemd[1]: Started cri-containerd-01089a8eca24ed167befc5d2d3a8fceada7c9c72d31ba5964b2183b420afe3b1.scope - libcontainer container 01089a8eca24ed167befc5d2d3a8fceada7c9c72d31ba5964b2183b420afe3b1. 
Oct 13 05:52:50.422406 containerd[1726]: time="2025-10-13T05:52:50.422372623Z" level=info msg="StartContainer for \"01089a8eca24ed167befc5d2d3a8fceada7c9c72d31ba5964b2183b420afe3b1\" returns successfully" Oct 13 05:52:50.671868 kubelet[3192]: I1013 05:52:50.671375 3192 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-hpcft" podStartSLOduration=3.671358262 podStartE2EDuration="3.671358262s" podCreationTimestamp="2025-10-13 05:52:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 05:52:48.66460004 +0000 UTC m=+6.151370564" watchObservedRunningTime="2025-10-13 05:52:50.671358262 +0000 UTC m=+8.158128865" Oct 13 05:52:51.562874 kubelet[3192]: I1013 05:52:51.562490 3192 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-755d956888-c5lxr" podStartSLOduration=2.575919009 podStartE2EDuration="4.56246956s" podCreationTimestamp="2025-10-13 05:52:47 +0000 UTC" firstStartedPulling="2025-10-13 05:52:48.352524401 +0000 UTC m=+5.839294927" lastFinishedPulling="2025-10-13 05:52:50.339074964 +0000 UTC m=+7.825845478" observedRunningTime="2025-10-13 05:52:50.672545164 +0000 UTC m=+8.159315682" watchObservedRunningTime="2025-10-13 05:52:51.56246956 +0000 UTC m=+9.049240081" Oct 13 05:52:56.221890 sudo[2156]: pam_unix(sudo:session): session closed for user root Oct 13 05:52:56.337346 sshd[2155]: Connection closed by 10.200.16.10 port 41590 Oct 13 05:52:56.336740 sshd-session[2152]: pam_unix(sshd:session): session closed for user core Oct 13 05:52:56.341592 systemd-logind[1700]: Session 9 logged out. Waiting for processes to exit. Oct 13 05:52:56.343910 systemd[1]: sshd@6-10.200.4.24:22-10.200.16.10:41590.service: Deactivated successfully. Oct 13 05:52:56.348419 systemd[1]: session-9.scope: Deactivated successfully. 
Oct 13 05:52:56.348635 systemd[1]: session-9.scope: Consumed 3.570s CPU time, 232.5M memory peak. Oct 13 05:52:56.352582 systemd-logind[1700]: Removed session 9. Oct 13 05:52:59.767034 systemd[1]: Created slice kubepods-besteffort-pod1d4c0ca9_32eb_4641_8c84_f780bfe63c0c.slice - libcontainer container kubepods-besteffort-pod1d4c0ca9_32eb_4641_8c84_f780bfe63c0c.slice. Oct 13 05:52:59.798664 kubelet[3192]: I1013 05:52:59.798626 3192 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrtv9\" (UniqueName: \"kubernetes.io/projected/1d4c0ca9-32eb-4641-8c84-f780bfe63c0c-kube-api-access-wrtv9\") pod \"calico-typha-666d766fc-dt2hk\" (UID: \"1d4c0ca9-32eb-4641-8c84-f780bfe63c0c\") " pod="calico-system/calico-typha-666d766fc-dt2hk" Oct 13 05:52:59.799226 kubelet[3192]: I1013 05:52:59.799099 3192 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/1d4c0ca9-32eb-4641-8c84-f780bfe63c0c-typha-certs\") pod \"calico-typha-666d766fc-dt2hk\" (UID: \"1d4c0ca9-32eb-4641-8c84-f780bfe63c0c\") " pod="calico-system/calico-typha-666d766fc-dt2hk" Oct 13 05:52:59.799226 kubelet[3192]: I1013 05:52:59.799130 3192 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1d4c0ca9-32eb-4641-8c84-f780bfe63c0c-tigera-ca-bundle\") pod \"calico-typha-666d766fc-dt2hk\" (UID: \"1d4c0ca9-32eb-4641-8c84-f780bfe63c0c\") " pod="calico-system/calico-typha-666d766fc-dt2hk" Oct 13 05:53:00.073208 containerd[1726]: time="2025-10-13T05:53:00.073076824Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-666d766fc-dt2hk,Uid:1d4c0ca9-32eb-4641-8c84-f780bfe63c0c,Namespace:calico-system,Attempt:0,}" Oct 13 05:53:00.121441 systemd[1]: Created slice kubepods-besteffort-pod13f70649_c0b2_42a1_943b_830675519e54.slice - libcontainer container 
kubepods-besteffort-pod13f70649_c0b2_42a1_943b_830675519e54.slice. Oct 13 05:53:00.125976 containerd[1726]: time="2025-10-13T05:53:00.125928735Z" level=info msg="connecting to shim 87e0890ca07737f8f736229cf8274a9f624de58c10181d3b375354f10e9edec6" address="unix:///run/containerd/s/2cc68746482069b646a75c0562369bd90de7e1bc9f3f3c295caea95b471d0574" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:53:00.160332 systemd[1]: Started cri-containerd-87e0890ca07737f8f736229cf8274a9f624de58c10181d3b375354f10e9edec6.scope - libcontainer container 87e0890ca07737f8f736229cf8274a9f624de58c10181d3b375354f10e9edec6. Oct 13 05:53:00.202096 kubelet[3192]: I1013 05:53:00.201089 3192 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/13f70649-c0b2-42a1-943b-830675519e54-tigera-ca-bundle\") pod \"calico-node-qczxh\" (UID: \"13f70649-c0b2-42a1-943b-830675519e54\") " pod="calico-system/calico-node-qczxh" Oct 13 05:53:00.202096 kubelet[3192]: I1013 05:53:00.201875 3192 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/13f70649-c0b2-42a1-943b-830675519e54-cni-log-dir\") pod \"calico-node-qczxh\" (UID: \"13f70649-c0b2-42a1-943b-830675519e54\") " pod="calico-system/calico-node-qczxh" Oct 13 05:53:00.202096 kubelet[3192]: I1013 05:53:00.201908 3192 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/13f70649-c0b2-42a1-943b-830675519e54-node-certs\") pod \"calico-node-qczxh\" (UID: \"13f70649-c0b2-42a1-943b-830675519e54\") " pod="calico-system/calico-node-qczxh" Oct 13 05:53:00.202096 kubelet[3192]: I1013 05:53:00.201967 3192 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: 
\"kubernetes.io/host-path/13f70649-c0b2-42a1-943b-830675519e54-cni-net-dir\") pod \"calico-node-qczxh\" (UID: \"13f70649-c0b2-42a1-943b-830675519e54\") " pod="calico-system/calico-node-qczxh" Oct 13 05:53:00.202096 kubelet[3192]: I1013 05:53:00.201986 3192 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/13f70649-c0b2-42a1-943b-830675519e54-policysync\") pod \"calico-node-qczxh\" (UID: \"13f70649-c0b2-42a1-943b-830675519e54\") " pod="calico-system/calico-node-qczxh" Oct 13 05:53:00.202324 kubelet[3192]: I1013 05:53:00.202028 3192 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/13f70649-c0b2-42a1-943b-830675519e54-var-run-calico\") pod \"calico-node-qczxh\" (UID: \"13f70649-c0b2-42a1-943b-830675519e54\") " pod="calico-system/calico-node-qczxh" Oct 13 05:53:00.202324 kubelet[3192]: I1013 05:53:00.202049 3192 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/13f70649-c0b2-42a1-943b-830675519e54-xtables-lock\") pod \"calico-node-qczxh\" (UID: \"13f70649-c0b2-42a1-943b-830675519e54\") " pod="calico-system/calico-node-qczxh" Oct 13 05:53:00.202604 kubelet[3192]: I1013 05:53:00.202405 3192 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wgtlz\" (UniqueName: \"kubernetes.io/projected/13f70649-c0b2-42a1-943b-830675519e54-kube-api-access-wgtlz\") pod \"calico-node-qczxh\" (UID: \"13f70649-c0b2-42a1-943b-830675519e54\") " pod="calico-system/calico-node-qczxh" Oct 13 05:53:00.202604 kubelet[3192]: I1013 05:53:00.202439 3192 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: 
\"kubernetes.io/host-path/13f70649-c0b2-42a1-943b-830675519e54-flexvol-driver-host\") pod \"calico-node-qczxh\" (UID: \"13f70649-c0b2-42a1-943b-830675519e54\") " pod="calico-system/calico-node-qczxh" Oct 13 05:53:00.202604 kubelet[3192]: I1013 05:53:00.202486 3192 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/13f70649-c0b2-42a1-943b-830675519e54-cni-bin-dir\") pod \"calico-node-qczxh\" (UID: \"13f70649-c0b2-42a1-943b-830675519e54\") " pod="calico-system/calico-node-qczxh" Oct 13 05:53:00.202604 kubelet[3192]: I1013 05:53:00.202514 3192 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/13f70649-c0b2-42a1-943b-830675519e54-lib-modules\") pod \"calico-node-qczxh\" (UID: \"13f70649-c0b2-42a1-943b-830675519e54\") " pod="calico-system/calico-node-qczxh" Oct 13 05:53:00.202604 kubelet[3192]: I1013 05:53:00.202558 3192 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/13f70649-c0b2-42a1-943b-830675519e54-var-lib-calico\") pod \"calico-node-qczxh\" (UID: \"13f70649-c0b2-42a1-943b-830675519e54\") " pod="calico-system/calico-node-qczxh" Oct 13 05:53:00.207758 containerd[1726]: time="2025-10-13T05:53:00.207724931Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-666d766fc-dt2hk,Uid:1d4c0ca9-32eb-4641-8c84-f780bfe63c0c,Namespace:calico-system,Attempt:0,} returns sandbox id \"87e0890ca07737f8f736229cf8274a9f624de58c10181d3b375354f10e9edec6\"" Oct 13 05:53:00.209202 containerd[1726]: time="2025-10-13T05:53:00.209163324Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\"" Oct 13 05:53:00.305436 kubelet[3192]: E1013 05:53:00.305386 3192 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of 
JSON input Oct 13 05:53:00.305436 kubelet[3192]: W1013 05:53:00.305432 3192 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:53:00.305554 kubelet[3192]: E1013 05:53:00.305456 3192 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:53:00.311190 kubelet[3192]: E1013 05:53:00.308468 3192 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:53:00.311190 kubelet[3192]: W1013 05:53:00.308511 3192 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:53:00.311190 kubelet[3192]: E1013 05:53:00.308526 3192 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:53:00.316827 kubelet[3192]: E1013 05:53:00.316807 3192 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:53:00.316827 kubelet[3192]: W1013 05:53:00.316821 3192 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:53:00.316941 kubelet[3192]: E1013 05:53:00.316835 3192 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:53:00.407668 kubelet[3192]: E1013 05:53:00.407363 3192 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2nwjz" podUID="92fa068c-877d-4748-8370-0fa59cfeb840" Oct 13 05:53:00.424444 containerd[1726]: time="2025-10-13T05:53:00.424403414Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-qczxh,Uid:13f70649-c0b2-42a1-943b-830675519e54,Namespace:calico-system,Attempt:0,}" Oct 13 05:53:00.474278 containerd[1726]: time="2025-10-13T05:53:00.474240248Z" level=info msg="connecting to shim db067292c35c3ef5df0f9acd5ebcdabea23c6b564b829848a03e2795b5403f59" address="unix:///run/containerd/s/7f815196c74d7ecb7d0f58aa9ea8786c8fcf88926a32c3a776e6482b0caf3b35" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:53:00.491002 kubelet[3192]: E1013 05:53:00.490976 3192 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:53:00.491002 kubelet[3192]: W1013 05:53:00.490995 3192 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:53:00.491130 kubelet[3192]: E1013 05:53:00.491012 3192 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:53:00.491160 kubelet[3192]: E1013 05:53:00.491151 3192 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:53:00.491160 kubelet[3192]: W1013 05:53:00.491157 3192 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:53:00.491244 kubelet[3192]: E1013 05:53:00.491165 3192 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:53:00.491336 kubelet[3192]: E1013 05:53:00.491321 3192 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:53:00.491369 kubelet[3192]: W1013 05:53:00.491330 3192 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:53:00.491369 kubelet[3192]: E1013 05:53:00.491347 3192 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:53:00.491537 kubelet[3192]: E1013 05:53:00.491525 3192 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:53:00.491537 kubelet[3192]: W1013 05:53:00.491533 3192 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:53:00.491596 kubelet[3192]: E1013 05:53:00.491540 3192 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:53:00.491688 kubelet[3192]: E1013 05:53:00.491677 3192 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:53:00.491688 kubelet[3192]: W1013 05:53:00.491684 3192 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:53:00.491746 kubelet[3192]: E1013 05:53:00.491691 3192 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:53:00.491801 kubelet[3192]: E1013 05:53:00.491790 3192 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:53:00.491801 kubelet[3192]: W1013 05:53:00.491797 3192 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:53:00.491852 kubelet[3192]: E1013 05:53:00.491803 3192 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:53:00.491927 kubelet[3192]: E1013 05:53:00.491916 3192 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:53:00.491927 kubelet[3192]: W1013 05:53:00.491922 3192 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:53:00.491978 kubelet[3192]: E1013 05:53:00.491928 3192 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:53:00.492072 kubelet[3192]: E1013 05:53:00.492060 3192 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:53:00.492072 kubelet[3192]: W1013 05:53:00.492068 3192 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:53:00.492126 kubelet[3192]: E1013 05:53:00.492075 3192 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:53:00.492222 kubelet[3192]: E1013 05:53:00.492210 3192 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:53:00.492222 kubelet[3192]: W1013 05:53:00.492217 3192 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:53:00.492284 kubelet[3192]: E1013 05:53:00.492224 3192 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:53:00.492358 kubelet[3192]: E1013 05:53:00.492347 3192 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:53:00.492358 kubelet[3192]: W1013 05:53:00.492354 3192 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:53:00.492439 kubelet[3192]: E1013 05:53:00.492360 3192 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:53:00.492465 kubelet[3192]: E1013 05:53:00.492454 3192 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:53:00.492465 kubelet[3192]: W1013 05:53:00.492459 3192 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:53:00.492516 kubelet[3192]: E1013 05:53:00.492464 3192 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:53:00.493205 kubelet[3192]: E1013 05:53:00.492559 3192 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:53:00.493205 kubelet[3192]: W1013 05:53:00.492565 3192 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:53:00.493205 kubelet[3192]: E1013 05:53:00.492579 3192 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:53:00.493205 kubelet[3192]: E1013 05:53:00.492679 3192 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:53:00.493205 kubelet[3192]: W1013 05:53:00.492683 3192 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:53:00.493205 kubelet[3192]: E1013 05:53:00.492690 3192 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:53:00.493205 kubelet[3192]: E1013 05:53:00.492852 3192 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:53:00.493205 kubelet[3192]: W1013 05:53:00.492860 3192 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:53:00.493205 kubelet[3192]: E1013 05:53:00.492869 3192 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:53:00.493205 kubelet[3192]: E1013 05:53:00.493130 3192 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:53:00.493842 kubelet[3192]: W1013 05:53:00.493139 3192 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:53:00.493842 kubelet[3192]: E1013 05:53:00.493150 3192 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:53:00.493375 systemd[1]: Started cri-containerd-db067292c35c3ef5df0f9acd5ebcdabea23c6b564b829848a03e2795b5403f59.scope - libcontainer container db067292c35c3ef5df0f9acd5ebcdabea23c6b564b829848a03e2795b5403f59. 
Oct 13 05:53:00.494083 kubelet[3192]: E1013 05:53:00.494070 3192 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 13 05:53:00.494115 kubelet[3192]: W1013 05:53:00.494084 3192 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 13 05:53:00.494115 kubelet[3192]: E1013 05:53:00.494097 3192 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 13 05:53:00.494306 kubelet[3192]: E1013 05:53:00.494297 3192 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 13 05:53:00.494348 kubelet[3192]: W1013 05:53:00.494307 3192 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 13 05:53:00.494348 kubelet[3192]: E1013 05:53:00.494315 3192 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 13 05:53:00.494525 kubelet[3192]: E1013 05:53:00.494420 3192 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 13 05:53:00.494525 kubelet[3192]: W1013 05:53:00.494437 3192 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 13 05:53:00.494525 kubelet[3192]: E1013 05:53:00.494444 3192 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 13 05:53:00.494652 kubelet[3192]: E1013 05:53:00.494636 3192 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 13 05:53:00.494682 kubelet[3192]: W1013 05:53:00.494658 3192 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 13 05:53:00.494682 kubelet[3192]: E1013 05:53:00.494666 3192 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 13 05:53:00.494943 kubelet[3192]: E1013 05:53:00.494763 3192 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 13 05:53:00.494943 kubelet[3192]: W1013 05:53:00.494769 3192 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 13 05:53:00.494943 kubelet[3192]: E1013 05:53:00.494775 3192 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 13 05:53:00.505539 kubelet[3192]: E1013 05:53:00.505517 3192 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 13 05:53:00.505539 kubelet[3192]: W1013 05:53:00.505532 3192 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 13 05:53:00.505671 kubelet[3192]: E1013 05:53:00.505545 3192 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 13 05:53:00.505671 kubelet[3192]: I1013 05:53:00.505569 3192 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/92fa068c-877d-4748-8370-0fa59cfeb840-registration-dir\") pod \"csi-node-driver-2nwjz\" (UID: \"92fa068c-877d-4748-8370-0fa59cfeb840\") " pod="calico-system/csi-node-driver-2nwjz"
Oct 13 05:53:00.505814 kubelet[3192]: E1013 05:53:00.505783 3192 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 13 05:53:00.505814 kubelet[3192]: W1013 05:53:00.505811 3192 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 13 05:53:00.505870 kubelet[3192]: E1013 05:53:00.505821 3192 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 13 05:53:00.505870 kubelet[3192]: I1013 05:53:00.505851 3192 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/92fa068c-877d-4748-8370-0fa59cfeb840-kubelet-dir\") pod \"csi-node-driver-2nwjz\" (UID: \"92fa068c-877d-4748-8370-0fa59cfeb840\") " pod="calico-system/csi-node-driver-2nwjz"
Oct 13 05:53:00.506331 kubelet[3192]: E1013 05:53:00.506138 3192 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 13 05:53:00.506331 kubelet[3192]: W1013 05:53:00.506327 3192 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 13 05:53:00.506410 kubelet[3192]: E1013 05:53:00.506339 3192 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 13 05:53:00.506410 kubelet[3192]: I1013 05:53:00.506364 3192 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/92fa068c-877d-4748-8370-0fa59cfeb840-socket-dir\") pod \"csi-node-driver-2nwjz\" (UID: \"92fa068c-877d-4748-8370-0fa59cfeb840\") " pod="calico-system/csi-node-driver-2nwjz"
Oct 13 05:53:00.506700 kubelet[3192]: E1013 05:53:00.506684 3192 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 13 05:53:00.506700 kubelet[3192]: W1013 05:53:00.506696 3192 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 13 05:53:00.506766 kubelet[3192]: E1013 05:53:00.506707 3192 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 13 05:53:00.506894 kubelet[3192]: I1013 05:53:00.506784 3192 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/92fa068c-877d-4748-8370-0fa59cfeb840-varrun\") pod \"csi-node-driver-2nwjz\" (UID: \"92fa068c-877d-4748-8370-0fa59cfeb840\") " pod="calico-system/csi-node-driver-2nwjz"
Oct 13 05:53:00.507042 kubelet[3192]: E1013 05:53:00.507032 3192 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 13 05:53:00.507074 kubelet[3192]: W1013 05:53:00.507043 3192 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 13 05:53:00.507074 kubelet[3192]: E1013 05:53:00.507053 3192 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 13 05:53:00.507218 kubelet[3192]: E1013 05:53:00.507208 3192 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 13 05:53:00.510200 kubelet[3192]: W1013 05:53:00.507218 3192 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 13 05:53:00.510200 kubelet[3192]: E1013 05:53:00.507226 3192 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 13 05:53:00.510200 kubelet[3192]: E1013 05:53:00.507370 3192 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 13 05:53:00.510200 kubelet[3192]: W1013 05:53:00.507375 3192 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 13 05:53:00.510200 kubelet[3192]: E1013 05:53:00.507383 3192 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 13 05:53:00.510200 kubelet[3192]: E1013 05:53:00.507505 3192 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 13 05:53:00.510200 kubelet[3192]: W1013 05:53:00.507511 3192 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 13 05:53:00.510200 kubelet[3192]: E1013 05:53:00.507518 3192 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 13 05:53:00.510200 kubelet[3192]: E1013 05:53:00.507632 3192 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 13 05:53:00.510200 kubelet[3192]: W1013 05:53:00.507638 3192 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 13 05:53:00.510513 kubelet[3192]: E1013 05:53:00.507651 3192 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 13 05:53:00.510513 kubelet[3192]: I1013 05:53:00.507679 3192 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xchv4\" (UniqueName: \"kubernetes.io/projected/92fa068c-877d-4748-8370-0fa59cfeb840-kube-api-access-xchv4\") pod \"csi-node-driver-2nwjz\" (UID: \"92fa068c-877d-4748-8370-0fa59cfeb840\") " pod="calico-system/csi-node-driver-2nwjz"
Oct 13 05:53:00.510513 kubelet[3192]: E1013 05:53:00.507786 3192 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 13 05:53:00.510513 kubelet[3192]: W1013 05:53:00.507792 3192 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 13 05:53:00.510513 kubelet[3192]: E1013 05:53:00.507811 3192 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 13 05:53:00.510513 kubelet[3192]: E1013 05:53:00.507916 3192 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 13 05:53:00.510513 kubelet[3192]: W1013 05:53:00.507921 3192 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 13 05:53:00.510513 kubelet[3192]: E1013 05:53:00.507927 3192 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 13 05:53:00.510513 kubelet[3192]: E1013 05:53:00.508070 3192 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 13 05:53:00.510723 kubelet[3192]: W1013 05:53:00.508076 3192 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 13 05:53:00.510723 kubelet[3192]: E1013 05:53:00.508083 3192 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 13 05:53:00.510723 kubelet[3192]: E1013 05:53:00.508221 3192 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 13 05:53:00.510723 kubelet[3192]: W1013 05:53:00.508226 3192 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 13 05:53:00.510723 kubelet[3192]: E1013 05:53:00.508233 3192 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 13 05:53:00.510723 kubelet[3192]: E1013 05:53:00.508353 3192 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 13 05:53:00.510723 kubelet[3192]: W1013 05:53:00.508365 3192 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 13 05:53:00.510723 kubelet[3192]: E1013 05:53:00.508372 3192 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 13 05:53:00.510723 kubelet[3192]: E1013 05:53:00.508718 3192 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 13 05:53:00.510723 kubelet[3192]: W1013 05:53:00.508725 3192 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 13 05:53:00.510947 kubelet[3192]: E1013 05:53:00.508733 3192 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 13 05:53:00.533079 containerd[1726]: time="2025-10-13T05:53:00.532866664Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-qczxh,Uid:13f70649-c0b2-42a1-943b-830675519e54,Namespace:calico-system,Attempt:0,} returns sandbox id \"db067292c35c3ef5df0f9acd5ebcdabea23c6b564b829848a03e2795b5403f59\""
Oct 13 05:53:00.609202 kubelet[3192]: E1013 05:53:00.609145 3192 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 13 05:53:00.609202 kubelet[3192]: W1013 05:53:00.609180 3192 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 13 05:53:00.609676 kubelet[3192]: E1013 05:53:00.609651 3192 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 13 05:53:00.609919 kubelet[3192]: E1013 05:53:00.609906 3192 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 13 05:53:00.609980 kubelet[3192]: W1013 05:53:00.609919 3192 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 13 05:53:00.609980 kubelet[3192]: E1013 05:53:00.609932 3192 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 13 05:53:00.610421 kubelet[3192]: E1013 05:53:00.610292 3192 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 13 05:53:00.610421 kubelet[3192]: W1013 05:53:00.610306 3192 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 13 05:53:00.610421 kubelet[3192]: E1013 05:53:00.610318 3192 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 13 05:53:00.611320 kubelet[3192]: E1013 05:53:00.611297 3192 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 13 05:53:00.611395 kubelet[3192]: W1013 05:53:00.611318 3192 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 13 05:53:00.611395 kubelet[3192]: E1013 05:53:00.611339 3192 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 13 05:53:00.612281 kubelet[3192]: E1013 05:53:00.612255 3192 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 13 05:53:00.612753 kubelet[3192]: W1013 05:53:00.612411 3192 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 13 05:53:00.612753 kubelet[3192]: E1013 05:53:00.612430 3192 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 13 05:53:00.613193 kubelet[3192]: E1013 05:53:00.613059 3192 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 13 05:53:00.613193 kubelet[3192]: W1013 05:53:00.613072 3192 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 13 05:53:00.613193 kubelet[3192]: E1013 05:53:00.613094 3192 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 13 05:53:00.613979 kubelet[3192]: E1013 05:53:00.613902 3192 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 13 05:53:00.613979 kubelet[3192]: W1013 05:53:00.613917 3192 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 13 05:53:00.613979 kubelet[3192]: E1013 05:53:00.613930 3192 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 13 05:53:00.614361 kubelet[3192]: E1013 05:53:00.614331 3192 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 13 05:53:00.614673 kubelet[3192]: W1013 05:53:00.614600 3192 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 13 05:53:00.614673 kubelet[3192]: E1013 05:53:00.614617 3192 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 13 05:53:00.615135 kubelet[3192]: E1013 05:53:00.615119 3192 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 13 05:53:00.615135 kubelet[3192]: W1013 05:53:00.615131 3192 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 13 05:53:00.615663 kubelet[3192]: E1013 05:53:00.615144 3192 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 13 05:53:00.616302 kubelet[3192]: E1013 05:53:00.616160 3192 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 13 05:53:00.616302 kubelet[3192]: W1013 05:53:00.616298 3192 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 13 05:53:00.616394 kubelet[3192]: E1013 05:53:00.616314 3192 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 13 05:53:00.616578 kubelet[3192]: E1013 05:53:00.616567 3192 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 13 05:53:00.616837 kubelet[3192]: W1013 05:53:00.616576 3192 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 13 05:53:00.616885 kubelet[3192]: E1013 05:53:00.616840 3192 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 13 05:53:00.617035 kubelet[3192]: E1013 05:53:00.617023 3192 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 13 05:53:00.617066 kubelet[3192]: W1013 05:53:00.617033 3192 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 13 05:53:00.617066 kubelet[3192]: E1013 05:53:00.617044 3192 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 13 05:53:00.617675 kubelet[3192]: E1013 05:53:00.617647 3192 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 13 05:53:00.617675 kubelet[3192]: W1013 05:53:00.617675 3192 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 13 05:53:00.617773 kubelet[3192]: E1013 05:53:00.617687 3192 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 13 05:53:00.617903 kubelet[3192]: E1013 05:53:00.617860 3192 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 13 05:53:00.617903 kubelet[3192]: W1013 05:53:00.617868 3192 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 13 05:53:00.617903 kubelet[3192]: E1013 05:53:00.617877 3192 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 13 05:53:00.618068 kubelet[3192]: E1013 05:53:00.618054 3192 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 13 05:53:00.618068 kubelet[3192]: W1013 05:53:00.618061 3192 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 13 05:53:00.618127 kubelet[3192]: E1013 05:53:00.618070 3192 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 13 05:53:00.618233 kubelet[3192]: E1013 05:53:00.618211 3192 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 13 05:53:00.618233 kubelet[3192]: W1013 05:53:00.618218 3192 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 13 05:53:00.618233 kubelet[3192]: E1013 05:53:00.618225 3192 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 13 05:53:00.618643 kubelet[3192]: E1013 05:53:00.618628 3192 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 13 05:53:00.618643 kubelet[3192]: W1013 05:53:00.618640 3192 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 13 05:53:00.618732 kubelet[3192]: E1013 05:53:00.618650 3192 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 13 05:53:00.618959 kubelet[3192]: E1013 05:53:00.618948 3192 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 13 05:53:00.618959 kubelet[3192]: W1013 05:53:00.618958 3192 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 13 05:53:00.619022 kubelet[3192]: E1013 05:53:00.618968 3192 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 13 05:53:00.619146 kubelet[3192]: E1013 05:53:00.619136 3192 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 13 05:53:00.619409 kubelet[3192]: W1013 05:53:00.619144 3192 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 13 05:53:00.619409 kubelet[3192]: E1013 05:53:00.619392 3192 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 13 05:53:00.619720 kubelet[3192]: E1013 05:53:00.619653 3192 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 13 05:53:00.619720 kubelet[3192]: W1013 05:53:00.619661 3192 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 13 05:53:00.619720 kubelet[3192]: E1013 05:53:00.619671 3192 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 13 05:53:00.619866 kubelet[3192]: E1013 05:53:00.619857 3192 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 13 05:53:00.619935 kubelet[3192]: W1013 05:53:00.619867 3192 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 13 05:53:00.619935 kubelet[3192]: E1013 05:53:00.619877 3192 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 13 05:53:00.620090 kubelet[3192]: E1013 05:53:00.620059 3192 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 13 05:53:00.620090 kubelet[3192]: W1013 05:53:00.620081 3192 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 13 05:53:00.620272 kubelet[3192]: E1013 05:53:00.620090 3192 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 13 05:53:00.620299 kubelet[3192]: E1013 05:53:00.620276 3192 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 13 05:53:00.620299 kubelet[3192]: W1013 05:53:00.620283 3192 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 13 05:53:00.620299 kubelet[3192]: E1013 05:53:00.620291 3192 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 13 05:53:00.620787 kubelet[3192]: E1013 05:53:00.620460 3192 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 13 05:53:00.620787 kubelet[3192]: W1013 05:53:00.620498 3192 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 13 05:53:00.620787 kubelet[3192]: E1013 05:53:00.620507 3192 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 13 05:53:00.620881 kubelet[3192]: E1013 05:53:00.620857 3192 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 13 05:53:00.620881 kubelet[3192]: W1013 05:53:00.620865 3192 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 13 05:53:00.620881 kubelet[3192]: E1013 05:53:00.620874 3192 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 13 05:53:00.626763 kubelet[3192]: E1013 05:53:00.626750 3192 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 13 05:53:00.626763 kubelet[3192]: W1013 05:53:00.626763 3192 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 13 05:53:00.626841 kubelet[3192]: E1013 05:53:00.626774 3192 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 13 05:53:01.507095 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1806059777.mount: Deactivated successfully.
Oct 13 05:53:02.611336 kubelet[3192]: E1013 05:53:02.611296 3192 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2nwjz" podUID="92fa068c-877d-4748-8370-0fa59cfeb840"
Oct 13 05:53:02.657835 containerd[1726]: time="2025-10-13T05:53:02.657796430Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 13 05:53:02.659678 containerd[1726]: time="2025-10-13T05:53:02.659649597Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=35237389"
Oct 13 05:53:02.662519 containerd[1726]: time="2025-10-13T05:53:02.662377526Z" level=info msg="ImageCreate event name:\"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 13 05:53:02.668350 containerd[1726]: time="2025-10-13T05:53:02.667582845Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 13 05:53:02.668350 containerd[1726]: time="2025-10-13T05:53:02.667868504Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"35237243\" in 2.458552683s"
Oct 13 05:53:02.668350 containerd[1726]: time="2025-10-13T05:53:02.667891874Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\""
Oct 13 05:53:02.670389 containerd[1726]: time="2025-10-13T05:53:02.670331089Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\""
Oct 13 05:53:02.689548 containerd[1726]: time="2025-10-13T05:53:02.689518547Z" level=info msg="CreateContainer within sandbox \"87e0890ca07737f8f736229cf8274a9f624de58c10181d3b375354f10e9edec6\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Oct 13 05:53:02.710201 containerd[1726]: time="2025-10-13T05:53:02.709363051Z" level=info msg="Container 0cf00c5801044a73ddce5d5537220d3b0ebd06a3ab622c5a50506296e839d0bc: CDI devices from CRI Config.CDIDevices: []"
Oct 13 05:53:02.714222 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1871315107.mount: Deactivated successfully.
Oct 13 05:53:02.729306 containerd[1726]: time="2025-10-13T05:53:02.729280631Z" level=info msg="CreateContainer within sandbox \"87e0890ca07737f8f736229cf8274a9f624de58c10181d3b375354f10e9edec6\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"0cf00c5801044a73ddce5d5537220d3b0ebd06a3ab622c5a50506296e839d0bc\""
Oct 13 05:53:02.729820 containerd[1726]: time="2025-10-13T05:53:02.729797787Z" level=info msg="StartContainer for \"0cf00c5801044a73ddce5d5537220d3b0ebd06a3ab622c5a50506296e839d0bc\""
Oct 13 05:53:02.730994 containerd[1726]: time="2025-10-13T05:53:02.730965893Z" level=info msg="connecting to shim 0cf00c5801044a73ddce5d5537220d3b0ebd06a3ab622c5a50506296e839d0bc" address="unix:///run/containerd/s/2cc68746482069b646a75c0562369bd90de7e1bc9f3f3c295caea95b471d0574" protocol=ttrpc version=3
Oct 13 05:53:02.750334 systemd[1]: Started cri-containerd-0cf00c5801044a73ddce5d5537220d3b0ebd06a3ab622c5a50506296e839d0bc.scope - libcontainer container 0cf00c5801044a73ddce5d5537220d3b0ebd06a3ab622c5a50506296e839d0bc.
Oct 13 05:53:02.799694 containerd[1726]: time="2025-10-13T05:53:02.799584787Z" level=info msg="StartContainer for \"0cf00c5801044a73ddce5d5537220d3b0ebd06a3ab622c5a50506296e839d0bc\" returns successfully" Oct 13 05:53:03.699118 kubelet[3192]: I1013 05:53:03.698419 3192 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-666d766fc-dt2hk" podStartSLOduration=2.238172112 podStartE2EDuration="4.698403467s" podCreationTimestamp="2025-10-13 05:52:59 +0000 UTC" firstStartedPulling="2025-10-13 05:53:00.208919431 +0000 UTC m=+17.695689958" lastFinishedPulling="2025-10-13 05:53:02.669150806 +0000 UTC m=+20.155921313" observedRunningTime="2025-10-13 05:53:03.69781951 +0000 UTC m=+21.184590032" watchObservedRunningTime="2025-10-13 05:53:03.698403467 +0000 UTC m=+21.185173995" Oct 13 05:53:03.713497 kubelet[3192]: E1013 05:53:03.713420 3192 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:53:03.713497 kubelet[3192]: W1013 05:53:03.713439 3192 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:53:03.713497 kubelet[3192]: E1013 05:53:03.713457 3192 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:53:03.714606 kubelet[3192]: E1013 05:53:03.714575 3192 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:53:03.714606 kubelet[3192]: W1013 05:53:03.714600 3192 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:53:03.714731 kubelet[3192]: E1013 05:53:03.714618 3192 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:53:03.714758 kubelet[3192]: E1013 05:53:03.714742 3192 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:53:03.714758 kubelet[3192]: W1013 05:53:03.714747 3192 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:53:03.714758 kubelet[3192]: E1013 05:53:03.714754 3192 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:53:03.714948 kubelet[3192]: E1013 05:53:03.714923 3192 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:53:03.714948 kubelet[3192]: W1013 05:53:03.714945 3192 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:53:03.715014 kubelet[3192]: E1013 05:53:03.714954 3192 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:53:03.715709 kubelet[3192]: E1013 05:53:03.715110 3192 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:53:03.715709 kubelet[3192]: W1013 05:53:03.715132 3192 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:53:03.715917 kubelet[3192]: E1013 05:53:03.715142 3192 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:53:03.716094 kubelet[3192]: E1013 05:53:03.716040 3192 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:53:03.716094 kubelet[3192]: W1013 05:53:03.716052 3192 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:53:03.716094 kubelet[3192]: E1013 05:53:03.716065 3192 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:53:03.716399 kubelet[3192]: E1013 05:53:03.716341 3192 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:53:03.716399 kubelet[3192]: W1013 05:53:03.716351 3192 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:53:03.716399 kubelet[3192]: E1013 05:53:03.716362 3192 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:53:03.716717 kubelet[3192]: E1013 05:53:03.716642 3192 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:53:03.716717 kubelet[3192]: W1013 05:53:03.716651 3192 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:53:03.716717 kubelet[3192]: E1013 05:53:03.716663 3192 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:53:03.716965 kubelet[3192]: E1013 05:53:03.716903 3192 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:53:03.716965 kubelet[3192]: W1013 05:53:03.716911 3192 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:53:03.716965 kubelet[3192]: E1013 05:53:03.716921 3192 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:53:03.717192 kubelet[3192]: E1013 05:53:03.717107 3192 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:53:03.717192 kubelet[3192]: W1013 05:53:03.717115 3192 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:53:03.717192 kubelet[3192]: E1013 05:53:03.717123 3192 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:53:03.717392 kubelet[3192]: E1013 05:53:03.717356 3192 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:53:03.717392 kubelet[3192]: W1013 05:53:03.717366 3192 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:53:03.717392 kubelet[3192]: E1013 05:53:03.717375 3192 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:53:03.717909 kubelet[3192]: E1013 05:53:03.717839 3192 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:53:03.717909 kubelet[3192]: W1013 05:53:03.717851 3192 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:53:03.717909 kubelet[3192]: E1013 05:53:03.717862 3192 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:53:03.718140 kubelet[3192]: E1013 05:53:03.718100 3192 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:53:03.718140 kubelet[3192]: W1013 05:53:03.718108 3192 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:53:03.718140 kubelet[3192]: E1013 05:53:03.718116 3192 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:53:03.718376 kubelet[3192]: E1013 05:53:03.718337 3192 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:53:03.718376 kubelet[3192]: W1013 05:53:03.718344 3192 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:53:03.718376 kubelet[3192]: E1013 05:53:03.718351 3192 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:53:03.718722 kubelet[3192]: E1013 05:53:03.718568 3192 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:53:03.718722 kubelet[3192]: W1013 05:53:03.718575 3192 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:53:03.718722 kubelet[3192]: E1013 05:53:03.718582 3192 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:53:03.730976 kubelet[3192]: E1013 05:53:03.730956 3192 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:53:03.730976 kubelet[3192]: W1013 05:53:03.730970 3192 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:53:03.731073 kubelet[3192]: E1013 05:53:03.730982 3192 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:53:03.731152 kubelet[3192]: E1013 05:53:03.731138 3192 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:53:03.731212 kubelet[3192]: W1013 05:53:03.731146 3192 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:53:03.731212 kubelet[3192]: E1013 05:53:03.731195 3192 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:53:03.731373 kubelet[3192]: E1013 05:53:03.731350 3192 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:53:03.731373 kubelet[3192]: W1013 05:53:03.731371 3192 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:53:03.731428 kubelet[3192]: E1013 05:53:03.731378 3192 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:53:03.731579 kubelet[3192]: E1013 05:53:03.731562 3192 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:53:03.731579 kubelet[3192]: W1013 05:53:03.731577 3192 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:53:03.731637 kubelet[3192]: E1013 05:53:03.731585 3192 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:53:03.731736 kubelet[3192]: E1013 05:53:03.731723 3192 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:53:03.731736 kubelet[3192]: W1013 05:53:03.731734 3192 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:53:03.731795 kubelet[3192]: E1013 05:53:03.731740 3192 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:53:03.731879 kubelet[3192]: E1013 05:53:03.731856 3192 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:53:03.731879 kubelet[3192]: W1013 05:53:03.731876 3192 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:53:03.731943 kubelet[3192]: E1013 05:53:03.731882 3192 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:53:03.732048 kubelet[3192]: E1013 05:53:03.732024 3192 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:53:03.732048 kubelet[3192]: W1013 05:53:03.732045 3192 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:53:03.732093 kubelet[3192]: E1013 05:53:03.732051 3192 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:53:03.732286 kubelet[3192]: E1013 05:53:03.732274 3192 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:53:03.732286 kubelet[3192]: W1013 05:53:03.732284 3192 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:53:03.732353 kubelet[3192]: E1013 05:53:03.732291 3192 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:53:03.732403 kubelet[3192]: E1013 05:53:03.732393 3192 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:53:03.732403 kubelet[3192]: W1013 05:53:03.732400 3192 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:53:03.732458 kubelet[3192]: E1013 05:53:03.732406 3192 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:53:03.732524 kubelet[3192]: E1013 05:53:03.732501 3192 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:53:03.732524 kubelet[3192]: W1013 05:53:03.732521 3192 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:53:03.732582 kubelet[3192]: E1013 05:53:03.732527 3192 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:53:03.732668 kubelet[3192]: E1013 05:53:03.732647 3192 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:53:03.732668 kubelet[3192]: W1013 05:53:03.732665 3192 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:53:03.732726 kubelet[3192]: E1013 05:53:03.732671 3192 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:53:03.732787 kubelet[3192]: E1013 05:53:03.732777 3192 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:53:03.732787 kubelet[3192]: W1013 05:53:03.732784 3192 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:53:03.732842 kubelet[3192]: E1013 05:53:03.732790 3192 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:53:03.732982 kubelet[3192]: E1013 05:53:03.732959 3192 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:53:03.732982 kubelet[3192]: W1013 05:53:03.732980 3192 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:53:03.733033 kubelet[3192]: E1013 05:53:03.732986 3192 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:53:03.733301 kubelet[3192]: E1013 05:53:03.733207 3192 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:53:03.733301 kubelet[3192]: W1013 05:53:03.733215 3192 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:53:03.733301 kubelet[3192]: E1013 05:53:03.733223 3192 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:53:03.733396 kubelet[3192]: E1013 05:53:03.733314 3192 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:53:03.733396 kubelet[3192]: W1013 05:53:03.733319 3192 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:53:03.733396 kubelet[3192]: E1013 05:53:03.733325 3192 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:53:03.733479 kubelet[3192]: E1013 05:53:03.733427 3192 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:53:03.733479 kubelet[3192]: W1013 05:53:03.733431 3192 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:53:03.733479 kubelet[3192]: E1013 05:53:03.733437 3192 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:53:03.733730 kubelet[3192]: E1013 05:53:03.733702 3192 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:53:03.733730 kubelet[3192]: W1013 05:53:03.733723 3192 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:53:03.733802 kubelet[3192]: E1013 05:53:03.733733 3192 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:53:03.733918 kubelet[3192]: E1013 05:53:03.733893 3192 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:53:03.733918 kubelet[3192]: W1013 05:53:03.733915 3192 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:53:03.733970 kubelet[3192]: E1013 05:53:03.733924 3192 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:53:04.347469 containerd[1726]: time="2025-10-13T05:53:04.347423740Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:53:04.350420 containerd[1726]: time="2025-10-13T05:53:04.350195233Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4446660" Oct 13 05:53:04.354046 containerd[1726]: time="2025-10-13T05:53:04.354009895Z" level=info msg="ImageCreate event name:\"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:53:04.359018 containerd[1726]: time="2025-10-13T05:53:04.358983813Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:53:04.359588 containerd[1726]: time="2025-10-13T05:53:04.359459806Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5939323\" in 1.689091155s" Oct 13 05:53:04.359588 containerd[1726]: time="2025-10-13T05:53:04.359492806Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\"" Oct 13 05:53:04.372140 containerd[1726]: time="2025-10-13T05:53:04.372107491Z" level=info msg="CreateContainer within sandbox \"db067292c35c3ef5df0f9acd5ebcdabea23c6b564b829848a03e2795b5403f59\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Oct 13 05:53:04.400008 containerd[1726]: time="2025-10-13T05:53:04.398292253Z" level=info msg="Container a4a76e292edeb9dfdab278768603a7a1892b57a3b164c79a3322f4d2d9cffd2b: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:53:04.418078 containerd[1726]: time="2025-10-13T05:53:04.418050966Z" level=info msg="CreateContainer within sandbox \"db067292c35c3ef5df0f9acd5ebcdabea23c6b564b829848a03e2795b5403f59\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"a4a76e292edeb9dfdab278768603a7a1892b57a3b164c79a3322f4d2d9cffd2b\"" Oct 13 05:53:04.418840 containerd[1726]: time="2025-10-13T05:53:04.418809803Z" level=info msg="StartContainer for \"a4a76e292edeb9dfdab278768603a7a1892b57a3b164c79a3322f4d2d9cffd2b\"" Oct 13 05:53:04.421133 containerd[1726]: time="2025-10-13T05:53:04.421105419Z" level=info msg="connecting to shim a4a76e292edeb9dfdab278768603a7a1892b57a3b164c79a3322f4d2d9cffd2b" address="unix:///run/containerd/s/7f815196c74d7ecb7d0f58aa9ea8786c8fcf88926a32c3a776e6482b0caf3b35" protocol=ttrpc version=3 Oct 13 05:53:04.442320 systemd[1]: Started cri-containerd-a4a76e292edeb9dfdab278768603a7a1892b57a3b164c79a3322f4d2d9cffd2b.scope - libcontainer container a4a76e292edeb9dfdab278768603a7a1892b57a3b164c79a3322f4d2d9cffd2b. Oct 13 05:53:04.474655 containerd[1726]: time="2025-10-13T05:53:04.474630440Z" level=info msg="StartContainer for \"a4a76e292edeb9dfdab278768603a7a1892b57a3b164c79a3322f4d2d9cffd2b\" returns successfully" Oct 13 05:53:04.480864 systemd[1]: cri-containerd-a4a76e292edeb9dfdab278768603a7a1892b57a3b164c79a3322f4d2d9cffd2b.scope: Deactivated successfully. 
Oct 13 05:53:04.483074 containerd[1726]: time="2025-10-13T05:53:04.483047755Z" level=info msg="received exit event container_id:\"a4a76e292edeb9dfdab278768603a7a1892b57a3b164c79a3322f4d2d9cffd2b\" id:\"a4a76e292edeb9dfdab278768603a7a1892b57a3b164c79a3322f4d2d9cffd2b\" pid:3860 exited_at:{seconds:1760334784 nanos:482711754}" Oct 13 05:53:04.483445 containerd[1726]: time="2025-10-13T05:53:04.483423511Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a4a76e292edeb9dfdab278768603a7a1892b57a3b164c79a3322f4d2d9cffd2b\" id:\"a4a76e292edeb9dfdab278768603a7a1892b57a3b164c79a3322f4d2d9cffd2b\" pid:3860 exited_at:{seconds:1760334784 nanos:482711754}" Oct 13 05:53:04.501095 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a4a76e292edeb9dfdab278768603a7a1892b57a3b164c79a3322f4d2d9cffd2b-rootfs.mount: Deactivated successfully. Oct 13 05:53:04.613620 kubelet[3192]: E1013 05:53:04.613484 3192 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2nwjz" podUID="92fa068c-877d-4748-8370-0fa59cfeb840" Oct 13 05:53:04.687681 kubelet[3192]: I1013 05:53:04.687658 3192 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 13 05:53:06.611434 kubelet[3192]: E1013 05:53:06.611370 3192 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2nwjz" podUID="92fa068c-877d-4748-8370-0fa59cfeb840" Oct 13 05:53:06.718026 kubelet[3192]: I1013 05:53:06.717889 3192 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 13 05:53:07.696227 containerd[1726]: time="2025-10-13T05:53:07.695048577Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Oct 13 05:53:08.610869 kubelet[3192]: E1013 05:53:08.610794 3192 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2nwjz" podUID="92fa068c-877d-4748-8370-0fa59cfeb840" Oct 13 05:53:10.040805 containerd[1726]: time="2025-10-13T05:53:10.040763441Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:53:10.043568 containerd[1726]: time="2025-10-13T05:53:10.043535860Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=70440613" Oct 13 05:53:10.046831 containerd[1726]: time="2025-10-13T05:53:10.046771394Z" level=info msg="ImageCreate event name:\"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:53:10.050341 containerd[1726]: time="2025-10-13T05:53:10.050298037Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:53:10.050840 containerd[1726]: time="2025-10-13T05:53:10.050782695Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"71933316\" in 2.355418776s" Oct 13 05:53:10.050840 containerd[1726]: time="2025-10-13T05:53:10.050815116Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference 
\"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\"" Oct 13 05:53:10.058314 containerd[1726]: time="2025-10-13T05:53:10.058280421Z" level=info msg="CreateContainer within sandbox \"db067292c35c3ef5df0f9acd5ebcdabea23c6b564b829848a03e2795b5403f59\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Oct 13 05:53:10.092093 containerd[1726]: time="2025-10-13T05:53:10.090882602Z" level=info msg="Container 84fb13fd9523c9c87a30ebe50f04a0d29d2dcb15474a3b565a50f2e22ec86e95: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:53:10.106824 containerd[1726]: time="2025-10-13T05:53:10.106799152Z" level=info msg="CreateContainer within sandbox \"db067292c35c3ef5df0f9acd5ebcdabea23c6b564b829848a03e2795b5403f59\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"84fb13fd9523c9c87a30ebe50f04a0d29d2dcb15474a3b565a50f2e22ec86e95\"" Oct 13 05:53:10.107213 containerd[1726]: time="2025-10-13T05:53:10.107139445Z" level=info msg="StartContainer for \"84fb13fd9523c9c87a30ebe50f04a0d29d2dcb15474a3b565a50f2e22ec86e95\"" Oct 13 05:53:10.108675 containerd[1726]: time="2025-10-13T05:53:10.108638753Z" level=info msg="connecting to shim 84fb13fd9523c9c87a30ebe50f04a0d29d2dcb15474a3b565a50f2e22ec86e95" address="unix:///run/containerd/s/7f815196c74d7ecb7d0f58aa9ea8786c8fcf88926a32c3a776e6482b0caf3b35" protocol=ttrpc version=3 Oct 13 05:53:10.130329 systemd[1]: Started cri-containerd-84fb13fd9523c9c87a30ebe50f04a0d29d2dcb15474a3b565a50f2e22ec86e95.scope - libcontainer container 84fb13fd9523c9c87a30ebe50f04a0d29d2dcb15474a3b565a50f2e22ec86e95. 
Oct 13 05:53:10.163979 containerd[1726]: time="2025-10-13T05:53:10.163906178Z" level=info msg="StartContainer for \"84fb13fd9523c9c87a30ebe50f04a0d29d2dcb15474a3b565a50f2e22ec86e95\" returns successfully" Oct 13 05:53:10.611188 kubelet[3192]: E1013 05:53:10.611106 3192 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2nwjz" podUID="92fa068c-877d-4748-8370-0fa59cfeb840" Oct 13 05:53:11.380461 systemd[1]: cri-containerd-84fb13fd9523c9c87a30ebe50f04a0d29d2dcb15474a3b565a50f2e22ec86e95.scope: Deactivated successfully. Oct 13 05:53:11.380729 systemd[1]: cri-containerd-84fb13fd9523c9c87a30ebe50f04a0d29d2dcb15474a3b565a50f2e22ec86e95.scope: Consumed 412ms CPU time, 191.5M memory peak, 171.3M written to disk. Oct 13 05:53:11.381295 containerd[1726]: time="2025-10-13T05:53:11.381245585Z" level=info msg="received exit event container_id:\"84fb13fd9523c9c87a30ebe50f04a0d29d2dcb15474a3b565a50f2e22ec86e95\" id:\"84fb13fd9523c9c87a30ebe50f04a0d29d2dcb15474a3b565a50f2e22ec86e95\" pid:3926 exited_at:{seconds:1760334791 nanos:380338720}" Oct 13 05:53:11.381935 containerd[1726]: time="2025-10-13T05:53:11.381448793Z" level=info msg="TaskExit event in podsandbox handler container_id:\"84fb13fd9523c9c87a30ebe50f04a0d29d2dcb15474a3b565a50f2e22ec86e95\" id:\"84fb13fd9523c9c87a30ebe50f04a0d29d2dcb15474a3b565a50f2e22ec86e95\" pid:3926 exited_at:{seconds:1760334791 nanos:380338720}" Oct 13 05:53:11.400272 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-84fb13fd9523c9c87a30ebe50f04a0d29d2dcb15474a3b565a50f2e22ec86e95-rootfs.mount: Deactivated successfully. 
Oct 13 05:53:11.408016 kubelet[3192]: I1013 05:53:11.407120 3192 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Oct 13 05:53:11.677590 kubelet[3192]: I1013 05:53:11.677472 3192 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c879da08-5ee1-49ec-9e18-c510bd820afb-config-volume\") pod \"coredns-674b8bbfcf-rncrz\" (UID: \"c879da08-5ee1-49ec-9e18-c510bd820afb\") " pod="kube-system/coredns-674b8bbfcf-rncrz" Oct 13 05:53:11.677590 kubelet[3192]: I1013 05:53:11.677510 3192 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2p9r5\" (UniqueName: \"kubernetes.io/projected/c879da08-5ee1-49ec-9e18-c510bd820afb-kube-api-access-2p9r5\") pod \"coredns-674b8bbfcf-rncrz\" (UID: \"c879da08-5ee1-49ec-9e18-c510bd820afb\") " pod="kube-system/coredns-674b8bbfcf-rncrz" Oct 13 05:53:11.778523 kubelet[3192]: E1013 05:53:11.777876 3192 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered Oct 13 05:53:11.778523 kubelet[3192]: E1013 05:53:11.777950 3192 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c879da08-5ee1-49ec-9e18-c510bd820afb-config-volume podName:c879da08-5ee1-49ec-9e18-c510bd820afb nodeName:}" failed. No retries permitted until 2025-10-13 05:53:12.277929911 +0000 UTC m=+29.764700419 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/c879da08-5ee1-49ec-9e18-c510bd820afb-config-volume") pod "coredns-674b8bbfcf-rncrz" (UID: "c879da08-5ee1-49ec-9e18-c510bd820afb") : object "kube-system"/"coredns" not registered Oct 13 05:53:11.914538 kubelet[3192]: I1013 05:53:11.878751 3192 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/be696de5-4e39-46ea-a28d-3409bd9514e0-tigera-ca-bundle\") pod \"calico-kube-controllers-64bc8cdbdb-kghw9\" (UID: \"be696de5-4e39-46ea-a28d-3409bd9514e0\") " pod="calico-system/calico-kube-controllers-64bc8cdbdb-kghw9" Oct 13 05:53:11.914538 kubelet[3192]: I1013 05:53:11.878783 3192 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdnxl\" (UniqueName: \"kubernetes.io/projected/be696de5-4e39-46ea-a28d-3409bd9514e0-kube-api-access-pdnxl\") pod \"calico-kube-controllers-64bc8cdbdb-kghw9\" (UID: \"be696de5-4e39-46ea-a28d-3409bd9514e0\") " pod="calico-system/calico-kube-controllers-64bc8cdbdb-kghw9" Oct 13 05:53:11.783408 systemd[1]: Created slice kubepods-besteffort-podbe696de5_4e39_46ea_a28d_3409bd9514e0.slice - libcontainer container kubepods-besteffort-podbe696de5_4e39_46ea_a28d_3409bd9514e0.slice. Oct 13 05:53:12.078337 systemd[1]: Created slice kubepods-burstable-podc879da08_5ee1_49ec_9e18_c510bd820afb.slice - libcontainer container kubepods-burstable-podc879da08_5ee1_49ec_9e18_c510bd820afb.slice. 
Oct 13 05:53:12.180859 kubelet[3192]: I1013 05:53:12.180816 3192 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dq9fg\" (UniqueName: \"kubernetes.io/projected/846c13f8-9e67-46e6-b031-cc0dfeca9023-kube-api-access-dq9fg\") pod \"coredns-674b8bbfcf-844nx\" (UID: \"846c13f8-9e67-46e6-b031-cc0dfeca9023\") " pod="kube-system/coredns-674b8bbfcf-844nx" Oct 13 05:53:12.180859 kubelet[3192]: I1013 05:53:12.180860 3192 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/846c13f8-9e67-46e6-b031-cc0dfeca9023-config-volume\") pod \"coredns-674b8bbfcf-844nx\" (UID: \"846c13f8-9e67-46e6-b031-cc0dfeca9023\") " pod="kube-system/coredns-674b8bbfcf-844nx" Oct 13 05:53:12.218931 containerd[1726]: time="2025-10-13T05:53:12.218881288Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-64bc8cdbdb-kghw9,Uid:be696de5-4e39-46ea-a28d-3409bd9514e0,Namespace:calico-system,Attempt:0,}" Oct 13 05:53:12.283284 kubelet[3192]: I1013 05:53:12.282965 3192 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e795c0a0-c779-43f5-a016-8498524e219f-whisker-backend-key-pair\") pod \"whisker-9d9df566-jgmdp\" (UID: \"e795c0a0-c779-43f5-a016-8498524e219f\") " pod="calico-system/whisker-9d9df566-jgmdp" Oct 13 05:53:12.283574 kubelet[3192]: I1013 05:53:12.283421 3192 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e795c0a0-c779-43f5-a016-8498524e219f-whisker-ca-bundle\") pod \"whisker-9d9df566-jgmdp\" (UID: \"e795c0a0-c779-43f5-a016-8498524e219f\") " pod="calico-system/whisker-9d9df566-jgmdp" Oct 13 05:53:12.283696 kubelet[3192]: I1013 05:53:12.283594 3192 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvwb5\" (UniqueName: \"kubernetes.io/projected/e795c0a0-c779-43f5-a016-8498524e219f-kube-api-access-tvwb5\") pod \"whisker-9d9df566-jgmdp\" (UID: \"e795c0a0-c779-43f5-a016-8498524e219f\") " pod="calico-system/whisker-9d9df566-jgmdp" Oct 13 05:53:12.309107 systemd[1]: Created slice kubepods-burstable-pod846c13f8_9e67_46e6_b031_cc0dfeca9023.slice - libcontainer container kubepods-burstable-pod846c13f8_9e67_46e6_b031_cc0dfeca9023.slice. Oct 13 05:53:12.324137 containerd[1726]: time="2025-10-13T05:53:12.324091885Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-844nx,Uid:846c13f8-9e67-46e6-b031-cc0dfeca9023,Namespace:kube-system,Attempt:0,}" Oct 13 05:53:12.327216 systemd[1]: Created slice kubepods-besteffort-pode795c0a0_c779_43f5_a016_8498524e219f.slice - libcontainer container kubepods-besteffort-pode795c0a0_c779_43f5_a016_8498524e219f.slice. Oct 13 05:53:12.339404 systemd[1]: Created slice kubepods-besteffort-pod36ee441e_a553_45a0_b9b4_51e7dca487a0.slice - libcontainer container kubepods-besteffort-pod36ee441e_a553_45a0_b9b4_51e7dca487a0.slice. Oct 13 05:53:12.350973 systemd[1]: Created slice kubepods-besteffort-pod2b69a0e3_c65e_46e1_9bc8_e5fa93bccd7d.slice - libcontainer container kubepods-besteffort-pod2b69a0e3_c65e_46e1_9bc8_e5fa93bccd7d.slice. Oct 13 05:53:12.360500 systemd[1]: Created slice kubepods-besteffort-pode6bc447c_28f5_40f9_89ab_682202b1d25c.slice - libcontainer container kubepods-besteffort-pode6bc447c_28f5_40f9_89ab_682202b1d25c.slice. 
Oct 13 05:53:12.383478 containerd[1726]: time="2025-10-13T05:53:12.382954305Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-rncrz,Uid:c879da08-5ee1-49ec-9e18-c510bd820afb,Namespace:kube-system,Attempt:0,}" Oct 13 05:53:12.384849 kubelet[3192]: I1013 05:53:12.384644 3192 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8wvd\" (UniqueName: \"kubernetes.io/projected/2b69a0e3-c65e-46e1-9bc8-e5fa93bccd7d-kube-api-access-b8wvd\") pod \"calico-apiserver-7fcd66766-ctwdl\" (UID: \"2b69a0e3-c65e-46e1-9bc8-e5fa93bccd7d\") " pod="calico-apiserver/calico-apiserver-7fcd66766-ctwdl" Oct 13 05:53:12.384849 kubelet[3192]: I1013 05:53:12.384696 3192 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2fsp\" (UniqueName: \"kubernetes.io/projected/e6bc447c-28f5-40f9-89ab-682202b1d25c-kube-api-access-v2fsp\") pod \"goldmane-54d579b49d-tm8wl\" (UID: \"e6bc447c-28f5-40f9-89ab-682202b1d25c\") " pod="calico-system/goldmane-54d579b49d-tm8wl" Oct 13 05:53:12.384849 kubelet[3192]: I1013 05:53:12.384717 3192 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/2b69a0e3-c65e-46e1-9bc8-e5fa93bccd7d-calico-apiserver-certs\") pod \"calico-apiserver-7fcd66766-ctwdl\" (UID: \"2b69a0e3-c65e-46e1-9bc8-e5fa93bccd7d\") " pod="calico-apiserver/calico-apiserver-7fcd66766-ctwdl" Oct 13 05:53:12.384849 kubelet[3192]: I1013 05:53:12.384749 3192 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e6bc447c-28f5-40f9-89ab-682202b1d25c-goldmane-ca-bundle\") pod \"goldmane-54d579b49d-tm8wl\" (UID: \"e6bc447c-28f5-40f9-89ab-682202b1d25c\") " pod="calico-system/goldmane-54d579b49d-tm8wl" Oct 13 05:53:12.384849 kubelet[3192]: I1013 05:53:12.384783 3192 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6bc447c-28f5-40f9-89ab-682202b1d25c-config\") pod \"goldmane-54d579b49d-tm8wl\" (UID: \"e6bc447c-28f5-40f9-89ab-682202b1d25c\") " pod="calico-system/goldmane-54d579b49d-tm8wl" Oct 13 05:53:12.385037 kubelet[3192]: I1013 05:53:12.384826 3192 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/e6bc447c-28f5-40f9-89ab-682202b1d25c-goldmane-key-pair\") pod \"goldmane-54d579b49d-tm8wl\" (UID: \"e6bc447c-28f5-40f9-89ab-682202b1d25c\") " pod="calico-system/goldmane-54d579b49d-tm8wl" Oct 13 05:53:12.385037 kubelet[3192]: I1013 05:53:12.384869 3192 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/36ee441e-a553-45a0-b9b4-51e7dca487a0-calico-apiserver-certs\") pod \"calico-apiserver-7fcd66766-4rf9v\" (UID: \"36ee441e-a553-45a0-b9b4-51e7dca487a0\") " pod="calico-apiserver/calico-apiserver-7fcd66766-4rf9v" Oct 13 05:53:12.385037 kubelet[3192]: I1013 05:53:12.384889 3192 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-flb2b\" (UniqueName: \"kubernetes.io/projected/36ee441e-a553-45a0-b9b4-51e7dca487a0-kube-api-access-flb2b\") pod \"calico-apiserver-7fcd66766-4rf9v\" (UID: \"36ee441e-a553-45a0-b9b4-51e7dca487a0\") " pod="calico-apiserver/calico-apiserver-7fcd66766-4rf9v" Oct 13 05:53:12.417028 containerd[1726]: time="2025-10-13T05:53:12.416581176Z" level=error msg="Failed to destroy network for sandbox \"6779af3938d6f1ec402b7e305d82c8834b9d64dc5bfd60273b66bbc007b38288\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 
05:53:12.421140 systemd[1]: run-netns-cni\x2dd9f88ddb\x2dad1a\x2ded08\x2d221a\x2d4dad2644eda1.mount: Deactivated successfully. Oct 13 05:53:12.421484 containerd[1726]: time="2025-10-13T05:53:12.421297417Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-64bc8cdbdb-kghw9,Uid:be696de5-4e39-46ea-a28d-3409bd9514e0,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6779af3938d6f1ec402b7e305d82c8834b9d64dc5bfd60273b66bbc007b38288\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:53:12.421829 kubelet[3192]: E1013 05:53:12.421796 3192 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6779af3938d6f1ec402b7e305d82c8834b9d64dc5bfd60273b66bbc007b38288\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:53:12.421884 kubelet[3192]: E1013 05:53:12.421860 3192 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6779af3938d6f1ec402b7e305d82c8834b9d64dc5bfd60273b66bbc007b38288\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-64bc8cdbdb-kghw9" Oct 13 05:53:12.421908 kubelet[3192]: E1013 05:53:12.421881 3192 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6779af3938d6f1ec402b7e305d82c8834b9d64dc5bfd60273b66bbc007b38288\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-64bc8cdbdb-kghw9" Oct 13 05:53:12.424233 kubelet[3192]: E1013 05:53:12.422954 3192 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-64bc8cdbdb-kghw9_calico-system(be696de5-4e39-46ea-a28d-3409bd9514e0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-64bc8cdbdb-kghw9_calico-system(be696de5-4e39-46ea-a28d-3409bd9514e0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6779af3938d6f1ec402b7e305d82c8834b9d64dc5bfd60273b66bbc007b38288\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-64bc8cdbdb-kghw9" podUID="be696de5-4e39-46ea-a28d-3409bd9514e0" Oct 13 05:53:12.447090 containerd[1726]: time="2025-10-13T05:53:12.447050454Z" level=error msg="Failed to destroy network for sandbox \"72ec4916743e9ea8d386bd2015af1276e9c8b92f83563ae30f106ce185cc253b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:53:12.448535 systemd[1]: run-netns-cni\x2db762992c\x2d66c0\x2d2184\x2d9bb6\x2dc2ce1399f56b.mount: Deactivated successfully. 
Oct 13 05:53:12.452409 containerd[1726]: time="2025-10-13T05:53:12.452268644Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-844nx,Uid:846c13f8-9e67-46e6-b031-cc0dfeca9023,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"72ec4916743e9ea8d386bd2015af1276e9c8b92f83563ae30f106ce185cc253b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:53:12.452574 kubelet[3192]: E1013 05:53:12.452522 3192 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"72ec4916743e9ea8d386bd2015af1276e9c8b92f83563ae30f106ce185cc253b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:53:12.452626 kubelet[3192]: E1013 05:53:12.452598 3192 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"72ec4916743e9ea8d386bd2015af1276e9c8b92f83563ae30f106ce185cc253b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-844nx" Oct 13 05:53:12.452652 kubelet[3192]: E1013 05:53:12.452634 3192 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"72ec4916743e9ea8d386bd2015af1276e9c8b92f83563ae30f106ce185cc253b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-674b8bbfcf-844nx" Oct 13 05:53:12.452811 kubelet[3192]: E1013 05:53:12.452786 3192 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-844nx_kube-system(846c13f8-9e67-46e6-b031-cc0dfeca9023)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-844nx_kube-system(846c13f8-9e67-46e6-b031-cc0dfeca9023)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"72ec4916743e9ea8d386bd2015af1276e9c8b92f83563ae30f106ce185cc253b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-844nx" podUID="846c13f8-9e67-46e6-b031-cc0dfeca9023" Oct 13 05:53:12.467430 containerd[1726]: time="2025-10-13T05:53:12.467398894Z" level=error msg="Failed to destroy network for sandbox \"01db9da74a8cd9114a209c499b777a9cfae9665dcccb09c47ef6f51101314ba7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:53:12.468871 systemd[1]: run-netns-cni\x2d4a7d9042\x2dfe4b\x2d703a\x2d4b53\x2d299708a65b14.mount: Deactivated successfully. 
Oct 13 05:53:12.471663 containerd[1726]: time="2025-10-13T05:53:12.471509272Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-rncrz,Uid:c879da08-5ee1-49ec-9e18-c510bd820afb,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"01db9da74a8cd9114a209c499b777a9cfae9665dcccb09c47ef6f51101314ba7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:53:12.472279 kubelet[3192]: E1013 05:53:12.472241 3192 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"01db9da74a8cd9114a209c499b777a9cfae9665dcccb09c47ef6f51101314ba7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:53:12.472446 kubelet[3192]: E1013 05:53:12.472300 3192 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"01db9da74a8cd9114a209c499b777a9cfae9665dcccb09c47ef6f51101314ba7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-rncrz" Oct 13 05:53:12.472446 kubelet[3192]: E1013 05:53:12.472324 3192 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"01db9da74a8cd9114a209c499b777a9cfae9665dcccb09c47ef6f51101314ba7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-674b8bbfcf-rncrz" Oct 13 05:53:12.472446 kubelet[3192]: E1013 05:53:12.472381 3192 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-rncrz_kube-system(c879da08-5ee1-49ec-9e18-c510bd820afb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-rncrz_kube-system(c879da08-5ee1-49ec-9e18-c510bd820afb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"01db9da74a8cd9114a209c499b777a9cfae9665dcccb09c47ef6f51101314ba7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-rncrz" podUID="c879da08-5ee1-49ec-9e18-c510bd820afb" Oct 13 05:53:12.616618 systemd[1]: Created slice kubepods-besteffort-pod92fa068c_877d_4748_8370_0fa59cfeb840.slice - libcontainer container kubepods-besteffort-pod92fa068c_877d_4748_8370_0fa59cfeb840.slice. 
Oct 13 05:53:12.619466 containerd[1726]: time="2025-10-13T05:53:12.619430923Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2nwjz,Uid:92fa068c-877d-4748-8370-0fa59cfeb840,Namespace:calico-system,Attempt:0,}" Oct 13 05:53:12.639156 containerd[1726]: time="2025-10-13T05:53:12.639108279Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-9d9df566-jgmdp,Uid:e795c0a0-c779-43f5-a016-8498524e219f,Namespace:calico-system,Attempt:0,}" Oct 13 05:53:12.646013 containerd[1726]: time="2025-10-13T05:53:12.645783267Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fcd66766-4rf9v,Uid:36ee441e-a553-45a0-b9b4-51e7dca487a0,Namespace:calico-apiserver,Attempt:0,}" Oct 13 05:53:12.655865 containerd[1726]: time="2025-10-13T05:53:12.655840154Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fcd66766-ctwdl,Uid:2b69a0e3-c65e-46e1-9bc8-e5fa93bccd7d,Namespace:calico-apiserver,Attempt:0,}" Oct 13 05:53:12.665237 containerd[1726]: time="2025-10-13T05:53:12.665073327Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-tm8wl,Uid:e6bc447c-28f5-40f9-89ab-682202b1d25c,Namespace:calico-system,Attempt:0,}" Oct 13 05:53:12.705743 containerd[1726]: time="2025-10-13T05:53:12.705705523Z" level=error msg="Failed to destroy network for sandbox \"6f62841b627535bce36a56dad651a2e8f7833b8fe5252965d447296ff8e45117\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:53:12.709142 containerd[1726]: time="2025-10-13T05:53:12.709102167Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2nwjz,Uid:92fa068c-877d-4748-8370-0fa59cfeb840,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"6f62841b627535bce36a56dad651a2e8f7833b8fe5252965d447296ff8e45117\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:53:12.709524 kubelet[3192]: E1013 05:53:12.709434 3192 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6f62841b627535bce36a56dad651a2e8f7833b8fe5252965d447296ff8e45117\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:53:12.709524 kubelet[3192]: E1013 05:53:12.709485 3192 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6f62841b627535bce36a56dad651a2e8f7833b8fe5252965d447296ff8e45117\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2nwjz" Oct 13 05:53:12.709524 kubelet[3192]: E1013 05:53:12.709510 3192 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6f62841b627535bce36a56dad651a2e8f7833b8fe5252965d447296ff8e45117\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2nwjz" Oct 13 05:53:12.709862 kubelet[3192]: E1013 05:53:12.709558 3192 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-2nwjz_calico-system(92fa068c-877d-4748-8370-0fa59cfeb840)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"csi-node-driver-2nwjz_calico-system(92fa068c-877d-4748-8370-0fa59cfeb840)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6f62841b627535bce36a56dad651a2e8f7833b8fe5252965d447296ff8e45117\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-2nwjz" podUID="92fa068c-877d-4748-8370-0fa59cfeb840" Oct 13 05:53:12.721352 containerd[1726]: time="2025-10-13T05:53:12.721308924Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Oct 13 05:53:12.777382 containerd[1726]: time="2025-10-13T05:53:12.777282953Z" level=error msg="Failed to destroy network for sandbox \"531220ba7985c665ccda85f080321761d99ae27b60d70c8eb5f3255d8054d3fd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:53:12.783341 containerd[1726]: time="2025-10-13T05:53:12.783022664Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-9d9df566-jgmdp,Uid:e795c0a0-c779-43f5-a016-8498524e219f,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"531220ba7985c665ccda85f080321761d99ae27b60d70c8eb5f3255d8054d3fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:53:12.783656 kubelet[3192]: E1013 05:53:12.783603 3192 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"531220ba7985c665ccda85f080321761d99ae27b60d70c8eb5f3255d8054d3fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Oct 13 05:53:12.783720 kubelet[3192]: E1013 05:53:12.783678 3192 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"531220ba7985c665ccda85f080321761d99ae27b60d70c8eb5f3255d8054d3fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-9d9df566-jgmdp" Oct 13 05:53:12.783720 kubelet[3192]: E1013 05:53:12.783701 3192 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"531220ba7985c665ccda85f080321761d99ae27b60d70c8eb5f3255d8054d3fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-9d9df566-jgmdp" Oct 13 05:53:12.783870 kubelet[3192]: E1013 05:53:12.783843 3192 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-9d9df566-jgmdp_calico-system(e795c0a0-c779-43f5-a016-8498524e219f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-9d9df566-jgmdp_calico-system(e795c0a0-c779-43f5-a016-8498524e219f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"531220ba7985c665ccda85f080321761d99ae27b60d70c8eb5f3255d8054d3fd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-9d9df566-jgmdp" podUID="e795c0a0-c779-43f5-a016-8498524e219f" Oct 13 05:53:12.809399 containerd[1726]: time="2025-10-13T05:53:12.809235117Z" level=error msg="Failed to destroy network for sandbox 
\"ff03c6202a8bf47fbd49ba5310416e1de9f6fd2e5e4849ff6f45765a38af6318\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:53:12.812586 containerd[1726]: time="2025-10-13T05:53:12.812526345Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fcd66766-4rf9v,Uid:36ee441e-a553-45a0-b9b4-51e7dca487a0,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ff03c6202a8bf47fbd49ba5310416e1de9f6fd2e5e4849ff6f45765a38af6318\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:53:12.812875 kubelet[3192]: E1013 05:53:12.812849 3192 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ff03c6202a8bf47fbd49ba5310416e1de9f6fd2e5e4849ff6f45765a38af6318\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:53:12.812965 kubelet[3192]: E1013 05:53:12.812894 3192 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ff03c6202a8bf47fbd49ba5310416e1de9f6fd2e5e4849ff6f45765a38af6318\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7fcd66766-4rf9v" Oct 13 05:53:12.812965 kubelet[3192]: E1013 05:53:12.812926 3192 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"ff03c6202a8bf47fbd49ba5310416e1de9f6fd2e5e4849ff6f45765a38af6318\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7fcd66766-4rf9v" Oct 13 05:53:12.813061 kubelet[3192]: E1013 05:53:12.812972 3192 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7fcd66766-4rf9v_calico-apiserver(36ee441e-a553-45a0-b9b4-51e7dca487a0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7fcd66766-4rf9v_calico-apiserver(36ee441e-a553-45a0-b9b4-51e7dca487a0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ff03c6202a8bf47fbd49ba5310416e1de9f6fd2e5e4849ff6f45765a38af6318\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7fcd66766-4rf9v" podUID="36ee441e-a553-45a0-b9b4-51e7dca487a0" Oct 13 05:53:12.815602 containerd[1726]: time="2025-10-13T05:53:12.815556199Z" level=error msg="Failed to destroy network for sandbox \"c1a8bee1e7e3a8f5394e1583ef04aff71221a6eba34af7895698889b086f77ba\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:53:12.818900 containerd[1726]: time="2025-10-13T05:53:12.818859136Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-tm8wl,Uid:e6bc447c-28f5-40f9-89ab-682202b1d25c,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c1a8bee1e7e3a8f5394e1583ef04aff71221a6eba34af7895698889b086f77ba\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:53:12.819360 kubelet[3192]: E1013 05:53:12.819263 3192 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c1a8bee1e7e3a8f5394e1583ef04aff71221a6eba34af7895698889b086f77ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:53:12.819562 kubelet[3192]: E1013 05:53:12.819344 3192 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c1a8bee1e7e3a8f5394e1583ef04aff71221a6eba34af7895698889b086f77ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-tm8wl" Oct 13 05:53:12.819562 kubelet[3192]: E1013 05:53:12.819484 3192 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c1a8bee1e7e3a8f5394e1583ef04aff71221a6eba34af7895698889b086f77ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-tm8wl" Oct 13 05:53:12.819713 kubelet[3192]: E1013 05:53:12.819635 3192 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-54d579b49d-tm8wl_calico-system(e6bc447c-28f5-40f9-89ab-682202b1d25c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-54d579b49d-tm8wl_calico-system(e6bc447c-28f5-40f9-89ab-682202b1d25c)\\\": rpc error: code = Unknown desc = failed to 
setup network for sandbox \\\"c1a8bee1e7e3a8f5394e1583ef04aff71221a6eba34af7895698889b086f77ba\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-tm8wl" podUID="e6bc447c-28f5-40f9-89ab-682202b1d25c" Oct 13 05:53:12.821716 containerd[1726]: time="2025-10-13T05:53:12.821693812Z" level=error msg="Failed to destroy network for sandbox \"686ba5ede83c762f8adaddc6ba402eb481e062e2cb9b705bee3e26944848f805\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:53:12.824766 containerd[1726]: time="2025-10-13T05:53:12.824736680Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fcd66766-ctwdl,Uid:2b69a0e3-c65e-46e1-9bc8-e5fa93bccd7d,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"686ba5ede83c762f8adaddc6ba402eb481e062e2cb9b705bee3e26944848f805\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:53:12.825199 kubelet[3192]: E1013 05:53:12.824884 3192 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"686ba5ede83c762f8adaddc6ba402eb481e062e2cb9b705bee3e26944848f805\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:53:12.825199 kubelet[3192]: E1013 05:53:12.824916 3192 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"686ba5ede83c762f8adaddc6ba402eb481e062e2cb9b705bee3e26944848f805\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7fcd66766-ctwdl" Oct 13 05:53:12.825199 kubelet[3192]: E1013 05:53:12.824931 3192 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"686ba5ede83c762f8adaddc6ba402eb481e062e2cb9b705bee3e26944848f805\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7fcd66766-ctwdl" Oct 13 05:53:12.825318 kubelet[3192]: E1013 05:53:12.824979 3192 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7fcd66766-ctwdl_calico-apiserver(2b69a0e3-c65e-46e1-9bc8-e5fa93bccd7d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7fcd66766-ctwdl_calico-apiserver(2b69a0e3-c65e-46e1-9bc8-e5fa93bccd7d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"686ba5ede83c762f8adaddc6ba402eb481e062e2cb9b705bee3e26944848f805\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7fcd66766-ctwdl" podUID="2b69a0e3-c65e-46e1-9bc8-e5fa93bccd7d" Oct 13 05:53:17.256362 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3942353432.mount: Deactivated successfully. 
Oct 13 05:53:17.289953 containerd[1726]: time="2025-10-13T05:53:17.289907337Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:53:17.292824 containerd[1726]: time="2025-10-13T05:53:17.292789688Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=157078339" Oct 13 05:53:17.296910 containerd[1726]: time="2025-10-13T05:53:17.296858526Z" level=info msg="ImageCreate event name:\"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:53:17.300928 containerd[1726]: time="2025-10-13T05:53:17.300387445Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:53:17.300928 containerd[1726]: time="2025-10-13T05:53:17.300794671Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"157078201\" in 4.579451203s" Oct 13 05:53:17.300928 containerd[1726]: time="2025-10-13T05:53:17.300821623Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\"" Oct 13 05:53:17.314438 containerd[1726]: time="2025-10-13T05:53:17.314414438Z" level=info msg="CreateContainer within sandbox \"db067292c35c3ef5df0f9acd5ebcdabea23c6b564b829848a03e2795b5403f59\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Oct 13 05:53:17.340131 containerd[1726]: time="2025-10-13T05:53:17.340105340Z" level=info msg="Container 
6a3c814289069c70336036b5ee58b407267e4f6c41c3969105094ead494e1c5e: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:53:17.346266 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2177057405.mount: Deactivated successfully. Oct 13 05:53:17.360509 containerd[1726]: time="2025-10-13T05:53:17.360461317Z" level=info msg="CreateContainer within sandbox \"db067292c35c3ef5df0f9acd5ebcdabea23c6b564b829848a03e2795b5403f59\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"6a3c814289069c70336036b5ee58b407267e4f6c41c3969105094ead494e1c5e\"" Oct 13 05:53:17.361036 containerd[1726]: time="2025-10-13T05:53:17.360918632Z" level=info msg="StartContainer for \"6a3c814289069c70336036b5ee58b407267e4f6c41c3969105094ead494e1c5e\"" Oct 13 05:53:17.362710 containerd[1726]: time="2025-10-13T05:53:17.362682177Z" level=info msg="connecting to shim 6a3c814289069c70336036b5ee58b407267e4f6c41c3969105094ead494e1c5e" address="unix:///run/containerd/s/7f815196c74d7ecb7d0f58aa9ea8786c8fcf88926a32c3a776e6482b0caf3b35" protocol=ttrpc version=3 Oct 13 05:53:17.382346 systemd[1]: Started cri-containerd-6a3c814289069c70336036b5ee58b407267e4f6c41c3969105094ead494e1c5e.scope - libcontainer container 6a3c814289069c70336036b5ee58b407267e4f6c41c3969105094ead494e1c5e. Oct 13 05:53:17.420226 containerd[1726]: time="2025-10-13T05:53:17.420152832Z" level=info msg="StartContainer for \"6a3c814289069c70336036b5ee58b407267e4f6c41c3969105094ead494e1c5e\" returns successfully" Oct 13 05:53:17.730020 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Oct 13 05:53:17.730129 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Oct 13 05:53:17.763207 kubelet[3192]: I1013 05:53:17.763119 3192 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-qczxh" podStartSLOduration=0.996507302 podStartE2EDuration="17.763105521s" podCreationTimestamp="2025-10-13 05:53:00 +0000 UTC" firstStartedPulling="2025-10-13 05:53:00.535070441 +0000 UTC m=+18.021840961" lastFinishedPulling="2025-10-13 05:53:17.301668666 +0000 UTC m=+34.788439180" observedRunningTime="2025-10-13 05:53:17.762861425 +0000 UTC m=+35.249631951" watchObservedRunningTime="2025-10-13 05:53:17.763105521 +0000 UTC m=+35.249876040" Oct 13 05:53:17.882638 containerd[1726]: time="2025-10-13T05:53:17.882596763Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6a3c814289069c70336036b5ee58b407267e4f6c41c3969105094ead494e1c5e\" id:\"edd55df9c7f1612ea8537cc1f311aeb63de0469bcfe6d6d16f148541a96f1cab\" pid:4245 exit_status:1 exited_at:{seconds:1760334797 nanos:882295929}" Oct 13 05:53:17.922187 kubelet[3192]: I1013 05:53:17.921836 3192 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e795c0a0-c779-43f5-a016-8498524e219f-whisker-backend-key-pair\") pod \"e795c0a0-c779-43f5-a016-8498524e219f\" (UID: \"e795c0a0-c779-43f5-a016-8498524e219f\") " Oct 13 05:53:17.922187 kubelet[3192]: I1013 05:53:17.921873 3192 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e795c0a0-c779-43f5-a016-8498524e219f-whisker-ca-bundle\") pod \"e795c0a0-c779-43f5-a016-8498524e219f\" (UID: \"e795c0a0-c779-43f5-a016-8498524e219f\") " Oct 13 05:53:17.922187 kubelet[3192]: I1013 05:53:17.921904 3192 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tvwb5\" (UniqueName: \"kubernetes.io/projected/e795c0a0-c779-43f5-a016-8498524e219f-kube-api-access-tvwb5\") pod 
\"e795c0a0-c779-43f5-a016-8498524e219f\" (UID: \"e795c0a0-c779-43f5-a016-8498524e219f\") " Oct 13 05:53:17.924338 kubelet[3192]: I1013 05:53:17.924308 3192 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e795c0a0-c779-43f5-a016-8498524e219f-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "e795c0a0-c779-43f5-a016-8498524e219f" (UID: "e795c0a0-c779-43f5-a016-8498524e219f"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Oct 13 05:53:17.926107 kubelet[3192]: I1013 05:53:17.926068 3192 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e795c0a0-c779-43f5-a016-8498524e219f-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "e795c0a0-c779-43f5-a016-8498524e219f" (UID: "e795c0a0-c779-43f5-a016-8498524e219f"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Oct 13 05:53:17.928839 kubelet[3192]: I1013 05:53:17.928799 3192 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e795c0a0-c779-43f5-a016-8498524e219f-kube-api-access-tvwb5" (OuterVolumeSpecName: "kube-api-access-tvwb5") pod "e795c0a0-c779-43f5-a016-8498524e219f" (UID: "e795c0a0-c779-43f5-a016-8498524e219f"). InnerVolumeSpecName "kube-api-access-tvwb5". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Oct 13 05:53:18.023232 kubelet[3192]: I1013 05:53:18.023020 3192 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tvwb5\" (UniqueName: \"kubernetes.io/projected/e795c0a0-c779-43f5-a016-8498524e219f-kube-api-access-tvwb5\") on node \"ci-4459.1.0-a-4938a72943\" DevicePath \"\"" Oct 13 05:53:18.023457 kubelet[3192]: I1013 05:53:18.023380 3192 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e795c0a0-c779-43f5-a016-8498524e219f-whisker-backend-key-pair\") on node \"ci-4459.1.0-a-4938a72943\" DevicePath \"\"" Oct 13 05:53:18.023457 kubelet[3192]: I1013 05:53:18.023409 3192 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e795c0a0-c779-43f5-a016-8498524e219f-whisker-ca-bundle\") on node \"ci-4459.1.0-a-4938a72943\" DevicePath \"\"" Oct 13 05:53:18.255473 systemd[1]: var-lib-kubelet-pods-e795c0a0\x2dc779\x2d43f5\x2da016\x2d8498524e219f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dtvwb5.mount: Deactivated successfully. Oct 13 05:53:18.255555 systemd[1]: var-lib-kubelet-pods-e795c0a0\x2dc779\x2d43f5\x2da016\x2d8498524e219f-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Oct 13 05:53:18.616034 systemd[1]: Removed slice kubepods-besteffort-pode795c0a0_c779_43f5_a016_8498524e219f.slice - libcontainer container kubepods-besteffort-pode795c0a0_c779_43f5_a016_8498524e219f.slice. Oct 13 05:53:18.815634 systemd[1]: Created slice kubepods-besteffort-pod7a91b6ae_3294_4159_9cb2_1253e7adce6c.slice - libcontainer container kubepods-besteffort-pod7a91b6ae_3294_4159_9cb2_1253e7adce6c.slice. 
Oct 13 05:53:18.827140 kubelet[3192]: I1013 05:53:18.827108 3192 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/7a91b6ae-3294-4159-9cb2-1253e7adce6c-whisker-backend-key-pair\") pod \"whisker-76f598fd89-lx5vm\" (UID: \"7a91b6ae-3294-4159-9cb2-1253e7adce6c\") " pod="calico-system/whisker-76f598fd89-lx5vm" Oct 13 05:53:18.827937 kubelet[3192]: I1013 05:53:18.827900 3192 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7a91b6ae-3294-4159-9cb2-1253e7adce6c-whisker-ca-bundle\") pod \"whisker-76f598fd89-lx5vm\" (UID: \"7a91b6ae-3294-4159-9cb2-1253e7adce6c\") " pod="calico-system/whisker-76f598fd89-lx5vm" Oct 13 05:53:18.829466 kubelet[3192]: I1013 05:53:18.829405 3192 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mj688\" (UniqueName: \"kubernetes.io/projected/7a91b6ae-3294-4159-9cb2-1253e7adce6c-kube-api-access-mj688\") pod \"whisker-76f598fd89-lx5vm\" (UID: \"7a91b6ae-3294-4159-9cb2-1253e7adce6c\") " pod="calico-system/whisker-76f598fd89-lx5vm" Oct 13 05:53:18.844234 containerd[1726]: time="2025-10-13T05:53:18.844164172Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6a3c814289069c70336036b5ee58b407267e4f6c41c3969105094ead494e1c5e\" id:\"9196d8c6da2a34b619da52934fbde2a261ae191f876db83964acb75f3b99199d\" pid:4292 exit_status:1 exited_at:{seconds:1760334798 nanos:843787232}" Oct 13 05:53:19.120846 containerd[1726]: time="2025-10-13T05:53:19.120805336Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-76f598fd89-lx5vm,Uid:7a91b6ae-3294-4159-9cb2-1253e7adce6c,Namespace:calico-system,Attempt:0,}" Oct 13 05:53:19.243740 systemd-networkd[1352]: calicf6ba9f636a: Link UP Oct 13 05:53:19.244477 systemd-networkd[1352]: calicf6ba9f636a: Gained carrier Oct 13 
05:53:19.268448 containerd[1726]: 2025-10-13 05:53:19.148 [INFO][4310] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 13 05:53:19.268448 containerd[1726]: 2025-10-13 05:53:19.157 [INFO][4310] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.1.0--a--4938a72943-k8s-whisker--76f598fd89--lx5vm-eth0 whisker-76f598fd89- calico-system 7a91b6ae-3294-4159-9cb2-1253e7adce6c 885 0 2025-10-13 05:53:18 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:76f598fd89 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4459.1.0-a-4938a72943 whisker-76f598fd89-lx5vm eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calicf6ba9f636a [] [] }} ContainerID="1cc5e09d7162d626d11f117c2b3c79dff388fe99a90196db1ef86f8495505cef" Namespace="calico-system" Pod="whisker-76f598fd89-lx5vm" WorkloadEndpoint="ci--4459.1.0--a--4938a72943-k8s-whisker--76f598fd89--lx5vm-" Oct 13 05:53:19.268448 containerd[1726]: 2025-10-13 05:53:19.157 [INFO][4310] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1cc5e09d7162d626d11f117c2b3c79dff388fe99a90196db1ef86f8495505cef" Namespace="calico-system" Pod="whisker-76f598fd89-lx5vm" WorkloadEndpoint="ci--4459.1.0--a--4938a72943-k8s-whisker--76f598fd89--lx5vm-eth0" Oct 13 05:53:19.268448 containerd[1726]: 2025-10-13 05:53:19.177 [INFO][4321] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1cc5e09d7162d626d11f117c2b3c79dff388fe99a90196db1ef86f8495505cef" HandleID="k8s-pod-network.1cc5e09d7162d626d11f117c2b3c79dff388fe99a90196db1ef86f8495505cef" Workload="ci--4459.1.0--a--4938a72943-k8s-whisker--76f598fd89--lx5vm-eth0" Oct 13 05:53:19.268681 containerd[1726]: 2025-10-13 05:53:19.177 [INFO][4321] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="1cc5e09d7162d626d11f117c2b3c79dff388fe99a90196db1ef86f8495505cef" HandleID="k8s-pod-network.1cc5e09d7162d626d11f117c2b3c79dff388fe99a90196db1ef86f8495505cef" Workload="ci--4459.1.0--a--4938a72943-k8s-whisker--76f598fd89--lx5vm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f610), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459.1.0-a-4938a72943", "pod":"whisker-76f598fd89-lx5vm", "timestamp":"2025-10-13 05:53:19.17781405 +0000 UTC"}, Hostname:"ci-4459.1.0-a-4938a72943", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 05:53:19.268681 containerd[1726]: 2025-10-13 05:53:19.177 [INFO][4321] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 05:53:19.268681 containerd[1726]: 2025-10-13 05:53:19.178 [INFO][4321] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Oct 13 05:53:19.268681 containerd[1726]: 2025-10-13 05:53:19.178 [INFO][4321] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.1.0-a-4938a72943' Oct 13 05:53:19.268681 containerd[1726]: 2025-10-13 05:53:19.182 [INFO][4321] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1cc5e09d7162d626d11f117c2b3c79dff388fe99a90196db1ef86f8495505cef" host="ci-4459.1.0-a-4938a72943" Oct 13 05:53:19.268681 containerd[1726]: 2025-10-13 05:53:19.186 [INFO][4321] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.1.0-a-4938a72943" Oct 13 05:53:19.268681 containerd[1726]: 2025-10-13 05:53:19.191 [INFO][4321] ipam/ipam.go 511: Trying affinity for 192.168.107.0/26 host="ci-4459.1.0-a-4938a72943" Oct 13 05:53:19.268681 containerd[1726]: 2025-10-13 05:53:19.192 [INFO][4321] ipam/ipam.go 158: Attempting to load block cidr=192.168.107.0/26 host="ci-4459.1.0-a-4938a72943" Oct 13 05:53:19.268681 containerd[1726]: 2025-10-13 05:53:19.193 [INFO][4321] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.107.0/26 host="ci-4459.1.0-a-4938a72943" Oct 13 05:53:19.268905 containerd[1726]: 2025-10-13 05:53:19.193 [INFO][4321] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.107.0/26 handle="k8s-pod-network.1cc5e09d7162d626d11f117c2b3c79dff388fe99a90196db1ef86f8495505cef" host="ci-4459.1.0-a-4938a72943" Oct 13 05:53:19.268905 containerd[1726]: 2025-10-13 05:53:19.194 [INFO][4321] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.1cc5e09d7162d626d11f117c2b3c79dff388fe99a90196db1ef86f8495505cef Oct 13 05:53:19.268905 containerd[1726]: 2025-10-13 05:53:19.199 [INFO][4321] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.107.0/26 handle="k8s-pod-network.1cc5e09d7162d626d11f117c2b3c79dff388fe99a90196db1ef86f8495505cef" host="ci-4459.1.0-a-4938a72943" Oct 13 05:53:19.268905 containerd[1726]: 2025-10-13 05:53:19.207 [INFO][4321] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.107.1/26] block=192.168.107.0/26 handle="k8s-pod-network.1cc5e09d7162d626d11f117c2b3c79dff388fe99a90196db1ef86f8495505cef" host="ci-4459.1.0-a-4938a72943" Oct 13 05:53:19.268905 containerd[1726]: 2025-10-13 05:53:19.207 [INFO][4321] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.107.1/26] handle="k8s-pod-network.1cc5e09d7162d626d11f117c2b3c79dff388fe99a90196db1ef86f8495505cef" host="ci-4459.1.0-a-4938a72943" Oct 13 05:53:19.268905 containerd[1726]: 2025-10-13 05:53:19.207 [INFO][4321] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Oct 13 05:53:19.268905 containerd[1726]: 2025-10-13 05:53:19.207 [INFO][4321] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.107.1/26] IPv6=[] ContainerID="1cc5e09d7162d626d11f117c2b3c79dff388fe99a90196db1ef86f8495505cef" HandleID="k8s-pod-network.1cc5e09d7162d626d11f117c2b3c79dff388fe99a90196db1ef86f8495505cef" Workload="ci--4459.1.0--a--4938a72943-k8s-whisker--76f598fd89--lx5vm-eth0" Oct 13 05:53:19.269069 containerd[1726]: 2025-10-13 05:53:19.211 [INFO][4310] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1cc5e09d7162d626d11f117c2b3c79dff388fe99a90196db1ef86f8495505cef" Namespace="calico-system" Pod="whisker-76f598fd89-lx5vm" WorkloadEndpoint="ci--4459.1.0--a--4938a72943-k8s-whisker--76f598fd89--lx5vm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.1.0--a--4938a72943-k8s-whisker--76f598fd89--lx5vm-eth0", GenerateName:"whisker-76f598fd89-", Namespace:"calico-system", SelfLink:"", UID:"7a91b6ae-3294-4159-9cb2-1253e7adce6c", ResourceVersion:"885", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 53, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"76f598fd89", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.1.0-a-4938a72943", ContainerID:"", Pod:"whisker-76f598fd89-lx5vm", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.107.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calicf6ba9f636a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:53:19.269069 containerd[1726]: 2025-10-13 05:53:19.211 [INFO][4310] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.107.1/32] ContainerID="1cc5e09d7162d626d11f117c2b3c79dff388fe99a90196db1ef86f8495505cef" Namespace="calico-system" Pod="whisker-76f598fd89-lx5vm" WorkloadEndpoint="ci--4459.1.0--a--4938a72943-k8s-whisker--76f598fd89--lx5vm-eth0" Oct 13 05:53:19.269154 containerd[1726]: 2025-10-13 05:53:19.211 [INFO][4310] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicf6ba9f636a ContainerID="1cc5e09d7162d626d11f117c2b3c79dff388fe99a90196db1ef86f8495505cef" Namespace="calico-system" Pod="whisker-76f598fd89-lx5vm" WorkloadEndpoint="ci--4459.1.0--a--4938a72943-k8s-whisker--76f598fd89--lx5vm-eth0" Oct 13 05:53:19.269154 containerd[1726]: 2025-10-13 05:53:19.245 [INFO][4310] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1cc5e09d7162d626d11f117c2b3c79dff388fe99a90196db1ef86f8495505cef" Namespace="calico-system" Pod="whisker-76f598fd89-lx5vm" WorkloadEndpoint="ci--4459.1.0--a--4938a72943-k8s-whisker--76f598fd89--lx5vm-eth0" Oct 13 05:53:19.269217 containerd[1726]: 2025-10-13 05:53:19.246 [INFO][4310] cni-plugin/k8s.go 446: 
Added Mac, interface name, and active container ID to endpoint ContainerID="1cc5e09d7162d626d11f117c2b3c79dff388fe99a90196db1ef86f8495505cef" Namespace="calico-system" Pod="whisker-76f598fd89-lx5vm" WorkloadEndpoint="ci--4459.1.0--a--4938a72943-k8s-whisker--76f598fd89--lx5vm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.1.0--a--4938a72943-k8s-whisker--76f598fd89--lx5vm-eth0", GenerateName:"whisker-76f598fd89-", Namespace:"calico-system", SelfLink:"", UID:"7a91b6ae-3294-4159-9cb2-1253e7adce6c", ResourceVersion:"885", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 53, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"76f598fd89", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.1.0-a-4938a72943", ContainerID:"1cc5e09d7162d626d11f117c2b3c79dff388fe99a90196db1ef86f8495505cef", Pod:"whisker-76f598fd89-lx5vm", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.107.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calicf6ba9f636a", MAC:"3e:6d:76:13:9b:dd", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:53:19.269274 containerd[1726]: 2025-10-13 05:53:19.262 [INFO][4310] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1cc5e09d7162d626d11f117c2b3c79dff388fe99a90196db1ef86f8495505cef" 
Namespace="calico-system" Pod="whisker-76f598fd89-lx5vm" WorkloadEndpoint="ci--4459.1.0--a--4938a72943-k8s-whisker--76f598fd89--lx5vm-eth0" Oct 13 05:53:19.332618 containerd[1726]: time="2025-10-13T05:53:19.332575216Z" level=info msg="connecting to shim 1cc5e09d7162d626d11f117c2b3c79dff388fe99a90196db1ef86f8495505cef" address="unix:///run/containerd/s/39fb6428246cf41b54633baee058183c5a7262f38afd23d8b05057bdc7dc35fa" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:53:19.374341 systemd[1]: Started cri-containerd-1cc5e09d7162d626d11f117c2b3c79dff388fe99a90196db1ef86f8495505cef.scope - libcontainer container 1cc5e09d7162d626d11f117c2b3c79dff388fe99a90196db1ef86f8495505cef. Oct 13 05:53:19.467493 containerd[1726]: time="2025-10-13T05:53:19.467451800Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-76f598fd89-lx5vm,Uid:7a91b6ae-3294-4159-9cb2-1253e7adce6c,Namespace:calico-system,Attempt:0,} returns sandbox id \"1cc5e09d7162d626d11f117c2b3c79dff388fe99a90196db1ef86f8495505cef\"" Oct 13 05:53:19.469852 containerd[1726]: time="2025-10-13T05:53:19.469820228Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Oct 13 05:53:19.905946 systemd-networkd[1352]: vxlan.calico: Link UP Oct 13 05:53:19.905957 systemd-networkd[1352]: vxlan.calico: Gained carrier Oct 13 05:53:20.609527 containerd[1726]: time="2025-10-13T05:53:20.609468624Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:53:20.612073 containerd[1726]: time="2025-10-13T05:53:20.612037992Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4661291" Oct 13 05:53:20.613630 kubelet[3192]: I1013 05:53:20.613596 3192 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e795c0a0-c779-43f5-a016-8498524e219f" path="/var/lib/kubelet/pods/e795c0a0-c779-43f5-a016-8498524e219f/volumes" Oct 13 05:53:20.615365 
containerd[1726]: time="2025-10-13T05:53:20.615320492Z" level=info msg="ImageCreate event name:\"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:53:20.620912 containerd[1726]: time="2025-10-13T05:53:20.620369088Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:53:20.620912 containerd[1726]: time="2025-10-13T05:53:20.620789300Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"6153986\" in 1.150872939s" Oct 13 05:53:20.620912 containerd[1726]: time="2025-10-13T05:53:20.620816056Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\"" Oct 13 05:53:20.626601 containerd[1726]: time="2025-10-13T05:53:20.626566160Z" level=info msg="CreateContainer within sandbox \"1cc5e09d7162d626d11f117c2b3c79dff388fe99a90196db1ef86f8495505cef\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Oct 13 05:53:20.637313 systemd-networkd[1352]: calicf6ba9f636a: Gained IPv6LL Oct 13 05:53:20.650721 containerd[1726]: time="2025-10-13T05:53:20.648348268Z" level=info msg="Container dda808bd6d4d60eee12727afb1973999163f0e7300e8a0682dc4eadde4181372: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:53:20.678648 containerd[1726]: time="2025-10-13T05:53:20.678624809Z" level=info msg="CreateContainer within sandbox \"1cc5e09d7162d626d11f117c2b3c79dff388fe99a90196db1ef86f8495505cef\" for 
&ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"dda808bd6d4d60eee12727afb1973999163f0e7300e8a0682dc4eadde4181372\"" Oct 13 05:53:20.680242 containerd[1726]: time="2025-10-13T05:53:20.679036514Z" level=info msg="StartContainer for \"dda808bd6d4d60eee12727afb1973999163f0e7300e8a0682dc4eadde4181372\"" Oct 13 05:53:20.680242 containerd[1726]: time="2025-10-13T05:53:20.680017865Z" level=info msg="connecting to shim dda808bd6d4d60eee12727afb1973999163f0e7300e8a0682dc4eadde4181372" address="unix:///run/containerd/s/39fb6428246cf41b54633baee058183c5a7262f38afd23d8b05057bdc7dc35fa" protocol=ttrpc version=3 Oct 13 05:53:20.699332 systemd[1]: Started cri-containerd-dda808bd6d4d60eee12727afb1973999163f0e7300e8a0682dc4eadde4181372.scope - libcontainer container dda808bd6d4d60eee12727afb1973999163f0e7300e8a0682dc4eadde4181372. Oct 13 05:53:20.745991 containerd[1726]: time="2025-10-13T05:53:20.745938875Z" level=info msg="StartContainer for \"dda808bd6d4d60eee12727afb1973999163f0e7300e8a0682dc4eadde4181372\" returns successfully" Oct 13 05:53:20.748627 containerd[1726]: time="2025-10-13T05:53:20.748581373Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Oct 13 05:53:21.277435 systemd-networkd[1352]: vxlan.calico: Gained IPv6LL Oct 13 05:53:23.004503 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2319781046.mount: Deactivated successfully. 
Oct 13 05:53:23.055066 containerd[1726]: time="2025-10-13T05:53:23.055023114Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:53:23.057564 containerd[1726]: time="2025-10-13T05:53:23.057536949Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=33085545" Oct 13 05:53:23.071485 containerd[1726]: time="2025-10-13T05:53:23.071435643Z" level=info msg="ImageCreate event name:\"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:53:23.074898 containerd[1726]: time="2025-10-13T05:53:23.074851535Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:53:23.075459 containerd[1726]: time="2025-10-13T05:53:23.075288406Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"33085375\" in 2.326576428s" Oct 13 05:53:23.075459 containerd[1726]: time="2025-10-13T05:53:23.075321064Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\"" Oct 13 05:53:23.081148 containerd[1726]: time="2025-10-13T05:53:23.081120996Z" level=info msg="CreateContainer within sandbox \"1cc5e09d7162d626d11f117c2b3c79dff388fe99a90196db1ef86f8495505cef\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Oct 13 05:53:23.111270 
containerd[1726]: time="2025-10-13T05:53:23.110373679Z" level=info msg="Container 14bbd2b2e2468cafc5948eb194f72f39647591ee2acd54720240cf7390d33121: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:53:23.115605 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1477087601.mount: Deactivated successfully. Oct 13 05:53:23.134556 containerd[1726]: time="2025-10-13T05:53:23.134529604Z" level=info msg="CreateContainer within sandbox \"1cc5e09d7162d626d11f117c2b3c79dff388fe99a90196db1ef86f8495505cef\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"14bbd2b2e2468cafc5948eb194f72f39647591ee2acd54720240cf7390d33121\"" Oct 13 05:53:23.134986 containerd[1726]: time="2025-10-13T05:53:23.134965284Z" level=info msg="StartContainer for \"14bbd2b2e2468cafc5948eb194f72f39647591ee2acd54720240cf7390d33121\"" Oct 13 05:53:23.135806 containerd[1726]: time="2025-10-13T05:53:23.135782181Z" level=info msg="connecting to shim 14bbd2b2e2468cafc5948eb194f72f39647591ee2acd54720240cf7390d33121" address="unix:///run/containerd/s/39fb6428246cf41b54633baee058183c5a7262f38afd23d8b05057bdc7dc35fa" protocol=ttrpc version=3 Oct 13 05:53:23.159342 systemd[1]: Started cri-containerd-14bbd2b2e2468cafc5948eb194f72f39647591ee2acd54720240cf7390d33121.scope - libcontainer container 14bbd2b2e2468cafc5948eb194f72f39647591ee2acd54720240cf7390d33121. 
Oct 13 05:53:23.210107 containerd[1726]: time="2025-10-13T05:53:23.210080489Z" level=info msg="StartContainer for \"14bbd2b2e2468cafc5948eb194f72f39647591ee2acd54720240cf7390d33121\" returns successfully" Oct 13 05:53:23.772283 kubelet[3192]: I1013 05:53:23.771329 3192 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-76f598fd89-lx5vm" podStartSLOduration=2.164491709 podStartE2EDuration="5.7713136s" podCreationTimestamp="2025-10-13 05:53:18 +0000 UTC" firstStartedPulling="2025-10-13 05:53:19.469236982 +0000 UTC m=+36.956007496" lastFinishedPulling="2025-10-13 05:53:23.076058867 +0000 UTC m=+40.562829387" observedRunningTime="2025-10-13 05:53:23.771043284 +0000 UTC m=+41.257813806" watchObservedRunningTime="2025-10-13 05:53:23.7713136 +0000 UTC m=+41.258084131" Oct 13 05:53:24.612495 containerd[1726]: time="2025-10-13T05:53:24.612212929Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2nwjz,Uid:92fa068c-877d-4748-8370-0fa59cfeb840,Namespace:calico-system,Attempt:0,}" Oct 13 05:53:24.612495 containerd[1726]: time="2025-10-13T05:53:24.612258134Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-tm8wl,Uid:e6bc447c-28f5-40f9-89ab-682202b1d25c,Namespace:calico-system,Attempt:0,}" Oct 13 05:53:24.612495 containerd[1726]: time="2025-10-13T05:53:24.612212929Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fcd66766-ctwdl,Uid:2b69a0e3-c65e-46e1-9bc8-e5fa93bccd7d,Namespace:calico-apiserver,Attempt:0,}" Oct 13 05:53:24.612495 containerd[1726]: time="2025-10-13T05:53:24.612439395Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fcd66766-4rf9v,Uid:36ee441e-a553-45a0-b9b4-51e7dca487a0,Namespace:calico-apiserver,Attempt:0,}" Oct 13 05:53:24.613277 containerd[1726]: time="2025-10-13T05:53:24.613150529Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-674b8bbfcf-844nx,Uid:846c13f8-9e67-46e6-b031-cc0dfeca9023,Namespace:kube-system,Attempt:0,}" Oct 13 05:53:24.853443 systemd-networkd[1352]: calib67e14ad285: Link UP Oct 13 05:53:24.854002 systemd-networkd[1352]: calib67e14ad285: Gained carrier Oct 13 05:53:24.870450 containerd[1726]: 2025-10-13 05:53:24.717 [INFO][4652] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.1.0--a--4938a72943-k8s-goldmane--54d579b49d--tm8wl-eth0 goldmane-54d579b49d- calico-system e6bc447c-28f5-40f9-89ab-682202b1d25c 823 0 2025-10-13 05:52:59 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:54d579b49d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4459.1.0-a-4938a72943 goldmane-54d579b49d-tm8wl eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calib67e14ad285 [] [] }} ContainerID="4cf27510ada61ac61614080d58cebdd649e3a9c5bcecd7950adb75e8da17d0f6" Namespace="calico-system" Pod="goldmane-54d579b49d-tm8wl" WorkloadEndpoint="ci--4459.1.0--a--4938a72943-k8s-goldmane--54d579b49d--tm8wl-" Oct 13 05:53:24.870450 containerd[1726]: 2025-10-13 05:53:24.718 [INFO][4652] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4cf27510ada61ac61614080d58cebdd649e3a9c5bcecd7950adb75e8da17d0f6" Namespace="calico-system" Pod="goldmane-54d579b49d-tm8wl" WorkloadEndpoint="ci--4459.1.0--a--4938a72943-k8s-goldmane--54d579b49d--tm8wl-eth0" Oct 13 05:53:24.870450 containerd[1726]: 2025-10-13 05:53:24.798 [INFO][4712] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4cf27510ada61ac61614080d58cebdd649e3a9c5bcecd7950adb75e8da17d0f6" HandleID="k8s-pod-network.4cf27510ada61ac61614080d58cebdd649e3a9c5bcecd7950adb75e8da17d0f6" Workload="ci--4459.1.0--a--4938a72943-k8s-goldmane--54d579b49d--tm8wl-eth0" Oct 13 05:53:24.870639 
containerd[1726]: 2025-10-13 05:53:24.799 [INFO][4712] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4cf27510ada61ac61614080d58cebdd649e3a9c5bcecd7950adb75e8da17d0f6" HandleID="k8s-pod-network.4cf27510ada61ac61614080d58cebdd649e3a9c5bcecd7950adb75e8da17d0f6" Workload="ci--4459.1.0--a--4938a72943-k8s-goldmane--54d579b49d--tm8wl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f750), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459.1.0-a-4938a72943", "pod":"goldmane-54d579b49d-tm8wl", "timestamp":"2025-10-13 05:53:24.798186781 +0000 UTC"}, Hostname:"ci-4459.1.0-a-4938a72943", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 05:53:24.870639 containerd[1726]: 2025-10-13 05:53:24.799 [INFO][4712] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 05:53:24.870639 containerd[1726]: 2025-10-13 05:53:24.799 [INFO][4712] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Oct 13 05:53:24.870639 containerd[1726]: 2025-10-13 05:53:24.799 [INFO][4712] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.1.0-a-4938a72943' Oct 13 05:53:24.870639 containerd[1726]: 2025-10-13 05:53:24.810 [INFO][4712] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4cf27510ada61ac61614080d58cebdd649e3a9c5bcecd7950adb75e8da17d0f6" host="ci-4459.1.0-a-4938a72943" Oct 13 05:53:24.870639 containerd[1726]: 2025-10-13 05:53:24.815 [INFO][4712] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.1.0-a-4938a72943" Oct 13 05:53:24.870639 containerd[1726]: 2025-10-13 05:53:24.824 [INFO][4712] ipam/ipam.go 511: Trying affinity for 192.168.107.0/26 host="ci-4459.1.0-a-4938a72943" Oct 13 05:53:24.870639 containerd[1726]: 2025-10-13 05:53:24.829 [INFO][4712] ipam/ipam.go 158: Attempting to load block cidr=192.168.107.0/26 host="ci-4459.1.0-a-4938a72943" Oct 13 05:53:24.870639 containerd[1726]: 2025-10-13 05:53:24.832 [INFO][4712] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.107.0/26 host="ci-4459.1.0-a-4938a72943" Oct 13 05:53:24.870890 containerd[1726]: 2025-10-13 05:53:24.832 [INFO][4712] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.107.0/26 handle="k8s-pod-network.4cf27510ada61ac61614080d58cebdd649e3a9c5bcecd7950adb75e8da17d0f6" host="ci-4459.1.0-a-4938a72943" Oct 13 05:53:24.870890 containerd[1726]: 2025-10-13 05:53:24.834 [INFO][4712] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.4cf27510ada61ac61614080d58cebdd649e3a9c5bcecd7950adb75e8da17d0f6 Oct 13 05:53:24.870890 containerd[1726]: 2025-10-13 05:53:24.840 [INFO][4712] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.107.0/26 handle="k8s-pod-network.4cf27510ada61ac61614080d58cebdd649e3a9c5bcecd7950adb75e8da17d0f6" host="ci-4459.1.0-a-4938a72943" Oct 13 05:53:24.870890 containerd[1726]: 2025-10-13 05:53:24.846 [INFO][4712] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.107.2/26] block=192.168.107.0/26 handle="k8s-pod-network.4cf27510ada61ac61614080d58cebdd649e3a9c5bcecd7950adb75e8da17d0f6" host="ci-4459.1.0-a-4938a72943" Oct 13 05:53:24.870890 containerd[1726]: 2025-10-13 05:53:24.846 [INFO][4712] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.107.2/26] handle="k8s-pod-network.4cf27510ada61ac61614080d58cebdd649e3a9c5bcecd7950adb75e8da17d0f6" host="ci-4459.1.0-a-4938a72943" Oct 13 05:53:24.870890 containerd[1726]: 2025-10-13 05:53:24.846 [INFO][4712] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Oct 13 05:53:24.870890 containerd[1726]: 2025-10-13 05:53:24.846 [INFO][4712] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.107.2/26] IPv6=[] ContainerID="4cf27510ada61ac61614080d58cebdd649e3a9c5bcecd7950adb75e8da17d0f6" HandleID="k8s-pod-network.4cf27510ada61ac61614080d58cebdd649e3a9c5bcecd7950adb75e8da17d0f6" Workload="ci--4459.1.0--a--4938a72943-k8s-goldmane--54d579b49d--tm8wl-eth0" Oct 13 05:53:24.871071 containerd[1726]: 2025-10-13 05:53:24.849 [INFO][4652] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4cf27510ada61ac61614080d58cebdd649e3a9c5bcecd7950adb75e8da17d0f6" Namespace="calico-system" Pod="goldmane-54d579b49d-tm8wl" WorkloadEndpoint="ci--4459.1.0--a--4938a72943-k8s-goldmane--54d579b49d--tm8wl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.1.0--a--4938a72943-k8s-goldmane--54d579b49d--tm8wl-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"e6bc447c-28f5-40f9-89ab-682202b1d25c", ResourceVersion:"823", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 52, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.1.0-a-4938a72943", ContainerID:"", Pod:"goldmane-54d579b49d-tm8wl", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.107.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calib67e14ad285", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:53:24.871071 containerd[1726]: 2025-10-13 05:53:24.849 [INFO][4652] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.107.2/32] ContainerID="4cf27510ada61ac61614080d58cebdd649e3a9c5bcecd7950adb75e8da17d0f6" Namespace="calico-system" Pod="goldmane-54d579b49d-tm8wl" WorkloadEndpoint="ci--4459.1.0--a--4938a72943-k8s-goldmane--54d579b49d--tm8wl-eth0" Oct 13 05:53:24.871164 containerd[1726]: 2025-10-13 05:53:24.849 [INFO][4652] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib67e14ad285 ContainerID="4cf27510ada61ac61614080d58cebdd649e3a9c5bcecd7950adb75e8da17d0f6" Namespace="calico-system" Pod="goldmane-54d579b49d-tm8wl" WorkloadEndpoint="ci--4459.1.0--a--4938a72943-k8s-goldmane--54d579b49d--tm8wl-eth0" Oct 13 05:53:24.871164 containerd[1726]: 2025-10-13 05:53:24.854 [INFO][4652] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4cf27510ada61ac61614080d58cebdd649e3a9c5bcecd7950adb75e8da17d0f6" Namespace="calico-system" Pod="goldmane-54d579b49d-tm8wl" WorkloadEndpoint="ci--4459.1.0--a--4938a72943-k8s-goldmane--54d579b49d--tm8wl-eth0" Oct 13 05:53:24.871237 containerd[1726]: 2025-10-13 05:53:24.856 [INFO][4652] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4cf27510ada61ac61614080d58cebdd649e3a9c5bcecd7950adb75e8da17d0f6" Namespace="calico-system" Pod="goldmane-54d579b49d-tm8wl" WorkloadEndpoint="ci--4459.1.0--a--4938a72943-k8s-goldmane--54d579b49d--tm8wl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.1.0--a--4938a72943-k8s-goldmane--54d579b49d--tm8wl-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"e6bc447c-28f5-40f9-89ab-682202b1d25c", ResourceVersion:"823", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 52, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.1.0-a-4938a72943", ContainerID:"4cf27510ada61ac61614080d58cebdd649e3a9c5bcecd7950adb75e8da17d0f6", Pod:"goldmane-54d579b49d-tm8wl", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.107.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calib67e14ad285", MAC:"fe:fc:82:fb:9f:3c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:53:24.871295 containerd[1726]: 2025-10-13 05:53:24.868 [INFO][4652] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="4cf27510ada61ac61614080d58cebdd649e3a9c5bcecd7950adb75e8da17d0f6" Namespace="calico-system" Pod="goldmane-54d579b49d-tm8wl" WorkloadEndpoint="ci--4459.1.0--a--4938a72943-k8s-goldmane--54d579b49d--tm8wl-eth0" Oct 13 05:53:24.929545 containerd[1726]: time="2025-10-13T05:53:24.929502772Z" level=info msg="connecting to shim 4cf27510ada61ac61614080d58cebdd649e3a9c5bcecd7950adb75e8da17d0f6" address="unix:///run/containerd/s/fc67cd6aaa4a416c65b9437a1f7ad2a786a96eb39ebf67ce17b2874051414f1a" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:53:24.951374 systemd[1]: Started cri-containerd-4cf27510ada61ac61614080d58cebdd649e3a9c5bcecd7950adb75e8da17d0f6.scope - libcontainer container 4cf27510ada61ac61614080d58cebdd649e3a9c5bcecd7950adb75e8da17d0f6. Oct 13 05:53:24.957404 systemd-networkd[1352]: cali4e871a949b4: Link UP Oct 13 05:53:24.959259 systemd-networkd[1352]: cali4e871a949b4: Gained carrier Oct 13 05:53:24.992500 containerd[1726]: 2025-10-13 05:53:24.731 [INFO][4662] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.1.0--a--4938a72943-k8s-calico--apiserver--7fcd66766--ctwdl-eth0 calico-apiserver-7fcd66766- calico-apiserver 2b69a0e3-c65e-46e1-9bc8-e5fa93bccd7d 821 0 2025-10-13 05:52:56 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7fcd66766 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4459.1.0-a-4938a72943 calico-apiserver-7fcd66766-ctwdl eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali4e871a949b4 [] [] }} ContainerID="26311722a1ddf8be80dd10579bee87b017e82e6dd22c319ce28119e34108949f" Namespace="calico-apiserver" Pod="calico-apiserver-7fcd66766-ctwdl" WorkloadEndpoint="ci--4459.1.0--a--4938a72943-k8s-calico--apiserver--7fcd66766--ctwdl-" Oct 13 05:53:24.992500 containerd[1726]: 
2025-10-13 05:53:24.734 [INFO][4662] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="26311722a1ddf8be80dd10579bee87b017e82e6dd22c319ce28119e34108949f" Namespace="calico-apiserver" Pod="calico-apiserver-7fcd66766-ctwdl" WorkloadEndpoint="ci--4459.1.0--a--4938a72943-k8s-calico--apiserver--7fcd66766--ctwdl-eth0" Oct 13 05:53:24.992500 containerd[1726]: 2025-10-13 05:53:24.805 [INFO][4718] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="26311722a1ddf8be80dd10579bee87b017e82e6dd22c319ce28119e34108949f" HandleID="k8s-pod-network.26311722a1ddf8be80dd10579bee87b017e82e6dd22c319ce28119e34108949f" Workload="ci--4459.1.0--a--4938a72943-k8s-calico--apiserver--7fcd66766--ctwdl-eth0" Oct 13 05:53:24.992707 containerd[1726]: 2025-10-13 05:53:24.805 [INFO][4718] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="26311722a1ddf8be80dd10579bee87b017e82e6dd22c319ce28119e34108949f" HandleID="k8s-pod-network.26311722a1ddf8be80dd10579bee87b017e82e6dd22c319ce28119e34108949f" Workload="ci--4459.1.0--a--4938a72943-k8s-calico--apiserver--7fcd66766--ctwdl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5650), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4459.1.0-a-4938a72943", "pod":"calico-apiserver-7fcd66766-ctwdl", "timestamp":"2025-10-13 05:53:24.80575092 +0000 UTC"}, Hostname:"ci-4459.1.0-a-4938a72943", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 05:53:24.992707 containerd[1726]: 2025-10-13 05:53:24.805 [INFO][4718] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 05:53:24.992707 containerd[1726]: 2025-10-13 05:53:24.846 [INFO][4718] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Oct 13 05:53:24.992707 containerd[1726]: 2025-10-13 05:53:24.846 [INFO][4718] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.1.0-a-4938a72943' Oct 13 05:53:24.992707 containerd[1726]: 2025-10-13 05:53:24.911 [INFO][4718] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.26311722a1ddf8be80dd10579bee87b017e82e6dd22c319ce28119e34108949f" host="ci-4459.1.0-a-4938a72943" Oct 13 05:53:24.992707 containerd[1726]: 2025-10-13 05:53:24.914 [INFO][4718] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.1.0-a-4938a72943" Oct 13 05:53:24.992707 containerd[1726]: 2025-10-13 05:53:24.919 [INFO][4718] ipam/ipam.go 511: Trying affinity for 192.168.107.0/26 host="ci-4459.1.0-a-4938a72943" Oct 13 05:53:24.992707 containerd[1726]: 2025-10-13 05:53:24.921 [INFO][4718] ipam/ipam.go 158: Attempting to load block cidr=192.168.107.0/26 host="ci-4459.1.0-a-4938a72943" Oct 13 05:53:24.992707 containerd[1726]: 2025-10-13 05:53:24.923 [INFO][4718] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.107.0/26 host="ci-4459.1.0-a-4938a72943" Oct 13 05:53:24.992931 containerd[1726]: 2025-10-13 05:53:24.923 [INFO][4718] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.107.0/26 handle="k8s-pod-network.26311722a1ddf8be80dd10579bee87b017e82e6dd22c319ce28119e34108949f" host="ci-4459.1.0-a-4938a72943" Oct 13 05:53:24.992931 containerd[1726]: 2025-10-13 05:53:24.924 [INFO][4718] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.26311722a1ddf8be80dd10579bee87b017e82e6dd22c319ce28119e34108949f Oct 13 05:53:24.992931 containerd[1726]: 2025-10-13 05:53:24.929 [INFO][4718] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.107.0/26 handle="k8s-pod-network.26311722a1ddf8be80dd10579bee87b017e82e6dd22c319ce28119e34108949f" host="ci-4459.1.0-a-4938a72943" Oct 13 05:53:24.992931 containerd[1726]: 2025-10-13 05:53:24.941 [INFO][4718] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.107.3/26] block=192.168.107.0/26 handle="k8s-pod-network.26311722a1ddf8be80dd10579bee87b017e82e6dd22c319ce28119e34108949f" host="ci-4459.1.0-a-4938a72943" Oct 13 05:53:24.992931 containerd[1726]: 2025-10-13 05:53:24.941 [INFO][4718] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.107.3/26] handle="k8s-pod-network.26311722a1ddf8be80dd10579bee87b017e82e6dd22c319ce28119e34108949f" host="ci-4459.1.0-a-4938a72943" Oct 13 05:53:24.992931 containerd[1726]: 2025-10-13 05:53:24.941 [INFO][4718] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Oct 13 05:53:24.992931 containerd[1726]: 2025-10-13 05:53:24.941 [INFO][4718] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.107.3/26] IPv6=[] ContainerID="26311722a1ddf8be80dd10579bee87b017e82e6dd22c319ce28119e34108949f" HandleID="k8s-pod-network.26311722a1ddf8be80dd10579bee87b017e82e6dd22c319ce28119e34108949f" Workload="ci--4459.1.0--a--4938a72943-k8s-calico--apiserver--7fcd66766--ctwdl-eth0" Oct 13 05:53:24.993086 containerd[1726]: 2025-10-13 05:53:24.949 [INFO][4662] cni-plugin/k8s.go 418: Populated endpoint ContainerID="26311722a1ddf8be80dd10579bee87b017e82e6dd22c319ce28119e34108949f" Namespace="calico-apiserver" Pod="calico-apiserver-7fcd66766-ctwdl" WorkloadEndpoint="ci--4459.1.0--a--4938a72943-k8s-calico--apiserver--7fcd66766--ctwdl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.1.0--a--4938a72943-k8s-calico--apiserver--7fcd66766--ctwdl-eth0", GenerateName:"calico-apiserver-7fcd66766-", Namespace:"calico-apiserver", SelfLink:"", UID:"2b69a0e3-c65e-46e1-9bc8-e5fa93bccd7d", ResourceVersion:"821", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 52, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"7fcd66766", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.1.0-a-4938a72943", ContainerID:"", Pod:"calico-apiserver-7fcd66766-ctwdl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.107.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4e871a949b4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:53:24.993154 containerd[1726]: 2025-10-13 05:53:24.949 [INFO][4662] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.107.3/32] ContainerID="26311722a1ddf8be80dd10579bee87b017e82e6dd22c319ce28119e34108949f" Namespace="calico-apiserver" Pod="calico-apiserver-7fcd66766-ctwdl" WorkloadEndpoint="ci--4459.1.0--a--4938a72943-k8s-calico--apiserver--7fcd66766--ctwdl-eth0" Oct 13 05:53:24.993154 containerd[1726]: 2025-10-13 05:53:24.949 [INFO][4662] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4e871a949b4 ContainerID="26311722a1ddf8be80dd10579bee87b017e82e6dd22c319ce28119e34108949f" Namespace="calico-apiserver" Pod="calico-apiserver-7fcd66766-ctwdl" WorkloadEndpoint="ci--4459.1.0--a--4938a72943-k8s-calico--apiserver--7fcd66766--ctwdl-eth0" Oct 13 05:53:24.993154 containerd[1726]: 2025-10-13 05:53:24.960 [INFO][4662] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="26311722a1ddf8be80dd10579bee87b017e82e6dd22c319ce28119e34108949f" Namespace="calico-apiserver" Pod="calico-apiserver-7fcd66766-ctwdl" 
WorkloadEndpoint="ci--4459.1.0--a--4938a72943-k8s-calico--apiserver--7fcd66766--ctwdl-eth0" Oct 13 05:53:24.993253 containerd[1726]: 2025-10-13 05:53:24.962 [INFO][4662] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="26311722a1ddf8be80dd10579bee87b017e82e6dd22c319ce28119e34108949f" Namespace="calico-apiserver" Pod="calico-apiserver-7fcd66766-ctwdl" WorkloadEndpoint="ci--4459.1.0--a--4938a72943-k8s-calico--apiserver--7fcd66766--ctwdl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.1.0--a--4938a72943-k8s-calico--apiserver--7fcd66766--ctwdl-eth0", GenerateName:"calico-apiserver-7fcd66766-", Namespace:"calico-apiserver", SelfLink:"", UID:"2b69a0e3-c65e-46e1-9bc8-e5fa93bccd7d", ResourceVersion:"821", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 52, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7fcd66766", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.1.0-a-4938a72943", ContainerID:"26311722a1ddf8be80dd10579bee87b017e82e6dd22c319ce28119e34108949f", Pod:"calico-apiserver-7fcd66766-ctwdl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.107.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4e871a949b4", MAC:"06:fc:bd:02:ce:f2", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:53:24.993313 containerd[1726]: 2025-10-13 05:53:24.988 [INFO][4662] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="26311722a1ddf8be80dd10579bee87b017e82e6dd22c319ce28119e34108949f" Namespace="calico-apiserver" Pod="calico-apiserver-7fcd66766-ctwdl" WorkloadEndpoint="ci--4459.1.0--a--4938a72943-k8s-calico--apiserver--7fcd66766--ctwdl-eth0" Oct 13 05:53:25.036619 containerd[1726]: time="2025-10-13T05:53:25.036546526Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-tm8wl,Uid:e6bc447c-28f5-40f9-89ab-682202b1d25c,Namespace:calico-system,Attempt:0,} returns sandbox id \"4cf27510ada61ac61614080d58cebdd649e3a9c5bcecd7950adb75e8da17d0f6\"" Oct 13 05:53:25.037747 containerd[1726]: time="2025-10-13T05:53:25.037687518Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Oct 13 05:53:25.049687 containerd[1726]: time="2025-10-13T05:53:25.049546085Z" level=info msg="connecting to shim 26311722a1ddf8be80dd10579bee87b017e82e6dd22c319ce28119e34108949f" address="unix:///run/containerd/s/8e369067f36d1c5bda42d1b5e0dee7021b39aa9c0dc0ed2f94bb03d6f30d12a1" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:53:25.061428 systemd-networkd[1352]: calie64d2f12a6c: Link UP Oct 13 05:53:25.062204 systemd-networkd[1352]: calie64d2f12a6c: Gained carrier Oct 13 05:53:25.081745 containerd[1726]: 2025-10-13 05:53:24.739 [INFO][4677] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.1.0--a--4938a72943-k8s-calico--apiserver--7fcd66766--4rf9v-eth0 calico-apiserver-7fcd66766- calico-apiserver 36ee441e-a553-45a0-b9b4-51e7dca487a0 822 0 2025-10-13 05:52:56 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7fcd66766 projectcalico.org/namespace:calico-apiserver 
projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4459.1.0-a-4938a72943 calico-apiserver-7fcd66766-4rf9v eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calie64d2f12a6c [] [] }} ContainerID="2903e417c15cd1d053ce45f4a55c3f614f635eb132a2ad654747bc341375743f" Namespace="calico-apiserver" Pod="calico-apiserver-7fcd66766-4rf9v" WorkloadEndpoint="ci--4459.1.0--a--4938a72943-k8s-calico--apiserver--7fcd66766--4rf9v-" Oct 13 05:53:25.081745 containerd[1726]: 2025-10-13 05:53:24.739 [INFO][4677] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2903e417c15cd1d053ce45f4a55c3f614f635eb132a2ad654747bc341375743f" Namespace="calico-apiserver" Pod="calico-apiserver-7fcd66766-4rf9v" WorkloadEndpoint="ci--4459.1.0--a--4938a72943-k8s-calico--apiserver--7fcd66766--4rf9v-eth0" Oct 13 05:53:25.081745 containerd[1726]: 2025-10-13 05:53:24.828 [INFO][4721] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2903e417c15cd1d053ce45f4a55c3f614f635eb132a2ad654747bc341375743f" HandleID="k8s-pod-network.2903e417c15cd1d053ce45f4a55c3f614f635eb132a2ad654747bc341375743f" Workload="ci--4459.1.0--a--4938a72943-k8s-calico--apiserver--7fcd66766--4rf9v-eth0" Oct 13 05:53:25.081968 containerd[1726]: 2025-10-13 05:53:24.829 [INFO][4721] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2903e417c15cd1d053ce45f4a55c3f614f635eb132a2ad654747bc341375743f" HandleID="k8s-pod-network.2903e417c15cd1d053ce45f4a55c3f614f635eb132a2ad654747bc341375743f" Workload="ci--4459.1.0--a--4938a72943-k8s-calico--apiserver--7fcd66766--4rf9v-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c7230), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4459.1.0-a-4938a72943", "pod":"calico-apiserver-7fcd66766-4rf9v", "timestamp":"2025-10-13 05:53:24.826908636 +0000 UTC"}, Hostname:"ci-4459.1.0-a-4938a72943", IPv4Pools:[]net.IPNet{}, 
IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 05:53:25.081968 containerd[1726]: 2025-10-13 05:53:24.829 [INFO][4721] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 05:53:25.081968 containerd[1726]: 2025-10-13 05:53:24.941 [INFO][4721] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Oct 13 05:53:25.081968 containerd[1726]: 2025-10-13 05:53:24.942 [INFO][4721] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.1.0-a-4938a72943' Oct 13 05:53:25.081968 containerd[1726]: 2025-10-13 05:53:25.013 [INFO][4721] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2903e417c15cd1d053ce45f4a55c3f614f635eb132a2ad654747bc341375743f" host="ci-4459.1.0-a-4938a72943" Oct 13 05:53:25.081968 containerd[1726]: 2025-10-13 05:53:25.020 [INFO][4721] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.1.0-a-4938a72943" Oct 13 05:53:25.081968 containerd[1726]: 2025-10-13 05:53:25.025 [INFO][4721] ipam/ipam.go 511: Trying affinity for 192.168.107.0/26 host="ci-4459.1.0-a-4938a72943" Oct 13 05:53:25.081968 containerd[1726]: 2025-10-13 05:53:25.027 [INFO][4721] ipam/ipam.go 158: Attempting to load block cidr=192.168.107.0/26 host="ci-4459.1.0-a-4938a72943" Oct 13 05:53:25.081968 containerd[1726]: 2025-10-13 05:53:25.029 [INFO][4721] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.107.0/26 host="ci-4459.1.0-a-4938a72943" Oct 13 05:53:25.082252 containerd[1726]: 2025-10-13 05:53:25.029 [INFO][4721] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.107.0/26 handle="k8s-pod-network.2903e417c15cd1d053ce45f4a55c3f614f635eb132a2ad654747bc341375743f" host="ci-4459.1.0-a-4938a72943" Oct 13 05:53:25.082252 containerd[1726]: 2025-10-13 05:53:25.031 [INFO][4721] ipam/ipam.go 1764: Creating new handle: 
k8s-pod-network.2903e417c15cd1d053ce45f4a55c3f614f635eb132a2ad654747bc341375743f Oct 13 05:53:25.082252 containerd[1726]: 2025-10-13 05:53:25.041 [INFO][4721] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.107.0/26 handle="k8s-pod-network.2903e417c15cd1d053ce45f4a55c3f614f635eb132a2ad654747bc341375743f" host="ci-4459.1.0-a-4938a72943" Oct 13 05:53:25.082252 containerd[1726]: 2025-10-13 05:53:25.053 [INFO][4721] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.107.4/26] block=192.168.107.0/26 handle="k8s-pod-network.2903e417c15cd1d053ce45f4a55c3f614f635eb132a2ad654747bc341375743f" host="ci-4459.1.0-a-4938a72943" Oct 13 05:53:25.082252 containerd[1726]: 2025-10-13 05:53:25.053 [INFO][4721] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.107.4/26] handle="k8s-pod-network.2903e417c15cd1d053ce45f4a55c3f614f635eb132a2ad654747bc341375743f" host="ci-4459.1.0-a-4938a72943" Oct 13 05:53:25.082252 containerd[1726]: 2025-10-13 05:53:25.054 [INFO][4721] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Oct 13 05:53:25.082252 containerd[1726]: 2025-10-13 05:53:25.054 [INFO][4721] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.107.4/26] IPv6=[] ContainerID="2903e417c15cd1d053ce45f4a55c3f614f635eb132a2ad654747bc341375743f" HandleID="k8s-pod-network.2903e417c15cd1d053ce45f4a55c3f614f635eb132a2ad654747bc341375743f" Workload="ci--4459.1.0--a--4938a72943-k8s-calico--apiserver--7fcd66766--4rf9v-eth0" Oct 13 05:53:25.082404 containerd[1726]: 2025-10-13 05:53:25.057 [INFO][4677] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2903e417c15cd1d053ce45f4a55c3f614f635eb132a2ad654747bc341375743f" Namespace="calico-apiserver" Pod="calico-apiserver-7fcd66766-4rf9v" WorkloadEndpoint="ci--4459.1.0--a--4938a72943-k8s-calico--apiserver--7fcd66766--4rf9v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.1.0--a--4938a72943-k8s-calico--apiserver--7fcd66766--4rf9v-eth0", GenerateName:"calico-apiserver-7fcd66766-", Namespace:"calico-apiserver", SelfLink:"", UID:"36ee441e-a553-45a0-b9b4-51e7dca487a0", ResourceVersion:"822", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 52, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7fcd66766", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.1.0-a-4938a72943", ContainerID:"", Pod:"calico-apiserver-7fcd66766-4rf9v", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", 
IPNetworks:[]string{"192.168.107.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie64d2f12a6c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:53:25.082467 containerd[1726]: 2025-10-13 05:53:25.057 [INFO][4677] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.107.4/32] ContainerID="2903e417c15cd1d053ce45f4a55c3f614f635eb132a2ad654747bc341375743f" Namespace="calico-apiserver" Pod="calico-apiserver-7fcd66766-4rf9v" WorkloadEndpoint="ci--4459.1.0--a--4938a72943-k8s-calico--apiserver--7fcd66766--4rf9v-eth0" Oct 13 05:53:25.082467 containerd[1726]: 2025-10-13 05:53:25.057 [INFO][4677] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie64d2f12a6c ContainerID="2903e417c15cd1d053ce45f4a55c3f614f635eb132a2ad654747bc341375743f" Namespace="calico-apiserver" Pod="calico-apiserver-7fcd66766-4rf9v" WorkloadEndpoint="ci--4459.1.0--a--4938a72943-k8s-calico--apiserver--7fcd66766--4rf9v-eth0" Oct 13 05:53:25.082467 containerd[1726]: 2025-10-13 05:53:25.062 [INFO][4677] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2903e417c15cd1d053ce45f4a55c3f614f635eb132a2ad654747bc341375743f" Namespace="calico-apiserver" Pod="calico-apiserver-7fcd66766-4rf9v" WorkloadEndpoint="ci--4459.1.0--a--4938a72943-k8s-calico--apiserver--7fcd66766--4rf9v-eth0" Oct 13 05:53:25.082690 containerd[1726]: 2025-10-13 05:53:25.064 [INFO][4677] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2903e417c15cd1d053ce45f4a55c3f614f635eb132a2ad654747bc341375743f" Namespace="calico-apiserver" Pod="calico-apiserver-7fcd66766-4rf9v" WorkloadEndpoint="ci--4459.1.0--a--4938a72943-k8s-calico--apiserver--7fcd66766--4rf9v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.1.0--a--4938a72943-k8s-calico--apiserver--7fcd66766--4rf9v-eth0", GenerateName:"calico-apiserver-7fcd66766-", Namespace:"calico-apiserver", SelfLink:"", UID:"36ee441e-a553-45a0-b9b4-51e7dca487a0", ResourceVersion:"822", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 52, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7fcd66766", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.1.0-a-4938a72943", ContainerID:"2903e417c15cd1d053ce45f4a55c3f614f635eb132a2ad654747bc341375743f", Pod:"calico-apiserver-7fcd66766-4rf9v", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.107.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie64d2f12a6c", MAC:"12:8d:ad:49:64:3f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:53:25.082805 containerd[1726]: 2025-10-13 05:53:25.077 [INFO][4677] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2903e417c15cd1d053ce45f4a55c3f614f635eb132a2ad654747bc341375743f" Namespace="calico-apiserver" Pod="calico-apiserver-7fcd66766-4rf9v" WorkloadEndpoint="ci--4459.1.0--a--4938a72943-k8s-calico--apiserver--7fcd66766--4rf9v-eth0" Oct 13 05:53:25.091717 systemd[1]: Started 
cri-containerd-26311722a1ddf8be80dd10579bee87b017e82e6dd22c319ce28119e34108949f.scope - libcontainer container 26311722a1ddf8be80dd10579bee87b017e82e6dd22c319ce28119e34108949f. Oct 13 05:53:25.143225 containerd[1726]: time="2025-10-13T05:53:25.142329895Z" level=info msg="connecting to shim 2903e417c15cd1d053ce45f4a55c3f614f635eb132a2ad654747bc341375743f" address="unix:///run/containerd/s/02cf43d5acf27522d072d739c78ec5621e98ae42c7b7d65b762afa8f0723fdc4" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:53:25.161430 containerd[1726]: time="2025-10-13T05:53:25.161401829Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fcd66766-ctwdl,Uid:2b69a0e3-c65e-46e1-9bc8-e5fa93bccd7d,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"26311722a1ddf8be80dd10579bee87b017e82e6dd22c319ce28119e34108949f\"" Oct 13 05:53:25.188332 systemd[1]: Started cri-containerd-2903e417c15cd1d053ce45f4a55c3f614f635eb132a2ad654747bc341375743f.scope - libcontainer container 2903e417c15cd1d053ce45f4a55c3f614f635eb132a2ad654747bc341375743f. 
Oct 13 05:53:25.191770 systemd-networkd[1352]: cali1d234dde070: Link UP Oct 13 05:53:25.192839 systemd-networkd[1352]: cali1d234dde070: Gained carrier Oct 13 05:53:25.212270 containerd[1726]: 2025-10-13 05:53:24.746 [INFO][4682] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.1.0--a--4938a72943-k8s-csi--node--driver--2nwjz-eth0 csi-node-driver- calico-system 92fa068c-877d-4748-8370-0fa59cfeb840 699 0 2025-10-13 05:53:00 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6c96d95cc7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4459.1.0-a-4938a72943 csi-node-driver-2nwjz eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali1d234dde070 [] [] }} ContainerID="d7e1e6f5d864cda0660776435f2a5854b0702edd5cf4f389125f82b3b2323398" Namespace="calico-system" Pod="csi-node-driver-2nwjz" WorkloadEndpoint="ci--4459.1.0--a--4938a72943-k8s-csi--node--driver--2nwjz-" Oct 13 05:53:25.212270 containerd[1726]: 2025-10-13 05:53:24.748 [INFO][4682] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d7e1e6f5d864cda0660776435f2a5854b0702edd5cf4f389125f82b3b2323398" Namespace="calico-system" Pod="csi-node-driver-2nwjz" WorkloadEndpoint="ci--4459.1.0--a--4938a72943-k8s-csi--node--driver--2nwjz-eth0" Oct 13 05:53:25.212270 containerd[1726]: 2025-10-13 05:53:24.832 [INFO][4729] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d7e1e6f5d864cda0660776435f2a5854b0702edd5cf4f389125f82b3b2323398" HandleID="k8s-pod-network.d7e1e6f5d864cda0660776435f2a5854b0702edd5cf4f389125f82b3b2323398" Workload="ci--4459.1.0--a--4938a72943-k8s-csi--node--driver--2nwjz-eth0" Oct 13 05:53:25.212998 containerd[1726]: 2025-10-13 05:53:24.832 [INFO][4729] ipam/ipam_plugin.go 
265: Auto assigning IP ContainerID="d7e1e6f5d864cda0660776435f2a5854b0702edd5cf4f389125f82b3b2323398" HandleID="k8s-pod-network.d7e1e6f5d864cda0660776435f2a5854b0702edd5cf4f389125f82b3b2323398" Workload="ci--4459.1.0--a--4938a72943-k8s-csi--node--driver--2nwjz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000381750), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459.1.0-a-4938a72943", "pod":"csi-node-driver-2nwjz", "timestamp":"2025-10-13 05:53:24.832475275 +0000 UTC"}, Hostname:"ci-4459.1.0-a-4938a72943", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 05:53:25.212998 containerd[1726]: 2025-10-13 05:53:24.832 [INFO][4729] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 05:53:25.212998 containerd[1726]: 2025-10-13 05:53:25.054 [INFO][4729] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Oct 13 05:53:25.212998 containerd[1726]: 2025-10-13 05:53:25.054 [INFO][4729] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.1.0-a-4938a72943' Oct 13 05:53:25.212998 containerd[1726]: 2025-10-13 05:53:25.113 [INFO][4729] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d7e1e6f5d864cda0660776435f2a5854b0702edd5cf4f389125f82b3b2323398" host="ci-4459.1.0-a-4938a72943" Oct 13 05:53:25.212998 containerd[1726]: 2025-10-13 05:53:25.121 [INFO][4729] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.1.0-a-4938a72943" Oct 13 05:53:25.212998 containerd[1726]: 2025-10-13 05:53:25.128 [INFO][4729] ipam/ipam.go 511: Trying affinity for 192.168.107.0/26 host="ci-4459.1.0-a-4938a72943" Oct 13 05:53:25.212998 containerd[1726]: 2025-10-13 05:53:25.130 [INFO][4729] ipam/ipam.go 158: Attempting to load block cidr=192.168.107.0/26 host="ci-4459.1.0-a-4938a72943" Oct 13 05:53:25.212998 containerd[1726]: 2025-10-13 05:53:25.134 [INFO][4729] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.107.0/26 host="ci-4459.1.0-a-4938a72943" Oct 13 05:53:25.213263 containerd[1726]: 2025-10-13 05:53:25.134 [INFO][4729] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.107.0/26 handle="k8s-pod-network.d7e1e6f5d864cda0660776435f2a5854b0702edd5cf4f389125f82b3b2323398" host="ci-4459.1.0-a-4938a72943" Oct 13 05:53:25.213263 containerd[1726]: 2025-10-13 05:53:25.135 [INFO][4729] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.d7e1e6f5d864cda0660776435f2a5854b0702edd5cf4f389125f82b3b2323398 Oct 13 05:53:25.213263 containerd[1726]: 2025-10-13 05:53:25.142 [INFO][4729] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.107.0/26 handle="k8s-pod-network.d7e1e6f5d864cda0660776435f2a5854b0702edd5cf4f389125f82b3b2323398" host="ci-4459.1.0-a-4938a72943" Oct 13 05:53:25.213263 containerd[1726]: 2025-10-13 05:53:25.164 [INFO][4729] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.107.5/26] block=192.168.107.0/26 handle="k8s-pod-network.d7e1e6f5d864cda0660776435f2a5854b0702edd5cf4f389125f82b3b2323398" host="ci-4459.1.0-a-4938a72943" Oct 13 05:53:25.213263 containerd[1726]: 2025-10-13 05:53:25.164 [INFO][4729] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.107.5/26] handle="k8s-pod-network.d7e1e6f5d864cda0660776435f2a5854b0702edd5cf4f389125f82b3b2323398" host="ci-4459.1.0-a-4938a72943" Oct 13 05:53:25.213263 containerd[1726]: 2025-10-13 05:53:25.164 [INFO][4729] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Oct 13 05:53:25.213263 containerd[1726]: 2025-10-13 05:53:25.164 [INFO][4729] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.107.5/26] IPv6=[] ContainerID="d7e1e6f5d864cda0660776435f2a5854b0702edd5cf4f389125f82b3b2323398" HandleID="k8s-pod-network.d7e1e6f5d864cda0660776435f2a5854b0702edd5cf4f389125f82b3b2323398" Workload="ci--4459.1.0--a--4938a72943-k8s-csi--node--driver--2nwjz-eth0" Oct 13 05:53:25.213416 containerd[1726]: 2025-10-13 05:53:25.170 [INFO][4682] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d7e1e6f5d864cda0660776435f2a5854b0702edd5cf4f389125f82b3b2323398" Namespace="calico-system" Pod="csi-node-driver-2nwjz" WorkloadEndpoint="ci--4459.1.0--a--4938a72943-k8s-csi--node--driver--2nwjz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.1.0--a--4938a72943-k8s-csi--node--driver--2nwjz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"92fa068c-877d-4748-8370-0fa59cfeb840", ResourceVersion:"699", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 53, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", 
"name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.1.0-a-4938a72943", ContainerID:"", Pod:"csi-node-driver-2nwjz", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.107.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali1d234dde070", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:53:25.213486 containerd[1726]: 2025-10-13 05:53:25.170 [INFO][4682] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.107.5/32] ContainerID="d7e1e6f5d864cda0660776435f2a5854b0702edd5cf4f389125f82b3b2323398" Namespace="calico-system" Pod="csi-node-driver-2nwjz" WorkloadEndpoint="ci--4459.1.0--a--4938a72943-k8s-csi--node--driver--2nwjz-eth0" Oct 13 05:53:25.213486 containerd[1726]: 2025-10-13 05:53:25.170 [INFO][4682] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1d234dde070 ContainerID="d7e1e6f5d864cda0660776435f2a5854b0702edd5cf4f389125f82b3b2323398" Namespace="calico-system" Pod="csi-node-driver-2nwjz" WorkloadEndpoint="ci--4459.1.0--a--4938a72943-k8s-csi--node--driver--2nwjz-eth0" Oct 13 05:53:25.213486 containerd[1726]: 2025-10-13 05:53:25.193 [INFO][4682] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d7e1e6f5d864cda0660776435f2a5854b0702edd5cf4f389125f82b3b2323398" Namespace="calico-system" Pod="csi-node-driver-2nwjz" WorkloadEndpoint="ci--4459.1.0--a--4938a72943-k8s-csi--node--driver--2nwjz-eth0" Oct 13 05:53:25.213567 
containerd[1726]: 2025-10-13 05:53:25.194 [INFO][4682] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d7e1e6f5d864cda0660776435f2a5854b0702edd5cf4f389125f82b3b2323398" Namespace="calico-system" Pod="csi-node-driver-2nwjz" WorkloadEndpoint="ci--4459.1.0--a--4938a72943-k8s-csi--node--driver--2nwjz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.1.0--a--4938a72943-k8s-csi--node--driver--2nwjz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"92fa068c-877d-4748-8370-0fa59cfeb840", ResourceVersion:"699", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 53, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.1.0-a-4938a72943", ContainerID:"d7e1e6f5d864cda0660776435f2a5854b0702edd5cf4f389125f82b3b2323398", Pod:"csi-node-driver-2nwjz", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.107.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali1d234dde070", MAC:"ba:44:0e:f5:c1:7b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:53:25.213624 containerd[1726]: 
2025-10-13 05:53:25.210 [INFO][4682] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d7e1e6f5d864cda0660776435f2a5854b0702edd5cf4f389125f82b3b2323398" Namespace="calico-system" Pod="csi-node-driver-2nwjz" WorkloadEndpoint="ci--4459.1.0--a--4938a72943-k8s-csi--node--driver--2nwjz-eth0" Oct 13 05:53:25.262652 containerd[1726]: time="2025-10-13T05:53:25.262229809Z" level=info msg="connecting to shim d7e1e6f5d864cda0660776435f2a5854b0702edd5cf4f389125f82b3b2323398" address="unix:///run/containerd/s/4190b642ca630b9a2e152cd3fbc6527aa935d099e09e691ce28b7b17f0edcc78" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:53:25.263124 containerd[1726]: time="2025-10-13T05:53:25.263081015Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fcd66766-4rf9v,Uid:36ee441e-a553-45a0-b9b4-51e7dca487a0,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"2903e417c15cd1d053ce45f4a55c3f614f635eb132a2ad654747bc341375743f\"" Oct 13 05:53:25.270071 systemd-networkd[1352]: cali84d4bcc688d: Link UP Oct 13 05:53:25.271824 systemd-networkd[1352]: cali84d4bcc688d: Gained carrier Oct 13 05:53:25.292261 containerd[1726]: 2025-10-13 05:53:24.768 [INFO][4690] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.1.0--a--4938a72943-k8s-coredns--674b8bbfcf--844nx-eth0 coredns-674b8bbfcf- kube-system 846c13f8-9e67-46e6-b031-cc0dfeca9023 818 0 2025-10-13 05:52:47 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4459.1.0-a-4938a72943 coredns-674b8bbfcf-844nx eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali84d4bcc688d [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="15f4ee3f067560cba0eafc2874130c65811d703667ff158ffb775700922485ad" Namespace="kube-system" Pod="coredns-674b8bbfcf-844nx" 
WorkloadEndpoint="ci--4459.1.0--a--4938a72943-k8s-coredns--674b8bbfcf--844nx-" Oct 13 05:53:25.292261 containerd[1726]: 2025-10-13 05:53:24.768 [INFO][4690] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="15f4ee3f067560cba0eafc2874130c65811d703667ff158ffb775700922485ad" Namespace="kube-system" Pod="coredns-674b8bbfcf-844nx" WorkloadEndpoint="ci--4459.1.0--a--4938a72943-k8s-coredns--674b8bbfcf--844nx-eth0" Oct 13 05:53:25.292261 containerd[1726]: 2025-10-13 05:53:24.834 [INFO][4738] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="15f4ee3f067560cba0eafc2874130c65811d703667ff158ffb775700922485ad" HandleID="k8s-pod-network.15f4ee3f067560cba0eafc2874130c65811d703667ff158ffb775700922485ad" Workload="ci--4459.1.0--a--4938a72943-k8s-coredns--674b8bbfcf--844nx-eth0" Oct 13 05:53:25.292445 containerd[1726]: 2025-10-13 05:53:24.834 [INFO][4738] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="15f4ee3f067560cba0eafc2874130c65811d703667ff158ffb775700922485ad" HandleID="k8s-pod-network.15f4ee3f067560cba0eafc2874130c65811d703667ff158ffb775700922485ad" Workload="ci--4459.1.0--a--4938a72943-k8s-coredns--674b8bbfcf--844nx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00032ccd0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4459.1.0-a-4938a72943", "pod":"coredns-674b8bbfcf-844nx", "timestamp":"2025-10-13 05:53:24.834231469 +0000 UTC"}, Hostname:"ci-4459.1.0-a-4938a72943", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 05:53:25.292445 containerd[1726]: 2025-10-13 05:53:24.834 [INFO][4738] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 05:53:25.292445 containerd[1726]: 2025-10-13 05:53:25.167 [INFO][4738] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Oct 13 05:53:25.292445 containerd[1726]: 2025-10-13 05:53:25.167 [INFO][4738] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.1.0-a-4938a72943' Oct 13 05:53:25.292445 containerd[1726]: 2025-10-13 05:53:25.214 [INFO][4738] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.15f4ee3f067560cba0eafc2874130c65811d703667ff158ffb775700922485ad" host="ci-4459.1.0-a-4938a72943" Oct 13 05:53:25.292445 containerd[1726]: 2025-10-13 05:53:25.221 [INFO][4738] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.1.0-a-4938a72943" Oct 13 05:53:25.292445 containerd[1726]: 2025-10-13 05:53:25.228 [INFO][4738] ipam/ipam.go 511: Trying affinity for 192.168.107.0/26 host="ci-4459.1.0-a-4938a72943" Oct 13 05:53:25.292445 containerd[1726]: 2025-10-13 05:53:25.230 [INFO][4738] ipam/ipam.go 158: Attempting to load block cidr=192.168.107.0/26 host="ci-4459.1.0-a-4938a72943" Oct 13 05:53:25.292445 containerd[1726]: 2025-10-13 05:53:25.233 [INFO][4738] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.107.0/26 host="ci-4459.1.0-a-4938a72943" Oct 13 05:53:25.293036 containerd[1726]: 2025-10-13 05:53:25.233 [INFO][4738] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.107.0/26 handle="k8s-pod-network.15f4ee3f067560cba0eafc2874130c65811d703667ff158ffb775700922485ad" host="ci-4459.1.0-a-4938a72943" Oct 13 05:53:25.293036 containerd[1726]: 2025-10-13 05:53:25.235 [INFO][4738] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.15f4ee3f067560cba0eafc2874130c65811d703667ff158ffb775700922485ad Oct 13 05:53:25.293036 containerd[1726]: 2025-10-13 05:53:25.245 [INFO][4738] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.107.0/26 handle="k8s-pod-network.15f4ee3f067560cba0eafc2874130c65811d703667ff158ffb775700922485ad" host="ci-4459.1.0-a-4938a72943" Oct 13 05:53:25.293036 containerd[1726]: 2025-10-13 05:53:25.259 [INFO][4738] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.107.6/26] block=192.168.107.0/26 handle="k8s-pod-network.15f4ee3f067560cba0eafc2874130c65811d703667ff158ffb775700922485ad" host="ci-4459.1.0-a-4938a72943" Oct 13 05:53:25.293036 containerd[1726]: 2025-10-13 05:53:25.259 [INFO][4738] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.107.6/26] handle="k8s-pod-network.15f4ee3f067560cba0eafc2874130c65811d703667ff158ffb775700922485ad" host="ci-4459.1.0-a-4938a72943" Oct 13 05:53:25.293036 containerd[1726]: 2025-10-13 05:53:25.259 [INFO][4738] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Oct 13 05:53:25.293036 containerd[1726]: 2025-10-13 05:53:25.259 [INFO][4738] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.107.6/26] IPv6=[] ContainerID="15f4ee3f067560cba0eafc2874130c65811d703667ff158ffb775700922485ad" HandleID="k8s-pod-network.15f4ee3f067560cba0eafc2874130c65811d703667ff158ffb775700922485ad" Workload="ci--4459.1.0--a--4938a72943-k8s-coredns--674b8bbfcf--844nx-eth0" Oct 13 05:53:25.293514 containerd[1726]: 2025-10-13 05:53:25.266 [INFO][4690] cni-plugin/k8s.go 418: Populated endpoint ContainerID="15f4ee3f067560cba0eafc2874130c65811d703667ff158ffb775700922485ad" Namespace="kube-system" Pod="coredns-674b8bbfcf-844nx" WorkloadEndpoint="ci--4459.1.0--a--4938a72943-k8s-coredns--674b8bbfcf--844nx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.1.0--a--4938a72943-k8s-coredns--674b8bbfcf--844nx-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"846c13f8-9e67-46e6-b031-cc0dfeca9023", ResourceVersion:"818", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 52, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.1.0-a-4938a72943", ContainerID:"", Pod:"coredns-674b8bbfcf-844nx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.107.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali84d4bcc688d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:53:25.293514 containerd[1726]: 2025-10-13 05:53:25.267 [INFO][4690] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.107.6/32] ContainerID="15f4ee3f067560cba0eafc2874130c65811d703667ff158ffb775700922485ad" Namespace="kube-system" Pod="coredns-674b8bbfcf-844nx" WorkloadEndpoint="ci--4459.1.0--a--4938a72943-k8s-coredns--674b8bbfcf--844nx-eth0" Oct 13 05:53:25.293514 containerd[1726]: 2025-10-13 05:53:25.267 [INFO][4690] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali84d4bcc688d ContainerID="15f4ee3f067560cba0eafc2874130c65811d703667ff158ffb775700922485ad" Namespace="kube-system" Pod="coredns-674b8bbfcf-844nx" WorkloadEndpoint="ci--4459.1.0--a--4938a72943-k8s-coredns--674b8bbfcf--844nx-eth0" Oct 13 05:53:25.293514 containerd[1726]: 2025-10-13 05:53:25.271 [INFO][4690] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="15f4ee3f067560cba0eafc2874130c65811d703667ff158ffb775700922485ad" Namespace="kube-system" Pod="coredns-674b8bbfcf-844nx" WorkloadEndpoint="ci--4459.1.0--a--4938a72943-k8s-coredns--674b8bbfcf--844nx-eth0" Oct 13 05:53:25.293514 containerd[1726]: 2025-10-13 05:53:25.272 [INFO][4690] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="15f4ee3f067560cba0eafc2874130c65811d703667ff158ffb775700922485ad" Namespace="kube-system" Pod="coredns-674b8bbfcf-844nx" WorkloadEndpoint="ci--4459.1.0--a--4938a72943-k8s-coredns--674b8bbfcf--844nx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.1.0--a--4938a72943-k8s-coredns--674b8bbfcf--844nx-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"846c13f8-9e67-46e6-b031-cc0dfeca9023", ResourceVersion:"818", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 52, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.1.0-a-4938a72943", ContainerID:"15f4ee3f067560cba0eafc2874130c65811d703667ff158ffb775700922485ad", Pod:"coredns-674b8bbfcf-844nx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.107.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali84d4bcc688d", MAC:"ba:a3:f0:24:56:32", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:53:25.293514 containerd[1726]: 2025-10-13 05:53:25.288 [INFO][4690] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="15f4ee3f067560cba0eafc2874130c65811d703667ff158ffb775700922485ad" Namespace="kube-system" Pod="coredns-674b8bbfcf-844nx" WorkloadEndpoint="ci--4459.1.0--a--4938a72943-k8s-coredns--674b8bbfcf--844nx-eth0" Oct 13 05:53:25.306322 systemd[1]: Started cri-containerd-d7e1e6f5d864cda0660776435f2a5854b0702edd5cf4f389125f82b3b2323398.scope - libcontainer container d7e1e6f5d864cda0660776435f2a5854b0702edd5cf4f389125f82b3b2323398. Oct 13 05:53:25.328870 containerd[1726]: time="2025-10-13T05:53:25.328843424Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2nwjz,Uid:92fa068c-877d-4748-8370-0fa59cfeb840,Namespace:calico-system,Attempt:0,} returns sandbox id \"d7e1e6f5d864cda0660776435f2a5854b0702edd5cf4f389125f82b3b2323398\"" Oct 13 05:53:25.342640 containerd[1726]: time="2025-10-13T05:53:25.342586263Z" level=info msg="connecting to shim 15f4ee3f067560cba0eafc2874130c65811d703667ff158ffb775700922485ad" address="unix:///run/containerd/s/0cd732ba9e0ca7d415d28b69766221874aa734433885bc14ea255f2c0c220e97" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:53:25.362309 systemd[1]: Started cri-containerd-15f4ee3f067560cba0eafc2874130c65811d703667ff158ffb775700922485ad.scope - libcontainer container 15f4ee3f067560cba0eafc2874130c65811d703667ff158ffb775700922485ad. 
Oct 13 05:53:25.401783 containerd[1726]: time="2025-10-13T05:53:25.401721377Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-844nx,Uid:846c13f8-9e67-46e6-b031-cc0dfeca9023,Namespace:kube-system,Attempt:0,} returns sandbox id \"15f4ee3f067560cba0eafc2874130c65811d703667ff158ffb775700922485ad\"" Oct 13 05:53:25.410025 containerd[1726]: time="2025-10-13T05:53:25.409318127Z" level=info msg="CreateContainer within sandbox \"15f4ee3f067560cba0eafc2874130c65811d703667ff158ffb775700922485ad\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 13 05:53:25.433833 containerd[1726]: time="2025-10-13T05:53:25.433805752Z" level=info msg="Container 66dfe4331d8936241b9971852f923cabe55703f82559224a7627079640d44a3e: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:53:25.450899 containerd[1726]: time="2025-10-13T05:53:25.450873385Z" level=info msg="CreateContainer within sandbox \"15f4ee3f067560cba0eafc2874130c65811d703667ff158ffb775700922485ad\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"66dfe4331d8936241b9971852f923cabe55703f82559224a7627079640d44a3e\"" Oct 13 05:53:25.452147 containerd[1726]: time="2025-10-13T05:53:25.451368018Z" level=info msg="StartContainer for \"66dfe4331d8936241b9971852f923cabe55703f82559224a7627079640d44a3e\"" Oct 13 05:53:25.452147 containerd[1726]: time="2025-10-13T05:53:25.452071420Z" level=info msg="connecting to shim 66dfe4331d8936241b9971852f923cabe55703f82559224a7627079640d44a3e" address="unix:///run/containerd/s/0cd732ba9e0ca7d415d28b69766221874aa734433885bc14ea255f2c0c220e97" protocol=ttrpc version=3 Oct 13 05:53:25.467346 systemd[1]: Started cri-containerd-66dfe4331d8936241b9971852f923cabe55703f82559224a7627079640d44a3e.scope - libcontainer container 66dfe4331d8936241b9971852f923cabe55703f82559224a7627079640d44a3e. 
Oct 13 05:53:25.494775 containerd[1726]: time="2025-10-13T05:53:25.494755093Z" level=info msg="StartContainer for \"66dfe4331d8936241b9971852f923cabe55703f82559224a7627079640d44a3e\" returns successfully" Oct 13 05:53:25.611917 containerd[1726]: time="2025-10-13T05:53:25.611830728Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-rncrz,Uid:c879da08-5ee1-49ec-9e18-c510bd820afb,Namespace:kube-system,Attempt:0,}" Oct 13 05:53:25.715347 systemd-networkd[1352]: cali0368f0e1c6a: Link UP Oct 13 05:53:25.715543 systemd-networkd[1352]: cali0368f0e1c6a: Gained carrier Oct 13 05:53:25.731862 containerd[1726]: 2025-10-13 05:53:25.657 [INFO][5075] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.1.0--a--4938a72943-k8s-coredns--674b8bbfcf--rncrz-eth0 coredns-674b8bbfcf- kube-system c879da08-5ee1-49ec-9e18-c510bd820afb 816 0 2025-10-13 05:52:47 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4459.1.0-a-4938a72943 coredns-674b8bbfcf-rncrz eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali0368f0e1c6a [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="b08219ab269dce1313d5b79909a1041f112a873fadae6ffc136a51a8e1b2452e" Namespace="kube-system" Pod="coredns-674b8bbfcf-rncrz" WorkloadEndpoint="ci--4459.1.0--a--4938a72943-k8s-coredns--674b8bbfcf--rncrz-" Oct 13 05:53:25.731862 containerd[1726]: 2025-10-13 05:53:25.657 [INFO][5075] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b08219ab269dce1313d5b79909a1041f112a873fadae6ffc136a51a8e1b2452e" Namespace="kube-system" Pod="coredns-674b8bbfcf-rncrz" WorkloadEndpoint="ci--4459.1.0--a--4938a72943-k8s-coredns--674b8bbfcf--rncrz-eth0" Oct 13 05:53:25.731862 containerd[1726]: 2025-10-13 05:53:25.679 [INFO][5086] ipam/ipam_plugin.go 225: 
Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b08219ab269dce1313d5b79909a1041f112a873fadae6ffc136a51a8e1b2452e" HandleID="k8s-pod-network.b08219ab269dce1313d5b79909a1041f112a873fadae6ffc136a51a8e1b2452e" Workload="ci--4459.1.0--a--4938a72943-k8s-coredns--674b8bbfcf--rncrz-eth0" Oct 13 05:53:25.731862 containerd[1726]: 2025-10-13 05:53:25.679 [INFO][5086] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b08219ab269dce1313d5b79909a1041f112a873fadae6ffc136a51a8e1b2452e" HandleID="k8s-pod-network.b08219ab269dce1313d5b79909a1041f112a873fadae6ffc136a51a8e1b2452e" Workload="ci--4459.1.0--a--4938a72943-k8s-coredns--674b8bbfcf--rncrz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f640), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4459.1.0-a-4938a72943", "pod":"coredns-674b8bbfcf-rncrz", "timestamp":"2025-10-13 05:53:25.67949359 +0000 UTC"}, Hostname:"ci-4459.1.0-a-4938a72943", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 05:53:25.731862 containerd[1726]: 2025-10-13 05:53:25.679 [INFO][5086] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 05:53:25.731862 containerd[1726]: 2025-10-13 05:53:25.679 [INFO][5086] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Oct 13 05:53:25.731862 containerd[1726]: 2025-10-13 05:53:25.679 [INFO][5086] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.1.0-a-4938a72943' Oct 13 05:53:25.731862 containerd[1726]: 2025-10-13 05:53:25.686 [INFO][5086] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b08219ab269dce1313d5b79909a1041f112a873fadae6ffc136a51a8e1b2452e" host="ci-4459.1.0-a-4938a72943" Oct 13 05:53:25.731862 containerd[1726]: 2025-10-13 05:53:25.689 [INFO][5086] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.1.0-a-4938a72943" Oct 13 05:53:25.731862 containerd[1726]: 2025-10-13 05:53:25.692 [INFO][5086] ipam/ipam.go 511: Trying affinity for 192.168.107.0/26 host="ci-4459.1.0-a-4938a72943" Oct 13 05:53:25.731862 containerd[1726]: 2025-10-13 05:53:25.693 [INFO][5086] ipam/ipam.go 158: Attempting to load block cidr=192.168.107.0/26 host="ci-4459.1.0-a-4938a72943" Oct 13 05:53:25.731862 containerd[1726]: 2025-10-13 05:53:25.695 [INFO][5086] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.107.0/26 host="ci-4459.1.0-a-4938a72943" Oct 13 05:53:25.731862 containerd[1726]: 2025-10-13 05:53:25.695 [INFO][5086] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.107.0/26 handle="k8s-pod-network.b08219ab269dce1313d5b79909a1041f112a873fadae6ffc136a51a8e1b2452e" host="ci-4459.1.0-a-4938a72943" Oct 13 05:53:25.731862 containerd[1726]: 2025-10-13 05:53:25.696 [INFO][5086] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.b08219ab269dce1313d5b79909a1041f112a873fadae6ffc136a51a8e1b2452e Oct 13 05:53:25.731862 containerd[1726]: 2025-10-13 05:53:25.700 [INFO][5086] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.107.0/26 handle="k8s-pod-network.b08219ab269dce1313d5b79909a1041f112a873fadae6ffc136a51a8e1b2452e" host="ci-4459.1.0-a-4938a72943" Oct 13 05:53:25.731862 containerd[1726]: 2025-10-13 05:53:25.710 [INFO][5086] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.107.7/26] block=192.168.107.0/26 handle="k8s-pod-network.b08219ab269dce1313d5b79909a1041f112a873fadae6ffc136a51a8e1b2452e" host="ci-4459.1.0-a-4938a72943" Oct 13 05:53:25.731862 containerd[1726]: 2025-10-13 05:53:25.710 [INFO][5086] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.107.7/26] handle="k8s-pod-network.b08219ab269dce1313d5b79909a1041f112a873fadae6ffc136a51a8e1b2452e" host="ci-4459.1.0-a-4938a72943" Oct 13 05:53:25.731862 containerd[1726]: 2025-10-13 05:53:25.710 [INFO][5086] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Oct 13 05:53:25.731862 containerd[1726]: 2025-10-13 05:53:25.710 [INFO][5086] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.107.7/26] IPv6=[] ContainerID="b08219ab269dce1313d5b79909a1041f112a873fadae6ffc136a51a8e1b2452e" HandleID="k8s-pod-network.b08219ab269dce1313d5b79909a1041f112a873fadae6ffc136a51a8e1b2452e" Workload="ci--4459.1.0--a--4938a72943-k8s-coredns--674b8bbfcf--rncrz-eth0" Oct 13 05:53:25.733842 containerd[1726]: 2025-10-13 05:53:25.711 [INFO][5075] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b08219ab269dce1313d5b79909a1041f112a873fadae6ffc136a51a8e1b2452e" Namespace="kube-system" Pod="coredns-674b8bbfcf-rncrz" WorkloadEndpoint="ci--4459.1.0--a--4938a72943-k8s-coredns--674b8bbfcf--rncrz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.1.0--a--4938a72943-k8s-coredns--674b8bbfcf--rncrz-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"c879da08-5ee1-49ec-9e18-c510bd820afb", ResourceVersion:"816", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 52, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.1.0-a-4938a72943", ContainerID:"", Pod:"coredns-674b8bbfcf-rncrz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.107.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0368f0e1c6a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:53:25.733842 containerd[1726]: 2025-10-13 05:53:25.711 [INFO][5075] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.107.7/32] ContainerID="b08219ab269dce1313d5b79909a1041f112a873fadae6ffc136a51a8e1b2452e" Namespace="kube-system" Pod="coredns-674b8bbfcf-rncrz" WorkloadEndpoint="ci--4459.1.0--a--4938a72943-k8s-coredns--674b8bbfcf--rncrz-eth0" Oct 13 05:53:25.733842 containerd[1726]: 2025-10-13 05:53:25.711 [INFO][5075] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0368f0e1c6a ContainerID="b08219ab269dce1313d5b79909a1041f112a873fadae6ffc136a51a8e1b2452e" Namespace="kube-system" Pod="coredns-674b8bbfcf-rncrz" WorkloadEndpoint="ci--4459.1.0--a--4938a72943-k8s-coredns--674b8bbfcf--rncrz-eth0" Oct 13 05:53:25.733842 containerd[1726]: 2025-10-13 05:53:25.714 [INFO][5075] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b08219ab269dce1313d5b79909a1041f112a873fadae6ffc136a51a8e1b2452e" Namespace="kube-system" Pod="coredns-674b8bbfcf-rncrz" WorkloadEndpoint="ci--4459.1.0--a--4938a72943-k8s-coredns--674b8bbfcf--rncrz-eth0" Oct 13 05:53:25.733842 containerd[1726]: 2025-10-13 05:53:25.715 [INFO][5075] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b08219ab269dce1313d5b79909a1041f112a873fadae6ffc136a51a8e1b2452e" Namespace="kube-system" Pod="coredns-674b8bbfcf-rncrz" WorkloadEndpoint="ci--4459.1.0--a--4938a72943-k8s-coredns--674b8bbfcf--rncrz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.1.0--a--4938a72943-k8s-coredns--674b8bbfcf--rncrz-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"c879da08-5ee1-49ec-9e18-c510bd820afb", ResourceVersion:"816", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 52, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.1.0-a-4938a72943", ContainerID:"b08219ab269dce1313d5b79909a1041f112a873fadae6ffc136a51a8e1b2452e", Pod:"coredns-674b8bbfcf-rncrz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.107.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0368f0e1c6a", MAC:"26:d0:2e:fd:1f:2c", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:53:25.733842 containerd[1726]: 2025-10-13 05:53:25.729 [INFO][5075] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b08219ab269dce1313d5b79909a1041f112a873fadae6ffc136a51a8e1b2452e" Namespace="kube-system" Pod="coredns-674b8bbfcf-rncrz" WorkloadEndpoint="ci--4459.1.0--a--4938a72943-k8s-coredns--674b8bbfcf--rncrz-eth0" Oct 13 05:53:25.782097 kubelet[3192]: I1013 05:53:25.781604 3192 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-844nx" podStartSLOduration=38.781587123 podStartE2EDuration="38.781587123s" podCreationTimestamp="2025-10-13 05:52:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 05:53:25.780795717 +0000 UTC m=+43.267566239" watchObservedRunningTime="2025-10-13 05:53:25.781587123 +0000 UTC m=+43.268357647" Oct 13 05:53:25.800348 containerd[1726]: time="2025-10-13T05:53:25.800275975Z" level=info msg="connecting to shim b08219ab269dce1313d5b79909a1041f112a873fadae6ffc136a51a8e1b2452e" address="unix:///run/containerd/s/6ea82b3b24bafd88c7dc7615f0064987506b92883e84787f78cc00366b5c066f" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:53:25.840408 systemd[1]: Started cri-containerd-b08219ab269dce1313d5b79909a1041f112a873fadae6ffc136a51a8e1b2452e.scope - libcontainer container b08219ab269dce1313d5b79909a1041f112a873fadae6ffc136a51a8e1b2452e. 
Oct 13 05:53:25.919592 containerd[1726]: time="2025-10-13T05:53:25.919560025Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-rncrz,Uid:c879da08-5ee1-49ec-9e18-c510bd820afb,Namespace:kube-system,Attempt:0,} returns sandbox id \"b08219ab269dce1313d5b79909a1041f112a873fadae6ffc136a51a8e1b2452e\"" Oct 13 05:53:25.926147 containerd[1726]: time="2025-10-13T05:53:25.926106268Z" level=info msg="CreateContainer within sandbox \"b08219ab269dce1313d5b79909a1041f112a873fadae6ffc136a51a8e1b2452e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 13 05:53:25.954283 containerd[1726]: time="2025-10-13T05:53:25.954252824Z" level=info msg="Container b7b5871fee977967cb8c6680998bec5e416f2dbddd7093d5ac710ab1afb5623f: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:53:25.966346 containerd[1726]: time="2025-10-13T05:53:25.966222197Z" level=info msg="CreateContainer within sandbox \"b08219ab269dce1313d5b79909a1041f112a873fadae6ffc136a51a8e1b2452e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b7b5871fee977967cb8c6680998bec5e416f2dbddd7093d5ac710ab1afb5623f\"" Oct 13 05:53:25.966823 containerd[1726]: time="2025-10-13T05:53:25.966776348Z" level=info msg="StartContainer for \"b7b5871fee977967cb8c6680998bec5e416f2dbddd7093d5ac710ab1afb5623f\"" Oct 13 05:53:25.968532 containerd[1726]: time="2025-10-13T05:53:25.967820535Z" level=info msg="connecting to shim b7b5871fee977967cb8c6680998bec5e416f2dbddd7093d5ac710ab1afb5623f" address="unix:///run/containerd/s/6ea82b3b24bafd88c7dc7615f0064987506b92883e84787f78cc00366b5c066f" protocol=ttrpc version=3 Oct 13 05:53:25.987312 systemd[1]: Started cri-containerd-b7b5871fee977967cb8c6680998bec5e416f2dbddd7093d5ac710ab1afb5623f.scope - libcontainer container b7b5871fee977967cb8c6680998bec5e416f2dbddd7093d5ac710ab1afb5623f. 
Oct 13 05:53:26.014648 containerd[1726]: time="2025-10-13T05:53:26.014622121Z" level=info msg="StartContainer for \"b7b5871fee977967cb8c6680998bec5e416f2dbddd7093d5ac710ab1afb5623f\" returns successfully" Oct 13 05:53:26.526704 systemd-networkd[1352]: cali4e871a949b4: Gained IPv6LL Oct 13 05:53:26.590443 systemd-networkd[1352]: calib67e14ad285: Gained IPv6LL Oct 13 05:53:26.653277 systemd-networkd[1352]: cali1d234dde070: Gained IPv6LL Oct 13 05:53:26.717289 systemd-networkd[1352]: calie64d2f12a6c: Gained IPv6LL Oct 13 05:53:26.717738 systemd-networkd[1352]: cali84d4bcc688d: Gained IPv6LL Oct 13 05:53:26.781391 systemd-networkd[1352]: cali0368f0e1c6a: Gained IPv6LL Oct 13 05:53:26.799935 kubelet[3192]: I1013 05:53:26.799880 3192 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-rncrz" podStartSLOduration=39.799864427 podStartE2EDuration="39.799864427s" podCreationTimestamp="2025-10-13 05:52:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 05:53:26.799544542 +0000 UTC m=+44.286315066" watchObservedRunningTime="2025-10-13 05:53:26.799864427 +0000 UTC m=+44.286634950" Oct 13 05:53:27.613524 containerd[1726]: time="2025-10-13T05:53:27.612632776Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-64bc8cdbdb-kghw9,Uid:be696de5-4e39-46ea-a28d-3409bd9514e0,Namespace:calico-system,Attempt:0,}" Oct 13 05:53:27.761300 systemd-networkd[1352]: cali27bcb4c1f28: Link UP Oct 13 05:53:27.762730 systemd-networkd[1352]: cali27bcb4c1f28: Gained carrier Oct 13 05:53:27.784626 containerd[1726]: 2025-10-13 05:53:27.664 [INFO][5192] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.1.0--a--4938a72943-k8s-calico--kube--controllers--64bc8cdbdb--kghw9-eth0 calico-kube-controllers-64bc8cdbdb- calico-system be696de5-4e39-46ea-a28d-3409bd9514e0 814 
0 2025-10-13 05:53:00 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:64bc8cdbdb projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4459.1.0-a-4938a72943 calico-kube-controllers-64bc8cdbdb-kghw9 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali27bcb4c1f28 [] [] }} ContainerID="4b633d465405becf5595d525091c8790c1562950de668478409f4d6cfe539f8c" Namespace="calico-system" Pod="calico-kube-controllers-64bc8cdbdb-kghw9" WorkloadEndpoint="ci--4459.1.0--a--4938a72943-k8s-calico--kube--controllers--64bc8cdbdb--kghw9-" Oct 13 05:53:27.784626 containerd[1726]: 2025-10-13 05:53:27.664 [INFO][5192] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4b633d465405becf5595d525091c8790c1562950de668478409f4d6cfe539f8c" Namespace="calico-system" Pod="calico-kube-controllers-64bc8cdbdb-kghw9" WorkloadEndpoint="ci--4459.1.0--a--4938a72943-k8s-calico--kube--controllers--64bc8cdbdb--kghw9-eth0" Oct 13 05:53:27.784626 containerd[1726]: 2025-10-13 05:53:27.705 [INFO][5204] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4b633d465405becf5595d525091c8790c1562950de668478409f4d6cfe539f8c" HandleID="k8s-pod-network.4b633d465405becf5595d525091c8790c1562950de668478409f4d6cfe539f8c" Workload="ci--4459.1.0--a--4938a72943-k8s-calico--kube--controllers--64bc8cdbdb--kghw9-eth0" Oct 13 05:53:27.784626 containerd[1726]: 2025-10-13 05:53:27.705 [INFO][5204] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4b633d465405becf5595d525091c8790c1562950de668478409f4d6cfe539f8c" HandleID="k8s-pod-network.4b633d465405becf5595d525091c8790c1562950de668478409f4d6cfe539f8c" Workload="ci--4459.1.0--a--4938a72943-k8s-calico--kube--controllers--64bc8cdbdb--kghw9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, 
HandleID:(*string)(0xc0002d5270), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459.1.0-a-4938a72943", "pod":"calico-kube-controllers-64bc8cdbdb-kghw9", "timestamp":"2025-10-13 05:53:27.705285187 +0000 UTC"}, Hostname:"ci-4459.1.0-a-4938a72943", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 05:53:27.784626 containerd[1726]: 2025-10-13 05:53:27.705 [INFO][5204] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 05:53:27.784626 containerd[1726]: 2025-10-13 05:53:27.705 [INFO][5204] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Oct 13 05:53:27.784626 containerd[1726]: 2025-10-13 05:53:27.705 [INFO][5204] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.1.0-a-4938a72943' Oct 13 05:53:27.784626 containerd[1726]: 2025-10-13 05:53:27.714 [INFO][5204] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4b633d465405becf5595d525091c8790c1562950de668478409f4d6cfe539f8c" host="ci-4459.1.0-a-4938a72943" Oct 13 05:53:27.784626 containerd[1726]: 2025-10-13 05:53:27.718 [INFO][5204] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.1.0-a-4938a72943" Oct 13 05:53:27.784626 containerd[1726]: 2025-10-13 05:53:27.723 [INFO][5204] ipam/ipam.go 511: Trying affinity for 192.168.107.0/26 host="ci-4459.1.0-a-4938a72943" Oct 13 05:53:27.784626 containerd[1726]: 2025-10-13 05:53:27.725 [INFO][5204] ipam/ipam.go 158: Attempting to load block cidr=192.168.107.0/26 host="ci-4459.1.0-a-4938a72943" Oct 13 05:53:27.784626 containerd[1726]: 2025-10-13 05:53:27.729 [INFO][5204] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.107.0/26 host="ci-4459.1.0-a-4938a72943" Oct 13 05:53:27.784626 containerd[1726]: 2025-10-13 05:53:27.729 [INFO][5204] ipam/ipam.go 1220: Attempting to assign 1 addresses 
from block block=192.168.107.0/26 handle="k8s-pod-network.4b633d465405becf5595d525091c8790c1562950de668478409f4d6cfe539f8c" host="ci-4459.1.0-a-4938a72943" Oct 13 05:53:27.784626 containerd[1726]: 2025-10-13 05:53:27.731 [INFO][5204] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.4b633d465405becf5595d525091c8790c1562950de668478409f4d6cfe539f8c Oct 13 05:53:27.784626 containerd[1726]: 2025-10-13 05:53:27.739 [INFO][5204] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.107.0/26 handle="k8s-pod-network.4b633d465405becf5595d525091c8790c1562950de668478409f4d6cfe539f8c" host="ci-4459.1.0-a-4938a72943" Oct 13 05:53:27.784626 containerd[1726]: 2025-10-13 05:53:27.754 [INFO][5204] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.107.8/26] block=192.168.107.0/26 handle="k8s-pod-network.4b633d465405becf5595d525091c8790c1562950de668478409f4d6cfe539f8c" host="ci-4459.1.0-a-4938a72943" Oct 13 05:53:27.784626 containerd[1726]: 2025-10-13 05:53:27.755 [INFO][5204] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.107.8/26] handle="k8s-pod-network.4b633d465405becf5595d525091c8790c1562950de668478409f4d6cfe539f8c" host="ci-4459.1.0-a-4938a72943" Oct 13 05:53:27.784626 containerd[1726]: 2025-10-13 05:53:27.755 [INFO][5204] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Oct 13 05:53:27.784626 containerd[1726]: 2025-10-13 05:53:27.755 [INFO][5204] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.107.8/26] IPv6=[] ContainerID="4b633d465405becf5595d525091c8790c1562950de668478409f4d6cfe539f8c" HandleID="k8s-pod-network.4b633d465405becf5595d525091c8790c1562950de668478409f4d6cfe539f8c" Workload="ci--4459.1.0--a--4938a72943-k8s-calico--kube--controllers--64bc8cdbdb--kghw9-eth0" Oct 13 05:53:27.785075 containerd[1726]: 2025-10-13 05:53:27.756 [INFO][5192] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4b633d465405becf5595d525091c8790c1562950de668478409f4d6cfe539f8c" Namespace="calico-system" Pod="calico-kube-controllers-64bc8cdbdb-kghw9" WorkloadEndpoint="ci--4459.1.0--a--4938a72943-k8s-calico--kube--controllers--64bc8cdbdb--kghw9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.1.0--a--4938a72943-k8s-calico--kube--controllers--64bc8cdbdb--kghw9-eth0", GenerateName:"calico-kube-controllers-64bc8cdbdb-", Namespace:"calico-system", SelfLink:"", UID:"be696de5-4e39-46ea-a28d-3409bd9514e0", ResourceVersion:"814", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 53, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"64bc8cdbdb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.1.0-a-4938a72943", ContainerID:"", Pod:"calico-kube-controllers-64bc8cdbdb-kghw9", Endpoint:"eth0", 
ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.107.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali27bcb4c1f28", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:53:27.785075 containerd[1726]: 2025-10-13 05:53:27.756 [INFO][5192] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.107.8/32] ContainerID="4b633d465405becf5595d525091c8790c1562950de668478409f4d6cfe539f8c" Namespace="calico-system" Pod="calico-kube-controllers-64bc8cdbdb-kghw9" WorkloadEndpoint="ci--4459.1.0--a--4938a72943-k8s-calico--kube--controllers--64bc8cdbdb--kghw9-eth0" Oct 13 05:53:27.785075 containerd[1726]: 2025-10-13 05:53:27.756 [INFO][5192] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali27bcb4c1f28 ContainerID="4b633d465405becf5595d525091c8790c1562950de668478409f4d6cfe539f8c" Namespace="calico-system" Pod="calico-kube-controllers-64bc8cdbdb-kghw9" WorkloadEndpoint="ci--4459.1.0--a--4938a72943-k8s-calico--kube--controllers--64bc8cdbdb--kghw9-eth0" Oct 13 05:53:27.785075 containerd[1726]: 2025-10-13 05:53:27.764 [INFO][5192] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4b633d465405becf5595d525091c8790c1562950de668478409f4d6cfe539f8c" Namespace="calico-system" Pod="calico-kube-controllers-64bc8cdbdb-kghw9" WorkloadEndpoint="ci--4459.1.0--a--4938a72943-k8s-calico--kube--controllers--64bc8cdbdb--kghw9-eth0" Oct 13 05:53:27.785075 containerd[1726]: 2025-10-13 05:53:27.765 [INFO][5192] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4b633d465405becf5595d525091c8790c1562950de668478409f4d6cfe539f8c" Namespace="calico-system" Pod="calico-kube-controllers-64bc8cdbdb-kghw9" 
WorkloadEndpoint="ci--4459.1.0--a--4938a72943-k8s-calico--kube--controllers--64bc8cdbdb--kghw9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.1.0--a--4938a72943-k8s-calico--kube--controllers--64bc8cdbdb--kghw9-eth0", GenerateName:"calico-kube-controllers-64bc8cdbdb-", Namespace:"calico-system", SelfLink:"", UID:"be696de5-4e39-46ea-a28d-3409bd9514e0", ResourceVersion:"814", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 53, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"64bc8cdbdb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.1.0-a-4938a72943", ContainerID:"4b633d465405becf5595d525091c8790c1562950de668478409f4d6cfe539f8c", Pod:"calico-kube-controllers-64bc8cdbdb-kghw9", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.107.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali27bcb4c1f28", MAC:"ee:a7:6e:16:c9:fe", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:53:27.785075 containerd[1726]: 2025-10-13 05:53:27.781 [INFO][5192] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4b633d465405becf5595d525091c8790c1562950de668478409f4d6cfe539f8c" Namespace="calico-system" 
Pod="calico-kube-controllers-64bc8cdbdb-kghw9" WorkloadEndpoint="ci--4459.1.0--a--4938a72943-k8s-calico--kube--controllers--64bc8cdbdb--kghw9-eth0" Oct 13 05:53:27.824579 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount758403950.mount: Deactivated successfully. Oct 13 05:53:27.839026 containerd[1726]: time="2025-10-13T05:53:27.838991594Z" level=info msg="connecting to shim 4b633d465405becf5595d525091c8790c1562950de668478409f4d6cfe539f8c" address="unix:///run/containerd/s/28a0180c4a0e0d4cb6afb7d5efead851d343bca9f29b8da8b984cbdf11078079" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:53:27.862885 systemd[1]: Started cri-containerd-4b633d465405becf5595d525091c8790c1562950de668478409f4d6cfe539f8c.scope - libcontainer container 4b633d465405becf5595d525091c8790c1562950de668478409f4d6cfe539f8c. Oct 13 05:53:27.944131 containerd[1726]: time="2025-10-13T05:53:27.944046814Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-64bc8cdbdb-kghw9,Uid:be696de5-4e39-46ea-a28d-3409bd9514e0,Namespace:calico-system,Attempt:0,} returns sandbox id \"4b633d465405becf5595d525091c8790c1562950de668478409f4d6cfe539f8c\"" Oct 13 05:53:28.367428 containerd[1726]: time="2025-10-13T05:53:28.367318931Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:53:28.369420 containerd[1726]: time="2025-10-13T05:53:28.369388819Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=66357526" Oct 13 05:53:28.373492 containerd[1726]: time="2025-10-13T05:53:28.373441422Z" level=info msg="ImageCreate event name:\"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:53:28.377512 containerd[1726]: time="2025-10-13T05:53:28.377377466Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:53:28.378202 containerd[1726]: time="2025-10-13T05:53:28.378150127Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"66357372\" in 3.340434052s" Oct 13 05:53:28.378274 containerd[1726]: time="2025-10-13T05:53:28.378206021Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\"" Oct 13 05:53:28.379371 containerd[1726]: time="2025-10-13T05:53:28.379350755Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Oct 13 05:53:28.388439 containerd[1726]: time="2025-10-13T05:53:28.388411482Z" level=info msg="CreateContainer within sandbox \"4cf27510ada61ac61614080d58cebdd649e3a9c5bcecd7950adb75e8da17d0f6\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Oct 13 05:53:28.409804 containerd[1726]: time="2025-10-13T05:53:28.409536852Z" level=info msg="Container b32a638be5745fb36297bc2ba1565972cb01fe2546b5645543ebd397c3613b45: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:53:28.424922 containerd[1726]: time="2025-10-13T05:53:28.424896217Z" level=info msg="CreateContainer within sandbox \"4cf27510ada61ac61614080d58cebdd649e3a9c5bcecd7950adb75e8da17d0f6\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"b32a638be5745fb36297bc2ba1565972cb01fe2546b5645543ebd397c3613b45\"" Oct 13 05:53:28.427077 containerd[1726]: time="2025-10-13T05:53:28.425375566Z" level=info msg="StartContainer for 
\"b32a638be5745fb36297bc2ba1565972cb01fe2546b5645543ebd397c3613b45\"" Oct 13 05:53:28.427077 containerd[1726]: time="2025-10-13T05:53:28.426209320Z" level=info msg="connecting to shim b32a638be5745fb36297bc2ba1565972cb01fe2546b5645543ebd397c3613b45" address="unix:///run/containerd/s/fc67cd6aaa4a416c65b9437a1f7ad2a786a96eb39ebf67ce17b2874051414f1a" protocol=ttrpc version=3 Oct 13 05:53:28.450331 systemd[1]: Started cri-containerd-b32a638be5745fb36297bc2ba1565972cb01fe2546b5645543ebd397c3613b45.scope - libcontainer container b32a638be5745fb36297bc2ba1565972cb01fe2546b5645543ebd397c3613b45. Oct 13 05:53:28.495802 containerd[1726]: time="2025-10-13T05:53:28.495775858Z" level=info msg="StartContainer for \"b32a638be5745fb36297bc2ba1565972cb01fe2546b5645543ebd397c3613b45\" returns successfully" Oct 13 05:53:28.887862 containerd[1726]: time="2025-10-13T05:53:28.887822538Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b32a638be5745fb36297bc2ba1565972cb01fe2546b5645543ebd397c3613b45\" id:\"631c65fe50624aa03f401afcd4b28fdc3811cda8ece8c11331822c7f14b8e4cd\" pid:5320 exit_status:1 exited_at:{seconds:1760334808 nanos:887486414}" Oct 13 05:53:29.277479 systemd-networkd[1352]: cali27bcb4c1f28: Gained IPv6LL Oct 13 05:53:29.860074 containerd[1726]: time="2025-10-13T05:53:29.860030587Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b32a638be5745fb36297bc2ba1565972cb01fe2546b5645543ebd397c3613b45\" id:\"c183cb8e952bf64b9d4274d36cea49412f6871435d88edb91a8774c5d6a33e30\" pid:5344 exit_status:1 exited_at:{seconds:1760334809 nanos:859300105}" Oct 13 05:53:30.892259 containerd[1726]: time="2025-10-13T05:53:30.892216673Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b32a638be5745fb36297bc2ba1565972cb01fe2546b5645543ebd397c3613b45\" id:\"c7b34e90a6ee5f905e6bffe95c092474955361712045305a3d27e1f585165b93\" pid:5376 exited_at:{seconds:1760334810 nanos:891490456}" Oct 13 05:53:30.917938 kubelet[3192]: I1013 05:53:30.917878 3192 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-54d579b49d-tm8wl" podStartSLOduration=28.576321439 podStartE2EDuration="31.917860943s" podCreationTimestamp="2025-10-13 05:52:59 +0000 UTC" firstStartedPulling="2025-10-13 05:53:25.037503989 +0000 UTC m=+42.524274494" lastFinishedPulling="2025-10-13 05:53:28.379043485 +0000 UTC m=+45.865813998" observedRunningTime="2025-10-13 05:53:28.805242557 +0000 UTC m=+46.292013074" watchObservedRunningTime="2025-10-13 05:53:30.917860943 +0000 UTC m=+48.404631462" Oct 13 05:53:31.508275 containerd[1726]: time="2025-10-13T05:53:31.508232669Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:53:31.510899 containerd[1726]: time="2025-10-13T05:53:31.510812977Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=47333864" Oct 13 05:53:31.513435 containerd[1726]: time="2025-10-13T05:53:31.513382844Z" level=info msg="ImageCreate event name:\"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:53:31.517288 containerd[1726]: time="2025-10-13T05:53:31.517140013Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:53:31.517806 containerd[1726]: time="2025-10-13T05:53:31.517553311Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 3.138173068s" Oct 13 05:53:31.517806 
containerd[1726]: time="2025-10-13T05:53:31.517582802Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Oct 13 05:53:31.519446 containerd[1726]: time="2025-10-13T05:53:31.519423540Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Oct 13 05:53:31.523660 containerd[1726]: time="2025-10-13T05:53:31.523622239Z" level=info msg="CreateContainer within sandbox \"26311722a1ddf8be80dd10579bee87b017e82e6dd22c319ce28119e34108949f\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Oct 13 05:53:31.548608 containerd[1726]: time="2025-10-13T05:53:31.545889748Z" level=info msg="Container ec8823a497778b70d739500b47eb0b88cfb687b3e6f7faf5f77d1efc4721289e: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:53:31.567721 containerd[1726]: time="2025-10-13T05:53:31.567696088Z" level=info msg="CreateContainer within sandbox \"26311722a1ddf8be80dd10579bee87b017e82e6dd22c319ce28119e34108949f\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"ec8823a497778b70d739500b47eb0b88cfb687b3e6f7faf5f77d1efc4721289e\"" Oct 13 05:53:31.568197 containerd[1726]: time="2025-10-13T05:53:31.568152319Z" level=info msg="StartContainer for \"ec8823a497778b70d739500b47eb0b88cfb687b3e6f7faf5f77d1efc4721289e\"" Oct 13 05:53:31.570098 containerd[1726]: time="2025-10-13T05:53:31.570066777Z" level=info msg="connecting to shim ec8823a497778b70d739500b47eb0b88cfb687b3e6f7faf5f77d1efc4721289e" address="unix:///run/containerd/s/8e369067f36d1c5bda42d1b5e0dee7021b39aa9c0dc0ed2f94bb03d6f30d12a1" protocol=ttrpc version=3 Oct 13 05:53:31.596314 systemd[1]: Started cri-containerd-ec8823a497778b70d739500b47eb0b88cfb687b3e6f7faf5f77d1efc4721289e.scope - libcontainer container ec8823a497778b70d739500b47eb0b88cfb687b3e6f7faf5f77d1efc4721289e. 
Oct 13 05:53:31.641955 containerd[1726]: time="2025-10-13T05:53:31.641927784Z" level=info msg="StartContainer for \"ec8823a497778b70d739500b47eb0b88cfb687b3e6f7faf5f77d1efc4721289e\" returns successfully" Oct 13 05:53:31.874264 containerd[1726]: time="2025-10-13T05:53:31.873818597Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:53:31.876836 containerd[1726]: time="2025-10-13T05:53:31.876797855Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Oct 13 05:53:31.879050 containerd[1726]: time="2025-10-13T05:53:31.879012197Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 359.382283ms" Oct 13 05:53:31.879269 containerd[1726]: time="2025-10-13T05:53:31.879135151Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Oct 13 05:53:31.880762 containerd[1726]: time="2025-10-13T05:53:31.880716704Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Oct 13 05:53:31.889969 containerd[1726]: time="2025-10-13T05:53:31.889917885Z" level=info msg="CreateContainer within sandbox \"2903e417c15cd1d053ce45f4a55c3f614f635eb132a2ad654747bc341375743f\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Oct 13 05:53:31.914185 containerd[1726]: time="2025-10-13T05:53:31.910341456Z" level=info msg="Container 84470f808343d8ef04a68b132df737e4f9065902a6d3cb7de25212666787e118: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:53:31.930928 containerd[1726]: 
time="2025-10-13T05:53:31.930899919Z" level=info msg="CreateContainer within sandbox \"2903e417c15cd1d053ce45f4a55c3f614f635eb132a2ad654747bc341375743f\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"84470f808343d8ef04a68b132df737e4f9065902a6d3cb7de25212666787e118\"" Oct 13 05:53:31.931392 containerd[1726]: time="2025-10-13T05:53:31.931373083Z" level=info msg="StartContainer for \"84470f808343d8ef04a68b132df737e4f9065902a6d3cb7de25212666787e118\"" Oct 13 05:53:31.932595 containerd[1726]: time="2025-10-13T05:53:31.932511059Z" level=info msg="connecting to shim 84470f808343d8ef04a68b132df737e4f9065902a6d3cb7de25212666787e118" address="unix:///run/containerd/s/02cf43d5acf27522d072d739c78ec5621e98ae42c7b7d65b762afa8f0723fdc4" protocol=ttrpc version=3 Oct 13 05:53:31.959310 systemd[1]: Started cri-containerd-84470f808343d8ef04a68b132df737e4f9065902a6d3cb7de25212666787e118.scope - libcontainer container 84470f808343d8ef04a68b132df737e4f9065902a6d3cb7de25212666787e118. 
Oct 13 05:53:32.030899 containerd[1726]: time="2025-10-13T05:53:32.030864906Z" level=info msg="StartContainer for \"84470f808343d8ef04a68b132df737e4f9065902a6d3cb7de25212666787e118\" returns successfully" Oct 13 05:53:32.798706 kubelet[3192]: I1013 05:53:32.798674 3192 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 13 05:53:32.818488 kubelet[3192]: I1013 05:53:32.817611 3192 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7fcd66766-ctwdl" podStartSLOduration=30.462817392 podStartE2EDuration="36.817593615s" podCreationTimestamp="2025-10-13 05:52:56 +0000 UTC" firstStartedPulling="2025-10-13 05:53:25.163667807 +0000 UTC m=+42.650438325" lastFinishedPulling="2025-10-13 05:53:31.518444031 +0000 UTC m=+49.005214548" observedRunningTime="2025-10-13 05:53:31.812903225 +0000 UTC m=+49.299673752" watchObservedRunningTime="2025-10-13 05:53:32.817593615 +0000 UTC m=+50.304364147" Oct 13 05:53:32.820115 kubelet[3192]: I1013 05:53:32.819436 3192 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7fcd66766-4rf9v" podStartSLOduration=30.205035721 podStartE2EDuration="36.819420435s" podCreationTimestamp="2025-10-13 05:52:56 +0000 UTC" firstStartedPulling="2025-10-13 05:53:25.265529451 +0000 UTC m=+42.752299975" lastFinishedPulling="2025-10-13 05:53:31.879914172 +0000 UTC m=+49.366684689" observedRunningTime="2025-10-13 05:53:32.816102463 +0000 UTC m=+50.302872984" watchObservedRunningTime="2025-10-13 05:53:32.819420435 +0000 UTC m=+50.306190955" Oct 13 05:53:33.147644 containerd[1726]: time="2025-10-13T05:53:33.147341361Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:53:33.149651 containerd[1726]: time="2025-10-13T05:53:33.149614098Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes 
read=8760527" Oct 13 05:53:33.154333 containerd[1726]: time="2025-10-13T05:53:33.154279035Z" level=info msg="ImageCreate event name:\"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:53:33.158257 containerd[1726]: time="2025-10-13T05:53:33.158225681Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:53:33.158710 containerd[1726]: time="2025-10-13T05:53:33.158688856Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"10253230\" in 1.277940087s" Oct 13 05:53:33.158782 containerd[1726]: time="2025-10-13T05:53:33.158770823Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\"" Oct 13 05:53:33.159739 containerd[1726]: time="2025-10-13T05:53:33.159615085Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Oct 13 05:53:33.164949 containerd[1726]: time="2025-10-13T05:53:33.164925898Z" level=info msg="CreateContainer within sandbox \"d7e1e6f5d864cda0660776435f2a5854b0702edd5cf4f389125f82b3b2323398\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Oct 13 05:53:33.184363 containerd[1726]: time="2025-10-13T05:53:33.184330827Z" level=info msg="Container 318ffad0988e05f218cf5ee182b959ca7eb26807abc2bc643820a0a01604f03a: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:53:33.209077 containerd[1726]: time="2025-10-13T05:53:33.209047189Z" level=info msg="CreateContainer within sandbox 
\"d7e1e6f5d864cda0660776435f2a5854b0702edd5cf4f389125f82b3b2323398\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"318ffad0988e05f218cf5ee182b959ca7eb26807abc2bc643820a0a01604f03a\"" Oct 13 05:53:33.210195 containerd[1726]: time="2025-10-13T05:53:33.209685081Z" level=info msg="StartContainer for \"318ffad0988e05f218cf5ee182b959ca7eb26807abc2bc643820a0a01604f03a\"" Oct 13 05:53:33.211380 containerd[1726]: time="2025-10-13T05:53:33.211348473Z" level=info msg="connecting to shim 318ffad0988e05f218cf5ee182b959ca7eb26807abc2bc643820a0a01604f03a" address="unix:///run/containerd/s/4190b642ca630b9a2e152cd3fbc6527aa935d099e09e691ce28b7b17f0edcc78" protocol=ttrpc version=3 Oct 13 05:53:33.231317 systemd[1]: Started cri-containerd-318ffad0988e05f218cf5ee182b959ca7eb26807abc2bc643820a0a01604f03a.scope - libcontainer container 318ffad0988e05f218cf5ee182b959ca7eb26807abc2bc643820a0a01604f03a. Oct 13 05:53:33.259996 containerd[1726]: time="2025-10-13T05:53:33.259968618Z" level=info msg="StartContainer for \"318ffad0988e05f218cf5ee182b959ca7eb26807abc2bc643820a0a01604f03a\" returns successfully" Oct 13 05:53:33.802851 kubelet[3192]: I1013 05:53:33.802822 3192 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 13 05:53:35.790820 containerd[1726]: time="2025-10-13T05:53:35.790774554Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:53:35.793071 containerd[1726]: time="2025-10-13T05:53:35.793036087Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=51277746" Oct 13 05:53:35.795980 containerd[1726]: time="2025-10-13T05:53:35.795924449Z" level=info msg="ImageCreate event name:\"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:53:35.833382 containerd[1726]: 
time="2025-10-13T05:53:35.833326941Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:53:35.834101 containerd[1726]: time="2025-10-13T05:53:35.833937732Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"52770417\" in 2.67368014s" Oct 13 05:53:35.834101 containerd[1726]: time="2025-10-13T05:53:35.833969446Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\"" Oct 13 05:53:35.836210 containerd[1726]: time="2025-10-13T05:53:35.835355820Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\"" Oct 13 05:53:35.853521 containerd[1726]: time="2025-10-13T05:53:35.853494948Z" level=info msg="CreateContainer within sandbox \"4b633d465405becf5595d525091c8790c1562950de668478409f4d6cfe539f8c\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Oct 13 05:53:35.897600 containerd[1726]: time="2025-10-13T05:53:35.895339192Z" level=info msg="Container f828228d6e0b30f7653188c57350bfc7e0bd423dc44cb4e5ee8b13d857edc77e: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:53:35.915178 containerd[1726]: time="2025-10-13T05:53:35.915150459Z" level=info msg="CreateContainer within sandbox \"4b633d465405becf5595d525091c8790c1562950de668478409f4d6cfe539f8c\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"f828228d6e0b30f7653188c57350bfc7e0bd423dc44cb4e5ee8b13d857edc77e\"" Oct 13 
05:53:35.916833 containerd[1726]: time="2025-10-13T05:53:35.916807273Z" level=info msg="StartContainer for \"f828228d6e0b30f7653188c57350bfc7e0bd423dc44cb4e5ee8b13d857edc77e\"" Oct 13 05:53:35.917735 containerd[1726]: time="2025-10-13T05:53:35.917714785Z" level=info msg="connecting to shim f828228d6e0b30f7653188c57350bfc7e0bd423dc44cb4e5ee8b13d857edc77e" address="unix:///run/containerd/s/28a0180c4a0e0d4cb6afb7d5efead851d343bca9f29b8da8b984cbdf11078079" protocol=ttrpc version=3 Oct 13 05:53:35.941319 systemd[1]: Started cri-containerd-f828228d6e0b30f7653188c57350bfc7e0bd423dc44cb4e5ee8b13d857edc77e.scope - libcontainer container f828228d6e0b30f7653188c57350bfc7e0bd423dc44cb4e5ee8b13d857edc77e. Oct 13 05:53:35.986323 containerd[1726]: time="2025-10-13T05:53:35.986296847Z" level=info msg="StartContainer for \"f828228d6e0b30f7653188c57350bfc7e0bd423dc44cb4e5ee8b13d857edc77e\" returns successfully" Oct 13 05:53:36.827868 kubelet[3192]: I1013 05:53:36.827479 3192 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-64bc8cdbdb-kghw9" podStartSLOduration=28.938505005 podStartE2EDuration="36.827459852s" podCreationTimestamp="2025-10-13 05:53:00 +0000 UTC" firstStartedPulling="2025-10-13 05:53:27.945773225 +0000 UTC m=+45.432543732" lastFinishedPulling="2025-10-13 05:53:35.834728062 +0000 UTC m=+53.321498579" observedRunningTime="2025-10-13 05:53:36.826718078 +0000 UTC m=+54.313488603" watchObservedRunningTime="2025-10-13 05:53:36.827459852 +0000 UTC m=+54.314230368" Oct 13 05:53:36.858733 containerd[1726]: time="2025-10-13T05:53:36.858648356Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f828228d6e0b30f7653188c57350bfc7e0bd423dc44cb4e5ee8b13d857edc77e\" id:\"3bf33664e022412eaa00bebab07b80d4d89723b8ee4b06283af64e7f37c1c66e\" pid:5566 exited_at:{seconds:1760334816 nanos:858384381}" Oct 13 05:53:37.451743 containerd[1726]: time="2025-10-13T05:53:37.451697885Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:53:37.455245 containerd[1726]: time="2025-10-13T05:53:37.455216674Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=14698542" Oct 13 05:53:37.458780 containerd[1726]: time="2025-10-13T05:53:37.458716717Z" level=info msg="ImageCreate event name:\"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:53:37.465368 containerd[1726]: time="2025-10-13T05:53:37.464932207Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:53:37.465368 containerd[1726]: time="2025-10-13T05:53:37.465243115Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"16191197\" in 1.629856555s" Oct 13 05:53:37.465368 containerd[1726]: time="2025-10-13T05:53:37.465272399Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\"" Oct 13 05:53:37.471405 containerd[1726]: time="2025-10-13T05:53:37.471373445Z" level=info msg="CreateContainer within sandbox \"d7e1e6f5d864cda0660776435f2a5854b0702edd5cf4f389125f82b3b2323398\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Oct 13 05:53:37.494851 containerd[1726]: time="2025-10-13T05:53:37.494793558Z" level=info 
msg="Container 36d0d010734bc4d936f89347838970a77096eb978ea110de0187b36e0c5e38fd: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:53:37.501551 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1772799287.mount: Deactivated successfully. Oct 13 05:53:37.518042 containerd[1726]: time="2025-10-13T05:53:37.518017763Z" level=info msg="CreateContainer within sandbox \"d7e1e6f5d864cda0660776435f2a5854b0702edd5cf4f389125f82b3b2323398\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"36d0d010734bc4d936f89347838970a77096eb978ea110de0187b36e0c5e38fd\"" Oct 13 05:53:37.518497 containerd[1726]: time="2025-10-13T05:53:37.518475854Z" level=info msg="StartContainer for \"36d0d010734bc4d936f89347838970a77096eb978ea110de0187b36e0c5e38fd\"" Oct 13 05:53:37.519907 containerd[1726]: time="2025-10-13T05:53:37.519879399Z" level=info msg="connecting to shim 36d0d010734bc4d936f89347838970a77096eb978ea110de0187b36e0c5e38fd" address="unix:///run/containerd/s/4190b642ca630b9a2e152cd3fbc6527aa935d099e09e691ce28b7b17f0edcc78" protocol=ttrpc version=3 Oct 13 05:53:37.542337 systemd[1]: Started cri-containerd-36d0d010734bc4d936f89347838970a77096eb978ea110de0187b36e0c5e38fd.scope - libcontainer container 36d0d010734bc4d936f89347838970a77096eb978ea110de0187b36e0c5e38fd. 
Oct 13 05:53:37.581757 containerd[1726]: time="2025-10-13T05:53:37.581728064Z" level=info msg="StartContainer for \"36d0d010734bc4d936f89347838970a77096eb978ea110de0187b36e0c5e38fd\" returns successfully" Oct 13 05:53:37.696146 kubelet[3192]: I1013 05:53:37.696100 3192 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Oct 13 05:53:37.696146 kubelet[3192]: I1013 05:53:37.696156 3192 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Oct 13 05:53:38.327041 kubelet[3192]: I1013 05:53:38.326855 3192 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 13 05:53:38.349385 kubelet[3192]: I1013 05:53:38.348910 3192 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-2nwjz" podStartSLOduration=26.212515804 podStartE2EDuration="38.348892124s" podCreationTimestamp="2025-10-13 05:53:00 +0000 UTC" firstStartedPulling="2025-10-13 05:53:25.329708194 +0000 UTC m=+42.816478711" lastFinishedPulling="2025-10-13 05:53:37.466084507 +0000 UTC m=+54.952855031" observedRunningTime="2025-10-13 05:53:37.829305433 +0000 UTC m=+55.316075958" watchObservedRunningTime="2025-10-13 05:53:38.348892124 +0000 UTC m=+55.835662647" Oct 13 05:53:48.801943 containerd[1726]: time="2025-10-13T05:53:48.801898345Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6a3c814289069c70336036b5ee58b407267e4f6c41c3969105094ead494e1c5e\" id:\"ea7503c96e6c39a67c71f6610fe37458859e6385c2c528ba15476341abb38dd1\" pid:5641 exited_at:{seconds:1760334828 nanos:801603735}" Oct 13 05:53:57.040721 containerd[1726]: time="2025-10-13T05:53:57.040598841Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b32a638be5745fb36297bc2ba1565972cb01fe2546b5645543ebd397c3613b45\" 
id:\"db4cd200ee36f3a540f06b9d13469a65833d097cf36ca1e2852fccb491311e0b\" pid:5669 exited_at:{seconds:1760334837 nanos:39982649}" Oct 13 05:54:00.853871 containerd[1726]: time="2025-10-13T05:54:00.853823804Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b32a638be5745fb36297bc2ba1565972cb01fe2546b5645543ebd397c3613b45\" id:\"f6bdc488ecbed3909ad36d2d3cbb969cc6d26cd0512b0ce179551288e3afdcc7\" pid:5698 exited_at:{seconds:1760334840 nanos:853395730}" Oct 13 05:54:06.864819 containerd[1726]: time="2025-10-13T05:54:06.864773122Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f828228d6e0b30f7653188c57350bfc7e0bd423dc44cb4e5ee8b13d857edc77e\" id:\"61e48b269a405f40957ce5c198399b0eeb527a841bf1d330900e16c7ae800896\" pid:5721 exited_at:{seconds:1760334846 nanos:864376860}" Oct 13 05:54:09.473936 kubelet[3192]: I1013 05:54:09.473695 3192 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 13 05:54:18.922258 containerd[1726]: time="2025-10-13T05:54:18.922211201Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6a3c814289069c70336036b5ee58b407267e4f6c41c3969105094ead494e1c5e\" id:\"fa1c217a70146583285b597b3529b3bc41d43301df0eee6bf3013f5a88a78dc8\" pid:5749 exited_at:{seconds:1760334858 nanos:921912809}" Oct 13 05:54:29.927967 systemd[1]: Started sshd@7-10.200.4.24:22-10.200.16.10:49058.service - OpenSSH per-connection server daemon (10.200.16.10:49058). Oct 13 05:54:30.537001 sshd[5764]: Accepted publickey for core from 10.200.16.10 port 49058 ssh2: RSA SHA256:M+0CUiaS6X9LgLQwLsgIf7RWJ99AxYshiQ2j8fMZKDQ Oct 13 05:54:30.538318 sshd-session[5764]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:54:30.546803 systemd-logind[1700]: New session 10 of user core. Oct 13 05:54:30.554540 systemd[1]: Started session-10.scope - Session 10 of User core. 
Oct 13 05:54:31.051102 containerd[1726]: time="2025-10-13T05:54:31.050945548Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b32a638be5745fb36297bc2ba1565972cb01fe2546b5645543ebd397c3613b45\" id:\"32b724463938c86fa8cb1db03cbf38910be70ab151e1eecc60d8ff108f7fad54\" pid:5781 exited_at:{seconds:1760334871 nanos:49702913}" Oct 13 05:54:31.080864 sshd[5768]: Connection closed by 10.200.16.10 port 49058 Oct 13 05:54:31.081517 sshd-session[5764]: pam_unix(sshd:session): session closed for user core Oct 13 05:54:31.085776 systemd[1]: sshd@7-10.200.4.24:22-10.200.16.10:49058.service: Deactivated successfully. Oct 13 05:54:31.087920 systemd[1]: session-10.scope: Deactivated successfully. Oct 13 05:54:31.088814 systemd-logind[1700]: Session 10 logged out. Waiting for processes to exit. Oct 13 05:54:31.090538 systemd-logind[1700]: Removed session 10. Oct 13 05:54:36.195447 systemd[1]: Started sshd@8-10.200.4.24:22-10.200.16.10:48638.service - OpenSSH per-connection server daemon (10.200.16.10:48638). Oct 13 05:54:36.800509 sshd[5804]: Accepted publickey for core from 10.200.16.10 port 48638 ssh2: RSA SHA256:M+0CUiaS6X9LgLQwLsgIf7RWJ99AxYshiQ2j8fMZKDQ Oct 13 05:54:36.801867 sshd-session[5804]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:54:36.807678 systemd-logind[1700]: New session 11 of user core. Oct 13 05:54:36.815498 systemd[1]: Started session-11.scope - Session 11 of User core. 
Oct 13 05:54:36.917190 containerd[1726]: time="2025-10-13T05:54:36.917101544Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f828228d6e0b30f7653188c57350bfc7e0bd423dc44cb4e5ee8b13d857edc77e\" id:\"46dbc48d6b6e55e7659f5c6e586c7561e75faf56e0ce38ddbec7ea67d94f0a60\" pid:5820 exited_at:{seconds:1760334876 nanos:916828126}" Oct 13 05:54:37.321307 sshd[5812]: Connection closed by 10.200.16.10 port 48638 Oct 13 05:54:37.323540 sshd-session[5804]: pam_unix(sshd:session): session closed for user core Oct 13 05:54:37.327678 systemd-logind[1700]: Session 11 logged out. Waiting for processes to exit. Oct 13 05:54:37.329200 systemd[1]: sshd@8-10.200.4.24:22-10.200.16.10:48638.service: Deactivated successfully. Oct 13 05:54:37.332118 systemd[1]: session-11.scope: Deactivated successfully. Oct 13 05:54:37.337736 systemd-logind[1700]: Removed session 11. Oct 13 05:54:41.679314 containerd[1726]: time="2025-10-13T05:54:41.679262924Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f828228d6e0b30f7653188c57350bfc7e0bd423dc44cb4e5ee8b13d857edc77e\" id:\"cb91de89c7892937caac60daf4af32f89aa6271918cb32a0495e1ed43d56bfdb\" pid:5860 exited_at:{seconds:1760334881 nanos:677583956}" Oct 13 05:54:42.432476 systemd[1]: Started sshd@9-10.200.4.24:22-10.200.16.10:38568.service - OpenSSH per-connection server daemon (10.200.16.10:38568). Oct 13 05:54:43.040484 sshd[5870]: Accepted publickey for core from 10.200.16.10 port 38568 ssh2: RSA SHA256:M+0CUiaS6X9LgLQwLsgIf7RWJ99AxYshiQ2j8fMZKDQ Oct 13 05:54:43.042110 sshd-session[5870]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:54:43.047443 systemd-logind[1700]: New session 12 of user core. Oct 13 05:54:43.053506 systemd[1]: Started session-12.scope - Session 12 of User core. 
Oct 13 05:54:43.525345 sshd[5876]: Connection closed by 10.200.16.10 port 38568
Oct 13 05:54:43.525897 sshd-session[5870]: pam_unix(sshd:session): session closed for user core
Oct 13 05:54:43.529327 systemd[1]: sshd@9-10.200.4.24:22-10.200.16.10:38568.service: Deactivated successfully.
Oct 13 05:54:43.531115 systemd[1]: session-12.scope: Deactivated successfully.
Oct 13 05:54:43.531866 systemd-logind[1700]: Session 12 logged out. Waiting for processes to exit.
Oct 13 05:54:43.533014 systemd-logind[1700]: Removed session 12.
Oct 13 05:54:43.631208 systemd[1]: Started sshd@10-10.200.4.24:22-10.200.16.10:38582.service - OpenSSH per-connection server daemon (10.200.16.10:38582).
Oct 13 05:54:44.235303 sshd[5889]: Accepted publickey for core from 10.200.16.10 port 38582 ssh2: RSA SHA256:M+0CUiaS6X9LgLQwLsgIf7RWJ99AxYshiQ2j8fMZKDQ
Oct 13 05:54:44.236401 sshd-session[5889]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 13 05:54:44.240750 systemd-logind[1700]: New session 13 of user core.
Oct 13 05:54:44.248304 systemd[1]: Started session-13.scope - Session 13 of User core.
Oct 13 05:54:44.737358 sshd[5892]: Connection closed by 10.200.16.10 port 38582
Oct 13 05:54:44.738416 sshd-session[5889]: pam_unix(sshd:session): session closed for user core
Oct 13 05:54:44.741858 systemd-logind[1700]: Session 13 logged out. Waiting for processes to exit.
Oct 13 05:54:44.742396 systemd[1]: sshd@10-10.200.4.24:22-10.200.16.10:38582.service: Deactivated successfully.
Oct 13 05:54:44.744269 systemd[1]: session-13.scope: Deactivated successfully.
Oct 13 05:54:44.745819 systemd-logind[1700]: Removed session 13.
Oct 13 05:54:44.844345 systemd[1]: Started sshd@11-10.200.4.24:22-10.200.16.10:38592.service - OpenSSH per-connection server daemon (10.200.16.10:38592).
Oct 13 05:54:45.447271 sshd[5902]: Accepted publickey for core from 10.200.16.10 port 38592 ssh2: RSA SHA256:M+0CUiaS6X9LgLQwLsgIf7RWJ99AxYshiQ2j8fMZKDQ
Oct 13 05:54:45.448416 sshd-session[5902]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 13 05:54:45.452235 systemd-logind[1700]: New session 14 of user core.
Oct 13 05:54:45.457311 systemd[1]: Started session-14.scope - Session 14 of User core.
Oct 13 05:54:45.932344 sshd[5905]: Connection closed by 10.200.16.10 port 38592
Oct 13 05:54:45.933083 sshd-session[5902]: pam_unix(sshd:session): session closed for user core
Oct 13 05:54:45.936530 systemd-logind[1700]: Session 14 logged out. Waiting for processes to exit.
Oct 13 05:54:45.937064 systemd[1]: sshd@11-10.200.4.24:22-10.200.16.10:38592.service: Deactivated successfully.
Oct 13 05:54:45.938988 systemd[1]: session-14.scope: Deactivated successfully.
Oct 13 05:54:45.940330 systemd-logind[1700]: Removed session 14.
Oct 13 05:54:48.799564 containerd[1726]: time="2025-10-13T05:54:48.799520189Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6a3c814289069c70336036b5ee58b407267e4f6c41c3969105094ead494e1c5e\" id:\"f4f2f08bd65e5d287f1ee0c86f84be2559c56579d4170d9c7f8872f2c3bdbd37\" pid:5937 exit_status:1 exited_at:{seconds:1760334888 nanos:799137756}"
Oct 13 05:54:51.059151 systemd[1]: Started sshd@12-10.200.4.24:22-10.200.16.10:59988.service - OpenSSH per-connection server daemon (10.200.16.10:59988).
Oct 13 05:54:51.661692 sshd[5950]: Accepted publickey for core from 10.200.16.10 port 59988 ssh2: RSA SHA256:M+0CUiaS6X9LgLQwLsgIf7RWJ99AxYshiQ2j8fMZKDQ
Oct 13 05:54:51.662995 sshd-session[5950]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 13 05:54:51.668337 systemd-logind[1700]: New session 15 of user core.
Oct 13 05:54:51.673303 systemd[1]: Started session-15.scope - Session 15 of User core.
Oct 13 05:54:52.146445 sshd[5953]: Connection closed by 10.200.16.10 port 59988
Oct 13 05:54:52.147039 sshd-session[5950]: pam_unix(sshd:session): session closed for user core
Oct 13 05:54:52.150400 systemd[1]: sshd@12-10.200.4.24:22-10.200.16.10:59988.service: Deactivated successfully.
Oct 13 05:54:52.152535 systemd[1]: session-15.scope: Deactivated successfully.
Oct 13 05:54:52.153715 systemd-logind[1700]: Session 15 logged out. Waiting for processes to exit.
Oct 13 05:54:52.154817 systemd-logind[1700]: Removed session 15.
Oct 13 05:54:56.876996 containerd[1726]: time="2025-10-13T05:54:56.876937162Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b32a638be5745fb36297bc2ba1565972cb01fe2546b5645543ebd397c3613b45\" id:\"b19124cadf342a47fb9fd335649d09bdf8e15991037d811c6106d112aba5335e\" pid:5998 exited_at:{seconds:1760334896 nanos:876672326}"
Oct 13 05:54:57.259878 systemd[1]: Started sshd@13-10.200.4.24:22-10.200.16.10:60002.service - OpenSSH per-connection server daemon (10.200.16.10:60002).
Oct 13 05:54:57.875753 sshd[6009]: Accepted publickey for core from 10.200.16.10 port 60002 ssh2: RSA SHA256:M+0CUiaS6X9LgLQwLsgIf7RWJ99AxYshiQ2j8fMZKDQ
Oct 13 05:54:57.877969 sshd-session[6009]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 13 05:54:57.882998 systemd-logind[1700]: New session 16 of user core.
Oct 13 05:54:57.888327 systemd[1]: Started session-16.scope - Session 16 of User core.
Oct 13 05:54:58.375057 sshd[6012]: Connection closed by 10.200.16.10 port 60002
Oct 13 05:54:58.376856 sshd-session[6009]: pam_unix(sshd:session): session closed for user core
Oct 13 05:54:58.379423 systemd[1]: sshd@13-10.200.4.24:22-10.200.16.10:60002.service: Deactivated successfully.
Oct 13 05:54:58.381510 systemd[1]: session-16.scope: Deactivated successfully.
Oct 13 05:54:58.385751 systemd-logind[1700]: Session 16 logged out. Waiting for processes to exit.
Oct 13 05:54:58.387189 systemd-logind[1700]: Removed session 16.
Oct 13 05:55:00.858667 containerd[1726]: time="2025-10-13T05:55:00.858621495Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b32a638be5745fb36297bc2ba1565972cb01fe2546b5645543ebd397c3613b45\" id:\"a9c0f11ef6857b11bc0b4c2c4422cb2250a9bfe7157ff74031a34ff9cf18fd4d\" pid:6035 exited_at:{seconds:1760334900 nanos:858323714}"
Oct 13 05:55:03.487078 systemd[1]: Started sshd@14-10.200.4.24:22-10.200.16.10:56640.service - OpenSSH per-connection server daemon (10.200.16.10:56640).
Oct 13 05:55:04.096704 sshd[6046]: Accepted publickey for core from 10.200.16.10 port 56640 ssh2: RSA SHA256:M+0CUiaS6X9LgLQwLsgIf7RWJ99AxYshiQ2j8fMZKDQ
Oct 13 05:55:04.097869 sshd-session[6046]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 13 05:55:04.102323 systemd-logind[1700]: New session 17 of user core.
Oct 13 05:55:04.106315 systemd[1]: Started session-17.scope - Session 17 of User core.
Oct 13 05:55:04.584872 sshd[6049]: Connection closed by 10.200.16.10 port 56640
Oct 13 05:55:04.586348 sshd-session[6046]: pam_unix(sshd:session): session closed for user core
Oct 13 05:55:04.589723 systemd-logind[1700]: Session 17 logged out. Waiting for processes to exit.
Oct 13 05:55:04.590324 systemd[1]: sshd@14-10.200.4.24:22-10.200.16.10:56640.service: Deactivated successfully.
Oct 13 05:55:04.592132 systemd[1]: session-17.scope: Deactivated successfully.
Oct 13 05:55:04.593869 systemd-logind[1700]: Removed session 17.
Oct 13 05:55:04.695435 systemd[1]: Started sshd@15-10.200.4.24:22-10.200.16.10:56648.service - OpenSSH per-connection server daemon (10.200.16.10:56648).
Oct 13 05:55:05.300536 sshd[6061]: Accepted publickey for core from 10.200.16.10 port 56648 ssh2: RSA SHA256:M+0CUiaS6X9LgLQwLsgIf7RWJ99AxYshiQ2j8fMZKDQ
Oct 13 05:55:05.301958 sshd-session[6061]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 13 05:55:05.307529 systemd-logind[1700]: New session 18 of user core.
Oct 13 05:55:05.313358 systemd[1]: Started session-18.scope - Session 18 of User core.
Oct 13 05:55:05.847141 sshd[6064]: Connection closed by 10.200.16.10 port 56648
Oct 13 05:55:05.847672 sshd-session[6061]: pam_unix(sshd:session): session closed for user core
Oct 13 05:55:05.851606 systemd[1]: sshd@15-10.200.4.24:22-10.200.16.10:56648.service: Deactivated successfully.
Oct 13 05:55:05.853951 systemd-logind[1700]: Session 18 logged out. Waiting for processes to exit.
Oct 13 05:55:05.854818 systemd[1]: session-18.scope: Deactivated successfully.
Oct 13 05:55:05.859464 systemd-logind[1700]: Removed session 18.
Oct 13 05:55:05.959220 systemd[1]: Started sshd@16-10.200.4.24:22-10.200.16.10:56660.service - OpenSSH per-connection server daemon (10.200.16.10:56660).
Oct 13 05:55:06.584194 sshd[6073]: Accepted publickey for core from 10.200.16.10 port 56660 ssh2: RSA SHA256:M+0CUiaS6X9LgLQwLsgIf7RWJ99AxYshiQ2j8fMZKDQ
Oct 13 05:55:06.584672 sshd-session[6073]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 13 05:55:06.590376 systemd-logind[1700]: New session 19 of user core.
Oct 13 05:55:06.594560 systemd[1]: Started session-19.scope - Session 19 of User core.
Oct 13 05:55:06.862262 containerd[1726]: time="2025-10-13T05:55:06.862135185Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f828228d6e0b30f7653188c57350bfc7e0bd423dc44cb4e5ee8b13d857edc77e\" id:\"78a23db00b40540607a647dd73ccd07ab30c8b522baab79207612dead3f1ed7b\" pid:6089 exited_at:{seconds:1760334906 nanos:861763537}"
Oct 13 05:55:07.678211 sshd[6076]: Connection closed by 10.200.16.10 port 56660
Oct 13 05:55:07.677353 sshd-session[6073]: pam_unix(sshd:session): session closed for user core
Oct 13 05:55:07.682786 systemd[1]: sshd@16-10.200.4.24:22-10.200.16.10:56660.service: Deactivated successfully.
Oct 13 05:55:07.684231 systemd-logind[1700]: Session 19 logged out. Waiting for processes to exit.
Oct 13 05:55:07.685375 systemd[1]: session-19.scope: Deactivated successfully.
Oct 13 05:55:07.688023 systemd-logind[1700]: Removed session 19.
Oct 13 05:55:07.781393 systemd[1]: Started sshd@17-10.200.4.24:22-10.200.16.10:56666.service - OpenSSH per-connection server daemon (10.200.16.10:56666).
Oct 13 05:55:08.391154 sshd[6114]: Accepted publickey for core from 10.200.16.10 port 56666 ssh2: RSA SHA256:M+0CUiaS6X9LgLQwLsgIf7RWJ99AxYshiQ2j8fMZKDQ
Oct 13 05:55:08.393299 sshd-session[6114]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 13 05:55:08.401476 systemd-logind[1700]: New session 20 of user core.
Oct 13 05:55:08.408351 systemd[1]: Started session-20.scope - Session 20 of User core.
Oct 13 05:55:09.005264 sshd[6117]: Connection closed by 10.200.16.10 port 56666
Oct 13 05:55:09.006677 sshd-session[6114]: pam_unix(sshd:session): session closed for user core
Oct 13 05:55:09.010560 systemd-logind[1700]: Session 20 logged out. Waiting for processes to exit.
Oct 13 05:55:09.011475 systemd[1]: sshd@17-10.200.4.24:22-10.200.16.10:56666.service: Deactivated successfully.
Oct 13 05:55:09.013315 systemd[1]: session-20.scope: Deactivated successfully.
Oct 13 05:55:09.014788 systemd-logind[1700]: Removed session 20.
Oct 13 05:55:09.114271 systemd[1]: Started sshd@18-10.200.4.24:22-10.200.16.10:56674.service - OpenSSH per-connection server daemon (10.200.16.10:56674).
Oct 13 05:55:09.715792 sshd[6127]: Accepted publickey for core from 10.200.16.10 port 56674 ssh2: RSA SHA256:M+0CUiaS6X9LgLQwLsgIf7RWJ99AxYshiQ2j8fMZKDQ
Oct 13 05:55:09.716947 sshd-session[6127]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 13 05:55:09.721224 systemd-logind[1700]: New session 21 of user core.
Oct 13 05:55:09.726307 systemd[1]: Started session-21.scope - Session 21 of User core.
Oct 13 05:55:10.198105 sshd[6130]: Connection closed by 10.200.16.10 port 56674
Oct 13 05:55:10.198687 sshd-session[6127]: pam_unix(sshd:session): session closed for user core
Oct 13 05:55:10.201950 systemd[1]: sshd@18-10.200.4.24:22-10.200.16.10:56674.service: Deactivated successfully.
Oct 13 05:55:10.203807 systemd[1]: session-21.scope: Deactivated successfully.
Oct 13 05:55:10.204625 systemd-logind[1700]: Session 21 logged out. Waiting for processes to exit.
Oct 13 05:55:10.205670 systemd-logind[1700]: Removed session 21.
Oct 13 05:55:15.309236 systemd[1]: Started sshd@19-10.200.4.24:22-10.200.16.10:54066.service - OpenSSH per-connection server daemon (10.200.16.10:54066).
Oct 13 05:55:15.910229 sshd[6144]: Accepted publickey for core from 10.200.16.10 port 54066 ssh2: RSA SHA256:M+0CUiaS6X9LgLQwLsgIf7RWJ99AxYshiQ2j8fMZKDQ
Oct 13 05:55:15.911405 sshd-session[6144]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 13 05:55:15.915719 systemd-logind[1700]: New session 22 of user core.
Oct 13 05:55:15.919366 systemd[1]: Started session-22.scope - Session 22 of User core.
Oct 13 05:55:16.391627 sshd[6147]: Connection closed by 10.200.16.10 port 54066
Oct 13 05:55:16.392223 sshd-session[6144]: pam_unix(sshd:session): session closed for user core
Oct 13 05:55:16.395672 systemd[1]: sshd@19-10.200.4.24:22-10.200.16.10:54066.service: Deactivated successfully.
Oct 13 05:55:16.397638 systemd[1]: session-22.scope: Deactivated successfully.
Oct 13 05:55:16.398356 systemd-logind[1700]: Session 22 logged out. Waiting for processes to exit.
Oct 13 05:55:16.399605 systemd-logind[1700]: Removed session 22.
Oct 13 05:55:18.803083 containerd[1726]: time="2025-10-13T05:55:18.803033353Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6a3c814289069c70336036b5ee58b407267e4f6c41c3969105094ead494e1c5e\" id:\"d7b67144a5510b7cf77b482c4424574c1550eb5a465a54bd185b3a322531792e\" pid:6171 exited_at:{seconds:1760334918 nanos:802754353}"
Oct 13 05:55:21.498272 systemd[1]: Started sshd@20-10.200.4.24:22-10.200.16.10:58422.service - OpenSSH per-connection server daemon (10.200.16.10:58422).
Oct 13 05:55:22.098192 sshd[6183]: Accepted publickey for core from 10.200.16.10 port 58422 ssh2: RSA SHA256:M+0CUiaS6X9LgLQwLsgIf7RWJ99AxYshiQ2j8fMZKDQ
Oct 13 05:55:22.099334 sshd-session[6183]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 13 05:55:22.104159 systemd-logind[1700]: New session 23 of user core.
Oct 13 05:55:22.109334 systemd[1]: Started session-23.scope - Session 23 of User core.
Oct 13 05:55:22.583054 sshd[6186]: Connection closed by 10.200.16.10 port 58422
Oct 13 05:55:22.584350 sshd-session[6183]: pam_unix(sshd:session): session closed for user core
Oct 13 05:55:22.587629 systemd-logind[1700]: Session 23 logged out. Waiting for processes to exit.
Oct 13 05:55:22.588256 systemd[1]: sshd@20-10.200.4.24:22-10.200.16.10:58422.service: Deactivated successfully.
Oct 13 05:55:22.590060 systemd[1]: session-23.scope: Deactivated successfully.
Oct 13 05:55:22.591622 systemd-logind[1700]: Removed session 23.
Oct 13 05:55:27.693495 systemd[1]: Started sshd@21-10.200.4.24:22-10.200.16.10:58424.service - OpenSSH per-connection server daemon (10.200.16.10:58424).
Oct 13 05:55:28.301952 sshd[6198]: Accepted publickey for core from 10.200.16.10 port 58424 ssh2: RSA SHA256:M+0CUiaS6X9LgLQwLsgIf7RWJ99AxYshiQ2j8fMZKDQ
Oct 13 05:55:28.303108 sshd-session[6198]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 13 05:55:28.308850 systemd-logind[1700]: New session 24 of user core.
Oct 13 05:55:28.315330 systemd[1]: Started session-24.scope - Session 24 of User core.
Oct 13 05:55:28.783807 sshd[6201]: Connection closed by 10.200.16.10 port 58424
Oct 13 05:55:28.784381 sshd-session[6198]: pam_unix(sshd:session): session closed for user core
Oct 13 05:55:28.787908 systemd-logind[1700]: Session 24 logged out. Waiting for processes to exit.
Oct 13 05:55:28.788050 systemd[1]: sshd@21-10.200.4.24:22-10.200.16.10:58424.service: Deactivated successfully.
Oct 13 05:55:28.789991 systemd[1]: session-24.scope: Deactivated successfully.
Oct 13 05:55:28.791583 systemd-logind[1700]: Removed session 24.
Oct 13 05:55:30.855587 containerd[1726]: time="2025-10-13T05:55:30.855538240Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b32a638be5745fb36297bc2ba1565972cb01fe2546b5645543ebd397c3613b45\" id:\"f8b750cb80c90036ba51a36eff31e8826fc0d354b4779de47783fb073dfd4651\" pid:6224 exited_at:{seconds:1760334930 nanos:855238774}"
Oct 13 05:55:33.897458 systemd[1]: Started sshd@22-10.200.4.24:22-10.200.16.10:41516.service - OpenSSH per-connection server daemon (10.200.16.10:41516).
Oct 13 05:55:34.508482 sshd[6236]: Accepted publickey for core from 10.200.16.10 port 41516 ssh2: RSA SHA256:M+0CUiaS6X9LgLQwLsgIf7RWJ99AxYshiQ2j8fMZKDQ
Oct 13 05:55:34.509653 sshd-session[6236]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 13 05:55:34.514035 systemd-logind[1700]: New session 25 of user core.
Oct 13 05:55:34.521320 systemd[1]: Started session-25.scope - Session 25 of User core.
Oct 13 05:55:34.986274 sshd[6239]: Connection closed by 10.200.16.10 port 41516
Oct 13 05:55:34.986801 sshd-session[6236]: pam_unix(sshd:session): session closed for user core
Oct 13 05:55:34.990103 systemd[1]: sshd@22-10.200.4.24:22-10.200.16.10:41516.service: Deactivated successfully.
Oct 13 05:55:34.991799 systemd[1]: session-25.scope: Deactivated successfully.
Oct 13 05:55:34.992931 systemd-logind[1700]: Session 25 logged out. Waiting for processes to exit.
Oct 13 05:55:34.994489 systemd-logind[1700]: Removed session 25.
Oct 13 05:55:36.846275 containerd[1726]: time="2025-10-13T05:55:36.846208726Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f828228d6e0b30f7653188c57350bfc7e0bd423dc44cb4e5ee8b13d857edc77e\" id:\"632aec66ae6c0bbf267eb62898dc31d0b0f02e568c7a14d37be5ecfbf1477c81\" pid:6264 exited_at:{seconds:1760334936 nanos:845834455}"
Oct 13 05:55:40.097601 systemd[1]: Started sshd@23-10.200.4.24:22-10.200.16.10:41524.service - OpenSSH per-connection server daemon (10.200.16.10:41524).
Oct 13 05:55:40.709841 sshd[6275]: Accepted publickey for core from 10.200.16.10 port 41524 ssh2: RSA SHA256:M+0CUiaS6X9LgLQwLsgIf7RWJ99AxYshiQ2j8fMZKDQ
Oct 13 05:55:40.710977 sshd-session[6275]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 13 05:55:40.715246 systemd-logind[1700]: New session 26 of user core.
Oct 13 05:55:40.724306 systemd[1]: Started session-26.scope - Session 26 of User core.
Oct 13 05:55:41.192815 sshd[6278]: Connection closed by 10.200.16.10 port 41524
Oct 13 05:55:41.193650 sshd-session[6275]: pam_unix(sshd:session): session closed for user core
Oct 13 05:55:41.197107 systemd-logind[1700]: Session 26 logged out. Waiting for processes to exit.
Oct 13 05:55:41.197276 systemd[1]: sshd@23-10.200.4.24:22-10.200.16.10:41524.service: Deactivated successfully.
Oct 13 05:55:41.199266 systemd[1]: session-26.scope: Deactivated successfully.
Oct 13 05:55:41.200591 systemd-logind[1700]: Removed session 26.
Oct 13 05:55:41.677461 containerd[1726]: time="2025-10-13T05:55:41.677324398Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f828228d6e0b30f7653188c57350bfc7e0bd423dc44cb4e5ee8b13d857edc77e\" id:\"06dbe4d3f7ced8995b9a57681d104f4b6707a40fa41514ea497a229b247fd97c\" pid:6301 exited_at:{seconds:1760334941 nanos:676934991}"