Aug 19 08:08:40.861203 kernel: Linux version 6.12.41-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Mon Aug 18 22:19:37 -00 2025 Aug 19 08:08:40.861233 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=cc23dd01793203541561c15ffc568736bb5dae0d652141296dd11bf777bdf42f Aug 19 08:08:40.861245 kernel: BIOS-provided physical RAM map: Aug 19 08:08:40.861252 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000002ffff] usable Aug 19 08:08:40.861258 kernel: BIOS-e820: [mem 0x0000000000030000-0x000000000004ffff] reserved Aug 19 08:08:40.861265 kernel: BIOS-e820: [mem 0x0000000000050000-0x000000000009efff] usable Aug 19 08:08:40.861273 kernel: BIOS-e820: [mem 0x000000000009f000-0x000000000009ffff] reserved Aug 19 08:08:40.861279 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009b8ecfff] usable Aug 19 08:08:40.861289 kernel: BIOS-e820: [mem 0x000000009b8ed000-0x000000009bb6cfff] reserved Aug 19 08:08:40.861296 kernel: BIOS-e820: [mem 0x000000009bb6d000-0x000000009bb7efff] ACPI data Aug 19 08:08:40.861303 kernel: BIOS-e820: [mem 0x000000009bb7f000-0x000000009bbfefff] ACPI NVS Aug 19 08:08:40.861311 kernel: BIOS-e820: [mem 0x000000009bbff000-0x000000009bfb0fff] usable Aug 19 08:08:40.861318 kernel: BIOS-e820: [mem 0x000000009bfb1000-0x000000009bfb4fff] reserved Aug 19 08:08:40.861325 kernel: BIOS-e820: [mem 0x000000009bfb5000-0x000000009bfb6fff] ACPI NVS Aug 19 08:08:40.861333 kernel: BIOS-e820: [mem 0x000000009bfb7000-0x000000009bffffff] usable Aug 19 08:08:40.861340 kernel: BIOS-e820: [mem 0x000000009c000000-0x000000009cffffff] reserved Aug 19 08:08:40.861352 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved Aug 19 08:08:40.861359 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Aug 19 08:08:40.861366 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Aug 19 08:08:40.861373 kernel: NX (Execute Disable) protection: active Aug 19 08:08:40.861380 kernel: APIC: Static calls initialized Aug 19 08:08:40.861388 kernel: e820: update [mem 0x9a13f018-0x9a148c57] usable ==> usable Aug 19 08:08:40.861395 kernel: e820: update [mem 0x9a102018-0x9a13ee57] usable ==> usable Aug 19 08:08:40.861402 kernel: extended physical RAM map: Aug 19 08:08:40.861409 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000002ffff] usable Aug 19 08:08:40.861417 kernel: reserve setup_data: [mem 0x0000000000030000-0x000000000004ffff] reserved Aug 19 08:08:40.861424 kernel: reserve setup_data: [mem 0x0000000000050000-0x000000000009efff] usable Aug 19 08:08:40.861433 kernel: reserve setup_data: [mem 0x000000000009f000-0x000000000009ffff] reserved Aug 19 08:08:40.861440 kernel: reserve setup_data: [mem 0x0000000000100000-0x000000009a102017] usable Aug 19 08:08:40.861448 kernel: reserve setup_data: [mem 0x000000009a102018-0x000000009a13ee57] usable Aug 19 08:08:40.861455 kernel: reserve setup_data: [mem 0x000000009a13ee58-0x000000009a13f017] usable Aug 19 08:08:40.861462 kernel: reserve setup_data: [mem 0x000000009a13f018-0x000000009a148c57] usable Aug 19 08:08:40.861469 kernel: reserve setup_data: [mem 0x000000009a148c58-0x000000009b8ecfff] usable Aug 19 08:08:40.861476 kernel: reserve setup_data: [mem 0x000000009b8ed000-0x000000009bb6cfff] 
reserved Aug 19 08:08:40.861483 kernel: reserve setup_data: [mem 0x000000009bb6d000-0x000000009bb7efff] ACPI data Aug 19 08:08:40.861491 kernel: reserve setup_data: [mem 0x000000009bb7f000-0x000000009bbfefff] ACPI NVS Aug 19 08:08:40.861498 kernel: reserve setup_data: [mem 0x000000009bbff000-0x000000009bfb0fff] usable Aug 19 08:08:40.861505 kernel: reserve setup_data: [mem 0x000000009bfb1000-0x000000009bfb4fff] reserved Aug 19 08:08:40.861514 kernel: reserve setup_data: [mem 0x000000009bfb5000-0x000000009bfb6fff] ACPI NVS Aug 19 08:08:40.861522 kernel: reserve setup_data: [mem 0x000000009bfb7000-0x000000009bffffff] usable Aug 19 08:08:40.861533 kernel: reserve setup_data: [mem 0x000000009c000000-0x000000009cffffff] reserved Aug 19 08:08:40.861540 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved Aug 19 08:08:40.861547 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved Aug 19 08:08:40.861555 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Aug 19 08:08:40.861564 kernel: efi: EFI v2.7 by EDK II Aug 19 08:08:40.861572 kernel: efi: SMBIOS=0x9b9d5000 ACPI=0x9bb7e000 ACPI 2.0=0x9bb7e014 MEMATTR=0x9a1af018 RNG=0x9bb73018 Aug 19 08:08:40.861579 kernel: random: crng init done Aug 19 08:08:40.861587 kernel: Kernel is locked down from EFI Secure Boot; see man kernel_lockdown.7 Aug 19 08:08:40.861594 kernel: secureboot: Secure boot enabled Aug 19 08:08:40.861602 kernel: SMBIOS 2.8 present. Aug 19 08:08:40.861610 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022 Aug 19 08:08:40.861617 kernel: DMI: Memory slots populated: 1/1 Aug 19 08:08:40.861625 kernel: Hypervisor detected: KVM Aug 19 08:08:40.861632 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Aug 19 08:08:40.861640 kernel: kvm-clock: using sched offset of 6457764569 cycles Aug 19 08:08:40.861650 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Aug 19 08:08:40.861658 kernel: tsc: Detected 2794.750 MHz processor Aug 19 08:08:40.861666 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Aug 19 08:08:40.861673 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Aug 19 08:08:40.861681 kernel: last_pfn = 0x9c000 max_arch_pfn = 0x400000000 Aug 19 08:08:40.861689 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Aug 19 08:08:40.861699 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Aug 19 08:08:40.861708 kernel: Using GB pages for direct mapping Aug 19 08:08:40.861718 kernel: ACPI: Early table checksum verification disabled Aug 19 08:08:40.861728 kernel: ACPI: RSDP 0x000000009BB7E014 000024 (v02 BOCHS ) Aug 19 08:08:40.861735 kernel: ACPI: XSDT 0x000000009BB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013) Aug 19 08:08:40.861743 kernel: ACPI: FACP 0x000000009BB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Aug 19 08:08:40.861751 kernel: ACPI: DSDT 0x000000009BB7A000 002237 (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 19 08:08:40.861758 kernel: ACPI: FACS 0x000000009BBDD000 000040 Aug 19 08:08:40.861766 kernel: ACPI: APIC 0x000000009BB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 19 08:08:40.861774 kernel: ACPI: HPET 0x000000009BB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 19 08:08:40.861781 kernel: ACPI: MCFG 0x000000009BB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 19 08:08:40.861789 kernel: ACPI: WAET 0x000000009BB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 
00000001) Aug 19 08:08:40.861799 kernel: ACPI: BGRT 0x000000009BB74000 000038 (v01 INTEL EDK2 00000002 01000013) Aug 19 08:08:40.861806 kernel: ACPI: Reserving FACP table memory at [mem 0x9bb79000-0x9bb790f3] Aug 19 08:08:40.861814 kernel: ACPI: Reserving DSDT table memory at [mem 0x9bb7a000-0x9bb7c236] Aug 19 08:08:40.861822 kernel: ACPI: Reserving FACS table memory at [mem 0x9bbdd000-0x9bbdd03f] Aug 19 08:08:40.861829 kernel: ACPI: Reserving APIC table memory at [mem 0x9bb78000-0x9bb7808f] Aug 19 08:08:40.861837 kernel: ACPI: Reserving HPET table memory at [mem 0x9bb77000-0x9bb77037] Aug 19 08:08:40.861844 kernel: ACPI: Reserving MCFG table memory at [mem 0x9bb76000-0x9bb7603b] Aug 19 08:08:40.861852 kernel: ACPI: Reserving WAET table memory at [mem 0x9bb75000-0x9bb75027] Aug 19 08:08:40.861862 kernel: ACPI: Reserving BGRT table memory at [mem 0x9bb74000-0x9bb74037] Aug 19 08:08:40.861869 kernel: No NUMA configuration found Aug 19 08:08:40.861877 kernel: Faking a node at [mem 0x0000000000000000-0x000000009bffffff] Aug 19 08:08:40.861885 kernel: NODE_DATA(0) allocated [mem 0x9bf57dc0-0x9bf5efff] Aug 19 08:08:40.861892 kernel: Zone ranges: Aug 19 08:08:40.861900 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Aug 19 08:08:40.861908 kernel: DMA32 [mem 0x0000000001000000-0x000000009bffffff] Aug 19 08:08:40.861990 kernel: Normal empty Aug 19 08:08:40.862001 kernel: Device empty Aug 19 08:08:40.862008 kernel: Movable zone start for each node Aug 19 08:08:40.862020 kernel: Early memory node ranges Aug 19 08:08:40.862027 kernel: node 0: [mem 0x0000000000001000-0x000000000002ffff] Aug 19 08:08:40.862035 kernel: node 0: [mem 0x0000000000050000-0x000000000009efff] Aug 19 08:08:40.862043 kernel: node 0: [mem 0x0000000000100000-0x000000009b8ecfff] Aug 19 08:08:40.862050 kernel: node 0: [mem 0x000000009bbff000-0x000000009bfb0fff] Aug 19 08:08:40.862058 kernel: node 0: [mem 0x000000009bfb7000-0x000000009bffffff] Aug 19 08:08:40.862065 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009bffffff] Aug 19 08:08:40.862073 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Aug 19 08:08:40.862081 kernel: On node 0, zone DMA: 32 pages in unavailable ranges Aug 19 08:08:40.862090 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Aug 19 08:08:40.862100 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges Aug 19 08:08:40.862110 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges Aug 19 08:08:40.862120 kernel: On node 0, zone DMA32: 16384 pages in unavailable ranges Aug 19 08:08:40.862129 kernel: ACPI: PM-Timer IO Port: 0x608 Aug 19 08:08:40.862139 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Aug 19 08:08:40.862149 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Aug 19 08:08:40.862160 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Aug 19 08:08:40.862173 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Aug 19 08:08:40.862188 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Aug 19 08:08:40.862195 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Aug 19 08:08:40.862203 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Aug 19 08:08:40.862211 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Aug 19 08:08:40.862218 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Aug 19 08:08:40.862226 kernel: TSC deadline timer available Aug 19 08:08:40.862234 kernel: CPU topo: Max. 
logical packages: 1 Aug 19 08:08:40.862241 kernel: CPU topo: Max. logical dies: 1 Aug 19 08:08:40.862249 kernel: CPU topo: Max. dies per package: 1 Aug 19 08:08:40.862265 kernel: CPU topo: Max. threads per core: 1 Aug 19 08:08:40.862273 kernel: CPU topo: Num. cores per package: 4 Aug 19 08:08:40.862281 kernel: CPU topo: Num. threads per package: 4 Aug 19 08:08:40.862291 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs Aug 19 08:08:40.862301 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Aug 19 08:08:40.862309 kernel: kvm-guest: KVM setup pv remote TLB flush Aug 19 08:08:40.862317 kernel: kvm-guest: setup PV sched yield Aug 19 08:08:40.862325 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices Aug 19 08:08:40.862335 kernel: Booting paravirtualized kernel on KVM Aug 19 08:08:40.862343 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Aug 19 08:08:40.862352 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Aug 19 08:08:40.862360 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288 Aug 19 08:08:40.862368 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152 Aug 19 08:08:40.862375 kernel: pcpu-alloc: [0] 0 1 2 3 Aug 19 08:08:40.862383 kernel: kvm-guest: PV spinlocks enabled Aug 19 08:08:40.862391 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Aug 19 08:08:40.862400 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=cc23dd01793203541561c15ffc568736bb5dae0d652141296dd11bf777bdf42f Aug 19 08:08:40.862411 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Aug 19 08:08:40.862419 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Aug 19 08:08:40.862427 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Aug 19 08:08:40.862435 kernel: Fallback order for Node 0: 0 Aug 19 08:08:40.862443 kernel: Built 1 zonelists, mobility grouping on. Total pages: 638054 Aug 19 08:08:40.862451 kernel: Policy zone: DMA32 Aug 19 08:08:40.862459 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Aug 19 08:08:40.862467 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Aug 19 08:08:40.862477 kernel: ftrace: allocating 40101 entries in 157 pages Aug 19 08:08:40.862485 kernel: ftrace: allocated 157 pages with 5 groups Aug 19 08:08:40.862498 kernel: Dynamic Preempt: voluntary Aug 19 08:08:40.862506 kernel: rcu: Preemptible hierarchical RCU implementation. Aug 19 08:08:40.862515 kernel: rcu: RCU event tracing is enabled. Aug 19 08:08:40.862524 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Aug 19 08:08:40.862532 kernel: Trampoline variant of Tasks RCU enabled. Aug 19 08:08:40.862540 kernel: Rude variant of Tasks RCU enabled. Aug 19 08:08:40.862548 kernel: Tracing variant of Tasks RCU enabled. Aug 19 08:08:40.862558 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Aug 19 08:08:40.862566 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Aug 19 08:08:40.862574 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. 
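[annotation] The dm-verity parameters for /usr appear twice above: once in the BOOT_IMAGE line at the top of the log and again in the "Kernel command line:" entry. A minimal Python sketch of splitting such a key=value command line; only the command-line string is taken from the log, the helper itself is illustrative:

    import shlex

    # Command line as printed by the kernel above (abridged to one copy of the keys).
    cmdline = (
        "BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr "
        "verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 "
        "rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT "
        "console=ttyS0,115200 flatcar.first_boot=detected "
        "verity.usrhash=cc23dd01793203541561c15ffc568736bb5dae0d652141296dd11bf777bdf42f"
    )

    def parse_cmdline(line: str) -> dict:
        """Split a kernel command line into a {key: value} dict; bare flags map to None."""
        params = {}
        for token in shlex.split(line):
            key, sep, value = token.partition("=")
            params[key] = value if sep else None
        return params

    params = parse_cmdline(cmdline)
    print(params["root"])            # LABEL=ROOT
    print(params["verity.usrhash"])  # root hash dm-verity checks /usr against

Only the first '=' in a token separates key from value, so verity.usr=PARTUUID=... parses as intended.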
Aug 19 08:08:40.862582 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Aug 19 08:08:40.862593 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Aug 19 08:08:40.862602 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Aug 19 08:08:40.862610 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Aug 19 08:08:40.862617 kernel: Console: colour dummy device 80x25 Aug 19 08:08:40.862625 kernel: printk: legacy console [ttyS0] enabled Aug 19 08:08:40.862635 kernel: ACPI: Core revision 20240827 Aug 19 08:08:40.862643 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Aug 19 08:08:40.862651 kernel: APIC: Switch to symmetric I/O mode setup Aug 19 08:08:40.862659 kernel: x2apic enabled Aug 19 08:08:40.862667 kernel: APIC: Switched APIC routing to: physical x2apic Aug 19 08:08:40.862675 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Aug 19 08:08:40.862683 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Aug 19 08:08:40.862691 kernel: kvm-guest: setup PV IPIs Aug 19 08:08:40.862699 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Aug 19 08:08:40.862709 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848e100549, max_idle_ns: 440795215505 ns Aug 19 08:08:40.862717 kernel: Calibrating delay loop (skipped) preset value.. 5589.50 BogoMIPS (lpj=2794750) Aug 19 08:08:40.862725 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Aug 19 08:08:40.862733 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Aug 19 08:08:40.862741 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Aug 19 08:08:40.862752 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Aug 19 08:08:40.862760 kernel: Spectre V2 : Mitigation: Retpolines Aug 19 08:08:40.862768 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Aug 19 08:08:40.862776 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Aug 19 08:08:40.862786 kernel: RETBleed: Mitigation: untrained return thunk Aug 19 08:08:40.862794 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Aug 19 08:08:40.862802 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Aug 19 08:08:40.862810 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Aug 19 08:08:40.862819 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Aug 19 08:08:40.862827 kernel: x86/bugs: return thunk changed Aug 19 08:08:40.862835 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Aug 19 08:08:40.862843 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Aug 19 08:08:40.862852 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Aug 19 08:08:40.862861 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Aug 19 08:08:40.862868 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Aug 19 08:08:40.862880 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. 
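[annotation] The Spectre V1/V2, RETBleed, Speculative Store Bypass and SRSO lines above are the kernel's boot-time mitigation summary; the same status strings are exported at runtime under /sys/devices/system/cpu/vulnerabilities. A short Python sketch that reads them back on the booted machine:

    from pathlib import Path

    # The boot-time mitigation summary above is also exported at runtime here.
    VULN_DIR = Path("/sys/devices/system/cpu/vulnerabilities")

    def mitigation_report() -> dict:
        """Return {vulnerability: status string} as reported by the running kernel."""
        return {f.name: f.read_text().strip() for f in sorted(VULN_DIR.iterdir())}

    if __name__ == "__main__":
        for name, status in mitigation_report().items():
            print(f"{name:30} {status}")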
Aug 19 08:08:40.862888 kernel: Freeing SMP alternatives memory: 32K Aug 19 08:08:40.862896 kernel: pid_max: default: 32768 minimum: 301 Aug 19 08:08:40.862904 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Aug 19 08:08:40.862912 kernel: landlock: Up and running. Aug 19 08:08:40.862940 kernel: SELinux: Initializing. Aug 19 08:08:40.862952 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Aug 19 08:08:40.862960 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Aug 19 08:08:40.862968 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Aug 19 08:08:40.862976 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Aug 19 08:08:40.862984 kernel: ... version: 0 Aug 19 08:08:40.862992 kernel: ... bit width: 48 Aug 19 08:08:40.863003 kernel: ... generic registers: 6 Aug 19 08:08:40.863010 kernel: ... value mask: 0000ffffffffffff Aug 19 08:08:40.863018 kernel: ... max period: 00007fffffffffff Aug 19 08:08:40.863028 kernel: ... fixed-purpose events: 0 Aug 19 08:08:40.863036 kernel: ... event mask: 000000000000003f Aug 19 08:08:40.863044 kernel: signal: max sigframe size: 1776 Aug 19 08:08:40.863061 kernel: rcu: Hierarchical SRCU implementation. Aug 19 08:08:40.863070 kernel: rcu: Max phase no-delay instances is 400. Aug 19 08:08:40.863079 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Aug 19 08:08:40.863087 kernel: smp: Bringing up secondary CPUs ... Aug 19 08:08:40.863095 kernel: smpboot: x86: Booting SMP configuration: Aug 19 08:08:40.863102 kernel: .... node #0, CPUs: #1 #2 #3 Aug 19 08:08:40.863110 kernel: smp: Brought up 1 node, 4 CPUs Aug 19 08:08:40.863121 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS) Aug 19 08:08:40.863130 kernel: Memory: 2409216K/2552216K available (14336K kernel code, 2430K rwdata, 9960K rodata, 54040K init, 2928K bss, 137064K reserved, 0K cma-reserved) Aug 19 08:08:40.863138 kernel: devtmpfs: initialized Aug 19 08:08:40.863154 kernel: x86/mm: Memory block size: 128MB Aug 19 08:08:40.863169 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9bb7f000-0x9bbfefff] (524288 bytes) Aug 19 08:08:40.863178 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9bfb5000-0x9bfb6fff] (8192 bytes) Aug 19 08:08:40.863186 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Aug 19 08:08:40.863194 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Aug 19 08:08:40.863205 kernel: pinctrl core: initialized pinctrl subsystem Aug 19 08:08:40.863213 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Aug 19 08:08:40.863221 kernel: audit: initializing netlink subsys (disabled) Aug 19 08:08:40.863229 kernel: audit: type=2000 audit(1755590918.473:1): state=initialized audit_enabled=0 res=1 Aug 19 08:08:40.863237 kernel: thermal_sys: Registered thermal governor 'step_wise' Aug 19 08:08:40.863252 kernel: thermal_sys: Registered thermal governor 'user_space' Aug 19 08:08:40.863261 kernel: cpuidle: using governor menu Aug 19 08:08:40.863269 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Aug 19 08:08:40.863277 kernel: dca service started, version 1.12.1 Aug 19 08:08:40.863288 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff] Aug 19 08:08:40.863296 kernel: PCI: Using configuration type 1 for base access Aug 19 08:08:40.863304 kernel: kprobes: kprobe jump-optimization 
is enabled. All kprobes are optimized if possible. Aug 19 08:08:40.863312 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Aug 19 08:08:40.863320 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Aug 19 08:08:40.863328 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Aug 19 08:08:40.863336 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Aug 19 08:08:40.863344 kernel: ACPI: Added _OSI(Module Device) Aug 19 08:08:40.863352 kernel: ACPI: Added _OSI(Processor Device) Aug 19 08:08:40.863362 kernel: ACPI: Added _OSI(Processor Aggregator Device) Aug 19 08:08:40.863370 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Aug 19 08:08:40.863378 kernel: ACPI: Interpreter enabled Aug 19 08:08:40.863386 kernel: ACPI: PM: (supports S0 S5) Aug 19 08:08:40.863396 kernel: ACPI: Using IOAPIC for interrupt routing Aug 19 08:08:40.863404 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Aug 19 08:08:40.863412 kernel: PCI: Using E820 reservations for host bridge windows Aug 19 08:08:40.863420 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Aug 19 08:08:40.863430 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Aug 19 08:08:40.863686 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Aug 19 08:08:40.863815 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Aug 19 08:08:40.863961 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Aug 19 08:08:40.863973 kernel: PCI host bridge to bus 0000:00 Aug 19 08:08:40.864113 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Aug 19 08:08:40.864233 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Aug 19 08:08:40.864348 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Aug 19 08:08:40.864459 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window] Aug 19 08:08:40.864569 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window] Aug 19 08:08:40.864679 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window] Aug 19 08:08:40.864793 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Aug 19 08:08:40.865088 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint Aug 19 08:08:40.865252 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint Aug 19 08:08:40.865461 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref] Aug 19 08:08:40.865612 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff] Aug 19 08:08:40.865733 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref] Aug 19 08:08:40.865858 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Aug 19 08:08:40.866036 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint Aug 19 08:08:40.866185 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f] Aug 19 08:08:40.866308 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff] Aug 19 08:08:40.866473 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref] Aug 19 08:08:40.866761 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint Aug 19 08:08:40.866957 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f] Aug 19 08:08:40.867288 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff] Aug 19 08:08:40.867413 
kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref] Aug 19 08:08:40.867568 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint Aug 19 08:08:40.867703 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff] Aug 19 08:08:40.867825 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff] Aug 19 08:08:40.867977 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref] Aug 19 08:08:40.868141 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref] Aug 19 08:08:40.868285 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint Aug 19 08:08:40.868410 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Aug 19 08:08:40.868546 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint Aug 19 08:08:40.868679 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df] Aug 19 08:08:40.868800 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff] Aug 19 08:08:40.868963 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint Aug 19 08:08:40.869151 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf] Aug 19 08:08:40.869170 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Aug 19 08:08:40.869183 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Aug 19 08:08:40.869202 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Aug 19 08:08:40.869225 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Aug 19 08:08:40.869239 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Aug 19 08:08:40.869260 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Aug 19 08:08:40.869276 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Aug 19 08:08:40.869284 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Aug 19 08:08:40.869292 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Aug 19 08:08:40.869304 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Aug 19 08:08:40.869312 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Aug 19 08:08:40.869320 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Aug 19 08:08:40.869339 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Aug 19 08:08:40.869348 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Aug 19 08:08:40.869359 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Aug 19 08:08:40.869367 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Aug 19 08:08:40.869377 kernel: iommu: Default domain type: Translated Aug 19 08:08:40.869385 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Aug 19 08:08:40.869394 kernel: efivars: Registered efivars operations Aug 19 08:08:40.869402 kernel: PCI: Using ACPI for IRQ routing Aug 19 08:08:40.869410 kernel: PCI: pci_cache_line_size set to 64 bytes Aug 19 08:08:40.869419 kernel: e820: reserve RAM buffer [mem 0x0009f000-0x0009ffff] Aug 19 08:08:40.869430 kernel: e820: reserve RAM buffer [mem 0x9a102018-0x9bffffff] Aug 19 08:08:40.869438 kernel: e820: reserve RAM buffer [mem 0x9a13f018-0x9bffffff] Aug 19 08:08:40.869446 kernel: e820: reserve RAM buffer [mem 0x9b8ed000-0x9bffffff] Aug 19 08:08:40.869454 kernel: e820: reserve RAM buffer [mem 0x9bfb1000-0x9bffffff] Aug 19 08:08:40.869583 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Aug 19 08:08:40.869706 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Aug 19 08:08:40.869960 kernel: pci 0000:00:01.0: vgaarb: VGA 
device added: decodes=io+mem,owns=io+mem,locks=none Aug 19 08:08:40.869996 kernel: vgaarb: loaded Aug 19 08:08:40.870025 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Aug 19 08:08:40.870041 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Aug 19 08:08:40.870053 kernel: clocksource: Switched to clocksource kvm-clock Aug 19 08:08:40.870061 kernel: VFS: Disk quotas dquot_6.6.0 Aug 19 08:08:40.870069 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Aug 19 08:08:40.870078 kernel: pnp: PnP ACPI init Aug 19 08:08:40.870603 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved Aug 19 08:08:40.870618 kernel: pnp: PnP ACPI: found 6 devices Aug 19 08:08:40.870630 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Aug 19 08:08:40.870639 kernel: NET: Registered PF_INET protocol family Aug 19 08:08:40.870647 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Aug 19 08:08:40.870656 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Aug 19 08:08:40.870664 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Aug 19 08:08:40.870672 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Aug 19 08:08:40.870680 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Aug 19 08:08:40.870689 kernel: TCP: Hash tables configured (established 32768 bind 32768) Aug 19 08:08:40.870697 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Aug 19 08:08:40.870708 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Aug 19 08:08:40.870716 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Aug 19 08:08:40.870724 kernel: NET: Registered PF_XDP protocol family Aug 19 08:08:40.870849 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window Aug 19 08:08:40.871002 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned Aug 19 08:08:40.871209 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Aug 19 08:08:40.871343 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Aug 19 08:08:40.871460 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Aug 19 08:08:40.871581 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window] Aug 19 08:08:40.871708 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window] Aug 19 08:08:40.871821 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window] Aug 19 08:08:40.871831 kernel: PCI: CLS 0 bytes, default 64 Aug 19 08:08:40.871840 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848e100549, max_idle_ns: 440795215505 ns Aug 19 08:08:40.871849 kernel: Initialise system trusted keyrings Aug 19 08:08:40.871857 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Aug 19 08:08:40.871866 kernel: Key type asymmetric registered Aug 19 08:08:40.871874 kernel: Asymmetric key parser 'x509' registered Aug 19 08:08:40.871886 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Aug 19 08:08:40.871910 kernel: io scheduler mq-deadline registered Aug 19 08:08:40.871944 kernel: io scheduler kyber registered Aug 19 08:08:40.871952 kernel: io scheduler bfq registered Aug 19 08:08:40.871961 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Aug 19 08:08:40.871970 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Aug 19 08:08:40.871979 
kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Aug 19 08:08:40.871987 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Aug 19 08:08:40.871996 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Aug 19 08:08:40.872007 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Aug 19 08:08:40.872015 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Aug 19 08:08:40.872024 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Aug 19 08:08:40.872032 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Aug 19 08:08:40.872174 kernel: rtc_cmos 00:04: RTC can wake from S4 Aug 19 08:08:40.872191 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Aug 19 08:08:40.872336 kernel: rtc_cmos 00:04: registered as rtc0 Aug 19 08:08:40.872464 kernel: rtc_cmos 00:04: setting system clock to 2025-08-19T08:08:40 UTC (1755590920) Aug 19 08:08:40.872585 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram Aug 19 08:08:40.872596 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Aug 19 08:08:40.872605 kernel: efifb: probing for efifb Aug 19 08:08:40.872613 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k Aug 19 08:08:40.872622 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1 Aug 19 08:08:40.872630 kernel: efifb: scrolling: redraw Aug 19 08:08:40.872639 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Aug 19 08:08:40.872647 kernel: Console: switching to colour frame buffer device 160x50 Aug 19 08:08:40.872659 kernel: fb0: EFI VGA frame buffer device Aug 19 08:08:40.872667 kernel: pstore: Using crash dump compression: deflate Aug 19 08:08:40.872676 kernel: pstore: Registered efi_pstore as persistent store backend Aug 19 08:08:40.872687 kernel: NET: Registered PF_INET6 protocol family Aug 19 08:08:40.872695 kernel: Segment Routing with IPv6 Aug 19 08:08:40.872703 kernel: In-situ OAM (IOAM) with IPv6 Aug 19 08:08:40.872714 kernel: NET: Registered PF_PACKET protocol family Aug 19 08:08:40.872722 kernel: Key type dns_resolver registered Aug 19 08:08:40.872731 kernel: IPI shorthand broadcast: enabled Aug 19 08:08:40.872740 kernel: sched_clock: Marking stable (3592003336, 138155561)->(3747601053, -17442156) Aug 19 08:08:40.872748 kernel: registered taskstats version 1 Aug 19 08:08:40.872756 kernel: Loading compiled-in X.509 certificates Aug 19 08:08:40.872765 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.41-flatcar: 93a065b103c00d4b81cc5822e4e7f9674e63afaf' Aug 19 08:08:40.872777 kernel: Demotion targets for Node 0: null Aug 19 08:08:40.872785 kernel: Key type .fscrypt registered Aug 19 08:08:40.872796 kernel: Key type fscrypt-provisioning registered Aug 19 08:08:40.872805 kernel: ima: No TPM chip found, activating TPM-bypass! Aug 19 08:08:40.872813 kernel: ima: Allocated hash algorithm: sha1 Aug 19 08:08:40.872822 kernel: ima: No architecture policies found Aug 19 08:08:40.872830 kernel: clk: Disabling unused clocks Aug 19 08:08:40.872839 kernel: Warning: unable to open an initial console. 
Aug 19 08:08:40.872847 kernel: Freeing unused kernel image (initmem) memory: 54040K Aug 19 08:08:40.872856 kernel: Write protecting the kernel read-only data: 24576k Aug 19 08:08:40.872864 kernel: Freeing unused kernel image (rodata/data gap) memory: 280K Aug 19 08:08:40.872875 kernel: Run /init as init process Aug 19 08:08:40.872884 kernel: with arguments: Aug 19 08:08:40.872892 kernel: /init Aug 19 08:08:40.872901 kernel: with environment: Aug 19 08:08:40.872909 kernel: HOME=/ Aug 19 08:08:40.872941 kernel: TERM=linux Aug 19 08:08:40.872950 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Aug 19 08:08:40.872963 systemd[1]: Successfully made /usr/ read-only. Aug 19 08:08:40.872978 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Aug 19 08:08:40.872988 systemd[1]: Detected virtualization kvm. Aug 19 08:08:40.872997 systemd[1]: Detected architecture x86-64. Aug 19 08:08:40.873006 systemd[1]: Running in initrd. Aug 19 08:08:40.873014 systemd[1]: No hostname configured, using default hostname. Aug 19 08:08:40.873024 systemd[1]: Hostname set to . Aug 19 08:08:40.873032 systemd[1]: Initializing machine ID from VM UUID. Aug 19 08:08:40.873043 systemd[1]: Queued start job for default target initrd.target. Aug 19 08:08:40.873053 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 19 08:08:40.873062 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 19 08:08:40.873071 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Aug 19 08:08:40.873081 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Aug 19 08:08:40.873090 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Aug 19 08:08:40.873099 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Aug 19 08:08:40.873112 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Aug 19 08:08:40.873121 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Aug 19 08:08:40.873132 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 19 08:08:40.873141 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Aug 19 08:08:40.873150 systemd[1]: Reached target paths.target - Path Units. Aug 19 08:08:40.873159 systemd[1]: Reached target slices.target - Slice Units. Aug 19 08:08:40.873168 systemd[1]: Reached target swap.target - Swaps. Aug 19 08:08:40.873177 systemd[1]: Reached target timers.target - Timer Units. Aug 19 08:08:40.873186 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Aug 19 08:08:40.873197 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Aug 19 08:08:40.873209 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Aug 19 08:08:40.873218 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Aug 19 08:08:40.873227 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
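[annotation] The device units queued above (dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device and friends) are systemd's escaped form of the /dev/disk/by-label/... paths: '/' becomes '-' and characters such as '-' are encoded as \xNN. A rough Python approximation of that escaping (systemd-escape --path is the authoritative tool; this sketch only mirrors the pattern visible in the unit names above):

    # Approximation of the escaping systemd applies to device paths, producing unit
    # names like dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device seen in the log above.
    def systemd_path_escape(path: str) -> str:
        trimmed = path.strip("/")
        out = []
        for i, ch in enumerate(trimmed):
            if ch == "/":
                out.append("-")                      # path separators become dashes
            elif ch.isalnum() or ch in ":_" or (ch == "." and i > 0):
                out.append(ch)                       # characters kept verbatim
            else:
                out.append(f"\\x{ord(ch):02x}")      # everything else, including '-', as \xNN
        return "".join(out)

    print(systemd_path_escape("/dev/disk/by-label/EFI-SYSTEM") + ".device")
    # -> dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device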
Aug 19 08:08:40.873236 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Aug 19 08:08:40.873245 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Aug 19 08:08:40.873254 systemd[1]: Reached target sockets.target - Socket Units. Aug 19 08:08:40.873263 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Aug 19 08:08:40.873274 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Aug 19 08:08:40.873283 systemd[1]: Finished network-cleanup.service - Network Cleanup. Aug 19 08:08:40.873293 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Aug 19 08:08:40.873302 systemd[1]: Starting systemd-fsck-usr.service... Aug 19 08:08:40.873316 systemd[1]: Starting systemd-journald.service - Journal Service... Aug 19 08:08:40.873341 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Aug 19 08:08:40.873360 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 19 08:08:40.873380 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Aug 19 08:08:40.874182 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Aug 19 08:08:40.874197 systemd[1]: Finished systemd-fsck-usr.service. Aug 19 08:08:40.874210 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Aug 19 08:08:40.874339 systemd-journald[220]: Collecting audit messages is disabled. Aug 19 08:08:40.874374 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Aug 19 08:08:40.874385 systemd-journald[220]: Journal started Aug 19 08:08:40.874410 systemd-journald[220]: Runtime Journal (/run/log/journal/394bd5ef9de5487e94e4ad652c643ffb) is 6M, max 48.2M, 42.2M free. Aug 19 08:08:40.848239 systemd-modules-load[221]: Inserted module 'overlay' Aug 19 08:08:40.936945 systemd[1]: Started systemd-journald.service - Journal Service. Aug 19 08:08:40.939530 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 19 08:08:40.944686 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Aug 19 08:08:40.945821 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Aug 19 08:08:40.950493 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Aug 19 08:08:40.954948 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Aug 19 08:08:40.957170 systemd-modules-load[221]: Inserted module 'br_netfilter' Aug 19 08:08:40.958067 kernel: Bridge firewalling registered Aug 19 08:08:40.959093 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Aug 19 08:08:40.962833 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Aug 19 08:08:40.965012 systemd-tmpfiles[241]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Aug 19 08:08:40.967154 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 19 08:08:40.970542 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Aug 19 08:08:40.973613 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Aug 19 08:08:40.977733 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Aug 19 08:08:40.979000 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Aug 19 08:08:40.993645 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Aug 19 08:08:41.008539 dracut-cmdline[260]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=cc23dd01793203541561c15ffc568736bb5dae0d652141296dd11bf777bdf42f Aug 19 08:08:41.047081 systemd-resolved[262]: Positive Trust Anchors: Aug 19 08:08:41.047105 systemd-resolved[262]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Aug 19 08:08:41.047136 systemd-resolved[262]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Aug 19 08:08:41.049970 systemd-resolved[262]: Defaulting to hostname 'linux'. Aug 19 08:08:41.055654 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Aug 19 08:08:41.056792 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Aug 19 08:08:41.118958 kernel: SCSI subsystem initialized Aug 19 08:08:41.127951 kernel: Loading iSCSI transport class v2.0-870. Aug 19 08:08:41.139954 kernel: iscsi: registered transport (tcp) Aug 19 08:08:41.163958 kernel: iscsi: registered transport (qla4xxx) Aug 19 08:08:41.164009 kernel: QLogic iSCSI HBA Driver Aug 19 08:08:41.187167 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Aug 19 08:08:41.211734 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Aug 19 08:08:41.214231 systemd[1]: Reached target network-pre.target - Preparation for Network. Aug 19 08:08:41.276349 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Aug 19 08:08:41.278902 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Aug 19 08:08:41.339949 kernel: raid6: avx2x4 gen() 30562 MB/s Aug 19 08:08:41.356948 kernel: raid6: avx2x2 gen() 31270 MB/s Aug 19 08:08:41.373979 kernel: raid6: avx2x1 gen() 25880 MB/s Aug 19 08:08:41.373998 kernel: raid6: using algorithm avx2x2 gen() 31270 MB/s Aug 19 08:08:41.391980 kernel: raid6: .... xor() 19898 MB/s, rmw enabled Aug 19 08:08:41.392012 kernel: raid6: using avx2x2 recovery algorithm Aug 19 08:08:41.413946 kernel: xor: automatically using best checksumming function avx Aug 19 08:08:41.614955 kernel: Btrfs loaded, zoned=no, fsverity=no Aug 19 08:08:41.624631 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Aug 19 08:08:41.626579 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... 
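[annotation] The raid6 lines above are the kernel benchmarking its parity generators and keeping the fastest one, hence "using algorithm avx2x2". The same selection, redone over the measured throughputs in a trivial Python sketch:

    # Throughputs measured by the kernel's raid6 self-benchmark above, in MB/s.
    gen_mb_s = {"avx2x4": 30562, "avx2x2": 31270, "avx2x1": 25880}

    # The kernel keeps whichever generator benchmarked fastest.
    best = max(gen_mb_s, key=gen_mb_s.get)
    print(best, gen_mb_s[best])      # avx2x2 31270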
Aug 19 08:08:41.656848 systemd-udevd[473]: Using default interface naming scheme 'v255'. Aug 19 08:08:41.663329 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 19 08:08:41.664271 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Aug 19 08:08:41.689056 dracut-pre-trigger[477]: rd.md=0: removing MD RAID activation Aug 19 08:08:41.718138 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Aug 19 08:08:41.719705 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Aug 19 08:08:41.822999 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Aug 19 08:08:41.825348 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Aug 19 08:08:41.858969 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Aug 19 08:08:41.862077 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Aug 19 08:08:41.871226 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Aug 19 08:08:41.871251 kernel: GPT:9289727 != 19775487 Aug 19 08:08:41.871262 kernel: GPT:Alternate GPT header not at the end of the disk. Aug 19 08:08:41.871272 kernel: GPT:9289727 != 19775487 Aug 19 08:08:41.871282 kernel: GPT: Use GNU Parted to correct GPT errors. Aug 19 08:08:41.871292 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Aug 19 08:08:41.881951 kernel: libata version 3.00 loaded. Aug 19 08:08:41.881976 kernel: cryptd: max_cpu_qlen set to 1000 Aug 19 08:08:41.899035 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 19 08:08:41.899108 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 19 08:08:41.907183 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Aug 19 08:08:41.911114 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 19 08:08:41.990623 kernel: AES CTR mode by8 optimization enabled Aug 19 08:08:41.990648 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Aug 19 08:08:41.990660 kernel: ahci 0000:00:1f.2: version 3.0 Aug 19 08:08:41.990852 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Aug 19 08:08:41.990869 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Aug 19 08:08:41.993075 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Aug 19 08:08:41.993222 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Aug 19 08:08:41.993361 kernel: scsi host0: ahci Aug 19 08:08:41.993540 kernel: scsi host1: ahci Aug 19 08:08:41.993702 kernel: scsi host2: ahci Aug 19 08:08:41.993858 kernel: scsi host3: ahci Aug 19 08:08:41.962928 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Aug 19 08:08:41.981627 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 19 08:08:41.981752 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 19 08:08:41.998019 kernel: scsi host4: ahci Aug 19 08:08:41.983243 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 19 08:08:42.000530 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. 
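[annotation] The GPT warnings above ("GPT:9289727 != 19775487") are expected on a first boot: the backup GPT header still sits where the end of the original, smaller disk image was, while virtio-blk reports a 19775488-sector disk. A quick Python check of the numbers from the log (the "image built for ~4.4 GiB" reading is an inference, not stated in the log):

    SECTOR = 512                     # logical block size reported by virtio-blk above
    disk_sectors = 19_775_488        # "[vda] 19775488 512-byte logical blocks"
    backup_header_lba = 9_289_727    # where the primary header says the backup lives

    expected_backup_lba = disk_sectors - 1           # GPT keeps its backup in the last LBA
    print(expected_backup_lba)                       # 19775487, hence "9289727 != 19775487"

    image_gib = (backup_header_lba + 1) * SECTOR / 2**30
    disk_gib = disk_sectors * SECTOR / 2**30
    print(f"image end ~{image_gib:.2f} GiB, disk is {disk_gib:.2f} GiB")   # ~4.43 vs 9.43 GiB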
Aug 19 08:08:42.007538 kernel: scsi host5: ahci Aug 19 08:08:42.007724 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 lpm-pol 0 Aug 19 08:08:42.007736 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 lpm-pol 0 Aug 19 08:08:42.007746 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 lpm-pol 0 Aug 19 08:08:42.007757 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 lpm-pol 0 Aug 19 08:08:42.007767 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 lpm-pol 0 Aug 19 08:08:42.007778 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 lpm-pol 0 Aug 19 08:08:42.017030 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Aug 19 08:08:42.030876 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Aug 19 08:08:42.031215 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 19 08:08:42.047327 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Aug 19 08:08:42.047416 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Aug 19 08:08:42.053484 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Aug 19 08:08:42.231457 disk-uuid[638]: Primary Header is updated. Aug 19 08:08:42.231457 disk-uuid[638]: Secondary Entries is updated. Aug 19 08:08:42.231457 disk-uuid[638]: Secondary Header is updated. Aug 19 08:08:42.235960 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Aug 19 08:08:42.239943 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Aug 19 08:08:42.313174 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Aug 19 08:08:42.313230 kernel: ata6: SATA link down (SStatus 0 SControl 300) Aug 19 08:08:42.313242 kernel: ata1: SATA link down (SStatus 0 SControl 300) Aug 19 08:08:42.313944 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Aug 19 08:08:42.314989 kernel: ata3.00: applying bridge limits Aug 19 08:08:42.315946 kernel: ata3.00: configured for UDMA/100 Aug 19 08:08:42.321946 kernel: ata2: SATA link down (SStatus 0 SControl 300) Aug 19 08:08:42.321975 kernel: ata4: SATA link down (SStatus 0 SControl 300) Aug 19 08:08:42.322948 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Aug 19 08:08:42.323946 kernel: ata5: SATA link down (SStatus 0 SControl 300) Aug 19 08:08:42.389431 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Aug 19 08:08:42.389691 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Aug 19 08:08:42.409949 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Aug 19 08:08:42.823721 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Aug 19 08:08:42.825393 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Aug 19 08:08:42.826986 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 19 08:08:42.828143 systemd[1]: Reached target remote-fs.target - Remote File Systems. Aug 19 08:08:42.831229 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Aug 19 08:08:42.861737 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Aug 19 08:08:43.242999 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Aug 19 08:08:43.243056 disk-uuid[639]: The operation has completed successfully. 
Aug 19 08:08:43.273783 systemd[1]: disk-uuid.service: Deactivated successfully. Aug 19 08:08:43.273946 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Aug 19 08:08:43.308446 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Aug 19 08:08:43.333433 sh[667]: Success Aug 19 08:08:43.353327 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Aug 19 08:08:43.353387 kernel: device-mapper: uevent: version 1.0.3 Aug 19 08:08:43.354426 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Aug 19 08:08:43.363948 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Aug 19 08:08:43.403394 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Aug 19 08:08:43.406692 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Aug 19 08:08:43.424042 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Aug 19 08:08:43.431580 kernel: BTRFS: device fsid 99050df3-5e04-4f37-acde-dec46aab7896 devid 1 transid 39 /dev/mapper/usr (253:0) scanned by mount (679) Aug 19 08:08:43.431610 kernel: BTRFS info (device dm-0): first mount of filesystem 99050df3-5e04-4f37-acde-dec46aab7896 Aug 19 08:08:43.431621 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Aug 19 08:08:43.433021 kernel: BTRFS info (device dm-0): using free-space-tree Aug 19 08:08:43.437249 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Aug 19 08:08:43.438611 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Aug 19 08:08:43.440034 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Aug 19 08:08:43.440760 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Aug 19 08:08:43.442547 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Aug 19 08:08:43.471944 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (711) Aug 19 08:08:43.471974 kernel: BTRFS info (device vda6): first mount of filesystem 43dd0637-5e0b-4b8d-a544-a82ca0652f6f Aug 19 08:08:43.473697 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Aug 19 08:08:43.473712 kernel: BTRFS info (device vda6): using free-space-tree Aug 19 08:08:43.481950 kernel: BTRFS info (device vda6): last unmount of filesystem 43dd0637-5e0b-4b8d-a544-a82ca0652f6f Aug 19 08:08:43.482762 systemd[1]: Finished ignition-setup.service - Ignition (setup). Aug 19 08:08:43.486158 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Aug 19 08:08:43.655700 ignition[753]: Ignition 2.21.0 Aug 19 08:08:43.655714 ignition[753]: Stage: fetch-offline Aug 19 08:08:43.655747 ignition[753]: no configs at "/usr/lib/ignition/base.d" Aug 19 08:08:43.655756 ignition[753]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Aug 19 08:08:43.655865 ignition[753]: parsed url from cmdline: "" Aug 19 08:08:43.655869 ignition[753]: no config URL provided Aug 19 08:08:43.655876 ignition[753]: reading system config file "/usr/lib/ignition/user.ign" Aug 19 08:08:43.655886 ignition[753]: no config at "/usr/lib/ignition/user.ign" Aug 19 08:08:43.655911 ignition[753]: op(1): [started] loading QEMU firmware config module Aug 19 08:08:43.655931 ignition[753]: op(1): executing: "modprobe" "qemu_fw_cfg" Aug 19 08:08:43.665893 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Aug 19 08:08:43.668689 ignition[753]: op(1): [finished] loading QEMU firmware config module Aug 19 08:08:43.672035 systemd[1]: Starting systemd-networkd.service - Network Configuration... Aug 19 08:08:43.711389 ignition[753]: parsing config with SHA512: 5320effc70ef18e59d81fd9c6e6e1307b36285e851f7fda142385cf86b1764963dc62ab22e594521608a13d2f172ed9fdaadd6adb1dcd4a33bbd1b96f190ff88 Aug 19 08:08:43.739262 unknown[753]: fetched base config from "system" Aug 19 08:08:43.739276 unknown[753]: fetched user config from "qemu" Aug 19 08:08:43.739623 ignition[753]: fetch-offline: fetch-offline passed Aug 19 08:08:43.739682 ignition[753]: Ignition finished successfully Aug 19 08:08:43.742911 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Aug 19 08:08:43.772597 systemd-networkd[858]: lo: Link UP Aug 19 08:08:43.772608 systemd-networkd[858]: lo: Gained carrier Aug 19 08:08:43.774299 systemd-networkd[858]: Enumeration completed Aug 19 08:08:43.774380 systemd[1]: Started systemd-networkd.service - Network Configuration. Aug 19 08:08:43.774702 systemd-networkd[858]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 19 08:08:43.774707 systemd-networkd[858]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Aug 19 08:08:43.775748 systemd-networkd[858]: eth0: Link UP Aug 19 08:08:43.775912 systemd-networkd[858]: eth0: Gained carrier Aug 19 08:08:43.776008 systemd-networkd[858]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 19 08:08:43.777770 systemd[1]: Reached target network.target - Network. Aug 19 08:08:43.779887 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Aug 19 08:08:43.780736 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Aug 19 08:08:43.795972 systemd-networkd[858]: eth0: DHCPv4 address 10.0.0.50/16, gateway 10.0.0.1 acquired from 10.0.0.1 Aug 19 08:08:43.826389 ignition[862]: Ignition 2.21.0 Aug 19 08:08:43.826402 ignition[862]: Stage: kargs Aug 19 08:08:43.826541 ignition[862]: no configs at "/usr/lib/ignition/base.d" Aug 19 08:08:43.826552 ignition[862]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Aug 19 08:08:43.827991 ignition[862]: kargs: kargs passed Aug 19 08:08:43.828041 ignition[862]: Ignition finished successfully Aug 19 08:08:43.832424 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). 
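[annotation] Ignition's fetch-offline stage above loads qemu_fw_cfg and then logs "parsing config with SHA512: ...". On QEMU the user config is conventionally handed in as a fw_cfg blob named opt/com.coreos/config, which the module exposes in sysfs; the key and path below are assumptions based on that convention, not taken from this log. A sketch that reads the blob and prints a digest comparable to the one Ignition logged:

    import hashlib
    from pathlib import Path

    # After "modprobe qemu_fw_cfg" (as Ignition's op(1) does above), fw_cfg blobs show up
    # in sysfs. The blob name "opt/com.coreos/config" is the usual key for passing an
    # Ignition config to a QEMU guest; it is an assumption here, not stated in this log.
    CFG = Path("/sys/firmware/qemu_fw_cfg/by_name/opt/com.coreos/config/raw")

    def fw_cfg_config_sha512() -> str:
        """SHA512 of the injected config, comparable to Ignition's 'parsing config with SHA512' line."""
        return hashlib.sha512(CFG.read_bytes()).hexdigest()

    if __name__ == "__main__":
        print(fw_cfg_config_sha512())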
Aug 19 08:08:43.835443 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Aug 19 08:08:43.870515 ignition[872]: Ignition 2.21.0 Aug 19 08:08:43.870527 ignition[872]: Stage: disks Aug 19 08:08:43.870706 ignition[872]: no configs at "/usr/lib/ignition/base.d" Aug 19 08:08:43.870718 ignition[872]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Aug 19 08:08:43.875668 ignition[872]: disks: disks passed Aug 19 08:08:43.875779 ignition[872]: Ignition finished successfully Aug 19 08:08:43.878739 systemd[1]: Finished ignition-disks.service - Ignition (disks). Aug 19 08:08:43.880804 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Aug 19 08:08:43.880907 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Aug 19 08:08:43.882906 systemd[1]: Reached target local-fs.target - Local File Systems. Aug 19 08:08:43.885101 systemd[1]: Reached target sysinit.target - System Initialization. Aug 19 08:08:43.885411 systemd[1]: Reached target basic.target - Basic System. Aug 19 08:08:43.890393 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Aug 19 08:08:43.931148 systemd-fsck[882]: ROOT: clean, 15/553520 files, 52789/553472 blocks Aug 19 08:08:43.938634 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Aug 19 08:08:43.942691 systemd[1]: Mounting sysroot.mount - /sysroot... Aug 19 08:08:44.055947 kernel: EXT4-fs (vda9): mounted filesystem 41966107-04fa-426e-9830-6b4efa50e27b r/w with ordered data mode. Quota mode: none. Aug 19 08:08:44.056599 systemd[1]: Mounted sysroot.mount - /sysroot. Aug 19 08:08:44.058094 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Aug 19 08:08:44.060467 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Aug 19 08:08:44.062068 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Aug 19 08:08:44.063120 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Aug 19 08:08:44.063160 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Aug 19 08:08:44.063183 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Aug 19 08:08:44.072052 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Aug 19 08:08:44.074495 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Aug 19 08:08:44.080133 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (890) Aug 19 08:08:44.080156 kernel: BTRFS info (device vda6): first mount of filesystem 43dd0637-5e0b-4b8d-a544-a82ca0652f6f Aug 19 08:08:44.080167 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Aug 19 08:08:44.080184 kernel: BTRFS info (device vda6): using free-space-tree Aug 19 08:08:44.083635 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Aug 19 08:08:44.123075 initrd-setup-root[914]: cut: /sysroot/etc/passwd: No such file or directory Aug 19 08:08:44.127772 initrd-setup-root[921]: cut: /sysroot/etc/group: No such file or directory Aug 19 08:08:44.132381 initrd-setup-root[928]: cut: /sysroot/etc/shadow: No such file or directory Aug 19 08:08:44.137451 initrd-setup-root[935]: cut: /sysroot/etc/gshadow: No such file or directory Aug 19 08:08:44.226896 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. 
Aug 19 08:08:44.229039 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Aug 19 08:08:44.230549 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Aug 19 08:08:44.253986 kernel: BTRFS info (device vda6): last unmount of filesystem 43dd0637-5e0b-4b8d-a544-a82ca0652f6f Aug 19 08:08:44.265797 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Aug 19 08:08:44.279644 ignition[1004]: INFO : Ignition 2.21.0 Aug 19 08:08:44.279644 ignition[1004]: INFO : Stage: mount Aug 19 08:08:44.281370 ignition[1004]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 19 08:08:44.281370 ignition[1004]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Aug 19 08:08:44.283438 ignition[1004]: INFO : mount: mount passed Aug 19 08:08:44.283438 ignition[1004]: INFO : Ignition finished successfully Aug 19 08:08:44.287284 systemd[1]: Finished ignition-mount.service - Ignition (mount). Aug 19 08:08:44.289318 systemd[1]: Starting ignition-files.service - Ignition (files)... Aug 19 08:08:44.430425 systemd[1]: sysroot-oem.mount: Deactivated successfully. Aug 19 08:08:44.431908 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Aug 19 08:08:44.466210 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1016) Aug 19 08:08:44.466241 kernel: BTRFS info (device vda6): first mount of filesystem 43dd0637-5e0b-4b8d-a544-a82ca0652f6f Aug 19 08:08:44.466252 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Aug 19 08:08:44.467039 kernel: BTRFS info (device vda6): using free-space-tree Aug 19 08:08:44.470791 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Aug 19 08:08:44.499340 ignition[1033]: INFO : Ignition 2.21.0 Aug 19 08:08:44.499340 ignition[1033]: INFO : Stage: files Aug 19 08:08:44.501290 ignition[1033]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 19 08:08:44.501290 ignition[1033]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Aug 19 08:08:44.501290 ignition[1033]: DEBUG : files: compiled without relabeling support, skipping Aug 19 08:08:44.504681 ignition[1033]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Aug 19 08:08:44.504681 ignition[1033]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Aug 19 08:08:44.504681 ignition[1033]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Aug 19 08:08:44.509049 ignition[1033]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Aug 19 08:08:44.509049 ignition[1033]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Aug 19 08:08:44.509049 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Aug 19 08:08:44.509049 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Aug 19 08:08:44.505260 unknown[1033]: wrote ssh authorized keys file for user: core Aug 19 08:08:44.689448 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Aug 19 08:08:44.860778 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Aug 19 08:08:44.860778 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Aug 19 08:08:44.864999 ignition[1033]: INFO : 
files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Aug 19 08:08:44.864999 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Aug 19 08:08:44.864999 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Aug 19 08:08:44.864999 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Aug 19 08:08:44.864999 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Aug 19 08:08:44.864999 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Aug 19 08:08:44.864999 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Aug 19 08:08:44.878117 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Aug 19 08:08:44.878117 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Aug 19 08:08:44.878117 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Aug 19 08:08:44.878117 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Aug 19 08:08:44.878117 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Aug 19 08:08:44.878117 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Aug 19 08:08:45.144190 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Aug 19 08:08:45.266124 systemd-networkd[858]: eth0: Gained IPv6LL Aug 19 08:08:45.422846 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Aug 19 08:08:45.422846 ignition[1033]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Aug 19 08:08:45.426686 ignition[1033]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Aug 19 08:08:45.483083 ignition[1033]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Aug 19 08:08:45.483083 ignition[1033]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Aug 19 08:08:45.483083 ignition[1033]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Aug 19 08:08:45.487278 ignition[1033]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Aug 19 08:08:45.489073 ignition[1033]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Aug 19 
08:08:45.489073 ignition[1033]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Aug 19 08:08:45.489073 ignition[1033]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Aug 19 08:08:45.509087 ignition[1033]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Aug 19 08:08:45.515239 ignition[1033]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Aug 19 08:08:45.517146 ignition[1033]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Aug 19 08:08:45.517146 ignition[1033]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Aug 19 08:08:45.517146 ignition[1033]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Aug 19 08:08:45.517146 ignition[1033]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Aug 19 08:08:45.517146 ignition[1033]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Aug 19 08:08:45.517146 ignition[1033]: INFO : files: files passed Aug 19 08:08:45.517146 ignition[1033]: INFO : Ignition finished successfully Aug 19 08:08:45.526404 systemd[1]: Finished ignition-files.service - Ignition (files). Aug 19 08:08:45.529677 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Aug 19 08:08:45.532703 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Aug 19 08:08:45.547495 systemd[1]: ignition-quench.service: Deactivated successfully. Aug 19 08:08:45.547653 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Aug 19 08:08:45.551167 initrd-setup-root-after-ignition[1061]: grep: /sysroot/oem/oem-release: No such file or directory Aug 19 08:08:45.552810 initrd-setup-root-after-ignition[1064]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Aug 19 08:08:45.552810 initrd-setup-root-after-ignition[1064]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Aug 19 08:08:45.556253 initrd-setup-root-after-ignition[1068]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Aug 19 08:08:45.554302 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Aug 19 08:08:45.558579 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Aug 19 08:08:45.560651 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Aug 19 08:08:45.671129 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Aug 19 08:08:45.671289 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Aug 19 08:08:45.672027 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Aug 19 08:08:45.675714 systemd[1]: Reached target initrd.target - Initrd Default Target. Aug 19 08:08:45.676094 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Aug 19 08:08:45.678792 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Aug 19 08:08:45.729344 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Aug 19 08:08:45.733187 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... 
Aug 19 08:08:45.765125 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Aug 19 08:08:45.766345 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 19 08:08:45.766632 systemd[1]: Stopped target timers.target - Timer Units. Aug 19 08:08:45.766966 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Aug 19 08:08:45.767071 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Aug 19 08:08:45.767760 systemd[1]: Stopped target initrd.target - Initrd Default Target. Aug 19 08:08:45.768235 systemd[1]: Stopped target basic.target - Basic System. Aug 19 08:08:45.768551 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Aug 19 08:08:45.768879 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Aug 19 08:08:45.769358 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Aug 19 08:08:45.769686 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Aug 19 08:08:45.770167 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Aug 19 08:08:45.770468 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Aug 19 08:08:45.770803 systemd[1]: Stopped target sysinit.target - System Initialization. Aug 19 08:08:45.771272 systemd[1]: Stopped target local-fs.target - Local File Systems. Aug 19 08:08:45.771576 systemd[1]: Stopped target swap.target - Swaps. Aug 19 08:08:45.771879 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Aug 19 08:08:45.772021 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Aug 19 08:08:45.772681 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Aug 19 08:08:45.773177 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 19 08:08:45.773457 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Aug 19 08:08:45.773786 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 19 08:08:45.800586 systemd[1]: dracut-initqueue.service: Deactivated successfully. Aug 19 08:08:45.800698 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Aug 19 08:08:45.803721 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Aug 19 08:08:45.803858 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Aug 19 08:08:45.804800 systemd[1]: Stopped target paths.target - Path Units. Aug 19 08:08:45.805212 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Aug 19 08:08:45.810056 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 19 08:08:45.811604 systemd[1]: Stopped target slices.target - Slice Units. Aug 19 08:08:45.812770 systemd[1]: Stopped target sockets.target - Socket Units. Aug 19 08:08:45.813267 systemd[1]: iscsid.socket: Deactivated successfully. Aug 19 08:08:45.813412 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Aug 19 08:08:45.816470 systemd[1]: iscsiuio.socket: Deactivated successfully. Aug 19 08:08:45.816579 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Aug 19 08:08:45.818167 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Aug 19 08:08:45.818328 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Aug 19 08:08:45.819803 systemd[1]: ignition-files.service: Deactivated successfully. 
Aug 19 08:08:45.819959 systemd[1]: Stopped ignition-files.service - Ignition (files). Aug 19 08:08:45.824233 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Aug 19 08:08:45.826052 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Aug 19 08:08:45.826197 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Aug 19 08:08:45.829311 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Aug 19 08:08:45.829427 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Aug 19 08:08:45.829590 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Aug 19 08:08:45.830059 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Aug 19 08:08:45.830224 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Aug 19 08:08:45.835681 systemd[1]: initrd-cleanup.service: Deactivated successfully. Aug 19 08:08:45.835838 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Aug 19 08:08:45.851392 ignition[1088]: INFO : Ignition 2.21.0 Aug 19 08:08:45.851392 ignition[1088]: INFO : Stage: umount Aug 19 08:08:45.853227 ignition[1088]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 19 08:08:45.853227 ignition[1088]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Aug 19 08:08:45.855509 ignition[1088]: INFO : umount: umount passed Aug 19 08:08:45.855509 ignition[1088]: INFO : Ignition finished successfully Aug 19 08:08:45.857538 systemd[1]: ignition-mount.service: Deactivated successfully. Aug 19 08:08:45.857679 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Aug 19 08:08:45.861631 systemd[1]: sysroot-boot.mount: Deactivated successfully. Aug 19 08:08:45.862118 systemd[1]: Stopped target network.target - Network. Aug 19 08:08:45.863481 systemd[1]: ignition-disks.service: Deactivated successfully. Aug 19 08:08:45.863565 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Aug 19 08:08:45.865540 systemd[1]: ignition-kargs.service: Deactivated successfully. Aug 19 08:08:45.865604 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Aug 19 08:08:45.867733 systemd[1]: ignition-setup.service: Deactivated successfully. Aug 19 08:08:45.867909 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Aug 19 08:08:45.868868 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Aug 19 08:08:45.868955 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Aug 19 08:08:45.869673 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Aug 19 08:08:45.872676 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Aug 19 08:08:45.882752 systemd[1]: systemd-resolved.service: Deactivated successfully. Aug 19 08:08:45.882995 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Aug 19 08:08:45.887503 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Aug 19 08:08:45.887945 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Aug 19 08:08:45.888005 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Aug 19 08:08:45.893299 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Aug 19 08:08:45.893599 systemd[1]: systemd-networkd.service: Deactivated successfully. Aug 19 08:08:45.893788 systemd[1]: Stopped systemd-networkd.service - Network Configuration. 
Aug 19 08:08:45.897888 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Aug 19 08:08:45.898391 systemd[1]: Stopped target network-pre.target - Preparation for Network. Aug 19 08:08:45.901295 systemd[1]: systemd-networkd.socket: Deactivated successfully. Aug 19 08:08:45.901347 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Aug 19 08:08:45.902797 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Aug 19 08:08:45.904376 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Aug 19 08:08:45.904429 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Aug 19 08:08:45.904999 systemd[1]: systemd-sysctl.service: Deactivated successfully. Aug 19 08:08:45.905059 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Aug 19 08:08:45.910411 systemd[1]: systemd-modules-load.service: Deactivated successfully. Aug 19 08:08:45.910460 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Aug 19 08:08:45.911502 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 19 08:08:45.912863 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Aug 19 08:08:45.931728 systemd[1]: systemd-udevd.service: Deactivated successfully. Aug 19 08:08:45.938161 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 19 08:08:45.940907 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Aug 19 08:08:45.940980 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Aug 19 08:08:45.943041 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Aug 19 08:08:45.943116 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Aug 19 08:08:45.945204 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Aug 19 08:08:45.945282 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Aug 19 08:08:45.951626 systemd[1]: dracut-cmdline.service: Deactivated successfully. Aug 19 08:08:45.951694 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Aug 19 08:08:45.954482 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Aug 19 08:08:45.954544 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 19 08:08:45.958607 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Aug 19 08:08:45.959645 systemd[1]: systemd-network-generator.service: Deactivated successfully. Aug 19 08:08:45.959699 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Aug 19 08:08:45.964571 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Aug 19 08:08:45.964644 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 19 08:08:45.968146 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 19 08:08:45.968229 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 19 08:08:45.972063 systemd[1]: network-cleanup.service: Deactivated successfully. Aug 19 08:08:45.972226 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Aug 19 08:08:45.976020 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Aug 19 08:08:45.976141 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. 
Aug 19 08:08:46.016272 systemd[1]: sysroot-boot.service: Deactivated successfully. Aug 19 08:08:46.016458 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Aug 19 08:08:46.019742 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Aug 19 08:08:46.021683 systemd[1]: initrd-setup-root.service: Deactivated successfully. Aug 19 08:08:46.021794 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Aug 19 08:08:46.024982 systemd[1]: Starting initrd-switch-root.service - Switch Root... Aug 19 08:08:46.049518 systemd[1]: Switching root. Aug 19 08:08:46.090885 systemd-journald[220]: Journal stopped Aug 19 08:08:47.284744 systemd-journald[220]: Received SIGTERM from PID 1 (systemd). Aug 19 08:08:47.284807 kernel: SELinux: policy capability network_peer_controls=1 Aug 19 08:08:47.284822 kernel: SELinux: policy capability open_perms=1 Aug 19 08:08:47.284837 kernel: SELinux: policy capability extended_socket_class=1 Aug 19 08:08:47.284848 kernel: SELinux: policy capability always_check_network=0 Aug 19 08:08:47.284860 kernel: SELinux: policy capability cgroup_seclabel=1 Aug 19 08:08:47.284877 kernel: SELinux: policy capability nnp_nosuid_transition=1 Aug 19 08:08:47.284888 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Aug 19 08:08:47.284904 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Aug 19 08:08:47.284930 kernel: SELinux: policy capability userspace_initial_context=0 Aug 19 08:08:47.284942 kernel: audit: type=1403 audit(1755590926.467:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Aug 19 08:08:47.284962 systemd[1]: Successfully loaded SELinux policy in 62.822ms. Aug 19 08:08:47.284990 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 8.423ms. Aug 19 08:08:47.285003 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Aug 19 08:08:47.285016 systemd[1]: Detected virtualization kvm. Aug 19 08:08:47.285028 systemd[1]: Detected architecture x86-64. Aug 19 08:08:47.285040 systemd[1]: Detected first boot. Aug 19 08:08:47.285169 systemd[1]: Initializing machine ID from VM UUID. Aug 19 08:08:47.285182 zram_generator::config[1135]: No configuration found. Aug 19 08:08:47.285195 kernel: Guest personality initialized and is inactive Aug 19 08:08:47.285209 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Aug 19 08:08:47.285221 kernel: Initialized host personality Aug 19 08:08:47.285232 kernel: NET: Registered PF_VSOCK protocol family Aug 19 08:08:47.285244 systemd[1]: Populated /etc with preset unit settings. Aug 19 08:08:47.285257 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Aug 19 08:08:47.285269 systemd[1]: initrd-switch-root.service: Deactivated successfully. Aug 19 08:08:47.285282 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Aug 19 08:08:47.285294 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Aug 19 08:08:47.285306 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Aug 19 08:08:47.285326 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Aug 19 08:08:47.285338 systemd[1]: Created slice system-getty.slice - Slice /system/getty. 
Aug 19 08:08:47.285351 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Aug 19 08:08:47.285364 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Aug 19 08:08:47.285376 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Aug 19 08:08:47.285388 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Aug 19 08:08:47.285400 systemd[1]: Created slice user.slice - User and Session Slice. Aug 19 08:08:47.285413 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 19 08:08:47.285428 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 19 08:08:47.285713 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Aug 19 08:08:47.285726 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Aug 19 08:08:47.285746 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Aug 19 08:08:47.285759 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Aug 19 08:08:47.285771 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Aug 19 08:08:47.285784 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 19 08:08:47.285796 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Aug 19 08:08:47.285811 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Aug 19 08:08:47.285823 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Aug 19 08:08:47.285836 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Aug 19 08:08:47.285848 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Aug 19 08:08:47.285860 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 19 08:08:47.285872 systemd[1]: Reached target remote-fs.target - Remote File Systems. Aug 19 08:08:47.285884 systemd[1]: Reached target slices.target - Slice Units. Aug 19 08:08:47.285896 systemd[1]: Reached target swap.target - Swaps. Aug 19 08:08:47.285908 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Aug 19 08:08:47.285945 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Aug 19 08:08:47.285957 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Aug 19 08:08:47.285969 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Aug 19 08:08:47.285982 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Aug 19 08:08:47.285993 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Aug 19 08:08:47.286005 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Aug 19 08:08:47.286018 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Aug 19 08:08:47.286030 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Aug 19 08:08:47.286043 systemd[1]: Mounting media.mount - External Media Directory... Aug 19 08:08:47.286058 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 19 08:08:47.286071 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... 
Aug 19 08:08:47.286083 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Aug 19 08:08:47.286095 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Aug 19 08:08:47.286108 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Aug 19 08:08:47.286120 systemd[1]: Reached target machines.target - Containers. Aug 19 08:08:47.286132 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Aug 19 08:08:47.286144 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 19 08:08:47.286168 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Aug 19 08:08:47.286180 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Aug 19 08:08:47.286192 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 19 08:08:47.286205 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Aug 19 08:08:47.286217 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 19 08:08:47.286229 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Aug 19 08:08:47.286241 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 19 08:08:47.286254 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Aug 19 08:08:47.286267 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Aug 19 08:08:47.286282 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Aug 19 08:08:47.286295 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Aug 19 08:08:47.286309 systemd[1]: Stopped systemd-fsck-usr.service. Aug 19 08:08:47.286324 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Aug 19 08:08:47.286336 kernel: loop: module loaded Aug 19 08:08:47.286348 systemd[1]: Starting systemd-journald.service - Journal Service... Aug 19 08:08:47.286360 kernel: fuse: init (API version 7.41) Aug 19 08:08:47.286371 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Aug 19 08:08:47.286384 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Aug 19 08:08:47.286398 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Aug 19 08:08:47.286411 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Aug 19 08:08:47.286423 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Aug 19 08:08:47.286442 systemd[1]: verity-setup.service: Deactivated successfully. Aug 19 08:08:47.286457 systemd[1]: Stopped verity-setup.service. Aug 19 08:08:47.286470 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 19 08:08:47.286482 kernel: ACPI: bus type drm_connector registered Aug 19 08:08:47.286494 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. 
Aug 19 08:08:47.286506 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Aug 19 08:08:47.286519 systemd[1]: Mounted media.mount - External Media Directory. Aug 19 08:08:47.286556 systemd-journald[1206]: Collecting audit messages is disabled. Aug 19 08:08:47.286579 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Aug 19 08:08:47.286591 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Aug 19 08:08:47.286607 systemd-journald[1206]: Journal started Aug 19 08:08:47.286629 systemd-journald[1206]: Runtime Journal (/run/log/journal/394bd5ef9de5487e94e4ad652c643ffb) is 6M, max 48.2M, 42.2M free. Aug 19 08:08:47.016129 systemd[1]: Queued start job for default target multi-user.target. Aug 19 08:08:47.040253 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Aug 19 08:08:47.040723 systemd[1]: systemd-journald.service: Deactivated successfully. Aug 19 08:08:47.288959 systemd[1]: Started systemd-journald.service - Journal Service. Aug 19 08:08:47.290527 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Aug 19 08:08:47.292140 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Aug 19 08:08:47.293864 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Aug 19 08:08:47.295621 systemd[1]: modprobe@configfs.service: Deactivated successfully. Aug 19 08:08:47.295857 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Aug 19 08:08:47.297547 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 19 08:08:47.297778 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 19 08:08:47.299418 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 19 08:08:47.299641 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Aug 19 08:08:47.301230 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 19 08:08:47.301454 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 19 08:08:47.303214 systemd[1]: modprobe@fuse.service: Deactivated successfully. Aug 19 08:08:47.303439 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Aug 19 08:08:47.305045 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 19 08:08:47.305266 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 19 08:08:47.306894 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Aug 19 08:08:47.308587 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Aug 19 08:08:47.310430 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Aug 19 08:08:47.312283 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Aug 19 08:08:47.327614 systemd[1]: Reached target network-pre.target - Preparation for Network. Aug 19 08:08:47.330561 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Aug 19 08:08:47.334054 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Aug 19 08:08:47.335814 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Aug 19 08:08:47.335849 systemd[1]: Reached target local-fs.target - Local File Systems. Aug 19 08:08:47.338043 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. 
Aug 19 08:08:47.342105 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Aug 19 08:08:47.343828 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 19 08:08:47.345409 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Aug 19 08:08:47.349149 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Aug 19 08:08:47.351435 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 19 08:08:47.354713 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Aug 19 08:08:47.356115 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Aug 19 08:08:47.359200 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Aug 19 08:08:47.364423 systemd-journald[1206]: Time spent on flushing to /var/log/journal/394bd5ef9de5487e94e4ad652c643ffb is 22.451ms for 1031 entries. Aug 19 08:08:47.364423 systemd-journald[1206]: System Journal (/var/log/journal/394bd5ef9de5487e94e4ad652c643ffb) is 8M, max 195.6M, 187.6M free. Aug 19 08:08:47.400874 systemd-journald[1206]: Received client request to flush runtime journal. Aug 19 08:08:47.423086 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Aug 19 08:08:47.427047 systemd[1]: Starting systemd-sysusers.service - Create System Users... Aug 19 08:08:47.435876 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Aug 19 08:08:47.439645 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Aug 19 08:08:47.441438 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Aug 19 08:08:47.444220 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Aug 19 08:08:47.446753 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Aug 19 08:08:47.450960 kernel: loop0: detected capacity change from 0 to 128016 Aug 19 08:08:47.456701 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Aug 19 08:08:47.461882 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Aug 19 08:08:47.464676 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Aug 19 08:08:47.469941 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Aug 19 08:08:47.485798 systemd[1]: Finished systemd-sysusers.service - Create System Users. Aug 19 08:08:47.489775 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Aug 19 08:08:47.492955 kernel: loop1: detected capacity change from 0 to 111000 Aug 19 08:08:47.503155 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Aug 19 08:08:47.523289 kernel: loop2: detected capacity change from 0 to 224512 Aug 19 08:08:47.531835 systemd-tmpfiles[1271]: ACLs are not supported, ignoring. Aug 19 08:08:47.531855 systemd-tmpfiles[1271]: ACLs are not supported, ignoring. Aug 19 08:08:47.538533 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Aug 19 08:08:47.570956 kernel: loop3: detected capacity change from 0 to 128016 Aug 19 08:08:47.582954 kernel: loop4: detected capacity change from 0 to 111000 Aug 19 08:08:47.594964 kernel: loop5: detected capacity change from 0 to 224512 Aug 19 08:08:47.608322 (sd-merge)[1280]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Aug 19 08:08:47.609330 (sd-merge)[1280]: Merged extensions into '/usr'. Aug 19 08:08:47.614801 systemd[1]: Reload requested from client PID 1255 ('systemd-sysext') (unit systemd-sysext.service)... Aug 19 08:08:47.614822 systemd[1]: Reloading... Aug 19 08:08:47.708971 zram_generator::config[1308]: No configuration found. Aug 19 08:08:47.861108 ldconfig[1249]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Aug 19 08:08:47.941247 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Aug 19 08:08:47.942586 systemd[1]: Reloading finished in 327 ms. Aug 19 08:08:48.006397 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Aug 19 08:08:48.008479 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Aug 19 08:08:48.031701 systemd[1]: Starting ensure-sysext.service... Aug 19 08:08:48.033751 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Aug 19 08:08:48.051459 systemd[1]: Reload requested from client PID 1343 ('systemctl') (unit ensure-sysext.service)... Aug 19 08:08:48.051565 systemd[1]: Reloading... Aug 19 08:08:48.061065 systemd-tmpfiles[1344]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Aug 19 08:08:48.061108 systemd-tmpfiles[1344]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Aug 19 08:08:48.061417 systemd-tmpfiles[1344]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Aug 19 08:08:48.061699 systemd-tmpfiles[1344]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Aug 19 08:08:48.063268 systemd-tmpfiles[1344]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Aug 19 08:08:48.063652 systemd-tmpfiles[1344]: ACLs are not supported, ignoring. Aug 19 08:08:48.063801 systemd-tmpfiles[1344]: ACLs are not supported, ignoring. Aug 19 08:08:48.068329 systemd-tmpfiles[1344]: Detected autofs mount point /boot during canonicalization of boot. Aug 19 08:08:48.068405 systemd-tmpfiles[1344]: Skipping /boot Aug 19 08:08:48.080546 systemd-tmpfiles[1344]: Detected autofs mount point /boot during canonicalization of boot. Aug 19 08:08:48.080562 systemd-tmpfiles[1344]: Skipping /boot Aug 19 08:08:48.104944 zram_generator::config[1367]: No configuration found. Aug 19 08:08:48.346903 systemd[1]: Reloading finished in 294 ms. Aug 19 08:08:48.371662 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Aug 19 08:08:48.391280 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Aug 19 08:08:48.400859 systemd[1]: Starting audit-rules.service - Load Audit Rules... Aug 19 08:08:48.403557 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Aug 19 08:08:48.406079 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Aug 19 08:08:48.412154 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
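The systemd-tmpfiles messages above ("Duplicate line for path ..., ignoring") come from tmpfiles.d drop-ins that declare the same path more than once. A rough sketch for spotting such duplicates is below; it is only an approximation, treating the second whitespace-separated field of each non-comment line as the path and ignoring the real override and precedence rules.

    from collections import defaultdict
    from pathlib import Path

    seen = defaultdict(list)
    for d in ("/usr/lib/tmpfiles.d", "/etc/tmpfiles.d"):
        base = Path(d)
        if not base.is_dir():
            continue
        for conf in sorted(base.glob("*.conf")):
            for lineno, line in enumerate(conf.read_text().splitlines(), 1):
                fields = line.split()
                # Skip comments and blank lines; field 2 is the path.
                if len(fields) >= 2 and not fields[0].startswith("#"):
                    seen[fields[1]].append(f"{conf.name}:{lineno}")

    for path, places in sorted(seen.items()):
        if len(places) > 1:
            print(path, "->", ", ".join(places))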
Aug 19 08:08:48.414941 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 19 08:08:48.418694 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Aug 19 08:08:48.422259 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 19 08:08:48.422436 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 19 08:08:48.423783 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 19 08:08:48.427979 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 19 08:08:48.434493 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 19 08:08:48.435663 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 19 08:08:48.435791 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Aug 19 08:08:48.435884 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 19 08:08:48.437167 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 19 08:08:48.437400 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 19 08:08:48.439369 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 19 08:08:48.439829 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 19 08:08:48.443681 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 19 08:08:48.444294 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 19 08:08:48.446235 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Aug 19 08:08:48.460538 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Aug 19 08:08:48.464556 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 19 08:08:48.464875 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 19 08:08:48.467021 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 19 08:08:48.467433 systemd-udevd[1414]: Using default interface naming scheme 'v255'. Aug 19 08:08:48.470165 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Aug 19 08:08:48.473636 augenrules[1445]: No rules Aug 19 08:08:48.477428 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 19 08:08:48.479999 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 19 08:08:48.481253 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 19 08:08:48.481442 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Aug 19 08:08:48.483743 systemd[1]: Starting systemd-update-done.service - Update is Completed... 
Aug 19 08:08:48.487793 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Aug 19 08:08:48.489375 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 19 08:08:48.491240 systemd[1]: audit-rules.service: Deactivated successfully. Aug 19 08:08:48.491603 systemd[1]: Finished audit-rules.service - Load Audit Rules. Aug 19 08:08:48.493112 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 19 08:08:48.493356 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 19 08:08:48.495903 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Aug 19 08:08:48.497830 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 19 08:08:48.498088 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Aug 19 08:08:48.499754 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 19 08:08:48.499988 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 19 08:08:48.501822 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 19 08:08:48.502120 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 19 08:08:48.504774 systemd[1]: Finished systemd-update-done.service - Update is Completed. Aug 19 08:08:48.510671 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 19 08:08:48.513130 systemd[1]: Finished ensure-sysext.service. Aug 19 08:08:48.534763 systemd[1]: Starting systemd-networkd.service - Network Configuration... Aug 19 08:08:48.537461 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 19 08:08:48.537582 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Aug 19 08:08:48.543506 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Aug 19 08:08:48.545971 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Aug 19 08:08:48.584034 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Aug 19 08:08:48.620321 systemd[1]: Started systemd-userdbd.service - User Database Manager. Aug 19 08:08:48.663038 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Aug 19 08:08:48.666669 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Aug 19 08:08:48.677557 kernel: mousedev: PS/2 mouse device common for all mice Aug 19 08:08:48.687946 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Aug 19 08:08:48.694016 kernel: ACPI: button: Power Button [PWRF] Aug 19 08:08:48.697396 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. 
Aug 19 08:08:48.725990 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Aug 19 08:08:48.726512 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Aug 19 08:08:48.726784 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Aug 19 08:08:48.825722 systemd-networkd[1479]: lo: Link UP Aug 19 08:08:48.825738 systemd-networkd[1479]: lo: Gained carrier Aug 19 08:08:48.827455 systemd-networkd[1479]: Enumeration completed Aug 19 08:08:48.827576 systemd[1]: Started systemd-networkd.service - Network Configuration. Aug 19 08:08:48.829130 systemd-networkd[1479]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 19 08:08:48.829144 systemd-networkd[1479]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Aug 19 08:08:48.831241 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Aug 19 08:08:48.832783 systemd-networkd[1479]: eth0: Link UP Aug 19 08:08:48.832969 systemd-networkd[1479]: eth0: Gained carrier Aug 19 08:08:48.832999 systemd-networkd[1479]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 19 08:08:48.835114 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Aug 19 08:08:48.926257 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 19 08:08:48.928970 systemd-networkd[1479]: eth0: DHCPv4 address 10.0.0.50/16, gateway 10.0.0.1 acquired from 10.0.0.1 Aug 19 08:08:48.954993 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 19 08:08:48.955256 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 19 08:08:48.959180 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 19 08:08:48.963883 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Aug 19 08:08:48.966039 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Aug 19 08:08:48.969285 systemd[1]: Reached target time-set.target - System Time Set. Aug 19 08:08:50.403309 systemd-timesyncd[1480]: Contacted time server 10.0.0.1:123 (10.0.0.1). Aug 19 08:08:50.403358 systemd-timesyncd[1480]: Initial clock synchronization to Tue 2025-08-19 08:08:50.402892 UTC. Aug 19 08:08:50.407921 systemd-resolved[1413]: Positive Trust Anchors: Aug 19 08:08:50.411219 systemd-resolved[1413]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Aug 19 08:08:50.411258 systemd-resolved[1413]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Aug 19 08:08:50.416355 systemd-resolved[1413]: Defaulting to hostname 'linux'. Aug 19 08:08:50.420124 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Aug 19 08:08:50.424050 systemd[1]: Reached target network.target - Network. 
Aug 19 08:08:50.425101 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Aug 19 08:08:50.431089 kernel: kvm_amd: TSC scaling supported Aug 19 08:08:50.431311 kernel: kvm_amd: Nested Virtualization enabled Aug 19 08:08:50.431328 kernel: kvm_amd: Nested Paging enabled Aug 19 08:08:50.432174 kernel: kvm_amd: LBR virtualization supported Aug 19 08:08:50.432204 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Aug 19 08:08:50.433206 kernel: kvm_amd: Virtual GIF supported Aug 19 08:08:50.471971 kernel: EDAC MC: Ver: 3.0.0 Aug 19 08:08:50.485881 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 19 08:08:50.487323 systemd[1]: Reached target sysinit.target - System Initialization. Aug 19 08:08:50.488470 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Aug 19 08:08:50.489684 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Aug 19 08:08:50.490910 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Aug 19 08:08:50.492351 systemd[1]: Started logrotate.timer - Daily rotation of log files. Aug 19 08:08:50.493555 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Aug 19 08:08:50.494771 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Aug 19 08:08:50.496001 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Aug 19 08:08:50.496029 systemd[1]: Reached target paths.target - Path Units. Aug 19 08:08:50.496910 systemd[1]: Reached target timers.target - Timer Units. Aug 19 08:08:50.498587 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Aug 19 08:08:50.501328 systemd[1]: Starting docker.socket - Docker Socket for the API... Aug 19 08:08:50.504742 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Aug 19 08:08:50.506152 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Aug 19 08:08:50.507378 systemd[1]: Reached target ssh-access.target - SSH Access Available. Aug 19 08:08:50.516493 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Aug 19 08:08:50.517997 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Aug 19 08:08:50.519912 systemd[1]: Listening on docker.socket - Docker Socket for the API. Aug 19 08:08:50.521750 systemd[1]: Reached target sockets.target - Socket Units. Aug 19 08:08:50.522689 systemd[1]: Reached target basic.target - Basic System. Aug 19 08:08:50.523652 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Aug 19 08:08:50.523681 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Aug 19 08:08:50.524742 systemd[1]: Starting containerd.service - containerd container runtime... Aug 19 08:08:50.526759 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Aug 19 08:08:50.528622 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Aug 19 08:08:50.532013 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Aug 19 08:08:50.534059 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... 
Aug 19 08:08:50.535130 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Aug 19 08:08:50.537094 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Aug 19 08:08:50.539354 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Aug 19 08:08:50.540574 jq[1545]: false Aug 19 08:08:50.542087 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Aug 19 08:08:50.544761 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Aug 19 08:08:50.547728 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Aug 19 08:08:50.552310 google_oslogin_nss_cache[1547]: oslogin_cache_refresh[1547]: Refreshing passwd entry cache Aug 19 08:08:50.552570 oslogin_cache_refresh[1547]: Refreshing passwd entry cache Aug 19 08:08:50.554462 systemd[1]: Starting systemd-logind.service - User Login Management... Aug 19 08:08:50.556600 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Aug 19 08:08:50.556870 extend-filesystems[1546]: Found /dev/vda6 Aug 19 08:08:50.560798 extend-filesystems[1546]: Found /dev/vda9 Aug 19 08:08:50.563574 google_oslogin_nss_cache[1547]: oslogin_cache_refresh[1547]: Failure getting users, quitting Aug 19 08:08:50.563574 google_oslogin_nss_cache[1547]: oslogin_cache_refresh[1547]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Aug 19 08:08:50.563558 oslogin_cache_refresh[1547]: Failure getting users, quitting Aug 19 08:08:50.563687 google_oslogin_nss_cache[1547]: oslogin_cache_refresh[1547]: Refreshing group entry cache Aug 19 08:08:50.563577 oslogin_cache_refresh[1547]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Aug 19 08:08:50.563629 oslogin_cache_refresh[1547]: Refreshing group entry cache Aug 19 08:08:50.563796 extend-filesystems[1546]: Checking size of /dev/vda9 Aug 19 08:08:50.564818 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Aug 19 08:08:50.565526 systemd[1]: Starting update-engine.service - Update Engine... Aug 19 08:08:50.568622 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Aug 19 08:08:50.569212 google_oslogin_nss_cache[1547]: oslogin_cache_refresh[1547]: Failure getting groups, quitting Aug 19 08:08:50.569212 google_oslogin_nss_cache[1547]: oslogin_cache_refresh[1547]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Aug 19 08:08:50.569203 oslogin_cache_refresh[1547]: Failure getting groups, quitting Aug 19 08:08:50.569214 oslogin_cache_refresh[1547]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Aug 19 08:08:50.572496 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Aug 19 08:08:50.575865 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Aug 19 08:08:50.576245 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Aug 19 08:08:50.576911 jq[1566]: true Aug 19 08:08:50.576586 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Aug 19 08:08:50.577193 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. 
Aug 19 08:08:50.578784 systemd[1]: motdgen.service: Deactivated successfully. Aug 19 08:08:50.579089 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Aug 19 08:08:50.579868 extend-filesystems[1546]: Resized partition /dev/vda9 Aug 19 08:08:50.582533 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Aug 19 08:08:50.582802 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Aug 19 08:08:50.589836 extend-filesystems[1575]: resize2fs 1.47.2 (1-Jan-2025) Aug 19 08:08:50.602689 (ntainerd)[1576]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Aug 19 08:08:50.606654 update_engine[1563]: I20250819 08:08:50.606169 1563 main.cc:92] Flatcar Update Engine starting Aug 19 08:08:50.607996 jq[1574]: true Aug 19 08:08:50.614606 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Aug 19 08:08:50.623434 tar[1573]: linux-amd64/LICENSE Aug 19 08:08:50.623434 tar[1573]: linux-amd64/helm Aug 19 08:08:50.734982 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Aug 19 08:08:50.744210 dbus-daemon[1543]: [system] SELinux support is enabled Aug 19 08:08:50.744407 systemd[1]: Started dbus.service - D-Bus System Message Bus. Aug 19 08:08:50.749146 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Aug 19 08:08:50.763245 update_engine[1563]: I20250819 08:08:50.756924 1563 update_check_scheduler.cc:74] Next update check in 8m42s Aug 19 08:08:50.749176 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Aug 19 08:08:50.750511 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Aug 19 08:08:50.750531 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Aug 19 08:08:50.759057 systemd[1]: Started update-engine.service - Update Engine. Aug 19 08:08:50.763521 systemd[1]: Started locksmithd.service - Cluster reboot manager. Aug 19 08:08:50.767091 extend-filesystems[1575]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Aug 19 08:08:50.767091 extend-filesystems[1575]: old_desc_blocks = 1, new_desc_blocks = 1 Aug 19 08:08:50.767091 extend-filesystems[1575]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Aug 19 08:08:50.771135 extend-filesystems[1546]: Resized filesystem in /dev/vda9 Aug 19 08:08:50.770691 systemd[1]: extend-filesystems.service: Deactivated successfully. Aug 19 08:08:50.771162 systemd-logind[1555]: Watching system buttons on /dev/input/event2 (Power Button) Aug 19 08:08:50.771182 systemd-logind[1555]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Aug 19 08:08:50.771524 systemd-logind[1555]: New seat seat0. Aug 19 08:08:50.774145 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Aug 19 08:08:50.777955 systemd[1]: Started systemd-logind.service - User Login Management. Aug 19 08:08:50.786657 bash[1605]: Updated "/home/core/.ssh/authorized_keys" Aug 19 08:08:50.788006 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Aug 19 08:08:50.791518 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
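extend-filesystems.service grows the root filesystem online here: the log records resize2fs taking /dev/vda9 from 553472 to 1864699 blocks at the 4 KiB block size reported by EXT4-fs. A quick sketch of the arithmetic, using only the block counts quoted above (the GiB figures are derived, not taken from the log):

```python
# Back-of-the-envelope check of the online resize logged above: resize2fs grew
# /dev/vda9 from 553472 to 1864699 blocks, with a 4 KiB block size (the "(4k)"
# in the EXT4-fs message). Reader's aid only, not output from the system above.
BLOCK_SIZE = 4096
OLD_BLOCKS = 553472
NEW_BLOCKS = 1864699

def blocks_to_gib(blocks: int, block_size: int = BLOCK_SIZE) -> float:
    return blocks * block_size / 2**30

print(f"before: {blocks_to_gib(OLD_BLOCKS):.2f} GiB")
print(f"after:  {blocks_to_gib(NEW_BLOCKS):.2f} GiB")
print(f"gained: {blocks_to_gib(NEW_BLOCKS - OLD_BLOCKS):.2f} GiB")
```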
Aug 19 08:08:50.875884 sshd_keygen[1572]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Aug 19 08:08:50.877701 locksmithd[1607]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Aug 19 08:08:50.901972 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Aug 19 08:08:50.905321 systemd[1]: Starting issuegen.service - Generate /run/issue... Aug 19 08:08:50.925004 systemd[1]: issuegen.service: Deactivated successfully. Aug 19 08:08:50.925272 systemd[1]: Finished issuegen.service - Generate /run/issue. Aug 19 08:08:50.928403 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Aug 19 08:08:51.025126 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Aug 19 08:08:51.028563 systemd[1]: Started getty@tty1.service - Getty on tty1. Aug 19 08:08:51.033190 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Aug 19 08:08:51.034679 systemd[1]: Reached target getty.target - Login Prompts. Aug 19 08:08:51.148264 containerd[1576]: time="2025-08-19T08:08:51Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Aug 19 08:08:51.149498 containerd[1576]: time="2025-08-19T08:08:51.149423722Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Aug 19 08:08:51.160922 containerd[1576]: time="2025-08-19T08:08:51.160858358Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="20.278µs" Aug 19 08:08:51.160922 containerd[1576]: time="2025-08-19T08:08:51.160903122Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Aug 19 08:08:51.160922 containerd[1576]: time="2025-08-19T08:08:51.160924342Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Aug 19 08:08:51.161209 containerd[1576]: time="2025-08-19T08:08:51.161157639Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Aug 19 08:08:51.161209 containerd[1576]: time="2025-08-19T08:08:51.161187745Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Aug 19 08:08:51.161209 containerd[1576]: time="2025-08-19T08:08:51.161224805Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Aug 19 08:08:51.161366 containerd[1576]: time="2025-08-19T08:08:51.161318200Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Aug 19 08:08:51.161366 containerd[1576]: time="2025-08-19T08:08:51.161330824Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Aug 19 08:08:51.161679 containerd[1576]: time="2025-08-19T08:08:51.161643870Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Aug 19 08:08:51.161679 containerd[1576]: time="2025-08-19T08:08:51.161663678Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Aug 19 08:08:51.161679 containerd[1576]: 
time="2025-08-19T08:08:51.161675490Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Aug 19 08:08:51.161755 containerd[1576]: time="2025-08-19T08:08:51.161693233Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Aug 19 08:08:51.161829 containerd[1576]: time="2025-08-19T08:08:51.161800073Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Aug 19 08:08:51.162137 containerd[1576]: time="2025-08-19T08:08:51.162105666Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Aug 19 08:08:51.162174 containerd[1576]: time="2025-08-19T08:08:51.162145932Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Aug 19 08:08:51.162174 containerd[1576]: time="2025-08-19T08:08:51.162157072Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Aug 19 08:08:51.162226 containerd[1576]: time="2025-08-19T08:08:51.162204361Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Aug 19 08:08:51.164202 containerd[1576]: time="2025-08-19T08:08:51.164171298Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Aug 19 08:08:51.164280 containerd[1576]: time="2025-08-19T08:08:51.164260055Z" level=info msg="metadata content store policy set" policy=shared Aug 19 08:08:51.172974 containerd[1576]: time="2025-08-19T08:08:51.171607148Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Aug 19 08:08:51.172974 containerd[1576]: time="2025-08-19T08:08:51.171699972Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Aug 19 08:08:51.172974 containerd[1576]: time="2025-08-19T08:08:51.171718687Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Aug 19 08:08:51.172974 containerd[1576]: time="2025-08-19T08:08:51.171733264Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Aug 19 08:08:51.172974 containerd[1576]: time="2025-08-19T08:08:51.171748172Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Aug 19 08:08:51.172974 containerd[1576]: time="2025-08-19T08:08:51.171759063Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Aug 19 08:08:51.172974 containerd[1576]: time="2025-08-19T08:08:51.171772869Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Aug 19 08:08:51.172974 containerd[1576]: time="2025-08-19T08:08:51.171788348Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Aug 19 08:08:51.172974 containerd[1576]: time="2025-08-19T08:08:51.171801122Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Aug 19 08:08:51.172974 containerd[1576]: time="2025-08-19T08:08:51.171813074Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Aug 19 
08:08:51.172974 containerd[1576]: time="2025-08-19T08:08:51.171931135Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Aug 19 08:08:51.172974 containerd[1576]: time="2025-08-19T08:08:51.171978494Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Aug 19 08:08:51.172974 containerd[1576]: time="2025-08-19T08:08:51.172176265Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Aug 19 08:08:51.172974 containerd[1576]: time="2025-08-19T08:08:51.172202274Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Aug 19 08:08:51.173326 containerd[1576]: time="2025-08-19T08:08:51.172217071Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Aug 19 08:08:51.173326 containerd[1576]: time="2025-08-19T08:08:51.172229835Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Aug 19 08:08:51.173326 containerd[1576]: time="2025-08-19T08:08:51.172244483Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Aug 19 08:08:51.173326 containerd[1576]: time="2025-08-19T08:08:51.172255814Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Aug 19 08:08:51.173326 containerd[1576]: time="2025-08-19T08:08:51.172267736Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Aug 19 08:08:51.173326 containerd[1576]: time="2025-08-19T08:08:51.172279128Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Aug 19 08:08:51.173326 containerd[1576]: time="2025-08-19T08:08:51.172308853Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Aug 19 08:08:51.173326 containerd[1576]: time="2025-08-19T08:08:51.172323842Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Aug 19 08:08:51.173326 containerd[1576]: time="2025-08-19T08:08:51.172335553Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Aug 19 08:08:51.173326 containerd[1576]: time="2025-08-19T08:08:51.172440811Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Aug 19 08:08:51.173326 containerd[1576]: time="2025-08-19T08:08:51.172457131Z" level=info msg="Start snapshots syncer" Aug 19 08:08:51.173326 containerd[1576]: time="2025-08-19T08:08:51.172496796Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Aug 19 08:08:51.173550 containerd[1576]: time="2025-08-19T08:08:51.172771260Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Aug 19 08:08:51.173550 containerd[1576]: time="2025-08-19T08:08:51.172843125Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Aug 19 08:08:51.173770 containerd[1576]: time="2025-08-19T08:08:51.173747741Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Aug 19 08:08:51.174107 containerd[1576]: time="2025-08-19T08:08:51.174087698Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Aug 19 08:08:51.174197 containerd[1576]: time="2025-08-19T08:08:51.174178528Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Aug 19 08:08:51.174286 containerd[1576]: time="2025-08-19T08:08:51.174248199Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Aug 19 08:08:51.174286 containerd[1576]: time="2025-08-19T08:08:51.174270090Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Aug 19 08:08:51.174286 containerd[1576]: time="2025-08-19T08:08:51.174293554Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Aug 19 08:08:51.174286 containerd[1576]: time="2025-08-19T08:08:51.174305937Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Aug 19 08:08:51.174453 containerd[1576]: time="2025-08-19T08:08:51.174318892Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Aug 19 08:08:51.174453 containerd[1576]: time="2025-08-19T08:08:51.174348487Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Aug 19 08:08:51.174453 containerd[1576]: 
time="2025-08-19T08:08:51.174359628Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Aug 19 08:08:51.174453 containerd[1576]: time="2025-08-19T08:08:51.174370679Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Aug 19 08:08:51.174453 containerd[1576]: time="2025-08-19T08:08:51.174418979Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Aug 19 08:08:51.174453 containerd[1576]: time="2025-08-19T08:08:51.174432875Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Aug 19 08:08:51.174453 containerd[1576]: time="2025-08-19T08:08:51.174441652Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Aug 19 08:08:51.174453 containerd[1576]: time="2025-08-19T08:08:51.174450298Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Aug 19 08:08:51.174453 containerd[1576]: time="2025-08-19T08:08:51.174457972Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Aug 19 08:08:51.174647 containerd[1576]: time="2025-08-19T08:08:51.174467660Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Aug 19 08:08:51.174647 containerd[1576]: time="2025-08-19T08:08:51.174479903Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Aug 19 08:08:51.174647 containerd[1576]: time="2025-08-19T08:08:51.174503788Z" level=info msg="runtime interface created" Aug 19 08:08:51.174647 containerd[1576]: time="2025-08-19T08:08:51.174509749Z" level=info msg="created NRI interface" Aug 19 08:08:51.174647 containerd[1576]: time="2025-08-19T08:08:51.174518856Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Aug 19 08:08:51.174647 containerd[1576]: time="2025-08-19T08:08:51.174531079Z" level=info msg="Connect containerd service" Aug 19 08:08:51.174647 containerd[1576]: time="2025-08-19T08:08:51.174567227Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Aug 19 08:08:51.175701 containerd[1576]: time="2025-08-19T08:08:51.175657571Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 19 08:08:51.337372 tar[1573]: linux-amd64/README.md Aug 19 08:08:51.360750 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Aug 19 08:08:51.385557 containerd[1576]: time="2025-08-19T08:08:51.385485423Z" level=info msg="Start subscribing containerd event" Aug 19 08:08:51.385683 containerd[1576]: time="2025-08-19T08:08:51.385597203Z" level=info msg="Start recovering state" Aug 19 08:08:51.385848 containerd[1576]: time="2025-08-19T08:08:51.385800083Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc Aug 19 08:08:51.385875 containerd[1576]: time="2025-08-19T08:08:51.385822405Z" level=info msg="Start event monitor" Aug 19 08:08:51.385896 containerd[1576]: time="2025-08-19T08:08:51.385880163Z" level=info msg="Start cni network conf syncer for default" Aug 19 08:08:51.385931 containerd[1576]: time="2025-08-19T08:08:51.385911702Z" level=info msg="Start streaming server" Aug 19 08:08:51.386002 containerd[1576]: time="2025-08-19T08:08:51.385934856Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Aug 19 08:08:51.386002 containerd[1576]: time="2025-08-19T08:08:51.385960885Z" level=info msg="runtime interface starting up..." Aug 19 08:08:51.386042 containerd[1576]: time="2025-08-19T08:08:51.385884081Z" level=info msg=serving... address=/run/containerd/containerd.sock Aug 19 08:08:51.386063 containerd[1576]: time="2025-08-19T08:08:51.385969471Z" level=info msg="starting plugins..." Aug 19 08:08:51.386112 containerd[1576]: time="2025-08-19T08:08:51.386094145Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Aug 19 08:08:51.386341 containerd[1576]: time="2025-08-19T08:08:51.386313225Z" level=info msg="containerd successfully booted in 0.238718s" Aug 19 08:08:51.386443 systemd[1]: Started containerd.service - containerd container runtime. Aug 19 08:08:51.883174 systemd-networkd[1479]: eth0: Gained IPv6LL Aug 19 08:08:51.886828 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Aug 19 08:08:51.888601 systemd[1]: Reached target network-online.target - Network is Online. Aug 19 08:08:51.891214 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Aug 19 08:08:51.893790 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 19 08:08:51.896012 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Aug 19 08:08:51.921619 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Aug 19 08:08:51.930384 systemd[1]: coreos-metadata.service: Deactivated successfully. Aug 19 08:08:51.930674 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Aug 19 08:08:51.932324 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Aug 19 08:08:53.604070 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 19 08:08:53.605873 systemd[1]: Reached target multi-user.target - Multi-User System. Aug 19 08:08:53.609279 systemd[1]: Startup finished in 3.654s (kernel) + 5.822s (initrd) + 5.769s (userspace) = 15.246s. Aug 19 08:08:53.620260 (kubelet)[1677]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 19 08:08:54.091740 kubelet[1677]: E0819 08:08:54.091539 1677 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 19 08:08:54.096150 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 19 08:08:54.096356 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 19 08:08:54.096821 systemd[1]: kubelet.service: Consumed 1.996s CPU time, 264.5M memory peak. Aug 19 08:08:56.012583 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. 
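The kubelet failure above is the expected first-boot behaviour: the unit exits with status 1 because /var/lib/kubelet/config.yaml does not exist yet (it is normally written later, for example by kubeadm), so systemd records the exit-code failure and keeps scheduling restarts. Below is a small illustrative check of that precondition, assuming only the path quoted in the error message; the helper name is invented for this sketch.

```python
from pathlib import Path

# Illustrative sketch of the condition behind the kubelet failure above: the
# unit exits with status 1 while /var/lib/kubelet/config.yaml is missing.
# This is a reader's aid, not part of the kubelet code.
KUBELET_CONFIG = Path("/var/lib/kubelet/config.yaml")

def kubelet_config_present(path: Path = KUBELET_CONFIG) -> bool:
    return path.is_file()

if __name__ == "__main__":
    if kubelet_config_present():
        print(f"{KUBELET_CONFIG} found, kubelet can load its configuration")
    else:
        print(f"{KUBELET_CONFIG} missing, kubelet will exit as in the log above")
```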
Aug 19 08:08:56.014214 systemd[1]: Started sshd@0-10.0.0.50:22-10.0.0.1:49050.service - OpenSSH per-connection server daemon (10.0.0.1:49050). Aug 19 08:08:56.090481 sshd[1690]: Accepted publickey for core from 10.0.0.1 port 49050 ssh2: RSA SHA256:kecLVWRG1G7MHrHN/yG6X078KPWjs/jTMbEJqAmOzyM Aug 19 08:08:56.092649 sshd-session[1690]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 08:08:56.099547 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Aug 19 08:08:56.100747 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Aug 19 08:08:56.106914 systemd-logind[1555]: New session 1 of user core. Aug 19 08:08:56.124753 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Aug 19 08:08:56.127859 systemd[1]: Starting user@500.service - User Manager for UID 500... Aug 19 08:08:56.142414 (systemd)[1695]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Aug 19 08:08:56.145201 systemd-logind[1555]: New session c1 of user core. Aug 19 08:08:56.283714 systemd[1695]: Queued start job for default target default.target. Aug 19 08:08:56.302193 systemd[1695]: Created slice app.slice - User Application Slice. Aug 19 08:08:56.302218 systemd[1695]: Reached target paths.target - Paths. Aug 19 08:08:56.302259 systemd[1695]: Reached target timers.target - Timers. Aug 19 08:08:56.303864 systemd[1695]: Starting dbus.socket - D-Bus User Message Bus Socket... Aug 19 08:08:56.315832 systemd[1695]: Listening on dbus.socket - D-Bus User Message Bus Socket. Aug 19 08:08:56.315998 systemd[1695]: Reached target sockets.target - Sockets. Aug 19 08:08:56.316041 systemd[1695]: Reached target basic.target - Basic System. Aug 19 08:08:56.316081 systemd[1695]: Reached target default.target - Main User Target. Aug 19 08:08:56.316112 systemd[1695]: Startup finished in 163ms. Aug 19 08:08:56.316502 systemd[1]: Started user@500.service - User Manager for UID 500. Aug 19 08:08:56.318246 systemd[1]: Started session-1.scope - Session 1 of User core. Aug 19 08:08:56.379862 systemd[1]: Started sshd@1-10.0.0.50:22-10.0.0.1:49060.service - OpenSSH per-connection server daemon (10.0.0.1:49060). Aug 19 08:08:56.446824 sshd[1706]: Accepted publickey for core from 10.0.0.1 port 49060 ssh2: RSA SHA256:kecLVWRG1G7MHrHN/yG6X078KPWjs/jTMbEJqAmOzyM Aug 19 08:08:56.448215 sshd-session[1706]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 08:08:56.453252 systemd-logind[1555]: New session 2 of user core. Aug 19 08:08:56.461248 systemd[1]: Started session-2.scope - Session 2 of User core. Aug 19 08:08:56.518369 sshd[1709]: Connection closed by 10.0.0.1 port 49060 Aug 19 08:08:56.518850 sshd-session[1706]: pam_unix(sshd:session): session closed for user core Aug 19 08:08:56.532840 systemd[1]: sshd@1-10.0.0.50:22-10.0.0.1:49060.service: Deactivated successfully. Aug 19 08:08:56.534808 systemd[1]: session-2.scope: Deactivated successfully. Aug 19 08:08:56.535591 systemd-logind[1555]: Session 2 logged out. Waiting for processes to exit. Aug 19 08:08:56.538588 systemd[1]: Started sshd@2-10.0.0.50:22-10.0.0.1:49068.service - OpenSSH per-connection server daemon (10.0.0.1:49068). Aug 19 08:08:56.539161 systemd-logind[1555]: Removed session 2. 
Aug 19 08:08:56.595362 sshd[1715]: Accepted publickey for core from 10.0.0.1 port 49068 ssh2: RSA SHA256:kecLVWRG1G7MHrHN/yG6X078KPWjs/jTMbEJqAmOzyM Aug 19 08:08:56.596553 sshd-session[1715]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 08:08:56.601067 systemd-logind[1555]: New session 3 of user core. Aug 19 08:08:56.615078 systemd[1]: Started session-3.scope - Session 3 of User core. Aug 19 08:08:56.666644 sshd[1718]: Connection closed by 10.0.0.1 port 49068 Aug 19 08:08:56.667046 sshd-session[1715]: pam_unix(sshd:session): session closed for user core Aug 19 08:08:56.683806 systemd[1]: sshd@2-10.0.0.50:22-10.0.0.1:49068.service: Deactivated successfully. Aug 19 08:08:56.685782 systemd[1]: session-3.scope: Deactivated successfully. Aug 19 08:08:56.686551 systemd-logind[1555]: Session 3 logged out. Waiting for processes to exit. Aug 19 08:08:56.689134 systemd[1]: Started sshd@3-10.0.0.50:22-10.0.0.1:49072.service - OpenSSH per-connection server daemon (10.0.0.1:49072). Aug 19 08:08:56.689808 systemd-logind[1555]: Removed session 3. Aug 19 08:08:56.737229 sshd[1724]: Accepted publickey for core from 10.0.0.1 port 49072 ssh2: RSA SHA256:kecLVWRG1G7MHrHN/yG6X078KPWjs/jTMbEJqAmOzyM Aug 19 08:08:56.738819 sshd-session[1724]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 08:08:56.744180 systemd-logind[1555]: New session 4 of user core. Aug 19 08:08:56.754103 systemd[1]: Started session-4.scope - Session 4 of User core. Aug 19 08:08:56.809244 sshd[1727]: Connection closed by 10.0.0.1 port 49072 Aug 19 08:08:56.809885 sshd-session[1724]: pam_unix(sshd:session): session closed for user core Aug 19 08:08:56.826686 systemd[1]: sshd@3-10.0.0.50:22-10.0.0.1:49072.service: Deactivated successfully. Aug 19 08:08:56.828569 systemd[1]: session-4.scope: Deactivated successfully. Aug 19 08:08:56.829303 systemd-logind[1555]: Session 4 logged out. Waiting for processes to exit. Aug 19 08:08:56.832266 systemd[1]: Started sshd@4-10.0.0.50:22-10.0.0.1:49078.service - OpenSSH per-connection server daemon (10.0.0.1:49078). Aug 19 08:08:56.832852 systemd-logind[1555]: Removed session 4. Aug 19 08:08:56.884906 sshd[1733]: Accepted publickey for core from 10.0.0.1 port 49078 ssh2: RSA SHA256:kecLVWRG1G7MHrHN/yG6X078KPWjs/jTMbEJqAmOzyM Aug 19 08:08:56.886253 sshd-session[1733]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 08:08:56.890987 systemd-logind[1555]: New session 5 of user core. Aug 19 08:08:56.905093 systemd[1]: Started session-5.scope - Session 5 of User core. Aug 19 08:08:56.964220 sudo[1737]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Aug 19 08:08:56.964562 sudo[1737]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 19 08:08:56.983402 sudo[1737]: pam_unix(sudo:session): session closed for user root Aug 19 08:08:56.985461 sshd[1736]: Connection closed by 10.0.0.1 port 49078 Aug 19 08:08:56.985875 sshd-session[1733]: pam_unix(sshd:session): session closed for user core Aug 19 08:08:57.000899 systemd[1]: sshd@4-10.0.0.50:22-10.0.0.1:49078.service: Deactivated successfully. Aug 19 08:08:57.002787 systemd[1]: session-5.scope: Deactivated successfully. Aug 19 08:08:57.003588 systemd-logind[1555]: Session 5 logged out. Waiting for processes to exit. Aug 19 08:08:57.006797 systemd[1]: Started sshd@5-10.0.0.50:22-10.0.0.1:49082.service - OpenSSH per-connection server daemon (10.0.0.1:49082). 
Aug 19 08:08:57.007483 systemd-logind[1555]: Removed session 5. Aug 19 08:08:57.058735 sshd[1743]: Accepted publickey for core from 10.0.0.1 port 49082 ssh2: RSA SHA256:kecLVWRG1G7MHrHN/yG6X078KPWjs/jTMbEJqAmOzyM Aug 19 08:08:57.060156 sshd-session[1743]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 08:08:57.064787 systemd-logind[1555]: New session 6 of user core. Aug 19 08:08:57.082073 systemd[1]: Started session-6.scope - Session 6 of User core. Aug 19 08:08:57.135643 sudo[1748]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Aug 19 08:08:57.135978 sudo[1748]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 19 08:08:57.769813 sudo[1748]: pam_unix(sudo:session): session closed for user root Aug 19 08:08:57.777065 sudo[1747]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Aug 19 08:08:57.777389 sudo[1747]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 19 08:08:57.789050 systemd[1]: Starting audit-rules.service - Load Audit Rules... Aug 19 08:08:57.836201 augenrules[1770]: No rules Aug 19 08:08:57.838362 systemd[1]: audit-rules.service: Deactivated successfully. Aug 19 08:08:57.838706 systemd[1]: Finished audit-rules.service - Load Audit Rules. Aug 19 08:08:57.840021 sudo[1747]: pam_unix(sudo:session): session closed for user root Aug 19 08:08:57.841758 sshd[1746]: Connection closed by 10.0.0.1 port 49082 Aug 19 08:08:57.842124 sshd-session[1743]: pam_unix(sshd:session): session closed for user core Aug 19 08:08:57.854045 systemd[1]: sshd@5-10.0.0.50:22-10.0.0.1:49082.service: Deactivated successfully. Aug 19 08:08:57.856549 systemd[1]: session-6.scope: Deactivated successfully. Aug 19 08:08:57.857420 systemd-logind[1555]: Session 6 logged out. Waiting for processes to exit. Aug 19 08:08:57.860965 systemd[1]: Started sshd@6-10.0.0.50:22-10.0.0.1:49088.service - OpenSSH per-connection server daemon (10.0.0.1:49088). Aug 19 08:08:57.861599 systemd-logind[1555]: Removed session 6. Aug 19 08:08:57.924773 sshd[1779]: Accepted publickey for core from 10.0.0.1 port 49088 ssh2: RSA SHA256:kecLVWRG1G7MHrHN/yG6X078KPWjs/jTMbEJqAmOzyM Aug 19 08:08:57.926021 sshd-session[1779]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 08:08:57.931443 systemd-logind[1555]: New session 7 of user core. Aug 19 08:08:57.942132 systemd[1]: Started session-7.scope - Session 7 of User core. Aug 19 08:08:57.996732 sudo[1783]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Aug 19 08:08:57.997067 sudo[1783]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 19 08:08:58.684899 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Aug 19 08:08:58.702272 (dockerd)[1804]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Aug 19 08:08:59.328824 dockerd[1804]: time="2025-08-19T08:08:59.328731383Z" level=info msg="Starting up" Aug 19 08:08:59.330002 dockerd[1804]: time="2025-08-19T08:08:59.329958944Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Aug 19 08:08:59.348722 dockerd[1804]: time="2025-08-19T08:08:59.348659461Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Aug 19 08:08:59.407244 dockerd[1804]: time="2025-08-19T08:08:59.407182120Z" level=info msg="Loading containers: start." Aug 19 08:08:59.417972 kernel: Initializing XFRM netlink socket Aug 19 08:08:59.776915 systemd-networkd[1479]: docker0: Link UP Aug 19 08:08:59.783767 dockerd[1804]: time="2025-08-19T08:08:59.783719603Z" level=info msg="Loading containers: done." Aug 19 08:08:59.804614 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2968121024-merged.mount: Deactivated successfully. Aug 19 08:08:59.966440 dockerd[1804]: time="2025-08-19T08:08:59.966351843Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Aug 19 08:08:59.966601 dockerd[1804]: time="2025-08-19T08:08:59.966489501Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Aug 19 08:08:59.966629 dockerd[1804]: time="2025-08-19T08:08:59.966607182Z" level=info msg="Initializing buildkit" Aug 19 08:09:00.103915 dockerd[1804]: time="2025-08-19T08:09:00.103785653Z" level=info msg="Completed buildkit initialization" Aug 19 08:09:00.111456 dockerd[1804]: time="2025-08-19T08:09:00.111399506Z" level=info msg="Daemon has completed initialization" Aug 19 08:09:00.111600 dockerd[1804]: time="2025-08-19T08:09:00.111526053Z" level=info msg="API listen on /run/docker.sock" Aug 19 08:09:00.111696 systemd[1]: Started docker.service - Docker Application Container Engine. Aug 19 08:09:01.175291 containerd[1576]: time="2025-08-19T08:09:01.175233144Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.8\"" Aug 19 08:09:02.003171 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3028190317.mount: Deactivated successfully. 
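Once the docker daemon reports "API listen on /run/docker.sock", its API is served over a UNIX socket. The sketch below is a minimal reachability probe for that socket, assuming the /run/docker.sock path quoted in the log; it only checks that something accepts a connection there and does not speak the Docker HTTP API.

```python
import socket

# Minimal connectivity probe for the UNIX socket the docker daemon above
# reports listening on. Purely illustrative.
DOCKER_SOCK = "/run/docker.sock"

def docker_socket_reachable(path: str = DOCKER_SOCK) -> bool:
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.settimeout(1.0)
        try:
            s.connect(path)
            return True
        except OSError:
            return False

if __name__ == "__main__":
    print("docker socket reachable:", docker_socket_reachable())
```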
Aug 19 08:09:03.270216 containerd[1576]: time="2025-08-19T08:09:03.270134245Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:09:03.270753 containerd[1576]: time="2025-08-19T08:09:03.270702861Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.8: active requests=0, bytes read=28800687" Aug 19 08:09:03.271833 containerd[1576]: time="2025-08-19T08:09:03.271805849Z" level=info msg="ImageCreate event name:\"sha256:0d4edaa48e2f940c934e0f7cfd5209fc85e65ab5e842b980f41263d1764661f1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:09:03.298143 containerd[1576]: time="2025-08-19T08:09:03.298090533Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6e1a2f9b24f69ee77d0c0edaf32b31fdbb5e1a613f4476272197e6e1e239050b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:09:03.299257 containerd[1576]: time="2025-08-19T08:09:03.299215492Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.8\" with image id \"sha256:0d4edaa48e2f940c934e0f7cfd5209fc85e65ab5e842b980f41263d1764661f1\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6e1a2f9b24f69ee77d0c0edaf32b31fdbb5e1a613f4476272197e6e1e239050b\", size \"28797487\" in 2.12393015s" Aug 19 08:09:03.299327 containerd[1576]: time="2025-08-19T08:09:03.299260016Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.8\" returns image reference \"sha256:0d4edaa48e2f940c934e0f7cfd5209fc85e65ab5e842b980f41263d1764661f1\"" Aug 19 08:09:03.300204 containerd[1576]: time="2025-08-19T08:09:03.300165193Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.8\"" Aug 19 08:09:04.346825 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Aug 19 08:09:04.348811 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 19 08:09:04.671093 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 19 08:09:04.687330 (kubelet)[2087]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 19 08:09:04.827502 kubelet[2087]: E0819 08:09:04.827333 2087 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 19 08:09:04.834279 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 19 08:09:04.834490 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 19 08:09:04.834913 systemd[1]: kubelet.service: Consumed 436ms CPU time, 112.9M memory peak. 
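For the kube-apiserver pull above, containerd reports 28800687 bytes read and a total pull time of 2.12393015s, which gives a rough effective transfer rate. The division below uses only those two logged numbers; treat the result as an approximation, since the byte counter reflects registry traffic rather than unpack time.

```python
# Rough throughput estimate for the kube-apiserver:v1.32.8 pull logged above.
# Both constants are copied from the journal entries; the rate is derived.
BYTES_READ = 28_800_687
PULL_SECONDS = 2.12393015

rate_mib_s = BYTES_READ / PULL_SECONDS / 2**20
print(f"~{rate_mib_s:.1f} MiB/s effective pull rate")
```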
Aug 19 08:09:05.318551 containerd[1576]: time="2025-08-19T08:09:05.318478790Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:09:05.319242 containerd[1576]: time="2025-08-19T08:09:05.319179884Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.8: active requests=0, bytes read=24784128" Aug 19 08:09:05.320508 containerd[1576]: time="2025-08-19T08:09:05.320456317Z" level=info msg="ImageCreate event name:\"sha256:b248d0b0c74ad8230e0bae0cbed477560e8a1e8c7ef5f29b7e75c1f273c8a091\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:09:05.322959 containerd[1576]: time="2025-08-19T08:09:05.322902673Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:8788ccd28ceed9e2e5f8fc31375ef5771df8ea6e518b362c9a06f3cc709cd6c7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:09:05.323853 containerd[1576]: time="2025-08-19T08:09:05.323800937Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.8\" with image id \"sha256:b248d0b0c74ad8230e0bae0cbed477560e8a1e8c7ef5f29b7e75c1f273c8a091\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:8788ccd28ceed9e2e5f8fc31375ef5771df8ea6e518b362c9a06f3cc709cd6c7\", size \"26387322\" in 2.023594647s" Aug 19 08:09:05.323853 containerd[1576]: time="2025-08-19T08:09:05.323839349Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.8\" returns image reference \"sha256:b248d0b0c74ad8230e0bae0cbed477560e8a1e8c7ef5f29b7e75c1f273c8a091\"" Aug 19 08:09:05.324437 containerd[1576]: time="2025-08-19T08:09:05.324405971Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.8\"" Aug 19 08:09:07.810339 containerd[1576]: time="2025-08-19T08:09:07.810259509Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:09:07.837546 containerd[1576]: time="2025-08-19T08:09:07.837499233Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.8: active requests=0, bytes read=19175036" Aug 19 08:09:07.858815 containerd[1576]: time="2025-08-19T08:09:07.858700949Z" level=info msg="ImageCreate event name:\"sha256:2ac266f06c9a5a3d0d20ae482dbccb54d3be454d5ca49f48b528bdf5bae3e908\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:09:07.984656 containerd[1576]: time="2025-08-19T08:09:07.984598853Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:43c58bcbd1c7812dd19f8bfa5ae11093ebefd28699453ce86fc710869e155cd4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:09:07.985670 containerd[1576]: time="2025-08-19T08:09:07.985619386Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.8\" with image id \"sha256:2ac266f06c9a5a3d0d20ae482dbccb54d3be454d5ca49f48b528bdf5bae3e908\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:43c58bcbd1c7812dd19f8bfa5ae11093ebefd28699453ce86fc710869e155cd4\", size \"20778248\" in 2.661173921s" Aug 19 08:09:07.985670 containerd[1576]: time="2025-08-19T08:09:07.985653009Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.8\" returns image reference \"sha256:2ac266f06c9a5a3d0d20ae482dbccb54d3be454d5ca49f48b528bdf5bae3e908\"" Aug 19 08:09:07.986608 
containerd[1576]: time="2025-08-19T08:09:07.986488345Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.8\"" Aug 19 08:09:09.966263 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2046424863.mount: Deactivated successfully. Aug 19 08:09:10.428562 containerd[1576]: time="2025-08-19T08:09:10.428416287Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:09:10.429173 containerd[1576]: time="2025-08-19T08:09:10.429088568Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.8: active requests=0, bytes read=30897170" Aug 19 08:09:10.430367 containerd[1576]: time="2025-08-19T08:09:10.430328192Z" level=info msg="ImageCreate event name:\"sha256:d7b94972d43c5d6ce8088a8bcd08614a5ecf2bf04166232c688adcd0b8ed4b12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:09:10.432089 containerd[1576]: time="2025-08-19T08:09:10.432055109Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:adc1335b480ddd833aac3b0bd20f68ff0f3c3cf7a0bd337933b006d9f5cec40a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:09:10.432724 containerd[1576]: time="2025-08-19T08:09:10.432666796Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.8\" with image id \"sha256:d7b94972d43c5d6ce8088a8bcd08614a5ecf2bf04166232c688adcd0b8ed4b12\", repo tag \"registry.k8s.io/kube-proxy:v1.32.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:adc1335b480ddd833aac3b0bd20f68ff0f3c3cf7a0bd337933b006d9f5cec40a\", size \"30896189\" in 2.44614072s" Aug 19 08:09:10.432769 containerd[1576]: time="2025-08-19T08:09:10.432720887Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.8\" returns image reference \"sha256:d7b94972d43c5d6ce8088a8bcd08614a5ecf2bf04166232c688adcd0b8ed4b12\"" Aug 19 08:09:10.433392 containerd[1576]: time="2025-08-19T08:09:10.433348183Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Aug 19 08:09:11.024217 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3612838543.mount: Deactivated successfully. 
Aug 19 08:09:11.866656 containerd[1576]: time="2025-08-19T08:09:11.866584048Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:09:11.867503 containerd[1576]: time="2025-08-19T08:09:11.867438240Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Aug 19 08:09:11.868594 containerd[1576]: time="2025-08-19T08:09:11.868531689Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:09:11.871115 containerd[1576]: time="2025-08-19T08:09:11.871065019Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:09:11.872005 containerd[1576]: time="2025-08-19T08:09:11.871966699Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.438569905s" Aug 19 08:09:11.872005 containerd[1576]: time="2025-08-19T08:09:11.872005221Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Aug 19 08:09:11.872595 containerd[1576]: time="2025-08-19T08:09:11.872535255Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Aug 19 08:09:13.212203 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3364670931.mount: Deactivated successfully. 
Aug 19 08:09:13.217782 containerd[1576]: time="2025-08-19T08:09:13.217737278Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 19 08:09:13.218550 containerd[1576]: time="2025-08-19T08:09:13.218510407Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Aug 19 08:09:13.219728 containerd[1576]: time="2025-08-19T08:09:13.219697904Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 19 08:09:13.221590 containerd[1576]: time="2025-08-19T08:09:13.221553863Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 19 08:09:13.222190 containerd[1576]: time="2025-08-19T08:09:13.222141214Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 1.349573709s" Aug 19 08:09:13.222190 containerd[1576]: time="2025-08-19T08:09:13.222182822Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Aug 19 08:09:13.222727 containerd[1576]: time="2025-08-19T08:09:13.222682208Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Aug 19 08:09:13.746070 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1868711729.mount: Deactivated successfully. Aug 19 08:09:15.085031 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Aug 19 08:09:15.088090 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 19 08:09:15.269022 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 19 08:09:15.278491 (kubelet)[2223]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 19 08:09:15.323585 kubelet[2223]: E0819 08:09:15.323510 2223 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 19 08:09:15.327904 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 19 08:09:15.328190 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 19 08:09:15.328633 systemd[1]: kubelet.service: Consumed 213ms CPU time, 109.1M memory peak. 
Aug 19 08:09:16.205191 containerd[1576]: time="2025-08-19T08:09:16.205090018Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:09:16.205969 containerd[1576]: time="2025-08-19T08:09:16.205899715Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682056" Aug 19 08:09:16.207317 containerd[1576]: time="2025-08-19T08:09:16.207260617Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:09:16.210207 containerd[1576]: time="2025-08-19T08:09:16.210169961Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:09:16.211112 containerd[1576]: time="2025-08-19T08:09:16.211060179Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 2.988348787s" Aug 19 08:09:16.211112 containerd[1576]: time="2025-08-19T08:09:16.211095085Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Aug 19 08:09:18.491887 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 19 08:09:18.492134 systemd[1]: kubelet.service: Consumed 213ms CPU time, 109.1M memory peak. Aug 19 08:09:18.494862 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 19 08:09:18.527342 systemd[1]: Reload requested from client PID 2265 ('systemctl') (unit session-7.scope)... Aug 19 08:09:18.527358 systemd[1]: Reloading... Aug 19 08:09:18.617966 zram_generator::config[2311]: No configuration found. Aug 19 08:09:18.917326 systemd[1]: Reloading finished in 389 ms. Aug 19 08:09:18.980655 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Aug 19 08:09:18.980754 systemd[1]: kubelet.service: Failed with result 'signal'. Aug 19 08:09:18.981099 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 19 08:09:18.981142 systemd[1]: kubelet.service: Consumed 160ms CPU time, 98.3M memory peak. Aug 19 08:09:18.982707 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 19 08:09:19.157085 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 19 08:09:19.168245 (kubelet)[2356]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 19 08:09:19.271618 kubelet[2356]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 19 08:09:19.271618 kubelet[2356]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Aug 19 08:09:19.271618 kubelet[2356]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 19 08:09:19.272091 kubelet[2356]: I0819 08:09:19.271954 2356 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 19 08:09:19.664076 kubelet[2356]: I0819 08:09:19.663923 2356 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Aug 19 08:09:19.664076 kubelet[2356]: I0819 08:09:19.663977 2356 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 19 08:09:19.664280 kubelet[2356]: I0819 08:09:19.664255 2356 server.go:954] "Client rotation is on, will bootstrap in background" Aug 19 08:09:19.692374 kubelet[2356]: E0819 08:09:19.692309 2356 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.50:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.50:6443: connect: connection refused" logger="UnhandledError" Aug 19 08:09:19.694439 kubelet[2356]: I0819 08:09:19.694406 2356 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 19 08:09:19.700477 kubelet[2356]: I0819 08:09:19.700407 2356 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Aug 19 08:09:19.707530 kubelet[2356]: I0819 08:09:19.707497 2356 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Aug 19 08:09:19.708672 kubelet[2356]: I0819 08:09:19.708629 2356 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 19 08:09:19.708907 kubelet[2356]: I0819 08:09:19.708664 2356 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Aug 19 08:09:19.709033 kubelet[2356]: I0819 08:09:19.708916 2356 
topology_manager.go:138] "Creating topology manager with none policy" Aug 19 08:09:19.709033 kubelet[2356]: I0819 08:09:19.708926 2356 container_manager_linux.go:304] "Creating device plugin manager" Aug 19 08:09:19.709134 kubelet[2356]: I0819 08:09:19.709109 2356 state_mem.go:36] "Initialized new in-memory state store" Aug 19 08:09:19.711654 kubelet[2356]: I0819 08:09:19.711623 2356 kubelet.go:446] "Attempting to sync node with API server" Aug 19 08:09:19.712999 kubelet[2356]: I0819 08:09:19.712976 2356 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 19 08:09:19.713040 kubelet[2356]: I0819 08:09:19.713010 2356 kubelet.go:352] "Adding apiserver pod source" Aug 19 08:09:19.713040 kubelet[2356]: I0819 08:09:19.713024 2356 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 19 08:09:19.717751 kubelet[2356]: W0819 08:09:19.717638 2356 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.50:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.50:6443: connect: connection refused Aug 19 08:09:19.717751 kubelet[2356]: E0819 08:09:19.717698 2356 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.50:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.50:6443: connect: connection refused" logger="UnhandledError" Aug 19 08:09:19.718957 kubelet[2356]: I0819 08:09:19.718414 2356 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Aug 19 08:09:19.718957 kubelet[2356]: I0819 08:09:19.718793 2356 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Aug 19 08:09:19.719608 kubelet[2356]: W0819 08:09:19.719591 2356 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
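The kubelet above adds its static pod path ("/etc/kubernetes/manifests") and the apiserver pod source while every call to https://10.0.0.50:6443 is still refused: the control-plane pods it is about to start are defined as manifests on disk, not fetched from the API. A minimal sketch of enumerating such a directory, assuming PyYAML is available and using conventional kubeadm-style file names, which this log does not actually show:

# Sketch: list the static pod manifests a kubelet would pick up from disk.
# Assumes PyYAML is installed; file names and layout are illustrative, not from the log.
from pathlib import Path
import yaml

MANIFEST_DIR = Path("/etc/kubernetes/manifests")  # path logged by the kubelet

for manifest in sorted(MANIFEST_DIR.glob("*.yaml")):
    spec = yaml.safe_load(manifest.read_text())
    meta = spec.get("metadata", {})
    print(f"{manifest.name}: {meta.get('namespace', 'default')}/{meta.get('name')}")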
Aug 19 08:09:19.719982 kubelet[2356]: W0819 08:09:19.719903 2356 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.50:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.50:6443: connect: connection refused Aug 19 08:09:19.720026 kubelet[2356]: E0819 08:09:19.720001 2356 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.50:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.50:6443: connect: connection refused" logger="UnhandledError" Aug 19 08:09:19.721652 kubelet[2356]: I0819 08:09:19.721634 2356 watchdog_linux.go:99] "Systemd watchdog is not enabled" Aug 19 08:09:19.721803 kubelet[2356]: I0819 08:09:19.721790 2356 server.go:1287] "Started kubelet" Aug 19 08:09:19.724179 kubelet[2356]: I0819 08:09:19.724150 2356 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Aug 19 08:09:19.725748 kubelet[2356]: I0819 08:09:19.725705 2356 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 19 08:09:19.726117 kubelet[2356]: I0819 08:09:19.726083 2356 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 19 08:09:19.727848 kubelet[2356]: I0819 08:09:19.727823 2356 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 19 08:09:19.728214 kubelet[2356]: I0819 08:09:19.728189 2356 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 19 08:09:19.733451 kubelet[2356]: E0819 08:09:19.727449 2356 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.50:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.50:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.185d1cad2519f17a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-08-19 08:09:19.721738618 +0000 UTC m=+0.495219621,LastTimestamp:2025-08-19 08:09:19.721738618 +0000 UTC m=+0.495219621,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Aug 19 08:09:19.733954 kubelet[2356]: I0819 08:09:19.733781 2356 factory.go:221] Registration of the systemd container factory successfully Aug 19 08:09:19.733954 kubelet[2356]: I0819 08:09:19.733883 2356 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 19 08:09:19.734292 kubelet[2356]: I0819 08:09:19.734261 2356 server.go:479] "Adding debug handlers to kubelet server" Aug 19 08:09:19.735090 kubelet[2356]: I0819 08:09:19.735061 2356 volume_manager.go:297] "Starting Kubelet Volume Manager" Aug 19 08:09:19.735227 kubelet[2356]: E0819 08:09:19.735204 2356 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 19 08:09:19.735606 kubelet[2356]: E0819 08:09:19.735559 2356 controller.go:145] "Failed to ensure lease 
exists, will retry" err="Get \"https://10.0.0.50:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.50:6443: connect: connection refused" interval="200ms" Aug 19 08:09:19.735701 kubelet[2356]: I0819 08:09:19.735672 2356 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Aug 19 08:09:19.735776 kubelet[2356]: I0819 08:09:19.735757 2356 reconciler.go:26] "Reconciler: start to sync state" Aug 19 08:09:19.736015 kubelet[2356]: I0819 08:09:19.735987 2356 factory.go:221] Registration of the containerd container factory successfully Aug 19 08:09:19.736125 kubelet[2356]: W0819 08:09:19.736083 2356 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.50:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.50:6443: connect: connection refused Aug 19 08:09:19.736157 kubelet[2356]: E0819 08:09:19.736131 2356 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.50:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.50:6443: connect: connection refused" logger="UnhandledError" Aug 19 08:09:19.736402 kubelet[2356]: E0819 08:09:19.736363 2356 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 19 08:09:19.747505 kubelet[2356]: I0819 08:09:19.747481 2356 cpu_manager.go:221] "Starting CPU manager" policy="none" Aug 19 08:09:19.747505 kubelet[2356]: I0819 08:09:19.747497 2356 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Aug 19 08:09:19.747585 kubelet[2356]: I0819 08:09:19.747514 2356 state_mem.go:36] "Initialized new in-memory state store" Aug 19 08:09:19.750489 kubelet[2356]: I0819 08:09:19.750289 2356 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 19 08:09:19.752129 kubelet[2356]: I0819 08:09:19.752099 2356 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Aug 19 08:09:19.752129 kubelet[2356]: I0819 08:09:19.752127 2356 status_manager.go:227] "Starting to sync pod status with apiserver" Aug 19 08:09:19.752190 kubelet[2356]: I0819 08:09:19.752151 2356 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
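Every reflector list/watch and the lease request above fail with "dial tcp 10.0.0.50:6443: connect: connection refused": nothing answers on that port yet, because the kubelet itself must first start the kube-apiserver static pod. The lease controller's logged retry interval doubles on each failure (200ms here, then 400ms, 800ms and 1.6s further down). A minimal sketch of that wait-for-apiserver pattern, assuming a plain TCP probe is an acceptable stand-in for the HTTP-level retries the real components perform:

# Sketch: wait for the API server endpoint with a doubling backoff,
# mirroring the 200ms -> 400ms -> 800ms -> 1.6s intervals seen in this log.
import socket
import time

HOST, PORT = "10.0.0.50", 6443   # endpoint taken from the log
interval = 0.2                    # seconds, the first logged retry interval

while True:
    try:
        with socket.create_connection((HOST, PORT), timeout=1.0):
            print("apiserver is accepting connections")
            break
    except OSError as err:
        print(f"connect failed ({err}), retrying in {interval:.1f}s")
        time.sleep(interval)
        interval = min(interval * 2, 7.0)  # cap is illustrative, not from the log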
Aug 19 08:09:19.752190 kubelet[2356]: I0819 08:09:19.752160 2356 kubelet.go:2382] "Starting kubelet main sync loop" Aug 19 08:09:19.752251 kubelet[2356]: E0819 08:09:19.752210 2356 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 19 08:09:19.757121 kubelet[2356]: W0819 08:09:19.757051 2356 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.50:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.50:6443: connect: connection refused Aug 19 08:09:19.757175 kubelet[2356]: E0819 08:09:19.757124 2356 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.50:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.50:6443: connect: connection refused" logger="UnhandledError" Aug 19 08:09:19.836401 kubelet[2356]: E0819 08:09:19.836318 2356 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 19 08:09:19.852692 kubelet[2356]: E0819 08:09:19.852622 2356 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Aug 19 08:09:19.936736 kubelet[2356]: E0819 08:09:19.936552 2356 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 19 08:09:19.936736 kubelet[2356]: E0819 08:09:19.936719 2356 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.50:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.50:6443: connect: connection refused" interval="400ms" Aug 19 08:09:20.037194 kubelet[2356]: E0819 08:09:20.037124 2356 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 19 08:09:20.053411 kubelet[2356]: E0819 08:09:20.053342 2356 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Aug 19 08:09:20.137899 kubelet[2356]: E0819 08:09:20.137820 2356 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 19 08:09:20.208855 kubelet[2356]: I0819 08:09:20.208684 2356 policy_none.go:49] "None policy: Start" Aug 19 08:09:20.208855 kubelet[2356]: I0819 08:09:20.208777 2356 memory_manager.go:186] "Starting memorymanager" policy="None" Aug 19 08:09:20.208855 kubelet[2356]: I0819 08:09:20.208801 2356 state_mem.go:35] "Initializing new in-memory state store" Aug 19 08:09:20.219419 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Aug 19 08:09:20.238478 kubelet[2356]: E0819 08:09:20.238416 2356 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 19 08:09:20.239571 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Aug 19 08:09:20.306320 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
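The three slices just created (kubepods.slice, kubepods-burstable.slice, kubepods-besteffort.slice) are the parent cgroups for the kubelet's QoS classes under the systemd cgroup driver; the per-pod slices that appear next embed the pod UID with dashes escaped to underscores (compare kubepods-besteffort-pod4088e188_09ad_498e_86a1_5edb0c65b88d.slice for kube-proxy later in the log). A small sketch of that naming rule, using UIDs copied from this log:

# Sketch: reproduce the systemd slice names the kubelet creates for pods
# when the cgroup driver is "systemd", as seen in this log.
def pod_slice(qos_class: str, pod_uid: str) -> str:
    # In the usual layout, guaranteed pods sit directly under kubepods.slice;
    # burstable and besteffort pods get an intermediate QoS slice.
    prefix = "kubepods" if qos_class == "guaranteed" else f"kubepods-{qos_class}"
    return f"{prefix}-pod{pod_uid.replace('-', '_')}.slice"

# UIDs below are copied from entries in this log.
print(pod_slice("burstable", "450870d0e20fe1d2814b7fd455cc5a0b"))
print(pod_slice("besteffort", "4088e188-09ad-498e-86a1-5edb0c65b88d"))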
Aug 19 08:09:20.331737 kubelet[2356]: I0819 08:09:20.331693 2356 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 19 08:09:20.332450 kubelet[2356]: I0819 08:09:20.332062 2356 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 19 08:09:20.332450 kubelet[2356]: I0819 08:09:20.332089 2356 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 19 08:09:20.332450 kubelet[2356]: I0819 08:09:20.332437 2356 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 19 08:09:20.333705 kubelet[2356]: E0819 08:09:20.333656 2356 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Aug 19 08:09:20.333890 kubelet[2356]: E0819 08:09:20.333720 2356 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Aug 19 08:09:20.337401 kubelet[2356]: E0819 08:09:20.337364 2356 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.50:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.50:6443: connect: connection refused" interval="800ms" Aug 19 08:09:20.435181 kubelet[2356]: I0819 08:09:20.435121 2356 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Aug 19 08:09:20.435671 kubelet[2356]: E0819 08:09:20.435630 2356 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.50:6443/api/v1/nodes\": dial tcp 10.0.0.50:6443: connect: connection refused" node="localhost" Aug 19 08:09:20.463335 systemd[1]: Created slice kubepods-burstable-pod450870d0e20fe1d2814b7fd455cc5a0b.slice - libcontainer container kubepods-burstable-pod450870d0e20fe1d2814b7fd455cc5a0b.slice. Aug 19 08:09:20.477995 kubelet[2356]: E0819 08:09:20.477925 2356 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Aug 19 08:09:20.481762 systemd[1]: Created slice kubepods-burstable-poda88c9297c136b0f15880bf567e89a977.slice - libcontainer container kubepods-burstable-poda88c9297c136b0f15880bf567e89a977.slice. Aug 19 08:09:20.483926 kubelet[2356]: E0819 08:09:20.483883 2356 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Aug 19 08:09:20.486160 systemd[1]: Created slice kubepods-burstable-poda9176403b596d0b29ae8ad12d635226d.slice - libcontainer container kubepods-burstable-poda9176403b596d0b29ae8ad12d635226d.slice. 
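The eviction manager control loop that starts above enforces the hard-eviction thresholds listed in the nodeConfig dumped earlier in this boot: memory.available below 100Mi, nodefs.available below 10%, nodefs.inodesFree below 5%, imagefs.available below 15% and imagefs.inodesFree below 5%. A minimal sketch of that signal check; only the thresholds come from the log, the node_stats figures are invented for illustration:

# Sketch: evaluate the hard-eviction thresholds from the logged nodeConfig
# against hypothetical node statistics.
MI = 1024 * 1024

thresholds = {                        # values copied from the logged nodeConfig
    "memory.available":   ("quantity", 100 * MI),
    "nodefs.available":   ("percentage", 0.10),
    "nodefs.inodesFree":  ("percentage", 0.05),
    "imagefs.available":  ("percentage", 0.15),
    "imagefs.inodesFree": ("percentage", 0.05),
}

node_stats = {                        # illustrative numbers only: (available, capacity)
    "memory.available":   (80 * MI, 8 * 1024 * MI),
    "nodefs.available":   (12 * 1024 * MI, 40 * 1024 * MI),
    "nodefs.inodesFree":  (900_000, 2_500_000),
    "imagefs.available":  (5 * 1024 * MI, 40 * 1024 * MI),
    "imagefs.inodesFree": (200_000, 2_500_000),
}

for signal, (kind, limit) in thresholds.items():
    available, capacity = node_stats[signal]
    observed = available if kind == "quantity" else available / capacity
    if observed < limit:
        print(f"{signal}: below threshold, eviction would trigger")
    else:
        print(f"{signal}: ok")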
Aug 19 08:09:20.487739 kubelet[2356]: E0819 08:09:20.487708 2356 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Aug 19 08:09:20.540626 kubelet[2356]: I0819 08:09:20.540571 2356 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a9176403b596d0b29ae8ad12d635226d-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a9176403b596d0b29ae8ad12d635226d\") " pod="kube-system/kube-scheduler-localhost" Aug 19 08:09:20.540626 kubelet[2356]: I0819 08:09:20.540621 2356 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/450870d0e20fe1d2814b7fd455cc5a0b-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"450870d0e20fe1d2814b7fd455cc5a0b\") " pod="kube-system/kube-apiserver-localhost" Aug 19 08:09:20.540818 kubelet[2356]: I0819 08:09:20.540642 2356 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/450870d0e20fe1d2814b7fd455cc5a0b-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"450870d0e20fe1d2814b7fd455cc5a0b\") " pod="kube-system/kube-apiserver-localhost" Aug 19 08:09:20.540818 kubelet[2356]: I0819 08:09:20.540662 2356 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Aug 19 08:09:20.540818 kubelet[2356]: I0819 08:09:20.540681 2356 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Aug 19 08:09:20.540818 kubelet[2356]: I0819 08:09:20.540694 2356 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Aug 19 08:09:20.540818 kubelet[2356]: I0819 08:09:20.540713 2356 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/450870d0e20fe1d2814b7fd455cc5a0b-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"450870d0e20fe1d2814b7fd455cc5a0b\") " pod="kube-system/kube-apiserver-localhost" Aug 19 08:09:20.540979 kubelet[2356]: I0819 08:09:20.540728 2356 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Aug 19 08:09:20.540979 kubelet[2356]: I0819 08:09:20.540749 2356 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Aug 19 08:09:20.637162 kubelet[2356]: I0819 08:09:20.637128 2356 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Aug 19 08:09:20.637532 kubelet[2356]: E0819 08:09:20.637495 2356 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.50:6443/api/v1/nodes\": dial tcp 10.0.0.50:6443: connect: connection refused" node="localhost" Aug 19 08:09:20.810249 containerd[1576]: time="2025-08-19T08:09:20.809764257Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:450870d0e20fe1d2814b7fd455cc5a0b,Namespace:kube-system,Attempt:0,}" Aug 19 08:09:20.810602 containerd[1576]: time="2025-08-19T08:09:20.810438792Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a9176403b596d0b29ae8ad12d635226d,Namespace:kube-system,Attempt:0,}" Aug 19 08:09:20.810602 containerd[1576]: time="2025-08-19T08:09:20.810468938Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:a88c9297c136b0f15880bf567e89a977,Namespace:kube-system,Attempt:0,}" Aug 19 08:09:20.811033 kubelet[2356]: W0819 08:09:20.810977 2356 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.50:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.50:6443: connect: connection refused Aug 19 08:09:20.811085 kubelet[2356]: E0819 08:09:20.811048 2356 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.50:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.50:6443: connect: connection refused" logger="UnhandledError" Aug 19 08:09:20.864972 containerd[1576]: time="2025-08-19T08:09:20.863879354Z" level=info msg="connecting to shim eb3de0c47a056a69c3d5f10302ea144f43f2a27af2f8443156393f1d188037fd" address="unix:///run/containerd/s/749bfc211e1add23b3bc21b9174b40815f95e29f92a5b3e862cf7d1fea62bc72" namespace=k8s.io protocol=ttrpc version=3 Aug 19 08:09:20.879876 kubelet[2356]: W0819 08:09:20.879803 2356 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.50:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.50:6443: connect: connection refused Aug 19 08:09:20.880022 kubelet[2356]: E0819 08:09:20.880001 2356 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.50:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.50:6443: connect: connection refused" logger="UnhandledError" Aug 19 08:09:20.891957 containerd[1576]: time="2025-08-19T08:09:20.891900804Z" level=info msg="connecting to shim b64570efed9dfad8ff16fcbc0f9c6be893b89463f0e15a897e9d999c0449f835" address="unix:///run/containerd/s/574d50cef3659b7c8079524e8f8669596bd2e1c856d4f5887decbded5261681f" namespace=k8s.io protocol=ttrpc version=3 Aug 19 08:09:20.894803 containerd[1576]: time="2025-08-19T08:09:20.894758792Z" level=info msg="connecting to shim 
84a83442a4e607bf4e06dcfff9f24f20ec5426be61accf8f50e22de2a8397a4d" address="unix:///run/containerd/s/ce33facc7335a2b239b8a406fcb795fa47842647fb854c8f7b48972094f0482e" namespace=k8s.io protocol=ttrpc version=3 Aug 19 08:09:20.966139 systemd[1]: Started cri-containerd-b64570efed9dfad8ff16fcbc0f9c6be893b89463f0e15a897e9d999c0449f835.scope - libcontainer container b64570efed9dfad8ff16fcbc0f9c6be893b89463f0e15a897e9d999c0449f835. Aug 19 08:09:20.968065 systemd[1]: Started cri-containerd-eb3de0c47a056a69c3d5f10302ea144f43f2a27af2f8443156393f1d188037fd.scope - libcontainer container eb3de0c47a056a69c3d5f10302ea144f43f2a27af2f8443156393f1d188037fd. Aug 19 08:09:20.975270 systemd[1]: Started cri-containerd-84a83442a4e607bf4e06dcfff9f24f20ec5426be61accf8f50e22de2a8397a4d.scope - libcontainer container 84a83442a4e607bf4e06dcfff9f24f20ec5426be61accf8f50e22de2a8397a4d. Aug 19 08:09:21.038648 kubelet[2356]: I0819 08:09:21.038526 2356 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Aug 19 08:09:21.039863 kubelet[2356]: E0819 08:09:21.039815 2356 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.50:6443/api/v1/nodes\": dial tcp 10.0.0.50:6443: connect: connection refused" node="localhost" Aug 19 08:09:21.063457 kubelet[2356]: W0819 08:09:21.063337 2356 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.50:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.50:6443: connect: connection refused Aug 19 08:09:21.063457 kubelet[2356]: E0819 08:09:21.063403 2356 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.50:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.50:6443: connect: connection refused" logger="UnhandledError" Aug 19 08:09:21.124411 kubelet[2356]: W0819 08:09:21.124368 2356 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.50:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.50:6443: connect: connection refused Aug 19 08:09:21.124472 kubelet[2356]: E0819 08:09:21.124415 2356 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.50:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.50:6443: connect: connection refused" logger="UnhandledError" Aug 19 08:09:21.138256 kubelet[2356]: E0819 08:09:21.138195 2356 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.50:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.50:6443: connect: connection refused" interval="1.6s" Aug 19 08:09:21.157916 containerd[1576]: time="2025-08-19T08:09:21.157861425Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:450870d0e20fe1d2814b7fd455cc5a0b,Namespace:kube-system,Attempt:0,} returns sandbox id \"b64570efed9dfad8ff16fcbc0f9c6be893b89463f0e15a897e9d999c0449f835\"" Aug 19 08:09:21.159153 containerd[1576]: time="2025-08-19T08:09:21.159109234Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a9176403b596d0b29ae8ad12d635226d,Namespace:kube-system,Attempt:0,} returns sandbox id 
\"eb3de0c47a056a69c3d5f10302ea144f43f2a27af2f8443156393f1d188037fd\"" Aug 19 08:09:21.160492 containerd[1576]: time="2025-08-19T08:09:21.160356342Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:a88c9297c136b0f15880bf567e89a977,Namespace:kube-system,Attempt:0,} returns sandbox id \"84a83442a4e607bf4e06dcfff9f24f20ec5426be61accf8f50e22de2a8397a4d\"" Aug 19 08:09:21.161376 containerd[1576]: time="2025-08-19T08:09:21.160958160Z" level=info msg="CreateContainer within sandbox \"b64570efed9dfad8ff16fcbc0f9c6be893b89463f0e15a897e9d999c0449f835\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Aug 19 08:09:21.161624 containerd[1576]: time="2025-08-19T08:09:21.161596878Z" level=info msg="CreateContainer within sandbox \"eb3de0c47a056a69c3d5f10302ea144f43f2a27af2f8443156393f1d188037fd\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Aug 19 08:09:21.162453 containerd[1576]: time="2025-08-19T08:09:21.162425611Z" level=info msg="CreateContainer within sandbox \"84a83442a4e607bf4e06dcfff9f24f20ec5426be61accf8f50e22de2a8397a4d\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Aug 19 08:09:21.175641 containerd[1576]: time="2025-08-19T08:09:21.175609526Z" level=info msg="Container 5f7226404651a65a1892e6417959e4c1862c0d5f35616e2d99e7c471a4988961: CDI devices from CRI Config.CDIDevices: []" Aug 19 08:09:21.176369 containerd[1576]: time="2025-08-19T08:09:21.176336038Z" level=info msg="Container 41c27a4e4504266b3990b73197edd8c3a14130ec5f857289aa92d8797c89b726: CDI devices from CRI Config.CDIDevices: []" Aug 19 08:09:21.179456 containerd[1576]: time="2025-08-19T08:09:21.179411935Z" level=info msg="Container 98b6d6f1402ef4c33287196ba296fde3995177357efffd778621956f3802cc70: CDI devices from CRI Config.CDIDevices: []" Aug 19 08:09:21.181115 kubelet[2356]: E0819 08:09:21.181008 2356 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.50:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.50:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.185d1cad2519f17a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-08-19 08:09:19.721738618 +0000 UTC m=+0.495219621,LastTimestamp:2025-08-19 08:09:19.721738618 +0000 UTC m=+0.495219621,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Aug 19 08:09:21.185340 containerd[1576]: time="2025-08-19T08:09:21.185279624Z" level=info msg="CreateContainer within sandbox \"eb3de0c47a056a69c3d5f10302ea144f43f2a27af2f8443156393f1d188037fd\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"41c27a4e4504266b3990b73197edd8c3a14130ec5f857289aa92d8797c89b726\"" Aug 19 08:09:21.185862 containerd[1576]: time="2025-08-19T08:09:21.185832601Z" level=info msg="StartContainer for \"41c27a4e4504266b3990b73197edd8c3a14130ec5f857289aa92d8797c89b726\"" Aug 19 08:09:21.187157 containerd[1576]: time="2025-08-19T08:09:21.187114003Z" level=info msg="connecting to shim 41c27a4e4504266b3990b73197edd8c3a14130ec5f857289aa92d8797c89b726" address="unix:///run/containerd/s/749bfc211e1add23b3bc21b9174b40815f95e29f92a5b3e862cf7d1fea62bc72" protocol=ttrpc version=3 Aug 19 
08:09:21.192458 containerd[1576]: time="2025-08-19T08:09:21.192417726Z" level=info msg="CreateContainer within sandbox \"b64570efed9dfad8ff16fcbc0f9c6be893b89463f0e15a897e9d999c0449f835\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"5f7226404651a65a1892e6417959e4c1862c0d5f35616e2d99e7c471a4988961\"" Aug 19 08:09:21.192909 containerd[1576]: time="2025-08-19T08:09:21.192875334Z" level=info msg="StartContainer for \"5f7226404651a65a1892e6417959e4c1862c0d5f35616e2d99e7c471a4988961\"" Aug 19 08:09:21.194136 containerd[1576]: time="2025-08-19T08:09:21.194111381Z" level=info msg="connecting to shim 5f7226404651a65a1892e6417959e4c1862c0d5f35616e2d99e7c471a4988961" address="unix:///run/containerd/s/574d50cef3659b7c8079524e8f8669596bd2e1c856d4f5887decbded5261681f" protocol=ttrpc version=3 Aug 19 08:09:21.196150 containerd[1576]: time="2025-08-19T08:09:21.196114516Z" level=info msg="CreateContainer within sandbox \"84a83442a4e607bf4e06dcfff9f24f20ec5426be61accf8f50e22de2a8397a4d\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"98b6d6f1402ef4c33287196ba296fde3995177357efffd778621956f3802cc70\"" Aug 19 08:09:21.196581 containerd[1576]: time="2025-08-19T08:09:21.196527510Z" level=info msg="StartContainer for \"98b6d6f1402ef4c33287196ba296fde3995177357efffd778621956f3802cc70\"" Aug 19 08:09:21.197570 containerd[1576]: time="2025-08-19T08:09:21.197545959Z" level=info msg="connecting to shim 98b6d6f1402ef4c33287196ba296fde3995177357efffd778621956f3802cc70" address="unix:///run/containerd/s/ce33facc7335a2b239b8a406fcb795fa47842647fb854c8f7b48972094f0482e" protocol=ttrpc version=3 Aug 19 08:09:21.229079 systemd[1]: Started cri-containerd-98b6d6f1402ef4c33287196ba296fde3995177357efffd778621956f3802cc70.scope - libcontainer container 98b6d6f1402ef4c33287196ba296fde3995177357efffd778621956f3802cc70. Aug 19 08:09:21.238091 systemd[1]: Started cri-containerd-41c27a4e4504266b3990b73197edd8c3a14130ec5f857289aa92d8797c89b726.scope - libcontainer container 41c27a4e4504266b3990b73197edd8c3a14130ec5f857289aa92d8797c89b726. Aug 19 08:09:21.239637 systemd[1]: Started cri-containerd-5f7226404651a65a1892e6417959e4c1862c0d5f35616e2d99e7c471a4988961.scope - libcontainer container 5f7226404651a65a1892e6417959e4c1862c0d5f35616e2d99e7c471a4988961. 
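In the block above each container connects to a shim over the same unix socket as its pod's sandbox: sandbox b64570ef… (kube-apiserver) and container 5f722640… both use the socket ending in 5261681f, and the scheduler and controller-manager pairs repeat the pattern. A small sketch that groups the "connecting to shim" entries by address to make that pairing visible; the two sample lines are trimmed copies (timestamp and level removed) of entries from this log:

# Sketch: group containerd's "connecting to shim <id> address=..." entries
# by unix socket to show that a pod's containers reuse the sandbox's shim.
import re
from collections import defaultdict

LOG_LINES = [
    # trimmed copies of two entries from this log (kube-apiserver pod)
    'connecting to shim b64570efed9dfad8ff16fcbc0f9c6be893b89463f0e15a897e9d999c0449f835 address="unix:///run/containerd/s/574d50cef3659b7c8079524e8f8669596bd2e1c856d4f5887decbded5261681f"',
    'connecting to shim 5f7226404651a65a1892e6417959e4c1862c0d5f35616e2d99e7c471a4988961 address="unix:///run/containerd/s/574d50cef3659b7c8079524e8f8669596bd2e1c856d4f5887decbded5261681f"',
]

pattern = re.compile(r'connecting to shim (\S+) address="([^"]+)"')
by_socket = defaultdict(list)
for line in LOG_LINES:
    match = pattern.search(line)
    if match:
        shim_id, address = match.groups()
        by_socket[address].append(shim_id[:12])

for address, ids in by_socket.items():
    print(address.rsplit("/", 1)[-1][:12], "->", ids)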
Aug 19 08:09:21.292900 containerd[1576]: time="2025-08-19T08:09:21.292791280Z" level=info msg="StartContainer for \"41c27a4e4504266b3990b73197edd8c3a14130ec5f857289aa92d8797c89b726\" returns successfully" Aug 19 08:09:21.302274 containerd[1576]: time="2025-08-19T08:09:21.302232649Z" level=info msg="StartContainer for \"98b6d6f1402ef4c33287196ba296fde3995177357efffd778621956f3802cc70\" returns successfully" Aug 19 08:09:21.304287 containerd[1576]: time="2025-08-19T08:09:21.304234562Z" level=info msg="StartContainer for \"5f7226404651a65a1892e6417959e4c1862c0d5f35616e2d99e7c471a4988961\" returns successfully" Aug 19 08:09:21.765228 kubelet[2356]: E0819 08:09:21.765186 2356 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Aug 19 08:09:21.768784 kubelet[2356]: E0819 08:09:21.768725 2356 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Aug 19 08:09:21.770519 kubelet[2356]: E0819 08:09:21.770496 2356 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Aug 19 08:09:21.843005 kubelet[2356]: I0819 08:09:21.841601 2356 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Aug 19 08:09:22.745079 kubelet[2356]: E0819 08:09:22.744980 2356 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Aug 19 08:09:22.773030 kubelet[2356]: E0819 08:09:22.772985 2356 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Aug 19 08:09:22.774967 kubelet[2356]: E0819 08:09:22.773564 2356 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Aug 19 08:09:22.820054 kubelet[2356]: I0819 08:09:22.820000 2356 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Aug 19 08:09:22.820054 kubelet[2356]: E0819 08:09:22.820040 2356 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Aug 19 08:09:22.828505 kubelet[2356]: E0819 08:09:22.828471 2356 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 19 08:09:22.929171 kubelet[2356]: E0819 08:09:22.929115 2356 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 19 08:09:23.029921 kubelet[2356]: E0819 08:09:23.029755 2356 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 19 08:09:23.131021 kubelet[2356]: E0819 08:09:23.130966 2356 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 19 08:09:23.231956 kubelet[2356]: E0819 08:09:23.231919 2356 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 19 08:09:23.332609 kubelet[2356]: E0819 08:09:23.332474 2356 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 19 08:09:23.433266 kubelet[2356]: E0819 08:09:23.433204 2356 kubelet_node_status.go:466] "Error getting the current node from lister" 
err="node \"localhost\" not found" Aug 19 08:09:23.533837 kubelet[2356]: E0819 08:09:23.533794 2356 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 19 08:09:23.634462 kubelet[2356]: E0819 08:09:23.634366 2356 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 19 08:09:23.734612 kubelet[2356]: E0819 08:09:23.734532 2356 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 19 08:09:23.774750 kubelet[2356]: E0819 08:09:23.774699 2356 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Aug 19 08:09:23.797705 kubelet[2356]: E0819 08:09:23.797656 2356 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Aug 19 08:09:23.835009 kubelet[2356]: E0819 08:09:23.834961 2356 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 19 08:09:23.935677 kubelet[2356]: E0819 08:09:23.935524 2356 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 19 08:09:24.036044 kubelet[2356]: E0819 08:09:24.035977 2356 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 19 08:09:24.136642 kubelet[2356]: E0819 08:09:24.136514 2356 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 19 08:09:24.237431 kubelet[2356]: E0819 08:09:24.237294 2356 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 19 08:09:24.336059 kubelet[2356]: I0819 08:09:24.336012 2356 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Aug 19 08:09:24.344779 kubelet[2356]: I0819 08:09:24.344729 2356 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Aug 19 08:09:24.348509 kubelet[2356]: I0819 08:09:24.348483 2356 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Aug 19 08:09:24.602969 systemd[1]: Reload requested from client PID 2629 ('systemctl') (unit session-7.scope)... Aug 19 08:09:24.602986 systemd[1]: Reloading... Aug 19 08:09:24.716582 kubelet[2356]: I0819 08:09:24.716514 2356 apiserver.go:52] "Watching apiserver" Aug 19 08:09:24.736696 kubelet[2356]: I0819 08:09:24.736652 2356 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Aug 19 08:09:24.802526 zram_generator::config[2672]: No configuration found. Aug 19 08:09:25.036266 systemd[1]: Reloading finished in 432 ms. Aug 19 08:09:25.072732 kubelet[2356]: I0819 08:09:25.072670 2356 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 19 08:09:25.073045 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Aug 19 08:09:25.098323 systemd[1]: kubelet.service: Deactivated successfully. Aug 19 08:09:25.098648 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 19 08:09:25.098716 systemd[1]: kubelet.service: Consumed 1.041s CPU time, 132.9M memory peak. 
Aug 19 08:09:25.100784 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 19 08:09:25.317303 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 19 08:09:25.326324 (kubelet)[2717]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 19 08:09:25.364273 kubelet[2717]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 19 08:09:25.364273 kubelet[2717]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Aug 19 08:09:25.364273 kubelet[2717]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 19 08:09:25.364735 kubelet[2717]: I0819 08:09:25.364308 2717 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 19 08:09:25.371584 kubelet[2717]: I0819 08:09:25.371541 2717 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Aug 19 08:09:25.371584 kubelet[2717]: I0819 08:09:25.371571 2717 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 19 08:09:25.371858 kubelet[2717]: I0819 08:09:25.371834 2717 server.go:954] "Client rotation is on, will bootstrap in background" Aug 19 08:09:25.373085 kubelet[2717]: I0819 08:09:25.373065 2717 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Aug 19 08:09:25.375169 kubelet[2717]: I0819 08:09:25.375113 2717 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 19 08:09:25.379147 kubelet[2717]: I0819 08:09:25.379117 2717 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Aug 19 08:09:25.384650 kubelet[2717]: I0819 08:09:25.384607 2717 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Aug 19 08:09:25.388180 kubelet[2717]: I0819 08:09:25.388144 2717 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 19 08:09:25.388402 kubelet[2717]: I0819 08:09:25.388177 2717 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Aug 19 08:09:25.388494 kubelet[2717]: I0819 08:09:25.388415 2717 topology_manager.go:138] "Creating topology manager with none policy" Aug 19 08:09:25.388494 kubelet[2717]: I0819 08:09:25.388426 2717 container_manager_linux.go:304] "Creating device plugin manager" Aug 19 08:09:25.388494 kubelet[2717]: I0819 08:09:25.388484 2717 state_mem.go:36] "Initialized new in-memory state store" Aug 19 08:09:25.388686 kubelet[2717]: I0819 08:09:25.388665 2717 kubelet.go:446] "Attempting to sync node with API server" Aug 19 08:09:25.388723 kubelet[2717]: I0819 08:09:25.388694 2717 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 19 08:09:25.388723 kubelet[2717]: I0819 08:09:25.388719 2717 kubelet.go:352] "Adding apiserver pod source" Aug 19 08:09:25.388774 kubelet[2717]: I0819 08:09:25.388731 2717 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 19 08:09:25.390962 kubelet[2717]: I0819 08:09:25.389614 2717 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Aug 19 08:09:25.390962 kubelet[2717]: I0819 08:09:25.389985 2717 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Aug 19 08:09:25.390962 kubelet[2717]: I0819 08:09:25.390430 2717 watchdog_linux.go:99] "Systemd watchdog is not enabled" Aug 19 08:09:25.390962 kubelet[2717]: I0819 08:09:25.390460 2717 server.go:1287] "Started kubelet" Aug 19 08:09:25.390962 kubelet[2717]: I0819 08:09:25.390549 2717 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Aug 19 08:09:25.390962 kubelet[2717]: I0819 08:09:25.390684 2717 ratelimit.go:55] "Setting 
rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 19 08:09:25.391132 kubelet[2717]: I0819 08:09:25.390984 2717 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 19 08:09:25.391526 kubelet[2717]: I0819 08:09:25.391497 2717 server.go:479] "Adding debug handlers to kubelet server" Aug 19 08:09:25.393857 kubelet[2717]: I0819 08:09:25.393825 2717 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 19 08:09:25.399575 kubelet[2717]: I0819 08:09:25.399267 2717 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 19 08:09:25.403192 kubelet[2717]: I0819 08:09:25.402915 2717 volume_manager.go:297] "Starting Kubelet Volume Manager" Aug 19 08:09:25.403192 kubelet[2717]: E0819 08:09:25.403071 2717 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 19 08:09:25.403452 kubelet[2717]: I0819 08:09:25.403424 2717 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Aug 19 08:09:25.403751 kubelet[2717]: I0819 08:09:25.403701 2717 reconciler.go:26] "Reconciler: start to sync state" Aug 19 08:09:25.403751 kubelet[2717]: I0819 08:09:25.404007 2717 factory.go:221] Registration of the systemd container factory successfully Aug 19 08:09:25.403751 kubelet[2717]: I0819 08:09:25.404126 2717 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 19 08:09:25.405775 kubelet[2717]: I0819 08:09:25.405758 2717 factory.go:221] Registration of the containerd container factory successfully Aug 19 08:09:25.410916 kubelet[2717]: I0819 08:09:25.410871 2717 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 19 08:09:25.412330 kubelet[2717]: I0819 08:09:25.412286 2717 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Aug 19 08:09:25.412330 kubelet[2717]: I0819 08:09:25.412316 2717 status_manager.go:227] "Starting to sync pod status with apiserver" Aug 19 08:09:25.412330 kubelet[2717]: I0819 08:09:25.412333 2717 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
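Unlike the first kubelet instance, which could not even submit its certificate signing request, the restarted kubelet above loads an existing client certificate from /var/lib/kubelet/pki/kubelet-client-current.pem, so client rotation proceeds without bootstrapping. That file is normally a symlink the certificate manager repoints at the newest rotated key pair; a small sketch for inspecting it on the node, assuming root access to /var/lib/kubelet/pki:

# Sketch: inspect the kubelet's rotated client certificate link on the node.
# Assumes read access to /var/lib/kubelet/pki; typically run as root on the host.
from pathlib import Path
import datetime

current = Path("/var/lib/kubelet/pki/kubelet-client-current.pem")  # path from the log

if current.is_symlink():
    target = current.resolve()
    mtime = datetime.datetime.fromtimestamp(target.stat().st_mtime)
    print(f"{current} -> {target.name} (written {mtime:%Y-%m-%d %H:%M})")
else:
    print(f"{current} is a regular file of {current.stat().st_size} bytes")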
Aug 19 08:09:25.412487 kubelet[2717]: I0819 08:09:25.412340 2717 kubelet.go:2382] "Starting kubelet main sync loop" Aug 19 08:09:25.412487 kubelet[2717]: E0819 08:09:25.412395 2717 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 19 08:09:25.439778 kubelet[2717]: I0819 08:09:25.439749 2717 cpu_manager.go:221] "Starting CPU manager" policy="none" Aug 19 08:09:25.439778 kubelet[2717]: I0819 08:09:25.439764 2717 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Aug 19 08:09:25.439778 kubelet[2717]: I0819 08:09:25.439782 2717 state_mem.go:36] "Initialized new in-memory state store" Aug 19 08:09:25.439997 kubelet[2717]: I0819 08:09:25.439972 2717 state_mem.go:88] "Updated default CPUSet" cpuSet="" Aug 19 08:09:25.440033 kubelet[2717]: I0819 08:09:25.439991 2717 state_mem.go:96] "Updated CPUSet assignments" assignments={} Aug 19 08:09:25.440033 kubelet[2717]: I0819 08:09:25.440010 2717 policy_none.go:49] "None policy: Start" Aug 19 08:09:25.440033 kubelet[2717]: I0819 08:09:25.440019 2717 memory_manager.go:186] "Starting memorymanager" policy="None" Aug 19 08:09:25.440033 kubelet[2717]: I0819 08:09:25.440030 2717 state_mem.go:35] "Initializing new in-memory state store" Aug 19 08:09:25.440157 kubelet[2717]: I0819 08:09:25.440141 2717 state_mem.go:75] "Updated machine memory state" Aug 19 08:09:25.443914 kubelet[2717]: I0819 08:09:25.443885 2717 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 19 08:09:25.444218 kubelet[2717]: I0819 08:09:25.444061 2717 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 19 08:09:25.444218 kubelet[2717]: I0819 08:09:25.444077 2717 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 19 08:09:25.444290 kubelet[2717]: I0819 08:09:25.444227 2717 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 19 08:09:25.445345 kubelet[2717]: E0819 08:09:25.445315 2717 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Aug 19 08:09:25.513621 kubelet[2717]: I0819 08:09:25.513557 2717 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Aug 19 08:09:25.513768 kubelet[2717]: I0819 08:09:25.513661 2717 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Aug 19 08:09:25.514015 kubelet[2717]: I0819 08:09:25.513971 2717 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Aug 19 08:09:25.519809 kubelet[2717]: E0819 08:09:25.519732 2717 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Aug 19 08:09:25.520288 kubelet[2717]: E0819 08:09:25.520240 2717 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Aug 19 08:09:25.520453 kubelet[2717]: E0819 08:09:25.520320 2717 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Aug 19 08:09:25.548327 kubelet[2717]: I0819 08:09:25.548295 2717 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Aug 19 08:09:25.553352 kubelet[2717]: I0819 08:09:25.553317 2717 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Aug 19 08:09:25.553407 kubelet[2717]: I0819 08:09:25.553384 2717 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Aug 19 08:09:25.705684 kubelet[2717]: I0819 08:09:25.705497 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/450870d0e20fe1d2814b7fd455cc5a0b-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"450870d0e20fe1d2814b7fd455cc5a0b\") " pod="kube-system/kube-apiserver-localhost" Aug 19 08:09:25.705684 kubelet[2717]: I0819 08:09:25.705550 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/450870d0e20fe1d2814b7fd455cc5a0b-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"450870d0e20fe1d2814b7fd455cc5a0b\") " pod="kube-system/kube-apiserver-localhost" Aug 19 08:09:25.705867 kubelet[2717]: I0819 08:09:25.705599 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Aug 19 08:09:25.706010 kubelet[2717]: I0819 08:09:25.705981 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Aug 19 08:09:25.706058 kubelet[2717]: I0819 08:09:25.706037 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: 
\"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Aug 19 08:09:25.706123 kubelet[2717]: I0819 08:09:25.706068 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Aug 19 08:09:25.706123 kubelet[2717]: I0819 08:09:25.706105 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a9176403b596d0b29ae8ad12d635226d-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a9176403b596d0b29ae8ad12d635226d\") " pod="kube-system/kube-scheduler-localhost" Aug 19 08:09:25.707021 kubelet[2717]: I0819 08:09:25.706124 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/450870d0e20fe1d2814b7fd455cc5a0b-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"450870d0e20fe1d2814b7fd455cc5a0b\") " pod="kube-system/kube-apiserver-localhost" Aug 19 08:09:25.707021 kubelet[2717]: I0819 08:09:25.706161 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Aug 19 08:09:26.389765 kubelet[2717]: I0819 08:09:26.389720 2717 apiserver.go:52] "Watching apiserver" Aug 19 08:09:26.403723 kubelet[2717]: I0819 08:09:26.403672 2717 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Aug 19 08:09:26.427886 kubelet[2717]: I0819 08:09:26.427846 2717 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Aug 19 08:09:26.428273 kubelet[2717]: I0819 08:09:26.428047 2717 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Aug 19 08:09:26.521628 kubelet[2717]: E0819 08:09:26.521565 2717 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Aug 19 08:09:26.521794 kubelet[2717]: E0819 08:09:26.521570 2717 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Aug 19 08:09:26.539772 kubelet[2717]: I0819 08:09:26.539677 2717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.539647723 podStartE2EDuration="2.539647723s" podCreationTimestamp="2025-08-19 08:09:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-19 08:09:26.538810461 +0000 UTC m=+1.208490096" watchObservedRunningTime="2025-08-19 08:09:26.539647723 +0000 UTC m=+1.209327368" Aug 19 08:09:26.546873 kubelet[2717]: I0819 08:09:26.546818 2717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.546800193 podStartE2EDuration="2.546800193s" podCreationTimestamp="2025-08-19 08:09:24 +0000 
UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-19 08:09:26.546111807 +0000 UTC m=+1.215791422" watchObservedRunningTime="2025-08-19 08:09:26.546800193 +0000 UTC m=+1.216479818" Aug 19 08:09:26.562460 kubelet[2717]: I0819 08:09:26.562386 2717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.562362927 podStartE2EDuration="2.562362927s" podCreationTimestamp="2025-08-19 08:09:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-19 08:09:26.554717452 +0000 UTC m=+1.224397077" watchObservedRunningTime="2025-08-19 08:09:26.562362927 +0000 UTC m=+1.232042552" Aug 19 08:09:30.896051 kubelet[2717]: I0819 08:09:30.895997 2717 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Aug 19 08:09:30.896589 containerd[1576]: time="2025-08-19T08:09:30.896406822Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Aug 19 08:09:30.896834 kubelet[2717]: I0819 08:09:30.896706 2717 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Aug 19 08:09:31.584785 systemd[1]: Created slice kubepods-besteffort-pod4088e188_09ad_498e_86a1_5edb0c65b88d.slice - libcontainer container kubepods-besteffort-pod4088e188_09ad_498e_86a1_5edb0c65b88d.slice. Aug 19 08:09:31.644217 kubelet[2717]: I0819 08:09:31.644143 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/4088e188-09ad-498e-86a1-5edb0c65b88d-kube-proxy\") pod \"kube-proxy-4kpbk\" (UID: \"4088e188-09ad-498e-86a1-5edb0c65b88d\") " pod="kube-system/kube-proxy-4kpbk" Aug 19 08:09:31.644217 kubelet[2717]: I0819 08:09:31.644180 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4088e188-09ad-498e-86a1-5edb0c65b88d-xtables-lock\") pod \"kube-proxy-4kpbk\" (UID: \"4088e188-09ad-498e-86a1-5edb0c65b88d\") " pod="kube-system/kube-proxy-4kpbk" Aug 19 08:09:31.644217 kubelet[2717]: I0819 08:09:31.644199 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4088e188-09ad-498e-86a1-5edb0c65b88d-lib-modules\") pod \"kube-proxy-4kpbk\" (UID: \"4088e188-09ad-498e-86a1-5edb0c65b88d\") " pod="kube-system/kube-proxy-4kpbk" Aug 19 08:09:31.644217 kubelet[2717]: I0819 08:09:31.644227 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wln8s\" (UniqueName: \"kubernetes.io/projected/4088e188-09ad-498e-86a1-5edb0c65b88d-kube-api-access-wln8s\") pod \"kube-proxy-4kpbk\" (UID: \"4088e188-09ad-498e-86a1-5edb0c65b88d\") " pod="kube-system/kube-proxy-4kpbk" Aug 19 08:09:31.895657 containerd[1576]: time="2025-08-19T08:09:31.895505202Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4kpbk,Uid:4088e188-09ad-498e-86a1-5edb0c65b88d,Namespace:kube-system,Attempt:0,}" Aug 19 08:09:31.921339 containerd[1576]: time="2025-08-19T08:09:31.921261009Z" level=info msg="connecting to shim 71598c018e36a80df3ca6b8fff2de69cbea5ed59cc27c4d6e019ab3e5fd18857" 
address="unix:///run/containerd/s/16555f7aab7349d3617705e1ec90d8bdb2d849cdda6ffd78145e2652668d7ad7" namespace=k8s.io protocol=ttrpc version=3 Aug 19 08:09:31.960823 systemd[1]: Started cri-containerd-71598c018e36a80df3ca6b8fff2de69cbea5ed59cc27c4d6e019ab3e5fd18857.scope - libcontainer container 71598c018e36a80df3ca6b8fff2de69cbea5ed59cc27c4d6e019ab3e5fd18857. Aug 19 08:09:32.023434 systemd[1]: Created slice kubepods-besteffort-pod5de598f7_0036_4f6f_aa65_1b7b302dd397.slice - libcontainer container kubepods-besteffort-pod5de598f7_0036_4f6f_aa65_1b7b302dd397.slice. Aug 19 08:09:32.027227 containerd[1576]: time="2025-08-19T08:09:32.027149882Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4kpbk,Uid:4088e188-09ad-498e-86a1-5edb0c65b88d,Namespace:kube-system,Attempt:0,} returns sandbox id \"71598c018e36a80df3ca6b8fff2de69cbea5ed59cc27c4d6e019ab3e5fd18857\"" Aug 19 08:09:32.032423 containerd[1576]: time="2025-08-19T08:09:32.032380805Z" level=info msg="CreateContainer within sandbox \"71598c018e36a80df3ca6b8fff2de69cbea5ed59cc27c4d6e019ab3e5fd18857\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Aug 19 08:09:32.044722 containerd[1576]: time="2025-08-19T08:09:32.044681191Z" level=info msg="Container 8b3307f1fe11e468f3bf012193d00005ad99f2746445cbb9cdcafbad2bfe9c89: CDI devices from CRI Config.CDIDevices: []" Aug 19 08:09:32.047831 kubelet[2717]: I0819 08:09:32.047789 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/5de598f7-0036-4f6f-aa65-1b7b302dd397-var-lib-calico\") pod \"tigera-operator-747864d56d-g9g5q\" (UID: \"5de598f7-0036-4f6f-aa65-1b7b302dd397\") " pod="tigera-operator/tigera-operator-747864d56d-g9g5q" Aug 19 08:09:32.048166 kubelet[2717]: I0819 08:09:32.047850 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rc6nt\" (UniqueName: \"kubernetes.io/projected/5de598f7-0036-4f6f-aa65-1b7b302dd397-kube-api-access-rc6nt\") pod \"tigera-operator-747864d56d-g9g5q\" (UID: \"5de598f7-0036-4f6f-aa65-1b7b302dd397\") " pod="tigera-operator/tigera-operator-747864d56d-g9g5q" Aug 19 08:09:32.052586 containerd[1576]: time="2025-08-19T08:09:32.052549320Z" level=info msg="CreateContainer within sandbox \"71598c018e36a80df3ca6b8fff2de69cbea5ed59cc27c4d6e019ab3e5fd18857\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"8b3307f1fe11e468f3bf012193d00005ad99f2746445cbb9cdcafbad2bfe9c89\"" Aug 19 08:09:32.053108 containerd[1576]: time="2025-08-19T08:09:32.053064038Z" level=info msg="StartContainer for \"8b3307f1fe11e468f3bf012193d00005ad99f2746445cbb9cdcafbad2bfe9c89\"" Aug 19 08:09:32.054636 containerd[1576]: time="2025-08-19T08:09:32.054607500Z" level=info msg="connecting to shim 8b3307f1fe11e468f3bf012193d00005ad99f2746445cbb9cdcafbad2bfe9c89" address="unix:///run/containerd/s/16555f7aab7349d3617705e1ec90d8bdb2d849cdda6ffd78145e2652668d7ad7" protocol=ttrpc version=3 Aug 19 08:09:32.080071 systemd[1]: Started cri-containerd-8b3307f1fe11e468f3bf012193d00005ad99f2746445cbb9cdcafbad2bfe9c89.scope - libcontainer container 8b3307f1fe11e468f3bf012193d00005ad99f2746445cbb9cdcafbad2bfe9c89. 
Aug 19 08:09:32.127512 containerd[1576]: time="2025-08-19T08:09:32.127450703Z" level=info msg="StartContainer for \"8b3307f1fe11e468f3bf012193d00005ad99f2746445cbb9cdcafbad2bfe9c89\" returns successfully" Aug 19 08:09:32.331410 containerd[1576]: time="2025-08-19T08:09:32.331243360Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-g9g5q,Uid:5de598f7-0036-4f6f-aa65-1b7b302dd397,Namespace:tigera-operator,Attempt:0,}" Aug 19 08:09:32.358268 containerd[1576]: time="2025-08-19T08:09:32.358180369Z" level=info msg="connecting to shim d6925460d1cb02de055beb4c7222aad059a36d27135f528ccce4d5312646f895" address="unix:///run/containerd/s/5240dd529b6394aab443bfb50c0b53305bc479e99bf069f071bcfbefa332a782" namespace=k8s.io protocol=ttrpc version=3 Aug 19 08:09:32.390159 systemd[1]: Started cri-containerd-d6925460d1cb02de055beb4c7222aad059a36d27135f528ccce4d5312646f895.scope - libcontainer container d6925460d1cb02de055beb4c7222aad059a36d27135f528ccce4d5312646f895. Aug 19 08:09:32.442272 containerd[1576]: time="2025-08-19T08:09:32.442214964Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-g9g5q,Uid:5de598f7-0036-4f6f-aa65-1b7b302dd397,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"d6925460d1cb02de055beb4c7222aad059a36d27135f528ccce4d5312646f895\"" Aug 19 08:09:32.444457 containerd[1576]: time="2025-08-19T08:09:32.444421590Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\"" Aug 19 08:09:32.451855 kubelet[2717]: I0819 08:09:32.451650 2717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-4kpbk" podStartSLOduration=1.4516236 podStartE2EDuration="1.4516236s" podCreationTimestamp="2025-08-19 08:09:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-19 08:09:32.450977999 +0000 UTC m=+7.120657624" watchObservedRunningTime="2025-08-19 08:09:32.4516236 +0000 UTC m=+7.121303226" Aug 19 08:09:34.270586 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount690576562.mount: Deactivated successfully. 
Aug 19 08:09:35.176820 containerd[1576]: time="2025-08-19T08:09:35.176755688Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:09:35.177442 containerd[1576]: time="2025-08-19T08:09:35.177389139Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=25056543" Aug 19 08:09:35.178485 containerd[1576]: time="2025-08-19T08:09:35.178449204Z" level=info msg="ImageCreate event name:\"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:09:35.180368 containerd[1576]: time="2025-08-19T08:09:35.180334576Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:09:35.180947 containerd[1576]: time="2025-08-19T08:09:35.180897695Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"25052538\" in 2.736353566s" Aug 19 08:09:35.180981 containerd[1576]: time="2025-08-19T08:09:35.180961454Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\"" Aug 19 08:09:35.183048 containerd[1576]: time="2025-08-19T08:09:35.183015721Z" level=info msg="CreateContainer within sandbox \"d6925460d1cb02de055beb4c7222aad059a36d27135f528ccce4d5312646f895\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Aug 19 08:09:35.191838 containerd[1576]: time="2025-08-19T08:09:35.191797673Z" level=info msg="Container a670f7932fae8e16bd189c5a24920ead597d8165d421d1141735bdb9763ff9a9: CDI devices from CRI Config.CDIDevices: []" Aug 19 08:09:35.199083 containerd[1576]: time="2025-08-19T08:09:35.199048201Z" level=info msg="CreateContainer within sandbox \"d6925460d1cb02de055beb4c7222aad059a36d27135f528ccce4d5312646f895\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"a670f7932fae8e16bd189c5a24920ead597d8165d421d1141735bdb9763ff9a9\"" Aug 19 08:09:35.202070 containerd[1576]: time="2025-08-19T08:09:35.202033593Z" level=info msg="StartContainer for \"a670f7932fae8e16bd189c5a24920ead597d8165d421d1141735bdb9763ff9a9\"" Aug 19 08:09:35.202851 containerd[1576]: time="2025-08-19T08:09:35.202818505Z" level=info msg="connecting to shim a670f7932fae8e16bd189c5a24920ead597d8165d421d1141735bdb9763ff9a9" address="unix:///run/containerd/s/5240dd529b6394aab443bfb50c0b53305bc479e99bf069f071bcfbefa332a782" protocol=ttrpc version=3 Aug 19 08:09:35.252079 systemd[1]: Started cri-containerd-a670f7932fae8e16bd189c5a24920ead597d8165d421d1141735bdb9763ff9a9.scope - libcontainer container a670f7932fae8e16bd189c5a24920ead597d8165d421d1141735bdb9763ff9a9. 
Aug 19 08:09:35.283310 containerd[1576]: time="2025-08-19T08:09:35.283266115Z" level=info msg="StartContainer for \"a670f7932fae8e16bd189c5a24920ead597d8165d421d1141735bdb9763ff9a9\" returns successfully" Aug 19 08:09:36.214663 kubelet[2717]: I0819 08:09:36.214571 2717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-747864d56d-g9g5q" podStartSLOduration=2.47635661 podStartE2EDuration="5.21454917s" podCreationTimestamp="2025-08-19 08:09:31 +0000 UTC" firstStartedPulling="2025-08-19 08:09:32.443525293 +0000 UTC m=+7.113204918" lastFinishedPulling="2025-08-19 08:09:35.181717863 +0000 UTC m=+9.851397478" observedRunningTime="2025-08-19 08:09:35.45787222 +0000 UTC m=+10.127551835" watchObservedRunningTime="2025-08-19 08:09:36.21454917 +0000 UTC m=+10.884228795" Aug 19 08:09:36.363165 update_engine[1563]: I20250819 08:09:36.363046 1563 update_attempter.cc:509] Updating boot flags... Aug 19 08:09:40.526314 sudo[1783]: pam_unix(sudo:session): session closed for user root Aug 19 08:09:40.527821 sshd[1782]: Connection closed by 10.0.0.1 port 49088 Aug 19 08:09:40.530260 sshd-session[1779]: pam_unix(sshd:session): session closed for user core Aug 19 08:09:40.537694 systemd[1]: sshd@6-10.0.0.50:22-10.0.0.1:49088.service: Deactivated successfully. Aug 19 08:09:40.543886 systemd[1]: session-7.scope: Deactivated successfully. Aug 19 08:09:40.544378 systemd[1]: session-7.scope: Consumed 4.949s CPU time, 227.2M memory peak. Aug 19 08:09:40.546413 systemd-logind[1555]: Session 7 logged out. Waiting for processes to exit. Aug 19 08:09:40.550930 systemd-logind[1555]: Removed session 7. Aug 19 08:09:43.044966 systemd[1]: Created slice kubepods-besteffort-podf81a677d_3a72_41e3_a3b9_51e034833831.slice - libcontainer container kubepods-besteffort-podf81a677d_3a72_41e3_a3b9_51e034833831.slice. Aug 19 08:09:43.128819 kubelet[2717]: I0819 08:09:43.128629 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f81a677d-3a72-41e3-a3b9-51e034833831-tigera-ca-bundle\") pod \"calico-typha-85566df5dc-5qs7r\" (UID: \"f81a677d-3a72-41e3-a3b9-51e034833831\") " pod="calico-system/calico-typha-85566df5dc-5qs7r" Aug 19 08:09:43.129748 kubelet[2717]: I0819 08:09:43.128928 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/f81a677d-3a72-41e3-a3b9-51e034833831-typha-certs\") pod \"calico-typha-85566df5dc-5qs7r\" (UID: \"f81a677d-3a72-41e3-a3b9-51e034833831\") " pod="calico-system/calico-typha-85566df5dc-5qs7r" Aug 19 08:09:43.129748 kubelet[2717]: I0819 08:09:43.129127 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9nq5\" (UniqueName: \"kubernetes.io/projected/f81a677d-3a72-41e3-a3b9-51e034833831-kube-api-access-q9nq5\") pod \"calico-typha-85566df5dc-5qs7r\" (UID: \"f81a677d-3a72-41e3-a3b9-51e034833831\") " pod="calico-system/calico-typha-85566df5dc-5qs7r" Aug 19 08:09:43.351272 containerd[1576]: time="2025-08-19T08:09:43.351101414Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-85566df5dc-5qs7r,Uid:f81a677d-3a72-41e3-a3b9-51e034833831,Namespace:calico-system,Attempt:0,}" Aug 19 08:09:43.620405 systemd[1]: Created slice kubepods-besteffort-podc4c70585_99f6_4da1_9478_956e1e8b6d51.slice - libcontainer container kubepods-besteffort-podc4c70585_99f6_4da1_9478_956e1e8b6d51.slice. 
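The pod_startup_latency_tracker entry for tigera-operator-747864d56d-g9g5q above is internally consistent: podStartE2EDuration is the gap between podCreationTimestamp (08:09:31) and watchObservedRunningTime (08:09:36.21454917), and podStartSLOduration is that E2E figure minus the image-pull window (firstStartedPulling to lastFinishedPulling). The sketch below re-derives the figures from the timestamps copied out of the log; the field interpretation is an assumption inferred from these values, and it matches them to within rounding.

```go
// Sketch: re-deriving the kubelet pod-startup latency figures from the
// timestamps logged above. The relationship SLO = E2E - image-pull window
// is an assumption that closely matches the logged numbers.
package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	parse := func(s string) time.Time {
		t, err := time.Parse(layout, s)
		if err != nil {
			panic(err)
		}
		return t
	}

	created := parse("2025-08-19 08:09:31 +0000 UTC")
	firstPull := parse("2025-08-19 08:09:32.443525293 +0000 UTC")
	lastPull := parse("2025-08-19 08:09:35.181717863 +0000 UTC")
	observed := parse("2025-08-19 08:09:36.21454917 +0000 UTC")

	e2e := observed.Sub(created)         // logged as podStartE2EDuration=5.21454917s
	slo := e2e - lastPull.Sub(firstPull) // logged as podStartSLOduration=2.47635661

	fmt.Println("E2E:", e2e.Seconds(), "SLO:", slo.Seconds())
}
```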
Aug 19 08:09:43.640575 containerd[1576]: time="2025-08-19T08:09:43.640507528Z" level=info msg="connecting to shim cda25b8368089de7e32ada01e64f331ad96db90b44e8a899301b91e4b0b6a42d" address="unix:///run/containerd/s/b60ef56a760236dd917295b8454ce7572591a79d3f30fb64f987b05f17133ef8" namespace=k8s.io protocol=ttrpc version=3 Aug 19 08:09:43.680093 systemd[1]: Started cri-containerd-cda25b8368089de7e32ada01e64f331ad96db90b44e8a899301b91e4b0b6a42d.scope - libcontainer container cda25b8368089de7e32ada01e64f331ad96db90b44e8a899301b91e4b0b6a42d. Aug 19 08:09:43.726325 kubelet[2717]: E0819 08:09:43.726252 2717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-47j5b" podUID="d91662d8-9a6c-4379-a701-086161e4c200" Aug 19 08:09:43.733264 kubelet[2717]: I0819 08:09:43.733220 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/c4c70585-99f6-4da1-9478-956e1e8b6d51-cni-bin-dir\") pod \"calico-node-v5d6m\" (UID: \"c4c70585-99f6-4da1-9478-956e1e8b6d51\") " pod="calico-system/calico-node-v5d6m" Aug 19 08:09:43.733264 kubelet[2717]: I0819 08:09:43.733259 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/c4c70585-99f6-4da1-9478-956e1e8b6d51-cni-log-dir\") pod \"calico-node-v5d6m\" (UID: \"c4c70585-99f6-4da1-9478-956e1e8b6d51\") " pod="calico-system/calico-node-v5d6m" Aug 19 08:09:43.733393 kubelet[2717]: I0819 08:09:43.733276 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/c4c70585-99f6-4da1-9478-956e1e8b6d51-var-lib-calico\") pod \"calico-node-v5d6m\" (UID: \"c4c70585-99f6-4da1-9478-956e1e8b6d51\") " pod="calico-system/calico-node-v5d6m" Aug 19 08:09:43.733393 kubelet[2717]: I0819 08:09:43.733291 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/c4c70585-99f6-4da1-9478-956e1e8b6d51-var-run-calico\") pod \"calico-node-v5d6m\" (UID: \"c4c70585-99f6-4da1-9478-956e1e8b6d51\") " pod="calico-system/calico-node-v5d6m" Aug 19 08:09:43.733393 kubelet[2717]: I0819 08:09:43.733306 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/c4c70585-99f6-4da1-9478-956e1e8b6d51-flexvol-driver-host\") pod \"calico-node-v5d6m\" (UID: \"c4c70585-99f6-4da1-9478-956e1e8b6d51\") " pod="calico-system/calico-node-v5d6m" Aug 19 08:09:43.733393 kubelet[2717]: I0819 08:09:43.733323 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c4c70585-99f6-4da1-9478-956e1e8b6d51-tigera-ca-bundle\") pod \"calico-node-v5d6m\" (UID: \"c4c70585-99f6-4da1-9478-956e1e8b6d51\") " pod="calico-system/calico-node-v5d6m" Aug 19 08:09:43.733393 kubelet[2717]: I0819 08:09:43.733339 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c4c70585-99f6-4da1-9478-956e1e8b6d51-lib-modules\") pod \"calico-node-v5d6m\" (UID: 
\"c4c70585-99f6-4da1-9478-956e1e8b6d51\") " pod="calico-system/calico-node-v5d6m" Aug 19 08:09:43.733531 kubelet[2717]: I0819 08:09:43.733353 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/c4c70585-99f6-4da1-9478-956e1e8b6d51-node-certs\") pod \"calico-node-v5d6m\" (UID: \"c4c70585-99f6-4da1-9478-956e1e8b6d51\") " pod="calico-system/calico-node-v5d6m" Aug 19 08:09:43.733531 kubelet[2717]: I0819 08:09:43.733368 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-knffk\" (UniqueName: \"kubernetes.io/projected/c4c70585-99f6-4da1-9478-956e1e8b6d51-kube-api-access-knffk\") pod \"calico-node-v5d6m\" (UID: \"c4c70585-99f6-4da1-9478-956e1e8b6d51\") " pod="calico-system/calico-node-v5d6m" Aug 19 08:09:43.733531 kubelet[2717]: I0819 08:09:43.733382 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/c4c70585-99f6-4da1-9478-956e1e8b6d51-policysync\") pod \"calico-node-v5d6m\" (UID: \"c4c70585-99f6-4da1-9478-956e1e8b6d51\") " pod="calico-system/calico-node-v5d6m" Aug 19 08:09:43.733531 kubelet[2717]: I0819 08:09:43.733398 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c4c70585-99f6-4da1-9478-956e1e8b6d51-xtables-lock\") pod \"calico-node-v5d6m\" (UID: \"c4c70585-99f6-4da1-9478-956e1e8b6d51\") " pod="calico-system/calico-node-v5d6m" Aug 19 08:09:43.733531 kubelet[2717]: I0819 08:09:43.733414 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/c4c70585-99f6-4da1-9478-956e1e8b6d51-cni-net-dir\") pod \"calico-node-v5d6m\" (UID: \"c4c70585-99f6-4da1-9478-956e1e8b6d51\") " pod="calico-system/calico-node-v5d6m" Aug 19 08:09:43.739561 containerd[1576]: time="2025-08-19T08:09:43.739520069Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-85566df5dc-5qs7r,Uid:f81a677d-3a72-41e3-a3b9-51e034833831,Namespace:calico-system,Attempt:0,} returns sandbox id \"cda25b8368089de7e32ada01e64f331ad96db90b44e8a899301b91e4b0b6a42d\"" Aug 19 08:09:43.741373 containerd[1576]: time="2025-08-19T08:09:43.741328025Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\"" Aug 19 08:09:43.834379 kubelet[2717]: I0819 08:09:43.834248 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/d91662d8-9a6c-4379-a701-086161e4c200-registration-dir\") pod \"csi-node-driver-47j5b\" (UID: \"d91662d8-9a6c-4379-a701-086161e4c200\") " pod="calico-system/csi-node-driver-47j5b" Aug 19 08:09:43.834379 kubelet[2717]: I0819 08:09:43.834308 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dggv\" (UniqueName: \"kubernetes.io/projected/d91662d8-9a6c-4379-a701-086161e4c200-kube-api-access-2dggv\") pod \"csi-node-driver-47j5b\" (UID: \"d91662d8-9a6c-4379-a701-086161e4c200\") " pod="calico-system/csi-node-driver-47j5b" Aug 19 08:09:43.834379 kubelet[2717]: I0819 08:09:43.834326 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/d91662d8-9a6c-4379-a701-086161e4c200-varrun\") pod 
\"csi-node-driver-47j5b\" (UID: \"d91662d8-9a6c-4379-a701-086161e4c200\") " pod="calico-system/csi-node-driver-47j5b" Aug 19 08:09:43.834684 kubelet[2717]: I0819 08:09:43.834433 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/d91662d8-9a6c-4379-a701-086161e4c200-socket-dir\") pod \"csi-node-driver-47j5b\" (UID: \"d91662d8-9a6c-4379-a701-086161e4c200\") " pod="calico-system/csi-node-driver-47j5b" Aug 19 08:09:43.834684 kubelet[2717]: I0819 08:09:43.834478 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d91662d8-9a6c-4379-a701-086161e4c200-kubelet-dir\") pod \"csi-node-driver-47j5b\" (UID: \"d91662d8-9a6c-4379-a701-086161e4c200\") " pod="calico-system/csi-node-driver-47j5b" Aug 19 08:09:43.836164 kubelet[2717]: E0819 08:09:43.836115 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 08:09:43.836164 kubelet[2717]: W0819 08:09:43.836144 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 08:09:43.836339 kubelet[2717]: E0819 08:09:43.836185 2717 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 08:09:43.838540 kubelet[2717]: E0819 08:09:43.838324 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 08:09:43.838540 kubelet[2717]: W0819 08:09:43.838345 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 08:09:43.838540 kubelet[2717]: E0819 08:09:43.838363 2717 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 08:09:43.843121 kubelet[2717]: E0819 08:09:43.843077 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 08:09:43.843121 kubelet[2717]: W0819 08:09:43.843097 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 08:09:43.843121 kubelet[2717]: E0819 08:09:43.843126 2717 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 19 08:09:43.926122 containerd[1576]: time="2025-08-19T08:09:43.926059396Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-v5d6m,Uid:c4c70585-99f6-4da1-9478-956e1e8b6d51,Namespace:calico-system,Attempt:0,}" Aug 19 08:09:43.936145 kubelet[2717]: E0819 08:09:43.936100 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 08:09:43.936145 kubelet[2717]: W0819 08:09:43.936135 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 08:09:43.936307 kubelet[2717]: E0819 08:09:43.936166 2717 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 08:09:43.936542 kubelet[2717]: E0819 08:09:43.936525 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 08:09:43.936542 kubelet[2717]: W0819 08:09:43.936537 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 08:09:43.936608 kubelet[2717]: E0819 08:09:43.936555 2717 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 08:09:43.936965 kubelet[2717]: E0819 08:09:43.936909 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 08:09:43.936965 kubelet[2717]: W0819 08:09:43.936952 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 08:09:43.937035 kubelet[2717]: E0819 08:09:43.936992 2717 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 08:09:43.937561 kubelet[2717]: E0819 08:09:43.937536 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 08:09:43.937561 kubelet[2717]: W0819 08:09:43.937549 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 08:09:43.937814 kubelet[2717]: E0819 08:09:43.937578 2717 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 19 08:09:43.937879 kubelet[2717]: E0819 08:09:43.937829 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 08:09:43.937879 kubelet[2717]: W0819 08:09:43.937841 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 08:09:43.938022 kubelet[2717]: E0819 08:09:43.937973 2717 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 08:09:43.938364 kubelet[2717]: E0819 08:09:43.938085 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 08:09:43.938364 kubelet[2717]: W0819 08:09:43.938179 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 08:09:43.938364 kubelet[2717]: E0819 08:09:43.938226 2717 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 08:09:43.938475 kubelet[2717]: E0819 08:09:43.938451 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 08:09:43.938475 kubelet[2717]: W0819 08:09:43.938461 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 08:09:43.938525 kubelet[2717]: E0819 08:09:43.938498 2717 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 08:09:43.938703 kubelet[2717]: E0819 08:09:43.938685 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 08:09:43.938703 kubelet[2717]: W0819 08:09:43.938698 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 08:09:43.938703 kubelet[2717]: E0819 08:09:43.938713 2717 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 08:09:43.938703 kubelet[2717]: E0819 08:09:43.938952 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 08:09:43.938703 kubelet[2717]: W0819 08:09:43.938961 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 08:09:43.938703 kubelet[2717]: E0819 08:09:43.938975 2717 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 19 08:09:43.938703 kubelet[2717]: E0819 08:09:43.939173 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 08:09:43.938703 kubelet[2717]: W0819 08:09:43.939186 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 08:09:43.938703 kubelet[2717]: E0819 08:09:43.939205 2717 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 08:09:43.938703 kubelet[2717]: E0819 08:09:43.939396 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 08:09:43.939755 kubelet[2717]: W0819 08:09:43.939405 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 08:09:43.939755 kubelet[2717]: E0819 08:09:43.939423 2717 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 08:09:43.939755 kubelet[2717]: E0819 08:09:43.939623 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 08:09:43.939755 kubelet[2717]: W0819 08:09:43.939633 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 08:09:43.939755 kubelet[2717]: E0819 08:09:43.939650 2717 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 08:09:43.939958 kubelet[2717]: E0819 08:09:43.939925 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 08:09:43.939958 kubelet[2717]: W0819 08:09:43.939948 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 08:09:43.940010 kubelet[2717]: E0819 08:09:43.939980 2717 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 08:09:43.940274 kubelet[2717]: E0819 08:09:43.940252 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 08:09:43.940274 kubelet[2717]: W0819 08:09:43.940268 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 08:09:43.940333 kubelet[2717]: E0819 08:09:43.940299 2717 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 19 08:09:43.940468 kubelet[2717]: E0819 08:09:43.940451 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 08:09:43.940468 kubelet[2717]: W0819 08:09:43.940462 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 08:09:43.940994 kubelet[2717]: E0819 08:09:43.940486 2717 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 08:09:43.940994 kubelet[2717]: E0819 08:09:43.940660 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 08:09:43.940994 kubelet[2717]: W0819 08:09:43.940669 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 08:09:43.940994 kubelet[2717]: E0819 08:09:43.940703 2717 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 08:09:43.940994 kubelet[2717]: E0819 08:09:43.940849 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 08:09:43.940994 kubelet[2717]: W0819 08:09:43.940867 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 08:09:43.940994 kubelet[2717]: E0819 08:09:43.940882 2717 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 08:09:43.941174 kubelet[2717]: E0819 08:09:43.941122 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 08:09:43.941174 kubelet[2717]: W0819 08:09:43.941130 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 08:09:43.941174 kubelet[2717]: E0819 08:09:43.941146 2717 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 08:09:43.941344 kubelet[2717]: E0819 08:09:43.941307 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 08:09:43.941344 kubelet[2717]: W0819 08:09:43.941321 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 08:09:43.941344 kubelet[2717]: E0819 08:09:43.941334 2717 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 19 08:09:43.941536 kubelet[2717]: E0819 08:09:43.941518 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 08:09:43.941536 kubelet[2717]: W0819 08:09:43.941530 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 08:09:43.941581 kubelet[2717]: E0819 08:09:43.941545 2717 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 08:09:43.941766 kubelet[2717]: E0819 08:09:43.941748 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 08:09:43.941766 kubelet[2717]: W0819 08:09:43.941759 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 08:09:43.942377 kubelet[2717]: E0819 08:09:43.941772 2717 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 08:09:43.942377 kubelet[2717]: E0819 08:09:43.941976 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 08:09:43.942377 kubelet[2717]: W0819 08:09:43.941986 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 08:09:43.942377 kubelet[2717]: E0819 08:09:43.942001 2717 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 08:09:43.942377 kubelet[2717]: E0819 08:09:43.942190 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 08:09:43.942377 kubelet[2717]: W0819 08:09:43.942202 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 08:09:43.942377 kubelet[2717]: E0819 08:09:43.942246 2717 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 08:09:43.942562 kubelet[2717]: E0819 08:09:43.942554 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 08:09:43.942591 kubelet[2717]: W0819 08:09:43.942563 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 08:09:43.942591 kubelet[2717]: E0819 08:09:43.942579 2717 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 19 08:09:43.942830 kubelet[2717]: E0819 08:09:43.942807 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 08:09:43.942830 kubelet[2717]: W0819 08:09:43.942824 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 08:09:43.942894 kubelet[2717]: E0819 08:09:43.942834 2717 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 08:09:43.951907 containerd[1576]: time="2025-08-19T08:09:43.951848965Z" level=info msg="connecting to shim cc806d2ede8a140ca63feea732737e1dd51ec3b6a7b266c8ee516081eb8e715d" address="unix:///run/containerd/s/04eee60702d81f642af8b994a4b8756d48edda21a5739b10b9f7eb8ea576aa59" namespace=k8s.io protocol=ttrpc version=3 Aug 19 08:09:43.955051 kubelet[2717]: E0819 08:09:43.954991 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 08:09:43.955051 kubelet[2717]: W0819 08:09:43.955015 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 08:09:43.955222 kubelet[2717]: E0819 08:09:43.955145 2717 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 08:09:43.979171 systemd[1]: Started cri-containerd-cc806d2ede8a140ca63feea732737e1dd51ec3b6a7b266c8ee516081eb8e715d.scope - libcontainer container cc806d2ede8a140ca63feea732737e1dd51ec3b6a7b266c8ee516081eb8e715d. Aug 19 08:09:44.013825 containerd[1576]: time="2025-08-19T08:09:44.013759913Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-v5d6m,Uid:c4c70585-99f6-4da1-9478-956e1e8b6d51,Namespace:calico-system,Attempt:0,} returns sandbox id \"cc806d2ede8a140ca63feea732737e1dd51ec3b6a7b266c8ee516081eb8e715d\"" Aug 19 08:09:45.413365 kubelet[2717]: E0819 08:09:45.413304 2717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-47j5b" podUID="d91662d8-9a6c-4379-a701-086161e4c200" Aug 19 08:09:46.781616 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1716581629.mount: Deactivated successfully. 
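The repeating pair of errors above comes from the kubelet's FlexVolume prober: the nodeagent~uds driver binary is absent (exec reports "executable file not found in $PATH"), the driver call therefore produces no output, and unmarshalling that empty output as JSON fails with "unexpected end of JSON input". Both messages are easy to reproduce with the Go standard library, as in the sketch below; the command name used is a placeholder, not the real driver path.

```go
// Sketch: reproducing the two error strings the FlexVolume prober logs
// repeatedly above. "no-such-flexvolume-driver" is a placeholder binary name.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

func main() {
	// A driver binary that cannot be found fails exactly like the log:
	// "executable file not found in $PATH".
	out, err := exec.Command("no-such-flexvolume-driver", "init").Output()
	fmt.Println("exec error:", err)

	// The call produced no output, so decoding it as a JSON driver status
	// fails with "unexpected end of JSON input", the second logged error.
	var status map[string]interface{}
	if err := json.Unmarshal(out, &status); err != nil {
		fmt.Println("unmarshal error:", err)
	}
}
```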
Aug 19 08:09:47.111258 containerd[1576]: time="2025-08-19T08:09:47.111092328Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:09:47.111925 containerd[1576]: time="2025-08-19T08:09:47.111887235Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=35233364" Aug 19 08:09:47.113049 containerd[1576]: time="2025-08-19T08:09:47.113000245Z" level=info msg="ImageCreate event name:\"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:09:47.114736 containerd[1576]: time="2025-08-19T08:09:47.114705884Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:09:47.115269 containerd[1576]: time="2025-08-19T08:09:47.115235413Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"35233218\" in 3.373858086s" Aug 19 08:09:47.115269 containerd[1576]: time="2025-08-19T08:09:47.115265309Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\"" Aug 19 08:09:47.116594 containerd[1576]: time="2025-08-19T08:09:47.116561993Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Aug 19 08:09:47.125543 containerd[1576]: time="2025-08-19T08:09:47.125481817Z" level=info msg="CreateContainer within sandbox \"cda25b8368089de7e32ada01e64f331ad96db90b44e8a899301b91e4b0b6a42d\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Aug 19 08:09:47.135427 containerd[1576]: time="2025-08-19T08:09:47.135383867Z" level=info msg="Container 15ef8585dbdaaf9cdec7d55dee699142b17b47480571251eddf62fc4fa156e16: CDI devices from CRI Config.CDIDevices: []" Aug 19 08:09:47.142994 containerd[1576]: time="2025-08-19T08:09:47.142932738Z" level=info msg="CreateContainer within sandbox \"cda25b8368089de7e32ada01e64f331ad96db90b44e8a899301b91e4b0b6a42d\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"15ef8585dbdaaf9cdec7d55dee699142b17b47480571251eddf62fc4fa156e16\"" Aug 19 08:09:47.143590 containerd[1576]: time="2025-08-19T08:09:47.143355327Z" level=info msg="StartContainer for \"15ef8585dbdaaf9cdec7d55dee699142b17b47480571251eddf62fc4fa156e16\"" Aug 19 08:09:47.144523 containerd[1576]: time="2025-08-19T08:09:47.144492914Z" level=info msg="connecting to shim 15ef8585dbdaaf9cdec7d55dee699142b17b47480571251eddf62fc4fa156e16" address="unix:///run/containerd/s/b60ef56a760236dd917295b8454ce7572591a79d3f30fb64f987b05f17133ef8" protocol=ttrpc version=3 Aug 19 08:09:47.168117 systemd[1]: Started cri-containerd-15ef8585dbdaaf9cdec7d55dee699142b17b47480571251eddf62fc4fa156e16.scope - libcontainer container 15ef8585dbdaaf9cdec7d55dee699142b17b47480571251eddf62fc4fa156e16. 
Aug 19 08:09:47.413546 kubelet[2717]: E0819 08:09:47.413415 2717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-47j5b" podUID="d91662d8-9a6c-4379-a701-086161e4c200" Aug 19 08:09:47.479660 containerd[1576]: time="2025-08-19T08:09:47.479617106Z" level=info msg="StartContainer for \"15ef8585dbdaaf9cdec7d55dee699142b17b47480571251eddf62fc4fa156e16\" returns successfully" Aug 19 08:09:48.494984 kubelet[2717]: I0819 08:09:48.494882 2717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-85566df5dc-5qs7r" podStartSLOduration=2.119539247 podStartE2EDuration="5.494860998s" podCreationTimestamp="2025-08-19 08:09:43 +0000 UTC" firstStartedPulling="2025-08-19 08:09:43.740847428 +0000 UTC m=+18.410527053" lastFinishedPulling="2025-08-19 08:09:47.116169179 +0000 UTC m=+21.785848804" observedRunningTime="2025-08-19 08:09:48.494016349 +0000 UTC m=+23.163695994" watchObservedRunningTime="2025-08-19 08:09:48.494860998 +0000 UTC m=+23.164540623" Aug 19 08:09:48.528396 kubelet[2717]: E0819 08:09:48.528323 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 08:09:48.528736 kubelet[2717]: W0819 08:09:48.528468 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 08:09:48.528736 kubelet[2717]: E0819 08:09:48.528495 2717 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 08:09:48.528856 kubelet[2717]: E0819 08:09:48.528820 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 08:09:48.528898 kubelet[2717]: W0819 08:09:48.528855 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 08:09:48.528898 kubelet[2717]: E0819 08:09:48.528889 2717 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 08:09:48.529219 kubelet[2717]: E0819 08:09:48.529185 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 08:09:48.529219 kubelet[2717]: W0819 08:09:48.529200 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 08:09:48.529219 kubelet[2717]: E0819 08:09:48.529210 2717 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 19 08:09:48.529538 kubelet[2717]: E0819 08:09:48.529519 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 08:09:48.529538 kubelet[2717]: W0819 08:09:48.529535 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 08:09:48.529636 kubelet[2717]: E0819 08:09:48.529545 2717 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 08:09:48.530302 kubelet[2717]: E0819 08:09:48.530209 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 08:09:48.530302 kubelet[2717]: W0819 08:09:48.530300 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 08:09:48.530405 kubelet[2717]: E0819 08:09:48.530365 2717 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 08:09:48.530971 kubelet[2717]: E0819 08:09:48.530623 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 08:09:48.530971 kubelet[2717]: W0819 08:09:48.530633 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 08:09:48.530971 kubelet[2717]: E0819 08:09:48.530645 2717 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 08:09:48.530971 kubelet[2717]: E0819 08:09:48.530881 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 08:09:48.530971 kubelet[2717]: W0819 08:09:48.530898 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 08:09:48.530971 kubelet[2717]: E0819 08:09:48.530963 2717 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 08:09:48.531290 kubelet[2717]: E0819 08:09:48.531177 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 08:09:48.531290 kubelet[2717]: W0819 08:09:48.531187 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 08:09:48.531290 kubelet[2717]: E0819 08:09:48.531199 2717 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 19 08:09:48.531401 kubelet[2717]: E0819 08:09:48.531391 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 08:09:48.531440 kubelet[2717]: W0819 08:09:48.531402 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 08:09:48.531440 kubelet[2717]: E0819 08:09:48.531413 2717 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 08:09:48.531709 kubelet[2717]: E0819 08:09:48.531681 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 08:09:48.531709 kubelet[2717]: W0819 08:09:48.531697 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 08:09:48.531825 kubelet[2717]: E0819 08:09:48.531721 2717 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 08:09:48.532003 kubelet[2717]: E0819 08:09:48.531981 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 08:09:48.532003 kubelet[2717]: W0819 08:09:48.531995 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 08:09:48.532079 kubelet[2717]: E0819 08:09:48.532008 2717 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 08:09:48.532257 kubelet[2717]: E0819 08:09:48.532227 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 08:09:48.532257 kubelet[2717]: W0819 08:09:48.532239 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 08:09:48.532257 kubelet[2717]: E0819 08:09:48.532248 2717 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 08:09:48.532483 kubelet[2717]: E0819 08:09:48.532469 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 08:09:48.532483 kubelet[2717]: W0819 08:09:48.532478 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 08:09:48.532559 kubelet[2717]: E0819 08:09:48.532488 2717 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 19 08:09:48.532731 kubelet[2717]: E0819 08:09:48.532681 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 08:09:48.532731 kubelet[2717]: W0819 08:09:48.532693 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 08:09:48.532731 kubelet[2717]: E0819 08:09:48.532710 2717 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 08:09:48.532905 kubelet[2717]: E0819 08:09:48.532883 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 08:09:48.532905 kubelet[2717]: W0819 08:09:48.532893 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 08:09:48.532905 kubelet[2717]: E0819 08:09:48.532904 2717 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 08:09:48.569654 kubelet[2717]: E0819 08:09:48.569613 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 08:09:48.569654 kubelet[2717]: W0819 08:09:48.569632 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 08:09:48.569654 kubelet[2717]: E0819 08:09:48.569650 2717 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 08:09:48.569926 kubelet[2717]: E0819 08:09:48.569902 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 08:09:48.569926 kubelet[2717]: W0819 08:09:48.569914 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 08:09:48.569926 kubelet[2717]: E0819 08:09:48.569927 2717 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 08:09:48.570182 kubelet[2717]: E0819 08:09:48.570158 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 08:09:48.570182 kubelet[2717]: W0819 08:09:48.570173 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 08:09:48.570282 kubelet[2717]: E0819 08:09:48.570193 2717 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 19 08:09:48.570443 kubelet[2717]: E0819 08:09:48.570423 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 08:09:48.570443 kubelet[2717]: W0819 08:09:48.570435 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 08:09:48.570519 kubelet[2717]: E0819 08:09:48.570453 2717 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 08:09:48.570654 kubelet[2717]: E0819 08:09:48.570634 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 08:09:48.570654 kubelet[2717]: W0819 08:09:48.570646 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 08:09:48.570760 kubelet[2717]: E0819 08:09:48.570664 2717 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 08:09:48.570881 kubelet[2717]: E0819 08:09:48.570862 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 08:09:48.570881 kubelet[2717]: W0819 08:09:48.570877 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 08:09:48.570984 kubelet[2717]: E0819 08:09:48.570895 2717 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 08:09:48.571134 kubelet[2717]: E0819 08:09:48.571115 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 08:09:48.571134 kubelet[2717]: W0819 08:09:48.571127 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 08:09:48.571219 kubelet[2717]: E0819 08:09:48.571168 2717 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 08:09:48.571322 kubelet[2717]: E0819 08:09:48.571302 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 08:09:48.571322 kubelet[2717]: W0819 08:09:48.571314 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 08:09:48.571395 kubelet[2717]: E0819 08:09:48.571341 2717 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 19 08:09:48.571541 kubelet[2717]: E0819 08:09:48.571521 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 08:09:48.571541 kubelet[2717]: W0819 08:09:48.571533 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 08:09:48.571625 kubelet[2717]: E0819 08:09:48.571551 2717 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 08:09:48.571824 kubelet[2717]: E0819 08:09:48.571801 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 08:09:48.571824 kubelet[2717]: W0819 08:09:48.571820 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 08:09:48.571887 kubelet[2717]: E0819 08:09:48.571838 2717 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 08:09:48.572053 kubelet[2717]: E0819 08:09:48.572032 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 08:09:48.572053 kubelet[2717]: W0819 08:09:48.572044 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 08:09:48.572143 kubelet[2717]: E0819 08:09:48.572059 2717 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 08:09:48.572266 kubelet[2717]: E0819 08:09:48.572250 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 08:09:48.572266 kubelet[2717]: W0819 08:09:48.572261 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 08:09:48.572329 kubelet[2717]: E0819 08:09:48.572278 2717 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 08:09:48.572588 kubelet[2717]: E0819 08:09:48.572567 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 08:09:48.572588 kubelet[2717]: W0819 08:09:48.572581 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 08:09:48.572673 kubelet[2717]: E0819 08:09:48.572600 2717 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 19 08:09:48.572815 kubelet[2717]: E0819 08:09:48.572797 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 08:09:48.572815 kubelet[2717]: W0819 08:09:48.572810 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 08:09:48.572874 kubelet[2717]: E0819 08:09:48.572825 2717 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 08:09:48.573027 kubelet[2717]: E0819 08:09:48.573008 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 08:09:48.573027 kubelet[2717]: W0819 08:09:48.573019 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 08:09:48.573115 kubelet[2717]: E0819 08:09:48.573035 2717 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 08:09:48.573240 kubelet[2717]: E0819 08:09:48.573213 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 08:09:48.573240 kubelet[2717]: W0819 08:09:48.573225 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 08:09:48.573240 kubelet[2717]: E0819 08:09:48.573240 2717 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 08:09:48.573542 kubelet[2717]: E0819 08:09:48.573523 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 08:09:48.573542 kubelet[2717]: W0819 08:09:48.573537 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 08:09:48.573632 kubelet[2717]: E0819 08:09:48.573556 2717 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 08:09:48.573760 kubelet[2717]: E0819 08:09:48.573740 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 08:09:48.573760 kubelet[2717]: W0819 08:09:48.573752 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 08:09:48.573760 kubelet[2717]: E0819 08:09:48.573761 2717 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 19 08:09:49.413547 kubelet[2717]: E0819 08:09:49.413486 2717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-47j5b" podUID="d91662d8-9a6c-4379-a701-086161e4c200" Aug 19 08:09:49.538621 kubelet[2717]: E0819 08:09:49.538583 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 08:09:49.538621 kubelet[2717]: W0819 08:09:49.538607 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 08:09:49.538621 kubelet[2717]: E0819 08:09:49.538628 2717 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 08:09:49.539203 kubelet[2717]: E0819 08:09:49.538868 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 08:09:49.539203 kubelet[2717]: W0819 08:09:49.538876 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 08:09:49.539203 kubelet[2717]: E0819 08:09:49.538885 2717 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 08:09:49.539203 kubelet[2717]: E0819 08:09:49.539058 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 08:09:49.539203 kubelet[2717]: W0819 08:09:49.539065 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 08:09:49.539203 kubelet[2717]: E0819 08:09:49.539072 2717 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 08:09:49.539382 kubelet[2717]: E0819 08:09:49.539217 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 08:09:49.539382 kubelet[2717]: W0819 08:09:49.539225 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 08:09:49.539382 kubelet[2717]: E0819 08:09:49.539232 2717 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 19 08:09:49.539382 kubelet[2717]: E0819 08:09:49.539375 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 08:09:49.539382 kubelet[2717]: W0819 08:09:49.539382 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 08:09:49.539523 kubelet[2717]: E0819 08:09:49.539390 2717 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 08:09:49.539523 kubelet[2717]: E0819 08:09:49.539518 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 08:09:49.539580 kubelet[2717]: W0819 08:09:49.539525 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 08:09:49.539580 kubelet[2717]: E0819 08:09:49.539532 2717 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 08:09:49.539680 kubelet[2717]: E0819 08:09:49.539657 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 08:09:49.539680 kubelet[2717]: W0819 08:09:49.539676 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 08:09:49.539741 kubelet[2717]: E0819 08:09:49.539683 2717 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 08:09:49.539831 kubelet[2717]: E0819 08:09:49.539818 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 08:09:49.539831 kubelet[2717]: W0819 08:09:49.539827 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 08:09:49.539893 kubelet[2717]: E0819 08:09:49.539834 2717 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 08:09:49.540000 kubelet[2717]: E0819 08:09:49.539986 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 08:09:49.540000 kubelet[2717]: W0819 08:09:49.539995 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 08:09:49.540068 kubelet[2717]: E0819 08:09:49.540003 2717 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 19 08:09:49.540149 kubelet[2717]: E0819 08:09:49.540136 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 08:09:49.540149 kubelet[2717]: W0819 08:09:49.540144 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 08:09:49.540208 kubelet[2717]: E0819 08:09:49.540151 2717 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 08:09:49.540294 kubelet[2717]: E0819 08:09:49.540281 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 08:09:49.540294 kubelet[2717]: W0819 08:09:49.540290 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 08:09:49.540355 kubelet[2717]: E0819 08:09:49.540297 2717 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 08:09:49.540443 kubelet[2717]: E0819 08:09:49.540428 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 08:09:49.540443 kubelet[2717]: W0819 08:09:49.540437 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 08:09:49.540502 kubelet[2717]: E0819 08:09:49.540444 2717 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 08:09:49.540598 kubelet[2717]: E0819 08:09:49.540584 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 08:09:49.540598 kubelet[2717]: W0819 08:09:49.540593 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 08:09:49.540660 kubelet[2717]: E0819 08:09:49.540599 2717 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 08:09:49.540766 kubelet[2717]: E0819 08:09:49.540751 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 08:09:49.540766 kubelet[2717]: W0819 08:09:49.540761 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 08:09:49.540824 kubelet[2717]: E0819 08:09:49.540768 2717 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 19 08:09:49.540913 kubelet[2717]: E0819 08:09:49.540899 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 08:09:49.540913 kubelet[2717]: W0819 08:09:49.540908 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 08:09:49.540992 kubelet[2717]: E0819 08:09:49.540916 2717 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 08:09:49.576955 kubelet[2717]: E0819 08:09:49.576895 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 08:09:49.576955 kubelet[2717]: W0819 08:09:49.576923 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 08:09:49.577190 kubelet[2717]: E0819 08:09:49.576968 2717 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 08:09:49.577302 kubelet[2717]: E0819 08:09:49.577285 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 08:09:49.577302 kubelet[2717]: W0819 08:09:49.577298 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 08:09:49.577429 kubelet[2717]: E0819 08:09:49.577317 2717 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 08:09:49.577655 kubelet[2717]: E0819 08:09:49.577639 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 08:09:49.577655 kubelet[2717]: W0819 08:09:49.577652 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 08:09:49.577750 kubelet[2717]: E0819 08:09:49.577678 2717 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 08:09:49.577866 kubelet[2717]: E0819 08:09:49.577846 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 08:09:49.577908 kubelet[2717]: W0819 08:09:49.577859 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 08:09:49.577908 kubelet[2717]: E0819 08:09:49.577893 2717 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 19 08:09:49.578164 kubelet[2717]: E0819 08:09:49.578130 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 08:09:49.578164 kubelet[2717]: W0819 08:09:49.578143 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 08:09:49.578164 kubelet[2717]: E0819 08:09:49.578157 2717 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 08:09:49.578820 kubelet[2717]: E0819 08:09:49.578712 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 08:09:49.578820 kubelet[2717]: W0819 08:09:49.578727 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 08:09:49.578899 kubelet[2717]: E0819 08:09:49.578834 2717 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 08:09:49.579206 kubelet[2717]: E0819 08:09:49.579161 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 08:09:49.579206 kubelet[2717]: W0819 08:09:49.579180 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 08:09:49.579286 kubelet[2717]: E0819 08:09:49.579257 2717 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 08:09:49.579500 kubelet[2717]: E0819 08:09:49.579441 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 08:09:49.579500 kubelet[2717]: W0819 08:09:49.579457 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 08:09:49.579500 kubelet[2717]: E0819 08:09:49.579474 2717 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 08:09:49.579794 kubelet[2717]: E0819 08:09:49.579745 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 08:09:49.579794 kubelet[2717]: W0819 08:09:49.579756 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 08:09:49.579794 kubelet[2717]: E0819 08:09:49.579787 2717 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 19 08:09:49.580133 kubelet[2717]: E0819 08:09:49.580103 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 08:09:49.580133 kubelet[2717]: W0819 08:09:49.580122 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 08:09:49.580501 kubelet[2717]: E0819 08:09:49.580474 2717 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 08:09:49.580827 kubelet[2717]: E0819 08:09:49.580811 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 08:09:49.581127 kubelet[2717]: W0819 08:09:49.580911 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 08:09:49.581564 kubelet[2717]: E0819 08:09:49.581345 2717 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 08:09:49.582190 kubelet[2717]: E0819 08:09:49.582175 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 08:09:49.582317 kubelet[2717]: W0819 08:09:49.582281 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 08:09:49.582489 kubelet[2717]: E0819 08:09:49.582470 2717 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 08:09:49.582813 kubelet[2717]: E0819 08:09:49.582780 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 08:09:49.582813 kubelet[2717]: W0819 08:09:49.582796 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 08:09:49.583027 kubelet[2717]: E0819 08:09:49.582992 2717 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 08:09:49.583294 kubelet[2717]: E0819 08:09:49.583265 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 08:09:49.583294 kubelet[2717]: W0819 08:09:49.583278 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 08:09:49.583488 kubelet[2717]: E0819 08:09:49.583390 2717 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 19 08:09:49.583673 kubelet[2717]: E0819 08:09:49.583650 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 08:09:49.583800 kubelet[2717]: W0819 08:09:49.583738 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 08:09:49.583800 kubelet[2717]: E0819 08:09:49.583755 2717 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 08:09:49.584315 kubelet[2717]: E0819 08:09:49.584193 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 08:09:49.584315 kubelet[2717]: W0819 08:09:49.584236 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 08:09:49.584315 kubelet[2717]: E0819 08:09:49.584248 2717 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 08:09:49.584715 kubelet[2717]: E0819 08:09:49.584699 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 08:09:49.585153 kubelet[2717]: W0819 08:09:49.584775 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 08:09:49.585153 kubelet[2717]: E0819 08:09:49.584791 2717 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 08:09:49.585320 kubelet[2717]: E0819 08:09:49.585304 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 08:09:49.585515 kubelet[2717]: W0819 08:09:49.585372 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 08:09:49.585515 kubelet[2717]: E0819 08:09:49.585390 2717 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 19 08:09:49.647845 containerd[1576]: time="2025-08-19T08:09:49.647777594Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:09:49.648475 containerd[1576]: time="2025-08-19T08:09:49.648449200Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4446956" Aug 19 08:09:49.649668 containerd[1576]: time="2025-08-19T08:09:49.649630759Z" level=info msg="ImageCreate event name:\"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:09:49.651794 containerd[1576]: time="2025-08-19T08:09:49.651757166Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:09:49.652495 containerd[1576]: time="2025-08-19T08:09:49.652456834Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5939619\" in 2.535864997s" Aug 19 08:09:49.652533 containerd[1576]: time="2025-08-19T08:09:49.652494515Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\"" Aug 19 08:09:49.654557 containerd[1576]: time="2025-08-19T08:09:49.654502399Z" level=info msg="CreateContainer within sandbox \"cc806d2ede8a140ca63feea732737e1dd51ec3b6a7b266c8ee516081eb8e715d\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Aug 19 08:09:49.664804 containerd[1576]: time="2025-08-19T08:09:49.663952200Z" level=info msg="Container 0beaf44005bb9957b968980b46d55326031f7d7113c23be4d5edef6e55eb7bd9: CDI devices from CRI Config.CDIDevices: []" Aug 19 08:09:49.672697 containerd[1576]: time="2025-08-19T08:09:49.672639875Z" level=info msg="CreateContainer within sandbox \"cc806d2ede8a140ca63feea732737e1dd51ec3b6a7b266c8ee516081eb8e715d\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"0beaf44005bb9957b968980b46d55326031f7d7113c23be4d5edef6e55eb7bd9\"" Aug 19 08:09:49.673316 containerd[1576]: time="2025-08-19T08:09:49.673257641Z" level=info msg="StartContainer for \"0beaf44005bb9957b968980b46d55326031f7d7113c23be4d5edef6e55eb7bd9\"" Aug 19 08:09:49.675162 containerd[1576]: time="2025-08-19T08:09:49.675023092Z" level=info msg="connecting to shim 0beaf44005bb9957b968980b46d55326031f7d7113c23be4d5edef6e55eb7bd9" address="unix:///run/containerd/s/04eee60702d81f642af8b994a4b8756d48edda21a5739b10b9f7eb8ea576aa59" protocol=ttrpc version=3 Aug 19 08:09:49.700228 systemd[1]: Started cri-containerd-0beaf44005bb9957b968980b46d55326031f7d7113c23be4d5edef6e55eb7bd9.scope - libcontainer container 0beaf44005bb9957b968980b46d55326031f7d7113c23be4d5edef6e55eb7bd9. 
Aug 19 08:09:49.751195 containerd[1576]: time="2025-08-19T08:09:49.751142582Z" level=info msg="StartContainer for \"0beaf44005bb9957b968980b46d55326031f7d7113c23be4d5edef6e55eb7bd9\" returns successfully" Aug 19 08:09:49.764664 systemd[1]: cri-containerd-0beaf44005bb9957b968980b46d55326031f7d7113c23be4d5edef6e55eb7bd9.scope: Deactivated successfully. Aug 19 08:09:49.767511 containerd[1576]: time="2025-08-19T08:09:49.767466077Z" level=info msg="received exit event container_id:\"0beaf44005bb9957b968980b46d55326031f7d7113c23be4d5edef6e55eb7bd9\" id:\"0beaf44005bb9957b968980b46d55326031f7d7113c23be4d5edef6e55eb7bd9\" pid:3412 exited_at:{seconds:1755590989 nanos:766989876}" Aug 19 08:09:49.767608 containerd[1576]: time="2025-08-19T08:09:49.767565332Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0beaf44005bb9957b968980b46d55326031f7d7113c23be4d5edef6e55eb7bd9\" id:\"0beaf44005bb9957b968980b46d55326031f7d7113c23be4d5edef6e55eb7bd9\" pid:3412 exited_at:{seconds:1755590989 nanos:766989876}" Aug 19 08:09:49.791179 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0beaf44005bb9957b968980b46d55326031f7d7113c23be4d5edef6e55eb7bd9-rootfs.mount: Deactivated successfully. Aug 19 08:09:50.492573 containerd[1576]: time="2025-08-19T08:09:50.492501877Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Aug 19 08:09:51.413786 kubelet[2717]: E0819 08:09:51.413711 2717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-47j5b" podUID="d91662d8-9a6c-4379-a701-086161e4c200" Aug 19 08:09:53.414348 kubelet[2717]: E0819 08:09:53.414279 2717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-47j5b" podUID="d91662d8-9a6c-4379-a701-086161e4c200" Aug 19 08:09:54.218589 containerd[1576]: time="2025-08-19T08:09:54.218532731Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:09:54.219371 containerd[1576]: time="2025-08-19T08:09:54.219331526Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=70436221" Aug 19 08:09:54.220609 containerd[1576]: time="2025-08-19T08:09:54.220553793Z" level=info msg="ImageCreate event name:\"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:09:54.222550 containerd[1576]: time="2025-08-19T08:09:54.222491258Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:09:54.223035 containerd[1576]: time="2025-08-19T08:09:54.223011151Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"71928924\" in 3.730460704s" Aug 19 08:09:54.223073 containerd[1576]: 
time="2025-08-19T08:09:54.223041027Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\"" Aug 19 08:09:54.225061 containerd[1576]: time="2025-08-19T08:09:54.225005233Z" level=info msg="CreateContainer within sandbox \"cc806d2ede8a140ca63feea732737e1dd51ec3b6a7b266c8ee516081eb8e715d\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Aug 19 08:09:54.240524 containerd[1576]: time="2025-08-19T08:09:54.240460423Z" level=info msg="Container dca1dc3a2b77330c945e0fcbe6e2a7aa99ec1ed1925ad1f85cd7eb378f68870a: CDI devices from CRI Config.CDIDevices: []" Aug 19 08:09:54.250287 containerd[1576]: time="2025-08-19T08:09:54.250238240Z" level=info msg="CreateContainer within sandbox \"cc806d2ede8a140ca63feea732737e1dd51ec3b6a7b266c8ee516081eb8e715d\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"dca1dc3a2b77330c945e0fcbe6e2a7aa99ec1ed1925ad1f85cd7eb378f68870a\"" Aug 19 08:09:54.250881 containerd[1576]: time="2025-08-19T08:09:54.250837531Z" level=info msg="StartContainer for \"dca1dc3a2b77330c945e0fcbe6e2a7aa99ec1ed1925ad1f85cd7eb378f68870a\"" Aug 19 08:09:54.252316 containerd[1576]: time="2025-08-19T08:09:54.252289839Z" level=info msg="connecting to shim dca1dc3a2b77330c945e0fcbe6e2a7aa99ec1ed1925ad1f85cd7eb378f68870a" address="unix:///run/containerd/s/04eee60702d81f642af8b994a4b8756d48edda21a5739b10b9f7eb8ea576aa59" protocol=ttrpc version=3 Aug 19 08:09:54.278201 systemd[1]: Started cri-containerd-dca1dc3a2b77330c945e0fcbe6e2a7aa99ec1ed1925ad1f85cd7eb378f68870a.scope - libcontainer container dca1dc3a2b77330c945e0fcbe6e2a7aa99ec1ed1925ad1f85cd7eb378f68870a. Aug 19 08:09:54.325405 containerd[1576]: time="2025-08-19T08:09:54.325352928Z" level=info msg="StartContainer for \"dca1dc3a2b77330c945e0fcbe6e2a7aa99ec1ed1925ad1f85cd7eb378f68870a\" returns successfully" Aug 19 08:09:55.443137 kubelet[2717]: E0819 08:09:55.442827 2717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-47j5b" podUID="d91662d8-9a6c-4379-a701-086161e4c200" Aug 19 08:09:56.269713 systemd[1]: cri-containerd-dca1dc3a2b77330c945e0fcbe6e2a7aa99ec1ed1925ad1f85cd7eb378f68870a.scope: Deactivated successfully. Aug 19 08:09:56.270174 systemd[1]: cri-containerd-dca1dc3a2b77330c945e0fcbe6e2a7aa99ec1ed1925ad1f85cd7eb378f68870a.scope: Consumed 1.532s CPU time, 177.1M memory peak, 3.3M read from disk, 171.2M written to disk. Aug 19 08:09:56.272795 containerd[1576]: time="2025-08-19T08:09:56.272685737Z" level=info msg="received exit event container_id:\"dca1dc3a2b77330c945e0fcbe6e2a7aa99ec1ed1925ad1f85cd7eb378f68870a\" id:\"dca1dc3a2b77330c945e0fcbe6e2a7aa99ec1ed1925ad1f85cd7eb378f68870a\" pid:3472 exited_at:{seconds:1755590996 nanos:272175412}" Aug 19 08:09:56.273331 containerd[1576]: time="2025-08-19T08:09:56.272947687Z" level=info msg="TaskExit event in podsandbox handler container_id:\"dca1dc3a2b77330c945e0fcbe6e2a7aa99ec1ed1925ad1f85cd7eb378f68870a\" id:\"dca1dc3a2b77330c945e0fcbe6e2a7aa99ec1ed1925ad1f85cd7eb378f68870a\" pid:3472 exited_at:{seconds:1755590996 nanos:272175412}" Aug 19 08:09:56.299408 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dca1dc3a2b77330c945e0fcbe6e2a7aa99ec1ed1925ad1f85cd7eb378f68870a-rootfs.mount: Deactivated successfully. 
Aug 19 08:09:56.314789 kubelet[2717]: I0819 08:09:56.314740 2717 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Aug 19 08:09:56.495118 systemd[1]: Created slice kubepods-burstable-pod488bf5c5_4283_4168_b2e0_bcfa8eee4b04.slice - libcontainer container kubepods-burstable-pod488bf5c5_4283_4168_b2e0_bcfa8eee4b04.slice. Aug 19 08:09:56.497618 kubelet[2717]: I0819 08:09:56.496376 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2wm9\" (UniqueName: \"kubernetes.io/projected/488bf5c5-4283-4168-b2e0-bcfa8eee4b04-kube-api-access-r2wm9\") pod \"coredns-668d6bf9bc-zzdnh\" (UID: \"488bf5c5-4283-4168-b2e0-bcfa8eee4b04\") " pod="kube-system/coredns-668d6bf9bc-zzdnh" Aug 19 08:09:56.497618 kubelet[2717]: I0819 08:09:56.496432 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/488bf5c5-4283-4168-b2e0-bcfa8eee4b04-config-volume\") pod \"coredns-668d6bf9bc-zzdnh\" (UID: \"488bf5c5-4283-4168-b2e0-bcfa8eee4b04\") " pod="kube-system/coredns-668d6bf9bc-zzdnh" Aug 19 08:09:56.522684 systemd[1]: Created slice kubepods-besteffort-poda4bc0eb3_a05e_4eb4_9536_287f438cf3f4.slice - libcontainer container kubepods-besteffort-poda4bc0eb3_a05e_4eb4_9536_287f438cf3f4.slice. Aug 19 08:09:56.529565 systemd[1]: Created slice kubepods-besteffort-podddb42597_a313_4a0f_8076_ab24fe970535.slice - libcontainer container kubepods-besteffort-podddb42597_a313_4a0f_8076_ab24fe970535.slice. Aug 19 08:09:56.534283 systemd[1]: Created slice kubepods-besteffort-pod53748459_7a24_45ce_b558_f491069c5fcc.slice - libcontainer container kubepods-besteffort-pod53748459_7a24_45ce_b558_f491069c5fcc.slice. Aug 19 08:09:56.540132 systemd[1]: Created slice kubepods-burstable-pod229c0d08_aefa_4837_a92c_1bf8f9651595.slice - libcontainer container kubepods-burstable-pod229c0d08_aefa_4837_a92c_1bf8f9651595.slice. Aug 19 08:09:56.545875 systemd[1]: Created slice kubepods-besteffort-podaa8ce407_e0f9_465d_bd7b_80a38f51f364.slice - libcontainer container kubepods-besteffort-podaa8ce407_e0f9_465d_bd7b_80a38f51f364.slice. Aug 19 08:09:56.551497 systemd[1]: Created slice kubepods-besteffort-podbfa76174_249d_4f90_a6f0_cc4f856fe9e2.slice - libcontainer container kubepods-besteffort-podbfa76174_249d_4f90_a6f0_cc4f856fe9e2.slice. 
Aug 19 08:09:56.597691 kubelet[2717]: I0819 08:09:56.597574 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/229c0d08-aefa-4837-a92c-1bf8f9651595-config-volume\") pod \"coredns-668d6bf9bc-m2hsn\" (UID: \"229c0d08-aefa-4837-a92c-1bf8f9651595\") " pod="kube-system/coredns-668d6bf9bc-m2hsn" Aug 19 08:09:56.597691 kubelet[2717]: I0819 08:09:56.597640 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qgk55\" (UniqueName: \"kubernetes.io/projected/229c0d08-aefa-4837-a92c-1bf8f9651595-kube-api-access-qgk55\") pod \"coredns-668d6bf9bc-m2hsn\" (UID: \"229c0d08-aefa-4837-a92c-1bf8f9651595\") " pod="kube-system/coredns-668d6bf9bc-m2hsn" Aug 19 08:09:56.597691 kubelet[2717]: I0819 08:09:56.597661 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a4bc0eb3-a05e-4eb4-9536-287f438cf3f4-whisker-ca-bundle\") pod \"whisker-699f86855-cnn4v\" (UID: \"a4bc0eb3-a05e-4eb4-9536-287f438cf3f4\") " pod="calico-system/whisker-699f86855-cnn4v" Aug 19 08:09:56.597691 kubelet[2717]: I0819 08:09:56.597711 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ddb42597-a313-4a0f-8076-ab24fe970535-tigera-ca-bundle\") pod \"calico-kube-controllers-568b5cdb8-ksv62\" (UID: \"ddb42597-a313-4a0f-8076-ab24fe970535\") " pod="calico-system/calico-kube-controllers-568b5cdb8-ksv62" Aug 19 08:09:56.598023 kubelet[2717]: I0819 08:09:56.597734 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4z59p\" (UniqueName: \"kubernetes.io/projected/aa8ce407-e0f9-465d-bd7b-80a38f51f364-kube-api-access-4z59p\") pod \"calico-apiserver-566fd69f-m7tq2\" (UID: \"aa8ce407-e0f9-465d-bd7b-80a38f51f364\") " pod="calico-apiserver/calico-apiserver-566fd69f-m7tq2" Aug 19 08:09:56.598023 kubelet[2717]: I0819 08:09:56.597759 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/53748459-7a24-45ce-b558-f491069c5fcc-goldmane-key-pair\") pod \"goldmane-768f4c5c69-nklwx\" (UID: \"53748459-7a24-45ce-b558-f491069c5fcc\") " pod="calico-system/goldmane-768f4c5c69-nklwx" Aug 19 08:09:56.598077 kubelet[2717]: I0819 08:09:56.597995 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/53748459-7a24-45ce-b558-f491069c5fcc-goldmane-ca-bundle\") pod \"goldmane-768f4c5c69-nklwx\" (UID: \"53748459-7a24-45ce-b558-f491069c5fcc\") " pod="calico-system/goldmane-768f4c5c69-nklwx" Aug 19 08:09:56.598105 kubelet[2717]: I0819 08:09:56.598076 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-847d5\" (UniqueName: \"kubernetes.io/projected/ddb42597-a313-4a0f-8076-ab24fe970535-kube-api-access-847d5\") pod \"calico-kube-controllers-568b5cdb8-ksv62\" (UID: \"ddb42597-a313-4a0f-8076-ab24fe970535\") " pod="calico-system/calico-kube-controllers-568b5cdb8-ksv62" Aug 19 08:09:56.598105 kubelet[2717]: I0819 08:09:56.598100 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxbpr\" (UniqueName: 
\"kubernetes.io/projected/a4bc0eb3-a05e-4eb4-9536-287f438cf3f4-kube-api-access-wxbpr\") pod \"whisker-699f86855-cnn4v\" (UID: \"a4bc0eb3-a05e-4eb4-9536-287f438cf3f4\") " pod="calico-system/whisker-699f86855-cnn4v" Aug 19 08:09:56.598165 kubelet[2717]: I0819 08:09:56.598138 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/53748459-7a24-45ce-b558-f491069c5fcc-config\") pod \"goldmane-768f4c5c69-nklwx\" (UID: \"53748459-7a24-45ce-b558-f491069c5fcc\") " pod="calico-system/goldmane-768f4c5c69-nklwx" Aug 19 08:09:56.598233 kubelet[2717]: I0819 08:09:56.598195 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nkxf5\" (UniqueName: \"kubernetes.io/projected/53748459-7a24-45ce-b558-f491069c5fcc-kube-api-access-nkxf5\") pod \"goldmane-768f4c5c69-nklwx\" (UID: \"53748459-7a24-45ce-b558-f491069c5fcc\") " pod="calico-system/goldmane-768f4c5c69-nklwx" Aug 19 08:09:56.598233 kubelet[2717]: I0819 08:09:56.598227 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/bfa76174-249d-4f90-a6f0-cc4f856fe9e2-calico-apiserver-certs\") pod \"calico-apiserver-566fd69f-bjbqv\" (UID: \"bfa76174-249d-4f90-a6f0-cc4f856fe9e2\") " pod="calico-apiserver/calico-apiserver-566fd69f-bjbqv" Aug 19 08:09:56.598336 kubelet[2717]: I0819 08:09:56.598310 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zl5s6\" (UniqueName: \"kubernetes.io/projected/bfa76174-249d-4f90-a6f0-cc4f856fe9e2-kube-api-access-zl5s6\") pod \"calico-apiserver-566fd69f-bjbqv\" (UID: \"bfa76174-249d-4f90-a6f0-cc4f856fe9e2\") " pod="calico-apiserver/calico-apiserver-566fd69f-bjbqv" Aug 19 08:09:56.598460 kubelet[2717]: I0819 08:09:56.598419 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a4bc0eb3-a05e-4eb4-9536-287f438cf3f4-whisker-backend-key-pair\") pod \"whisker-699f86855-cnn4v\" (UID: \"a4bc0eb3-a05e-4eb4-9536-287f438cf3f4\") " pod="calico-system/whisker-699f86855-cnn4v" Aug 19 08:09:56.598525 kubelet[2717]: I0819 08:09:56.598472 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/aa8ce407-e0f9-465d-bd7b-80a38f51f364-calico-apiserver-certs\") pod \"calico-apiserver-566fd69f-m7tq2\" (UID: \"aa8ce407-e0f9-465d-bd7b-80a38f51f364\") " pod="calico-apiserver/calico-apiserver-566fd69f-m7tq2" Aug 19 08:09:56.926190 containerd[1576]: time="2025-08-19T08:09:56.926143608Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-m2hsn,Uid:229c0d08-aefa-4837-a92c-1bf8f9651595,Namespace:kube-system,Attempt:0,}" Aug 19 08:09:56.926372 containerd[1576]: time="2025-08-19T08:09:56.926145071Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-566fd69f-m7tq2,Uid:aa8ce407-e0f9-465d-bd7b-80a38f51f364,Namespace:calico-apiserver,Attempt:0,}" Aug 19 08:09:56.926478 containerd[1576]: time="2025-08-19T08:09:56.926149098Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-566fd69f-bjbqv,Uid:bfa76174-249d-4f90-a6f0-cc4f856fe9e2,Namespace:calico-apiserver,Attempt:0,}" Aug 19 08:09:57.034976 containerd[1576]: time="2025-08-19T08:09:57.034531670Z" 
level=error msg="Failed to destroy network for sandbox \"5bebcfd1c80165291ac04a1dc3222b8aa864a40f4322dacd565d8ff38a0840a3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 19 08:09:57.044547 containerd[1576]: time="2025-08-19T08:09:57.044472979Z" level=error msg="Failed to destroy network for sandbox \"44f391098058aba97e7703547eafc3463d3ed270757dded61a4df3c85de593da\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 19 08:09:57.045523 containerd[1576]: time="2025-08-19T08:09:57.045462821Z" level=error msg="Failed to destroy network for sandbox \"2b54d3b1c7bd27430cffc8386896ddaaf2d799cfc3c75ceb5da62655fe4ade08\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 19 08:09:57.048995 containerd[1576]: time="2025-08-19T08:09:57.048899715Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-566fd69f-bjbqv,Uid:bfa76174-249d-4f90-a6f0-cc4f856fe9e2,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"44f391098058aba97e7703547eafc3463d3ed270757dded61a4df3c85de593da\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 19 08:09:57.049165 containerd[1576]: time="2025-08-19T08:09:57.048902440Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-m2hsn,Uid:229c0d08-aefa-4837-a92c-1bf8f9651595,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2b54d3b1c7bd27430cffc8386896ddaaf2d799cfc3c75ceb5da62655fe4ade08\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 19 08:09:57.049211 containerd[1576]: time="2025-08-19T08:09:57.048914583Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-566fd69f-m7tq2,Uid:aa8ce407-e0f9-465d-bd7b-80a38f51f364,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5bebcfd1c80165291ac04a1dc3222b8aa864a40f4322dacd565d8ff38a0840a3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 19 08:09:57.056473 kubelet[2717]: E0819 08:09:57.056396 2717 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5bebcfd1c80165291ac04a1dc3222b8aa864a40f4322dacd565d8ff38a0840a3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 19 08:09:57.056603 kubelet[2717]: E0819 08:09:57.056395 2717 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2b54d3b1c7bd27430cffc8386896ddaaf2d799cfc3c75ceb5da62655fe4ade08\": 
plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 19 08:09:57.056661 kubelet[2717]: E0819 08:09:57.056626 2717 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2b54d3b1c7bd27430cffc8386896ddaaf2d799cfc3c75ceb5da62655fe4ade08\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-m2hsn" Aug 19 08:09:57.056661 kubelet[2717]: E0819 08:09:57.056397 2717 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"44f391098058aba97e7703547eafc3463d3ed270757dded61a4df3c85de593da\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 19 08:09:57.056725 kubelet[2717]: E0819 08:09:57.056666 2717 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2b54d3b1c7bd27430cffc8386896ddaaf2d799cfc3c75ceb5da62655fe4ade08\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-m2hsn" Aug 19 08:09:57.056725 kubelet[2717]: E0819 08:09:57.056675 2717 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"44f391098058aba97e7703547eafc3463d3ed270757dded61a4df3c85de593da\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-566fd69f-bjbqv" Aug 19 08:09:57.056725 kubelet[2717]: E0819 08:09:57.056699 2717 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"44f391098058aba97e7703547eafc3463d3ed270757dded61a4df3c85de593da\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-566fd69f-bjbqv" Aug 19 08:09:57.056807 kubelet[2717]: E0819 08:09:57.056542 2717 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5bebcfd1c80165291ac04a1dc3222b8aa864a40f4322dacd565d8ff38a0840a3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-566fd69f-m7tq2" Aug 19 08:09:57.056807 kubelet[2717]: E0819 08:09:57.056735 2717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-m2hsn_kube-system(229c0d08-aefa-4837-a92c-1bf8f9651595)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-m2hsn_kube-system(229c0d08-aefa-4837-a92c-1bf8f9651595)\\\": rpc error: code = Unknown desc = failed to setup network for 
sandbox \\\"2b54d3b1c7bd27430cffc8386896ddaaf2d799cfc3c75ceb5da62655fe4ade08\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-m2hsn" podUID="229c0d08-aefa-4837-a92c-1bf8f9651595" Aug 19 08:09:57.056807 kubelet[2717]: E0819 08:09:57.056756 2717 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5bebcfd1c80165291ac04a1dc3222b8aa864a40f4322dacd565d8ff38a0840a3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-566fd69f-m7tq2" Aug 19 08:09:57.056923 kubelet[2717]: E0819 08:09:57.056753 2717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-566fd69f-bjbqv_calico-apiserver(bfa76174-249d-4f90-a6f0-cc4f856fe9e2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-566fd69f-bjbqv_calico-apiserver(bfa76174-249d-4f90-a6f0-cc4f856fe9e2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"44f391098058aba97e7703547eafc3463d3ed270757dded61a4df3c85de593da\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-566fd69f-bjbqv" podUID="bfa76174-249d-4f90-a6f0-cc4f856fe9e2" Aug 19 08:09:57.056923 kubelet[2717]: E0819 08:09:57.056819 2717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-566fd69f-m7tq2_calico-apiserver(aa8ce407-e0f9-465d-bd7b-80a38f51f364)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-566fd69f-m7tq2_calico-apiserver(aa8ce407-e0f9-465d-bd7b-80a38f51f364)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5bebcfd1c80165291ac04a1dc3222b8aa864a40f4322dacd565d8ff38a0840a3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-566fd69f-m7tq2" podUID="aa8ce407-e0f9-465d-bd7b-80a38f51f364" Aug 19 08:09:57.115810 containerd[1576]: time="2025-08-19T08:09:57.115740070Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-zzdnh,Uid:488bf5c5-4283-4168-b2e0-bcfa8eee4b04,Namespace:kube-system,Attempt:0,}" Aug 19 08:09:57.126664 containerd[1576]: time="2025-08-19T08:09:57.126618423Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-699f86855-cnn4v,Uid:a4bc0eb3-a05e-4eb4-9536-287f438cf3f4,Namespace:calico-system,Attempt:0,}" Aug 19 08:09:57.134056 containerd[1576]: time="2025-08-19T08:09:57.133925922Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-568b5cdb8-ksv62,Uid:ddb42597-a313-4a0f-8076-ab24fe970535,Namespace:calico-system,Attempt:0,}" Aug 19 08:09:57.138054 containerd[1576]: time="2025-08-19T08:09:57.137996410Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-nklwx,Uid:53748459-7a24-45ce-b558-f491069c5fcc,Namespace:calico-system,Attempt:0,}" Aug 19 08:09:57.182741 containerd[1576]: 
time="2025-08-19T08:09:57.182148036Z" level=error msg="Failed to destroy network for sandbox \"b72fcea28b8b36bfce001980d9c6c63fee969e8784d7f58c55af47881d606ed7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 19 08:09:57.184193 containerd[1576]: time="2025-08-19T08:09:57.184149422Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-zzdnh,Uid:488bf5c5-4283-4168-b2e0-bcfa8eee4b04,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b72fcea28b8b36bfce001980d9c6c63fee969e8784d7f58c55af47881d606ed7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 19 08:09:57.184494 kubelet[2717]: E0819 08:09:57.184442 2717 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b72fcea28b8b36bfce001980d9c6c63fee969e8784d7f58c55af47881d606ed7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 19 08:09:57.184653 kubelet[2717]: E0819 08:09:57.184526 2717 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b72fcea28b8b36bfce001980d9c6c63fee969e8784d7f58c55af47881d606ed7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-zzdnh" Aug 19 08:09:57.184653 kubelet[2717]: E0819 08:09:57.184552 2717 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b72fcea28b8b36bfce001980d9c6c63fee969e8784d7f58c55af47881d606ed7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-zzdnh" Aug 19 08:09:57.184653 kubelet[2717]: E0819 08:09:57.184594 2717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-zzdnh_kube-system(488bf5c5-4283-4168-b2e0-bcfa8eee4b04)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-zzdnh_kube-system(488bf5c5-4283-4168-b2e0-bcfa8eee4b04)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b72fcea28b8b36bfce001980d9c6c63fee969e8784d7f58c55af47881d606ed7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-zzdnh" podUID="488bf5c5-4283-4168-b2e0-bcfa8eee4b04" Aug 19 08:09:57.190236 containerd[1576]: time="2025-08-19T08:09:57.190185861Z" level=error msg="Failed to destroy network for sandbox \"9f6cb7bb09414a96e44f074e2dd29181932872aa1106aa9e2a74a0b3700c0c34\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 19 
08:09:57.191601 containerd[1576]: time="2025-08-19T08:09:57.191566836Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-699f86855-cnn4v,Uid:a4bc0eb3-a05e-4eb4-9536-287f438cf3f4,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9f6cb7bb09414a96e44f074e2dd29181932872aa1106aa9e2a74a0b3700c0c34\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 19 08:09:57.191855 kubelet[2717]: E0819 08:09:57.191799 2717 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9f6cb7bb09414a96e44f074e2dd29181932872aa1106aa9e2a74a0b3700c0c34\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 19 08:09:57.191907 kubelet[2717]: E0819 08:09:57.191876 2717 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9f6cb7bb09414a96e44f074e2dd29181932872aa1106aa9e2a74a0b3700c0c34\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-699f86855-cnn4v" Aug 19 08:09:57.191907 kubelet[2717]: E0819 08:09:57.191899 2717 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9f6cb7bb09414a96e44f074e2dd29181932872aa1106aa9e2a74a0b3700c0c34\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-699f86855-cnn4v" Aug 19 08:09:57.192052 kubelet[2717]: E0819 08:09:57.191973 2717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-699f86855-cnn4v_calico-system(a4bc0eb3-a05e-4eb4-9536-287f438cf3f4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-699f86855-cnn4v_calico-system(a4bc0eb3-a05e-4eb4-9536-287f438cf3f4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9f6cb7bb09414a96e44f074e2dd29181932872aa1106aa9e2a74a0b3700c0c34\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-699f86855-cnn4v" podUID="a4bc0eb3-a05e-4eb4-9536-287f438cf3f4" Aug 19 08:09:57.200054 containerd[1576]: time="2025-08-19T08:09:57.199993300Z" level=error msg="Failed to destroy network for sandbox \"1529b7b1ce7cf7189a001225000056ed4cab7d9b8d3411e41a9af672ef75343b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 19 08:09:57.201236 containerd[1576]: time="2025-08-19T08:09:57.201182876Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-568b5cdb8-ksv62,Uid:ddb42597-a313-4a0f-8076-ab24fe970535,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"1529b7b1ce7cf7189a001225000056ed4cab7d9b8d3411e41a9af672ef75343b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 19 08:09:57.201511 kubelet[2717]: E0819 08:09:57.201455 2717 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1529b7b1ce7cf7189a001225000056ed4cab7d9b8d3411e41a9af672ef75343b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 19 08:09:57.201569 kubelet[2717]: E0819 08:09:57.201542 2717 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1529b7b1ce7cf7189a001225000056ed4cab7d9b8d3411e41a9af672ef75343b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-568b5cdb8-ksv62" Aug 19 08:09:57.201607 kubelet[2717]: E0819 08:09:57.201566 2717 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1529b7b1ce7cf7189a001225000056ed4cab7d9b8d3411e41a9af672ef75343b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-568b5cdb8-ksv62" Aug 19 08:09:57.201635 kubelet[2717]: E0819 08:09:57.201613 2717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-568b5cdb8-ksv62_calico-system(ddb42597-a313-4a0f-8076-ab24fe970535)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-568b5cdb8-ksv62_calico-system(ddb42597-a313-4a0f-8076-ab24fe970535)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1529b7b1ce7cf7189a001225000056ed4cab7d9b8d3411e41a9af672ef75343b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-568b5cdb8-ksv62" podUID="ddb42597-a313-4a0f-8076-ab24fe970535" Aug 19 08:09:57.209467 containerd[1576]: time="2025-08-19T08:09:57.209394256Z" level=error msg="Failed to destroy network for sandbox \"aff081750c2852565cf0bd6b93692e1c4ddb78d988796cc6905c4b112f0d4c8f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 19 08:09:57.210903 containerd[1576]: time="2025-08-19T08:09:57.210846585Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-nklwx,Uid:53748459-7a24-45ce-b558-f491069c5fcc,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"aff081750c2852565cf0bd6b93692e1c4ddb78d988796cc6905c4b112f0d4c8f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 19 08:09:57.211123 
kubelet[2717]: E0819 08:09:57.211074 2717 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aff081750c2852565cf0bd6b93692e1c4ddb78d988796cc6905c4b112f0d4c8f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 19 08:09:57.211171 kubelet[2717]: E0819 08:09:57.211138 2717 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aff081750c2852565cf0bd6b93692e1c4ddb78d988796cc6905c4b112f0d4c8f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-nklwx" Aug 19 08:09:57.211194 kubelet[2717]: E0819 08:09:57.211165 2717 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aff081750c2852565cf0bd6b93692e1c4ddb78d988796cc6905c4b112f0d4c8f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-nklwx" Aug 19 08:09:57.211255 kubelet[2717]: E0819 08:09:57.211221 2717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-768f4c5c69-nklwx_calico-system(53748459-7a24-45ce-b558-f491069c5fcc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-768f4c5c69-nklwx_calico-system(53748459-7a24-45ce-b558-f491069c5fcc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"aff081750c2852565cf0bd6b93692e1c4ddb78d988796cc6905c4b112f0d4c8f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-nklwx" podUID="53748459-7a24-45ce-b558-f491069c5fcc" Aug 19 08:09:57.419841 systemd[1]: Created slice kubepods-besteffort-podd91662d8_9a6c_4379_a701_086161e4c200.slice - libcontainer container kubepods-besteffort-podd91662d8_9a6c_4379_a701_086161e4c200.slice. 
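Every sandbox failure in the entries above traces back to one condition: the Calico CNI plugin stats /var/lib/calico/nodename and the file does not exist yet, because the calico/node container only gets pulled and started later in this log (around 08:10:05). Below is a minimal sketch of that probe in Go; the file path and the error wording come from the log itself, while the helper name and surrounding program are purely illustrative, not Calico's actual source.

```go
// nodename_probe.go - a sketch of the check described by the CNI error above:
// pod networking cannot be set up until calico/node has written its node name
// to /var/lib/calico/nodename. Illustrative only.
package main

import (
	"fmt"
	"os"
	"strings"
)

const nodenameFile = "/var/lib/calico/nodename"

func readCalicoNodename() (string, error) {
	// os.Stat on a missing file yields the same "stat ...: no such file or
	// directory" text seen in every failed RunPodSandbox entry above.
	if _, err := os.Stat(nodenameFile); err != nil {
		return "", fmt.Errorf("%w: check that the calico/node container is running and has mounted /var/lib/calico/", err)
	}
	data, err := os.ReadFile(nodenameFile)
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(data)), nil
}

func main() {
	name, err := readCalicoNodename()
	if err != nil {
		fmt.Println("CNI ADD would fail:", err)
		return
	}
	fmt.Println("node name:", name)
}
```

Once calico-node starts (see the StartContainer entries further down), the file appears and the same pods are retried successfully.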
Aug 19 08:09:57.422403 containerd[1576]: time="2025-08-19T08:09:57.422359976Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-47j5b,Uid:d91662d8-9a6c-4379-a701-086161e4c200,Namespace:calico-system,Attempt:0,}" Aug 19 08:09:57.506293 containerd[1576]: time="2025-08-19T08:09:57.506152021Z" level=error msg="Failed to destroy network for sandbox \"532093f2df86ea221db4aacc1863a73b787595ab1a27fb8512b38bc44b245d67\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 19 08:09:57.508432 containerd[1576]: time="2025-08-19T08:09:57.508368801Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-47j5b,Uid:d91662d8-9a6c-4379-a701-086161e4c200,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"532093f2df86ea221db4aacc1863a73b787595ab1a27fb8512b38bc44b245d67\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 19 08:09:57.508691 kubelet[2717]: E0819 08:09:57.508653 2717 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"532093f2df86ea221db4aacc1863a73b787595ab1a27fb8512b38bc44b245d67\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 19 08:09:57.509039 kubelet[2717]: E0819 08:09:57.508723 2717 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"532093f2df86ea221db4aacc1863a73b787595ab1a27fb8512b38bc44b245d67\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-47j5b" Aug 19 08:09:57.509039 kubelet[2717]: E0819 08:09:57.508746 2717 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"532093f2df86ea221db4aacc1863a73b787595ab1a27fb8512b38bc44b245d67\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-47j5b" Aug 19 08:09:57.509039 kubelet[2717]: E0819 08:09:57.508805 2717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-47j5b_calico-system(d91662d8-9a6c-4379-a701-086161e4c200)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-47j5b_calico-system(d91662d8-9a6c-4379-a701-086161e4c200)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"532093f2df86ea221db4aacc1863a73b787595ab1a27fb8512b38bc44b245d67\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-47j5b" podUID="d91662d8-9a6c-4379-a701-086161e4c200" Aug 19 08:09:57.508860 systemd[1]: run-netns-cni\x2d579a8372\x2d6bd8\x2dd03e\x2d5c3f\x2d4e61892c9cc5.mount: Deactivated successfully. 
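The unit names systemd reports here (run-netns-cni\x2d…mount, and the kubelet volume units with \x7e later on) use systemd's unit-name escaping: "/" in a path becomes "-", and bytes that are not plain ASCII letters, digits, "_" or "." (including literal "-" and "~") are hex-escaped as \xNN. The sketch below is a simplified illustration of that rule and deliberately handles fewer corner cases than the real systemd-escape tool; the input path is an assumption reconstructed from the unit name.

```go
// systemd_escape_sketch.go - why "run-netns-cni\x2d579a8372..." looks the way
// it does: a simplified path-to-unit-name escape, not a systemd-escape clone.
package main

import (
	"fmt"
	"strings"
)

func escapePathish(path string) string {
	path = strings.Trim(path, "/")
	var b strings.Builder
	for i := 0; i < len(path); i++ {
		c := path[i]
		switch {
		case c == '/':
			b.WriteByte('-') // path separators become dashes
		case c >= 'a' && c <= 'z', c >= 'A' && c <= 'Z',
			c >= '0' && c <= '9', c == '_',
			c == '.' && i > 0: // a leading dot would also be escaped
			b.WriteByte(c)
		default:
			fmt.Fprintf(&b, `\x%02x`, c) // "-" => \x2d, "~" => \x7e, etc.
		}
	}
	return b.String()
}

func main() {
	// Reconstructed, illustrative input; prints the unit name from the log.
	fmt.Println(escapePathish("/run/netns/cni-579a8372-6bd8-d03e-5c3f-4e61892c9cc5") + ".mount")
}
```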
Aug 19 08:09:57.524704 containerd[1576]: time="2025-08-19T08:09:57.524556342Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Aug 19 08:10:05.214997 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1623401286.mount: Deactivated successfully. Aug 19 08:10:05.627894 containerd[1576]: time="2025-08-19T08:10:05.627714853Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:10:05.629117 containerd[1576]: time="2025-08-19T08:10:05.629084379Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=158500163" Aug 19 08:10:05.630811 containerd[1576]: time="2025-08-19T08:10:05.630767281Z" level=info msg="ImageCreate event name:\"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:10:05.632897 containerd[1576]: time="2025-08-19T08:10:05.632859470Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:10:05.633392 containerd[1576]: time="2025-08-19T08:10:05.633347163Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"158500025\" in 8.108751156s" Aug 19 08:10:05.633448 containerd[1576]: time="2025-08-19T08:10:05.633393189Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\"" Aug 19 08:10:05.641718 containerd[1576]: time="2025-08-19T08:10:05.641677686Z" level=info msg="CreateContainer within sandbox \"cc806d2ede8a140ca63feea732737e1dd51ec3b6a7b266c8ee516081eb8e715d\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Aug 19 08:10:05.663640 containerd[1576]: time="2025-08-19T08:10:05.663587762Z" level=info msg="Container 84f66f4169a6ae6aeaa76c03346c46fb70983f8946ecf4c6db1975db4586526a: CDI devices from CRI Config.CDIDevices: []" Aug 19 08:10:05.674343 containerd[1576]: time="2025-08-19T08:10:05.674289127Z" level=info msg="CreateContainer within sandbox \"cc806d2ede8a140ca63feea732737e1dd51ec3b6a7b266c8ee516081eb8e715d\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"84f66f4169a6ae6aeaa76c03346c46fb70983f8946ecf4c6db1975db4586526a\"" Aug 19 08:10:05.674814 containerd[1576]: time="2025-08-19T08:10:05.674791998Z" level=info msg="StartContainer for \"84f66f4169a6ae6aeaa76c03346c46fb70983f8946ecf4c6db1975db4586526a\"" Aug 19 08:10:05.676270 containerd[1576]: time="2025-08-19T08:10:05.676244939Z" level=info msg="connecting to shim 84f66f4169a6ae6aeaa76c03346c46fb70983f8946ecf4c6db1975db4586526a" address="unix:///run/containerd/s/04eee60702d81f642af8b994a4b8756d48edda21a5739b10b9f7eb8ea576aa59" protocol=ttrpc version=3 Aug 19 08:10:05.704095 systemd[1]: Started cri-containerd-84f66f4169a6ae6aeaa76c03346c46fb70983f8946ecf4c6db1975db4586526a.scope - libcontainer container 84f66f4169a6ae6aeaa76c03346c46fb70983f8946ecf4c6db1975db4586526a. 
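The pull of ghcr.io/flatcar/calico/node:v3.30.2 reports both its size and its wall-clock time, so the effective transfer rate follows directly (roughly 19.5 MB/s, or about 18.6 MiB/s). A small sketch of that arithmetic, using the figures from the Pulled message above:

```go
// pull_rate.go - average transfer rate of the calico/node image pull,
// computed from the size and duration containerd logged above.
package main

import (
	"fmt"
	"time"
)

func main() {
	const imageBytes = 158500025 // size "158500025" from the Pulled message
	elapsed, err := time.ParseDuration("8.108751156s")
	if err != nil {
		panic(err)
	}
	rate := float64(imageBytes) / elapsed.Seconds()
	fmt.Printf("pulled %.1f MiB in %s => %.1f MB/s (%.1f MiB/s)\n",
		float64(imageBytes)/(1<<20), elapsed, rate/1e6, rate/(1<<20))
}
```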
Aug 19 08:10:05.751340 containerd[1576]: time="2025-08-19T08:10:05.751278395Z" level=info msg="StartContainer for \"84f66f4169a6ae6aeaa76c03346c46fb70983f8946ecf4c6db1975db4586526a\" returns successfully" Aug 19 08:10:05.833038 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Aug 19 08:10:05.833795 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Aug 19 08:10:06.060486 kubelet[2717]: I0819 08:10:06.060408 2717 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxbpr\" (UniqueName: \"kubernetes.io/projected/a4bc0eb3-a05e-4eb4-9536-287f438cf3f4-kube-api-access-wxbpr\") pod \"a4bc0eb3-a05e-4eb4-9536-287f438cf3f4\" (UID: \"a4bc0eb3-a05e-4eb4-9536-287f438cf3f4\") " Aug 19 08:10:06.060486 kubelet[2717]: I0819 08:10:06.060499 2717 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a4bc0eb3-a05e-4eb4-9536-287f438cf3f4-whisker-ca-bundle\") pod \"a4bc0eb3-a05e-4eb4-9536-287f438cf3f4\" (UID: \"a4bc0eb3-a05e-4eb4-9536-287f438cf3f4\") " Aug 19 08:10:06.061120 kubelet[2717]: I0819 08:10:06.060544 2717 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a4bc0eb3-a05e-4eb4-9536-287f438cf3f4-whisker-backend-key-pair\") pod \"a4bc0eb3-a05e-4eb4-9536-287f438cf3f4\" (UID: \"a4bc0eb3-a05e-4eb4-9536-287f438cf3f4\") " Aug 19 08:10:06.063589 kubelet[2717]: I0819 08:10:06.063361 2717 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a4bc0eb3-a05e-4eb4-9536-287f438cf3f4-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "a4bc0eb3-a05e-4eb4-9536-287f438cf3f4" (UID: "a4bc0eb3-a05e-4eb4-9536-287f438cf3f4"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Aug 19 08:10:06.068200 kubelet[2717]: I0819 08:10:06.068106 2717 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4bc0eb3-a05e-4eb4-9536-287f438cf3f4-kube-api-access-wxbpr" (OuterVolumeSpecName: "kube-api-access-wxbpr") pod "a4bc0eb3-a05e-4eb4-9536-287f438cf3f4" (UID: "a4bc0eb3-a05e-4eb4-9536-287f438cf3f4"). InnerVolumeSpecName "kube-api-access-wxbpr". PluginName "kubernetes.io/projected", VolumeGIDValue "" Aug 19 08:10:06.068364 kubelet[2717]: I0819 08:10:06.068319 2717 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4bc0eb3-a05e-4eb4-9536-287f438cf3f4-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "a4bc0eb3-a05e-4eb4-9536-287f438cf3f4" (UID: "a4bc0eb3-a05e-4eb4-9536-287f438cf3f4"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Aug 19 08:10:06.161807 kubelet[2717]: I0819 08:10:06.161744 2717 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wxbpr\" (UniqueName: \"kubernetes.io/projected/a4bc0eb3-a05e-4eb4-9536-287f438cf3f4-kube-api-access-wxbpr\") on node \"localhost\" DevicePath \"\"" Aug 19 08:10:06.161807 kubelet[2717]: I0819 08:10:06.161791 2717 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a4bc0eb3-a05e-4eb4-9536-287f438cf3f4-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Aug 19 08:10:06.161807 kubelet[2717]: I0819 08:10:06.161801 2717 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a4bc0eb3-a05e-4eb4-9536-287f438cf3f4-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Aug 19 08:10:06.215461 systemd[1]: var-lib-kubelet-pods-a4bc0eb3\x2da05e\x2d4eb4\x2d9536\x2d287f438cf3f4-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dwxbpr.mount: Deactivated successfully. Aug 19 08:10:06.215586 systemd[1]: var-lib-kubelet-pods-a4bc0eb3\x2da05e\x2d4eb4\x2d9536\x2d287f438cf3f4-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Aug 19 08:10:06.576526 systemd[1]: Removed slice kubepods-besteffort-poda4bc0eb3_a05e_4eb4_9536_287f438cf3f4.slice - libcontainer container kubepods-besteffort-poda4bc0eb3_a05e_4eb4_9536_287f438cf3f4.slice. Aug 19 08:10:06.590477 kubelet[2717]: I0819 08:10:06.590351 2717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-v5d6m" podStartSLOduration=1.9718053100000001 podStartE2EDuration="23.590293741s" podCreationTimestamp="2025-08-19 08:09:43 +0000 UTC" firstStartedPulling="2025-08-19 08:09:44.015715015 +0000 UTC m=+18.685394640" lastFinishedPulling="2025-08-19 08:10:05.634203446 +0000 UTC m=+40.303883071" observedRunningTime="2025-08-19 08:10:06.58944389 +0000 UTC m=+41.259123515" watchObservedRunningTime="2025-08-19 08:10:06.590293741 +0000 UTC m=+41.259973366" Aug 19 08:10:06.638789 systemd[1]: Created slice kubepods-besteffort-pod43139b04_4c6c_4e51_bd9f_2183c740c4e8.slice - libcontainer container kubepods-besteffort-pod43139b04_4c6c_4e51_bd9f_2183c740c4e8.slice. 
Aug 19 08:10:06.665385 kubelet[2717]: I0819 08:10:06.665308 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43139b04-4c6c-4e51-bd9f-2183c740c4e8-whisker-ca-bundle\") pod \"whisker-779bcc4777-462r6\" (UID: \"43139b04-4c6c-4e51-bd9f-2183c740c4e8\") " pod="calico-system/whisker-779bcc4777-462r6" Aug 19 08:10:06.665385 kubelet[2717]: I0819 08:10:06.665380 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mtn94\" (UniqueName: \"kubernetes.io/projected/43139b04-4c6c-4e51-bd9f-2183c740c4e8-kube-api-access-mtn94\") pod \"whisker-779bcc4777-462r6\" (UID: \"43139b04-4c6c-4e51-bd9f-2183c740c4e8\") " pod="calico-system/whisker-779bcc4777-462r6" Aug 19 08:10:06.665603 kubelet[2717]: I0819 08:10:06.665449 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/43139b04-4c6c-4e51-bd9f-2183c740c4e8-whisker-backend-key-pair\") pod \"whisker-779bcc4777-462r6\" (UID: \"43139b04-4c6c-4e51-bd9f-2183c740c4e8\") " pod="calico-system/whisker-779bcc4777-462r6" Aug 19 08:10:06.943219 containerd[1576]: time="2025-08-19T08:10:06.943150579Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-779bcc4777-462r6,Uid:43139b04-4c6c-4e51-bd9f-2183c740c4e8,Namespace:calico-system,Attempt:0,}" Aug 19 08:10:07.150962 systemd-networkd[1479]: caliad59457ace1: Link UP Aug 19 08:10:07.151224 systemd-networkd[1479]: caliad59457ace1: Gained carrier Aug 19 08:10:07.185252 containerd[1576]: 2025-08-19 08:10:06.997 [INFO][3854] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Aug 19 08:10:07.185252 containerd[1576]: 2025-08-19 08:10:07.018 [INFO][3854] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--779bcc4777--462r6-eth0 whisker-779bcc4777- calico-system 43139b04-4c6c-4e51-bd9f-2183c740c4e8 871 0 2025-08-19 08:10:06 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:779bcc4777 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-779bcc4777-462r6 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] caliad59457ace1 [] [] }} ContainerID="e915a55b00758aff533c3d66a880169d0930d4b3cf317ac9f7dadee054342c6d" Namespace="calico-system" Pod="whisker-779bcc4777-462r6" WorkloadEndpoint="localhost-k8s-whisker--779bcc4777--462r6-" Aug 19 08:10:07.185252 containerd[1576]: 2025-08-19 08:10:07.019 [INFO][3854] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e915a55b00758aff533c3d66a880169d0930d4b3cf317ac9f7dadee054342c6d" Namespace="calico-system" Pod="whisker-779bcc4777-462r6" WorkloadEndpoint="localhost-k8s-whisker--779bcc4777--462r6-eth0" Aug 19 08:10:07.185252 containerd[1576]: 2025-08-19 08:10:07.086 [INFO][3868] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e915a55b00758aff533c3d66a880169d0930d4b3cf317ac9f7dadee054342c6d" HandleID="k8s-pod-network.e915a55b00758aff533c3d66a880169d0930d4b3cf317ac9f7dadee054342c6d" Workload="localhost-k8s-whisker--779bcc4777--462r6-eth0" Aug 19 08:10:07.187152 containerd[1576]: 2025-08-19 08:10:07.087 [INFO][3868] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="e915a55b00758aff533c3d66a880169d0930d4b3cf317ac9f7dadee054342c6d" HandleID="k8s-pod-network.e915a55b00758aff533c3d66a880169d0930d4b3cf317ac9f7dadee054342c6d" Workload="localhost-k8s-whisker--779bcc4777--462r6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00050c860), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-779bcc4777-462r6", "timestamp":"2025-08-19 08:10:07.086035804 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 19 08:10:07.187152 containerd[1576]: 2025-08-19 08:10:07.087 [INFO][3868] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 19 08:10:07.187152 containerd[1576]: 2025-08-19 08:10:07.087 [INFO][3868] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 19 08:10:07.187152 containerd[1576]: 2025-08-19 08:10:07.087 [INFO][3868] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 19 08:10:07.187152 containerd[1576]: 2025-08-19 08:10:07.095 [INFO][3868] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e915a55b00758aff533c3d66a880169d0930d4b3cf317ac9f7dadee054342c6d" host="localhost" Aug 19 08:10:07.187152 containerd[1576]: 2025-08-19 08:10:07.100 [INFO][3868] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Aug 19 08:10:07.187152 containerd[1576]: 2025-08-19 08:10:07.107 [INFO][3868] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Aug 19 08:10:07.187152 containerd[1576]: 2025-08-19 08:10:07.111 [INFO][3868] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 19 08:10:07.187152 containerd[1576]: 2025-08-19 08:10:07.114 [INFO][3868] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 19 08:10:07.187152 containerd[1576]: 2025-08-19 08:10:07.114 [INFO][3868] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e915a55b00758aff533c3d66a880169d0930d4b3cf317ac9f7dadee054342c6d" host="localhost" Aug 19 08:10:07.187488 containerd[1576]: 2025-08-19 08:10:07.117 [INFO][3868] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.e915a55b00758aff533c3d66a880169d0930d4b3cf317ac9f7dadee054342c6d Aug 19 08:10:07.187488 containerd[1576]: 2025-08-19 08:10:07.125 [INFO][3868] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e915a55b00758aff533c3d66a880169d0930d4b3cf317ac9f7dadee054342c6d" host="localhost" Aug 19 08:10:07.187488 containerd[1576]: 2025-08-19 08:10:07.132 [INFO][3868] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.e915a55b00758aff533c3d66a880169d0930d4b3cf317ac9f7dadee054342c6d" host="localhost" Aug 19 08:10:07.187488 containerd[1576]: 2025-08-19 08:10:07.133 [INFO][3868] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.e915a55b00758aff533c3d66a880169d0930d4b3cf317ac9f7dadee054342c6d" host="localhost" Aug 19 08:10:07.187488 containerd[1576]: 2025-08-19 08:10:07.133 [INFO][3868] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 19 08:10:07.187488 containerd[1576]: 2025-08-19 08:10:07.133 [INFO][3868] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="e915a55b00758aff533c3d66a880169d0930d4b3cf317ac9f7dadee054342c6d" HandleID="k8s-pod-network.e915a55b00758aff533c3d66a880169d0930d4b3cf317ac9f7dadee054342c6d" Workload="localhost-k8s-whisker--779bcc4777--462r6-eth0" Aug 19 08:10:07.187668 containerd[1576]: 2025-08-19 08:10:07.139 [INFO][3854] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e915a55b00758aff533c3d66a880169d0930d4b3cf317ac9f7dadee054342c6d" Namespace="calico-system" Pod="whisker-779bcc4777-462r6" WorkloadEndpoint="localhost-k8s-whisker--779bcc4777--462r6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--779bcc4777--462r6-eth0", GenerateName:"whisker-779bcc4777-", Namespace:"calico-system", SelfLink:"", UID:"43139b04-4c6c-4e51-bd9f-2183c740c4e8", ResourceVersion:"871", Generation:0, CreationTimestamp:time.Date(2025, time.August, 19, 8, 10, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"779bcc4777", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-779bcc4777-462r6", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"caliad59457ace1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 19 08:10:07.187668 containerd[1576]: 2025-08-19 08:10:07.139 [INFO][3854] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="e915a55b00758aff533c3d66a880169d0930d4b3cf317ac9f7dadee054342c6d" Namespace="calico-system" Pod="whisker-779bcc4777-462r6" WorkloadEndpoint="localhost-k8s-whisker--779bcc4777--462r6-eth0" Aug 19 08:10:07.187764 containerd[1576]: 2025-08-19 08:10:07.139 [INFO][3854] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliad59457ace1 ContainerID="e915a55b00758aff533c3d66a880169d0930d4b3cf317ac9f7dadee054342c6d" Namespace="calico-system" Pod="whisker-779bcc4777-462r6" WorkloadEndpoint="localhost-k8s-whisker--779bcc4777--462r6-eth0" Aug 19 08:10:07.187764 containerd[1576]: 2025-08-19 08:10:07.152 [INFO][3854] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e915a55b00758aff533c3d66a880169d0930d4b3cf317ac9f7dadee054342c6d" Namespace="calico-system" Pod="whisker-779bcc4777-462r6" WorkloadEndpoint="localhost-k8s-whisker--779bcc4777--462r6-eth0" Aug 19 08:10:07.187824 containerd[1576]: 2025-08-19 08:10:07.154 [INFO][3854] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e915a55b00758aff533c3d66a880169d0930d4b3cf317ac9f7dadee054342c6d" Namespace="calico-system" Pod="whisker-779bcc4777-462r6" WorkloadEndpoint="localhost-k8s-whisker--779bcc4777--462r6-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--779bcc4777--462r6-eth0", GenerateName:"whisker-779bcc4777-", Namespace:"calico-system", SelfLink:"", UID:"43139b04-4c6c-4e51-bd9f-2183c740c4e8", ResourceVersion:"871", Generation:0, CreationTimestamp:time.Date(2025, time.August, 19, 8, 10, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"779bcc4777", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e915a55b00758aff533c3d66a880169d0930d4b3cf317ac9f7dadee054342c6d", Pod:"whisker-779bcc4777-462r6", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"caliad59457ace1", MAC:"52:f2:16:e9:e2:24", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 19 08:10:07.187897 containerd[1576]: 2025-08-19 08:10:07.168 [INFO][3854] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e915a55b00758aff533c3d66a880169d0930d4b3cf317ac9f7dadee054342c6d" Namespace="calico-system" Pod="whisker-779bcc4777-462r6" WorkloadEndpoint="localhost-k8s-whisker--779bcc4777--462r6-eth0" Aug 19 08:10:07.414569 containerd[1576]: time="2025-08-19T08:10:07.414301543Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-nklwx,Uid:53748459-7a24-45ce-b558-f491069c5fcc,Namespace:calico-system,Attempt:0,}" Aug 19 08:10:07.416548 kubelet[2717]: I0819 08:10:07.416501 2717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a4bc0eb3-a05e-4eb4-9536-287f438cf3f4" path="/var/lib/kubelet/pods/a4bc0eb3-a05e-4eb4-9536-287f438cf3f4/volumes" Aug 19 08:10:07.700017 systemd-networkd[1479]: vxlan.calico: Link UP Aug 19 08:10:07.700031 systemd-networkd[1479]: vxlan.calico: Gained carrier Aug 19 08:10:07.849167 containerd[1576]: time="2025-08-19T08:10:07.849091822Z" level=info msg="connecting to shim e915a55b00758aff533c3d66a880169d0930d4b3cf317ac9f7dadee054342c6d" address="unix:///run/containerd/s/4c8a9c97ec2c493b428553d93c42b117fd41a0c05b26b258565b8f2ed45409ff" namespace=k8s.io protocol=ttrpc version=3 Aug 19 08:10:07.877118 systemd[1]: Started cri-containerd-e915a55b00758aff533c3d66a880169d0930d4b3cf317ac9f7dadee054342c6d.scope - libcontainer container e915a55b00758aff533c3d66a880169d0930d4b3cf317ac9f7dadee054342c6d. 
Aug 19 08:10:07.893158 systemd-resolved[1413]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 19 08:10:08.376236 containerd[1576]: time="2025-08-19T08:10:08.376183142Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-779bcc4777-462r6,Uid:43139b04-4c6c-4e51-bd9f-2183c740c4e8,Namespace:calico-system,Attempt:0,} returns sandbox id \"e915a55b00758aff533c3d66a880169d0930d4b3cf317ac9f7dadee054342c6d\"" Aug 19 08:10:08.378387 containerd[1576]: time="2025-08-19T08:10:08.378124237Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Aug 19 08:10:08.382305 systemd-networkd[1479]: cali9b272bf6264: Link UP Aug 19 08:10:08.382885 systemd-networkd[1479]: cali9b272bf6264: Gained carrier Aug 19 08:10:08.400680 containerd[1576]: 2025-08-19 08:10:07.697 [INFO][4026] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--768f4c5c69--nklwx-eth0 goldmane-768f4c5c69- calico-system 53748459-7a24-45ce-b558-f491069c5fcc 803 0 2025-08-19 08:09:42 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:768f4c5c69 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-768f4c5c69-nklwx eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali9b272bf6264 [] [] }} ContainerID="4b54674df08a1828cdc762f3ac150b464745ad878bd320a9894ddaf7abc56f3f" Namespace="calico-system" Pod="goldmane-768f4c5c69-nklwx" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--nklwx-" Aug 19 08:10:08.400680 containerd[1576]: 2025-08-19 08:10:07.700 [INFO][4026] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4b54674df08a1828cdc762f3ac150b464745ad878bd320a9894ddaf7abc56f3f" Namespace="calico-system" Pod="goldmane-768f4c5c69-nklwx" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--nklwx-eth0" Aug 19 08:10:08.400680 containerd[1576]: 2025-08-19 08:10:07.745 [INFO][4051] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4b54674df08a1828cdc762f3ac150b464745ad878bd320a9894ddaf7abc56f3f" HandleID="k8s-pod-network.4b54674df08a1828cdc762f3ac150b464745ad878bd320a9894ddaf7abc56f3f" Workload="localhost-k8s-goldmane--768f4c5c69--nklwx-eth0" Aug 19 08:10:08.400921 containerd[1576]: 2025-08-19 08:10:07.745 [INFO][4051] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4b54674df08a1828cdc762f3ac150b464745ad878bd320a9894ddaf7abc56f3f" HandleID="k8s-pod-network.4b54674df08a1828cdc762f3ac150b464745ad878bd320a9894ddaf7abc56f3f" Workload="localhost-k8s-goldmane--768f4c5c69--nklwx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001355b0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-768f4c5c69-nklwx", "timestamp":"2025-08-19 08:10:07.745082634 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 19 08:10:08.400921 containerd[1576]: 2025-08-19 08:10:07.745 [INFO][4051] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 19 08:10:08.400921 containerd[1576]: 2025-08-19 08:10:07.745 [INFO][4051] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 19 08:10:08.400921 containerd[1576]: 2025-08-19 08:10:07.745 [INFO][4051] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 19 08:10:08.400921 containerd[1576]: 2025-08-19 08:10:07.758 [INFO][4051] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4b54674df08a1828cdc762f3ac150b464745ad878bd320a9894ddaf7abc56f3f" host="localhost" Aug 19 08:10:08.400921 containerd[1576]: 2025-08-19 08:10:07.773 [INFO][4051] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Aug 19 08:10:08.400921 containerd[1576]: 2025-08-19 08:10:07.779 [INFO][4051] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Aug 19 08:10:08.400921 containerd[1576]: 2025-08-19 08:10:07.781 [INFO][4051] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 19 08:10:08.400921 containerd[1576]: 2025-08-19 08:10:07.784 [INFO][4051] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 19 08:10:08.400921 containerd[1576]: 2025-08-19 08:10:07.784 [INFO][4051] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4b54674df08a1828cdc762f3ac150b464745ad878bd320a9894ddaf7abc56f3f" host="localhost" Aug 19 08:10:08.401311 containerd[1576]: 2025-08-19 08:10:07.792 [INFO][4051] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.4b54674df08a1828cdc762f3ac150b464745ad878bd320a9894ddaf7abc56f3f Aug 19 08:10:08.401311 containerd[1576]: 2025-08-19 08:10:07.882 [INFO][4051] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4b54674df08a1828cdc762f3ac150b464745ad878bd320a9894ddaf7abc56f3f" host="localhost" Aug 19 08:10:08.401311 containerd[1576]: 2025-08-19 08:10:08.375 [INFO][4051] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.4b54674df08a1828cdc762f3ac150b464745ad878bd320a9894ddaf7abc56f3f" host="localhost" Aug 19 08:10:08.401311 containerd[1576]: 2025-08-19 08:10:08.375 [INFO][4051] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.4b54674df08a1828cdc762f3ac150b464745ad878bd320a9894ddaf7abc56f3f" host="localhost" Aug 19 08:10:08.401311 containerd[1576]: 2025-08-19 08:10:08.375 [INFO][4051] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 19 08:10:08.401311 containerd[1576]: 2025-08-19 08:10:08.375 [INFO][4051] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="4b54674df08a1828cdc762f3ac150b464745ad878bd320a9894ddaf7abc56f3f" HandleID="k8s-pod-network.4b54674df08a1828cdc762f3ac150b464745ad878bd320a9894ddaf7abc56f3f" Workload="localhost-k8s-goldmane--768f4c5c69--nklwx-eth0" Aug 19 08:10:08.401447 containerd[1576]: 2025-08-19 08:10:08.378 [INFO][4026] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4b54674df08a1828cdc762f3ac150b464745ad878bd320a9894ddaf7abc56f3f" Namespace="calico-system" Pod="goldmane-768f4c5c69-nklwx" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--nklwx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--768f4c5c69--nklwx-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"53748459-7a24-45ce-b558-f491069c5fcc", ResourceVersion:"803", Generation:0, CreationTimestamp:time.Date(2025, time.August, 19, 8, 9, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-768f4c5c69-nklwx", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali9b272bf6264", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 19 08:10:08.401447 containerd[1576]: 2025-08-19 08:10:08.379 [INFO][4026] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="4b54674df08a1828cdc762f3ac150b464745ad878bd320a9894ddaf7abc56f3f" Namespace="calico-system" Pod="goldmane-768f4c5c69-nklwx" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--nklwx-eth0" Aug 19 08:10:08.401530 containerd[1576]: 2025-08-19 08:10:08.379 [INFO][4026] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9b272bf6264 ContainerID="4b54674df08a1828cdc762f3ac150b464745ad878bd320a9894ddaf7abc56f3f" Namespace="calico-system" Pod="goldmane-768f4c5c69-nklwx" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--nklwx-eth0" Aug 19 08:10:08.401530 containerd[1576]: 2025-08-19 08:10:08.383 [INFO][4026] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4b54674df08a1828cdc762f3ac150b464745ad878bd320a9894ddaf7abc56f3f" Namespace="calico-system" Pod="goldmane-768f4c5c69-nklwx" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--nklwx-eth0" Aug 19 08:10:08.401574 containerd[1576]: 2025-08-19 08:10:08.385 [INFO][4026] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4b54674df08a1828cdc762f3ac150b464745ad878bd320a9894ddaf7abc56f3f" Namespace="calico-system" Pod="goldmane-768f4c5c69-nklwx" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--nklwx-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--768f4c5c69--nklwx-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"53748459-7a24-45ce-b558-f491069c5fcc", ResourceVersion:"803", Generation:0, CreationTimestamp:time.Date(2025, time.August, 19, 8, 9, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4b54674df08a1828cdc762f3ac150b464745ad878bd320a9894ddaf7abc56f3f", Pod:"goldmane-768f4c5c69-nklwx", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali9b272bf6264", MAC:"42:5f:e7:7a:4a:01", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 19 08:10:08.401623 containerd[1576]: 2025-08-19 08:10:08.396 [INFO][4026] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4b54674df08a1828cdc762f3ac150b464745ad878bd320a9894ddaf7abc56f3f" Namespace="calico-system" Pod="goldmane-768f4c5c69-nklwx" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--nklwx-eth0" Aug 19 08:10:08.413964 containerd[1576]: time="2025-08-19T08:10:08.413839722Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-568b5cdb8-ksv62,Uid:ddb42597-a313-4a0f-8076-ab24fe970535,Namespace:calico-system,Attempt:0,}" Aug 19 08:10:08.433585 containerd[1576]: time="2025-08-19T08:10:08.433508377Z" level=info msg="connecting to shim 4b54674df08a1828cdc762f3ac150b464745ad878bd320a9894ddaf7abc56f3f" address="unix:///run/containerd/s/b39498a49de32664d521398e0f88b4d6718887ecb26d0aee14bcdbafc16e5a80" namespace=k8s.io protocol=ttrpc version=3 Aug 19 08:10:08.462227 systemd[1]: Started cri-containerd-4b54674df08a1828cdc762f3ac150b464745ad878bd320a9894ddaf7abc56f3f.scope - libcontainer container 4b54674df08a1828cdc762f3ac150b464745ad878bd320a9894ddaf7abc56f3f. 
Aug 19 08:10:08.478893 systemd-resolved[1413]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 19 08:10:08.530404 containerd[1576]: time="2025-08-19T08:10:08.530323207Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-nklwx,Uid:53748459-7a24-45ce-b558-f491069c5fcc,Namespace:calico-system,Attempt:0,} returns sandbox id \"4b54674df08a1828cdc762f3ac150b464745ad878bd320a9894ddaf7abc56f3f\"" Aug 19 08:10:08.569363 systemd-networkd[1479]: cali1822aad00ab: Link UP Aug 19 08:10:08.570615 systemd-networkd[1479]: cali1822aad00ab: Gained carrier Aug 19 08:10:08.585627 containerd[1576]: 2025-08-19 08:10:08.456 [INFO][4161] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--568b5cdb8--ksv62-eth0 calico-kube-controllers-568b5cdb8- calico-system ddb42597-a313-4a0f-8076-ab24fe970535 801 0 2025-08-19 08:09:43 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:568b5cdb8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-568b5cdb8-ksv62 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali1822aad00ab [] [] }} ContainerID="3b1fb9013cf2991d343ec27c50b32c0c13852cb7f55053b3bfb2c9a849cda2b2" Namespace="calico-system" Pod="calico-kube-controllers-568b5cdb8-ksv62" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--568b5cdb8--ksv62-" Aug 19 08:10:08.585627 containerd[1576]: 2025-08-19 08:10:08.456 [INFO][4161] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3b1fb9013cf2991d343ec27c50b32c0c13852cb7f55053b3bfb2c9a849cda2b2" Namespace="calico-system" Pod="calico-kube-controllers-568b5cdb8-ksv62" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--568b5cdb8--ksv62-eth0" Aug 19 08:10:08.585627 containerd[1576]: 2025-08-19 08:10:08.487 [INFO][4207] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3b1fb9013cf2991d343ec27c50b32c0c13852cb7f55053b3bfb2c9a849cda2b2" HandleID="k8s-pod-network.3b1fb9013cf2991d343ec27c50b32c0c13852cb7f55053b3bfb2c9a849cda2b2" Workload="localhost-k8s-calico--kube--controllers--568b5cdb8--ksv62-eth0" Aug 19 08:10:08.585876 containerd[1576]: 2025-08-19 08:10:08.487 [INFO][4207] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3b1fb9013cf2991d343ec27c50b32c0c13852cb7f55053b3bfb2c9a849cda2b2" HandleID="k8s-pod-network.3b1fb9013cf2991d343ec27c50b32c0c13852cb7f55053b3bfb2c9a849cda2b2" Workload="localhost-k8s-calico--kube--controllers--568b5cdb8--ksv62-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000325c20), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-568b5cdb8-ksv62", "timestamp":"2025-08-19 08:10:08.487720821 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 19 08:10:08.585876 containerd[1576]: 2025-08-19 08:10:08.487 [INFO][4207] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 19 08:10:08.585876 containerd[1576]: 2025-08-19 08:10:08.488 [INFO][4207] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 19 08:10:08.585876 containerd[1576]: 2025-08-19 08:10:08.488 [INFO][4207] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 19 08:10:08.585876 containerd[1576]: 2025-08-19 08:10:08.497 [INFO][4207] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3b1fb9013cf2991d343ec27c50b32c0c13852cb7f55053b3bfb2c9a849cda2b2" host="localhost" Aug 19 08:10:08.585876 containerd[1576]: 2025-08-19 08:10:08.504 [INFO][4207] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Aug 19 08:10:08.585876 containerd[1576]: 2025-08-19 08:10:08.516 [INFO][4207] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Aug 19 08:10:08.585876 containerd[1576]: 2025-08-19 08:10:08.523 [INFO][4207] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 19 08:10:08.585876 containerd[1576]: 2025-08-19 08:10:08.532 [INFO][4207] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 19 08:10:08.585876 containerd[1576]: 2025-08-19 08:10:08.534 [INFO][4207] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3b1fb9013cf2991d343ec27c50b32c0c13852cb7f55053b3bfb2c9a849cda2b2" host="localhost" Aug 19 08:10:08.586203 containerd[1576]: 2025-08-19 08:10:08.540 [INFO][4207] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.3b1fb9013cf2991d343ec27c50b32c0c13852cb7f55053b3bfb2c9a849cda2b2 Aug 19 08:10:08.586203 containerd[1576]: 2025-08-19 08:10:08.547 [INFO][4207] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3b1fb9013cf2991d343ec27c50b32c0c13852cb7f55053b3bfb2c9a849cda2b2" host="localhost" Aug 19 08:10:08.586203 containerd[1576]: 2025-08-19 08:10:08.563 [INFO][4207] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.3b1fb9013cf2991d343ec27c50b32c0c13852cb7f55053b3bfb2c9a849cda2b2" host="localhost" Aug 19 08:10:08.586203 containerd[1576]: 2025-08-19 08:10:08.563 [INFO][4207] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.3b1fb9013cf2991d343ec27c50b32c0c13852cb7f55053b3bfb2c9a849cda2b2" host="localhost" Aug 19 08:10:08.586203 containerd[1576]: 2025-08-19 08:10:08.563 [INFO][4207] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 19 08:10:08.586203 containerd[1576]: 2025-08-19 08:10:08.563 [INFO][4207] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="3b1fb9013cf2991d343ec27c50b32c0c13852cb7f55053b3bfb2c9a849cda2b2" HandleID="k8s-pod-network.3b1fb9013cf2991d343ec27c50b32c0c13852cb7f55053b3bfb2c9a849cda2b2" Workload="localhost-k8s-calico--kube--controllers--568b5cdb8--ksv62-eth0" Aug 19 08:10:08.586341 containerd[1576]: 2025-08-19 08:10:08.567 [INFO][4161] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3b1fb9013cf2991d343ec27c50b32c0c13852cb7f55053b3bfb2c9a849cda2b2" Namespace="calico-system" Pod="calico-kube-controllers-568b5cdb8-ksv62" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--568b5cdb8--ksv62-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--568b5cdb8--ksv62-eth0", GenerateName:"calico-kube-controllers-568b5cdb8-", Namespace:"calico-system", SelfLink:"", UID:"ddb42597-a313-4a0f-8076-ab24fe970535", ResourceVersion:"801", Generation:0, CreationTimestamp:time.Date(2025, time.August, 19, 8, 9, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"568b5cdb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-568b5cdb8-ksv62", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali1822aad00ab", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 19 08:10:08.586412 containerd[1576]: 2025-08-19 08:10:08.567 [INFO][4161] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="3b1fb9013cf2991d343ec27c50b32c0c13852cb7f55053b3bfb2c9a849cda2b2" Namespace="calico-system" Pod="calico-kube-controllers-568b5cdb8-ksv62" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--568b5cdb8--ksv62-eth0" Aug 19 08:10:08.586412 containerd[1576]: 2025-08-19 08:10:08.567 [INFO][4161] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1822aad00ab ContainerID="3b1fb9013cf2991d343ec27c50b32c0c13852cb7f55053b3bfb2c9a849cda2b2" Namespace="calico-system" Pod="calico-kube-controllers-568b5cdb8-ksv62" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--568b5cdb8--ksv62-eth0" Aug 19 08:10:08.586412 containerd[1576]: 2025-08-19 08:10:08.571 [INFO][4161] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3b1fb9013cf2991d343ec27c50b32c0c13852cb7f55053b3bfb2c9a849cda2b2" Namespace="calico-system" Pod="calico-kube-controllers-568b5cdb8-ksv62" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--568b5cdb8--ksv62-eth0" Aug 19 08:10:08.586483 containerd[1576]: 2025-08-19 08:10:08.571 [INFO][4161] cni-plugin/k8s.go 446: Added Mac, interface name, 
and active container ID to endpoint ContainerID="3b1fb9013cf2991d343ec27c50b32c0c13852cb7f55053b3bfb2c9a849cda2b2" Namespace="calico-system" Pod="calico-kube-controllers-568b5cdb8-ksv62" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--568b5cdb8--ksv62-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--568b5cdb8--ksv62-eth0", GenerateName:"calico-kube-controllers-568b5cdb8-", Namespace:"calico-system", SelfLink:"", UID:"ddb42597-a313-4a0f-8076-ab24fe970535", ResourceVersion:"801", Generation:0, CreationTimestamp:time.Date(2025, time.August, 19, 8, 9, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"568b5cdb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3b1fb9013cf2991d343ec27c50b32c0c13852cb7f55053b3bfb2c9a849cda2b2", Pod:"calico-kube-controllers-568b5cdb8-ksv62", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali1822aad00ab", MAC:"9e:6f:b0:a9:98:fa", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 19 08:10:08.586534 containerd[1576]: 2025-08-19 08:10:08.581 [INFO][4161] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3b1fb9013cf2991d343ec27c50b32c0c13852cb7f55053b3bfb2c9a849cda2b2" Namespace="calico-system" Pod="calico-kube-controllers-568b5cdb8-ksv62" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--568b5cdb8--ksv62-eth0" Aug 19 08:10:08.608830 containerd[1576]: time="2025-08-19T08:10:08.608734231Z" level=info msg="connecting to shim 3b1fb9013cf2991d343ec27c50b32c0c13852cb7f55053b3bfb2c9a849cda2b2" address="unix:///run/containerd/s/85755e95843b80934d17c10cee05ab9d0214a52619cf0e747897a4207cfe83ec" namespace=k8s.io protocol=ttrpc version=3 Aug 19 08:10:08.620206 systemd-networkd[1479]: caliad59457ace1: Gained IPv6LL Aug 19 08:10:08.635357 systemd[1]: Started cri-containerd-3b1fb9013cf2991d343ec27c50b32c0c13852cb7f55053b3bfb2c9a849cda2b2.scope - libcontainer container 3b1fb9013cf2991d343ec27c50b32c0c13852cb7f55053b3bfb2c9a849cda2b2. Aug 19 08:10:08.652563 systemd-resolved[1413]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 19 08:10:08.685129 containerd[1576]: time="2025-08-19T08:10:08.685055592Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-568b5cdb8-ksv62,Uid:ddb42597-a313-4a0f-8076-ab24fe970535,Namespace:calico-system,Attempt:0,} returns sandbox id \"3b1fb9013cf2991d343ec27c50b32c0c13852cb7f55053b3bfb2c9a849cda2b2\"" Aug 19 08:10:08.913375 systemd[1]: Started sshd@7-10.0.0.50:22-10.0.0.1:34758.service - OpenSSH per-connection server daemon (10.0.0.1:34758). 
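The ipam/ipam_plugin.go and ipam/ipam.go entries above trace Calico's IPAM flow for each pod: take the host-wide IPAM lock, look up the host's block affinities, load the 192.168.88.128/26 block, claim one address, write the block back, and release the lock. The AutoAssignArgs struct dumped in the log belongs to libcalico-go's public IPAM API; the sketch below mirrors those arguments as an illustration only (import paths and the exact AutoAssign return type vary between Calico releases, and the handle value here is a placeholder, not the real sandbox ID).

// Hedged sketch: mirrors the ipam.AutoAssignArgs values printed in the log above.
// Import paths and the AutoAssign return type differ across Calico releases, so this
// illustrates the flow rather than reproducing the CNI plugin's own code path.
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/projectcalico/calico/libcalico-go/lib/clientv3"
	"github.com/projectcalico/calico/libcalico-go/lib/ipam"
)

func main() {
	c, err := clientv3.NewFromEnv() // datastore config taken from the environment
	if err != nil {
		log.Fatal(err)
	}

	handle := "k8s-pod-network.<containerID>" // placeholder; the plugin uses the real sandbox ID
	args := ipam.AutoAssignArgs{
		Num4:     1, // "IPv4=1 IPv6=0" in the log
		Num6:     0,
		HandleID: &handle,
		Attrs: map[string]string{
			"namespace": "calico-system",
			"node":      "localhost",
			"pod":       "calico-kube-controllers-568b5cdb8-ksv62",
		},
		Hostname:    "localhost",
		IntendedUse: "Workload",
	}

	// AutoAssign performs the sequence the log shows: acquire the host-wide lock,
	// confirm the block affinity (192.168.88.128/26 here), claim an address,
	// write the block back, and release the lock.
	v4, _, err := c.IPAM().AutoAssign(context.Background(), args)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("assigned:", v4)
}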
Aug 19 08:10:08.987574 sshd[4286]: Accepted publickey for core from 10.0.0.1 port 34758 ssh2: RSA SHA256:kecLVWRG1G7MHrHN/yG6X078KPWjs/jTMbEJqAmOzyM Aug 19 08:10:08.989492 sshd-session[4286]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 08:10:08.995166 systemd-logind[1555]: New session 8 of user core. Aug 19 08:10:09.005177 systemd[1]: Started session-8.scope - Session 8 of User core. Aug 19 08:10:09.148123 sshd[4289]: Connection closed by 10.0.0.1 port 34758 Aug 19 08:10:09.148440 sshd-session[4286]: pam_unix(sshd:session): session closed for user core Aug 19 08:10:09.152924 systemd[1]: sshd@7-10.0.0.50:22-10.0.0.1:34758.service: Deactivated successfully. Aug 19 08:10:09.155447 systemd[1]: session-8.scope: Deactivated successfully. Aug 19 08:10:09.156319 systemd-logind[1555]: Session 8 logged out. Waiting for processes to exit. Aug 19 08:10:09.158287 systemd-logind[1555]: Removed session 8. Aug 19 08:10:09.196167 systemd-networkd[1479]: vxlan.calico: Gained IPv6LL Aug 19 08:10:09.963205 systemd-networkd[1479]: cali9b272bf6264: Gained IPv6LL Aug 19 08:10:10.105819 containerd[1576]: time="2025-08-19T08:10:10.105737224Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:10:10.106581 containerd[1576]: time="2025-08-19T08:10:10.106530129Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4661207" Aug 19 08:10:10.107989 containerd[1576]: time="2025-08-19T08:10:10.107906588Z" level=info msg="ImageCreate event name:\"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:10:10.110682 containerd[1576]: time="2025-08-19T08:10:10.110642944Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:10:10.111516 containerd[1576]: time="2025-08-19T08:10:10.111482137Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"6153902\" in 1.733317374s" Aug 19 08:10:10.111585 containerd[1576]: time="2025-08-19T08:10:10.111522532Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\"" Aug 19 08:10:10.112957 containerd[1576]: time="2025-08-19T08:10:10.112718783Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Aug 19 08:10:10.114559 containerd[1576]: time="2025-08-19T08:10:10.114484561Z" level=info msg="CreateContainer within sandbox \"e915a55b00758aff533c3d66a880169d0930d4b3cf317ac9f7dadee054342c6d\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Aug 19 08:10:10.132742 containerd[1576]: time="2025-08-19T08:10:10.132653970Z" level=info msg="Container da19414ea572c1c8a06dcb43ccc1ba48fdcc265c20220447437d378503f70f6b: CDI devices from CRI Config.CDIDevices: []" Aug 19 08:10:10.141803 containerd[1576]: time="2025-08-19T08:10:10.141745773Z" level=info msg="CreateContainer within sandbox 
\"e915a55b00758aff533c3d66a880169d0930d4b3cf317ac9f7dadee054342c6d\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"da19414ea572c1c8a06dcb43ccc1ba48fdcc265c20220447437d378503f70f6b\"" Aug 19 08:10:10.142496 containerd[1576]: time="2025-08-19T08:10:10.142418513Z" level=info msg="StartContainer for \"da19414ea572c1c8a06dcb43ccc1ba48fdcc265c20220447437d378503f70f6b\"" Aug 19 08:10:10.143582 containerd[1576]: time="2025-08-19T08:10:10.143556646Z" level=info msg="connecting to shim da19414ea572c1c8a06dcb43ccc1ba48fdcc265c20220447437d378503f70f6b" address="unix:///run/containerd/s/4c8a9c97ec2c493b428553d93c42b117fd41a0c05b26b258565b8f2ed45409ff" protocol=ttrpc version=3 Aug 19 08:10:10.171304 systemd[1]: Started cri-containerd-da19414ea572c1c8a06dcb43ccc1ba48fdcc265c20220447437d378503f70f6b.scope - libcontainer container da19414ea572c1c8a06dcb43ccc1ba48fdcc265c20220447437d378503f70f6b. Aug 19 08:10:10.230030 containerd[1576]: time="2025-08-19T08:10:10.229492912Z" level=info msg="StartContainer for \"da19414ea572c1c8a06dcb43ccc1ba48fdcc265c20220447437d378503f70f6b\" returns successfully" Aug 19 08:10:10.414560 containerd[1576]: time="2025-08-19T08:10:10.414494210Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-zzdnh,Uid:488bf5c5-4283-4168-b2e0-bcfa8eee4b04,Namespace:kube-system,Attempt:0,}" Aug 19 08:10:10.414744 containerd[1576]: time="2025-08-19T08:10:10.414534155Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-47j5b,Uid:d91662d8-9a6c-4379-a701-086161e4c200,Namespace:calico-system,Attempt:0,}" Aug 19 08:10:10.475299 systemd-networkd[1479]: cali1822aad00ab: Gained IPv6LL Aug 19 08:10:10.635166 systemd-networkd[1479]: calic5313ab3a2b: Link UP Aug 19 08:10:10.636373 systemd-networkd[1479]: calic5313ab3a2b: Gained carrier Aug 19 08:10:10.653080 containerd[1576]: 2025-08-19 08:10:10.465 [INFO][4352] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--47j5b-eth0 csi-node-driver- calico-system d91662d8-9a6c-4379-a701-086161e4c200 684 0 2025-08-19 08:09:43 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:8967bcb6f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-47j5b eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calic5313ab3a2b [] [] }} ContainerID="8b2b4d7de95b34cadaf52b98989df1d55d7272eb03e61d43b031362d86195ca5" Namespace="calico-system" Pod="csi-node-driver-47j5b" WorkloadEndpoint="localhost-k8s-csi--node--driver--47j5b-" Aug 19 08:10:10.653080 containerd[1576]: 2025-08-19 08:10:10.465 [INFO][4352] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8b2b4d7de95b34cadaf52b98989df1d55d7272eb03e61d43b031362d86195ca5" Namespace="calico-system" Pod="csi-node-driver-47j5b" WorkloadEndpoint="localhost-k8s-csi--node--driver--47j5b-eth0" Aug 19 08:10:10.653080 containerd[1576]: 2025-08-19 08:10:10.499 [INFO][4379] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8b2b4d7de95b34cadaf52b98989df1d55d7272eb03e61d43b031362d86195ca5" HandleID="k8s-pod-network.8b2b4d7de95b34cadaf52b98989df1d55d7272eb03e61d43b031362d86195ca5" Workload="localhost-k8s-csi--node--driver--47j5b-eth0" Aug 19 08:10:10.653351 containerd[1576]: 2025-08-19 
08:10:10.499 [INFO][4379] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8b2b4d7de95b34cadaf52b98989df1d55d7272eb03e61d43b031362d86195ca5" HandleID="k8s-pod-network.8b2b4d7de95b34cadaf52b98989df1d55d7272eb03e61d43b031362d86195ca5" Workload="localhost-k8s-csi--node--driver--47j5b-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000138e30), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-47j5b", "timestamp":"2025-08-19 08:10:10.498989415 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 19 08:10:10.653351 containerd[1576]: 2025-08-19 08:10:10.499 [INFO][4379] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 19 08:10:10.653351 containerd[1576]: 2025-08-19 08:10:10.499 [INFO][4379] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 19 08:10:10.653351 containerd[1576]: 2025-08-19 08:10:10.499 [INFO][4379] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 19 08:10:10.653351 containerd[1576]: 2025-08-19 08:10:10.588 [INFO][4379] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8b2b4d7de95b34cadaf52b98989df1d55d7272eb03e61d43b031362d86195ca5" host="localhost" Aug 19 08:10:10.653351 containerd[1576]: 2025-08-19 08:10:10.594 [INFO][4379] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Aug 19 08:10:10.653351 containerd[1576]: 2025-08-19 08:10:10.598 [INFO][4379] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Aug 19 08:10:10.653351 containerd[1576]: 2025-08-19 08:10:10.600 [INFO][4379] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 19 08:10:10.653351 containerd[1576]: 2025-08-19 08:10:10.603 [INFO][4379] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 19 08:10:10.653351 containerd[1576]: 2025-08-19 08:10:10.603 [INFO][4379] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8b2b4d7de95b34cadaf52b98989df1d55d7272eb03e61d43b031362d86195ca5" host="localhost" Aug 19 08:10:10.653583 containerd[1576]: 2025-08-19 08:10:10.604 [INFO][4379] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.8b2b4d7de95b34cadaf52b98989df1d55d7272eb03e61d43b031362d86195ca5 Aug 19 08:10:10.653583 containerd[1576]: 2025-08-19 08:10:10.621 [INFO][4379] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8b2b4d7de95b34cadaf52b98989df1d55d7272eb03e61d43b031362d86195ca5" host="localhost" Aug 19 08:10:10.653583 containerd[1576]: 2025-08-19 08:10:10.627 [INFO][4379] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.8b2b4d7de95b34cadaf52b98989df1d55d7272eb03e61d43b031362d86195ca5" host="localhost" Aug 19 08:10:10.653583 containerd[1576]: 2025-08-19 08:10:10.627 [INFO][4379] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.8b2b4d7de95b34cadaf52b98989df1d55d7272eb03e61d43b031362d86195ca5" host="localhost" Aug 19 08:10:10.653583 containerd[1576]: 2025-08-19 08:10:10.627 [INFO][4379] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 19 08:10:10.653583 containerd[1576]: 2025-08-19 08:10:10.627 [INFO][4379] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="8b2b4d7de95b34cadaf52b98989df1d55d7272eb03e61d43b031362d86195ca5" HandleID="k8s-pod-network.8b2b4d7de95b34cadaf52b98989df1d55d7272eb03e61d43b031362d86195ca5" Workload="localhost-k8s-csi--node--driver--47j5b-eth0" Aug 19 08:10:10.653701 containerd[1576]: 2025-08-19 08:10:10.631 [INFO][4352] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8b2b4d7de95b34cadaf52b98989df1d55d7272eb03e61d43b031362d86195ca5" Namespace="calico-system" Pod="csi-node-driver-47j5b" WorkloadEndpoint="localhost-k8s-csi--node--driver--47j5b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--47j5b-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d91662d8-9a6c-4379-a701-086161e4c200", ResourceVersion:"684", Generation:0, CreationTimestamp:time.Date(2025, time.August, 19, 8, 9, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-47j5b", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic5313ab3a2b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 19 08:10:10.653756 containerd[1576]: 2025-08-19 08:10:10.631 [INFO][4352] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="8b2b4d7de95b34cadaf52b98989df1d55d7272eb03e61d43b031362d86195ca5" Namespace="calico-system" Pod="csi-node-driver-47j5b" WorkloadEndpoint="localhost-k8s-csi--node--driver--47j5b-eth0" Aug 19 08:10:10.653756 containerd[1576]: 2025-08-19 08:10:10.631 [INFO][4352] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic5313ab3a2b ContainerID="8b2b4d7de95b34cadaf52b98989df1d55d7272eb03e61d43b031362d86195ca5" Namespace="calico-system" Pod="csi-node-driver-47j5b" WorkloadEndpoint="localhost-k8s-csi--node--driver--47j5b-eth0" Aug 19 08:10:10.653756 containerd[1576]: 2025-08-19 08:10:10.636 [INFO][4352] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8b2b4d7de95b34cadaf52b98989df1d55d7272eb03e61d43b031362d86195ca5" Namespace="calico-system" Pod="csi-node-driver-47j5b" WorkloadEndpoint="localhost-k8s-csi--node--driver--47j5b-eth0" Aug 19 08:10:10.653822 containerd[1576]: 2025-08-19 08:10:10.637 [INFO][4352] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8b2b4d7de95b34cadaf52b98989df1d55d7272eb03e61d43b031362d86195ca5" Namespace="calico-system" Pod="csi-node-driver-47j5b" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--47j5b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--47j5b-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d91662d8-9a6c-4379-a701-086161e4c200", ResourceVersion:"684", Generation:0, CreationTimestamp:time.Date(2025, time.August, 19, 8, 9, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8b2b4d7de95b34cadaf52b98989df1d55d7272eb03e61d43b031362d86195ca5", Pod:"csi-node-driver-47j5b", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic5313ab3a2b", MAC:"aa:a1:03:90:86:87", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 19 08:10:10.653869 containerd[1576]: 2025-08-19 08:10:10.649 [INFO][4352] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8b2b4d7de95b34cadaf52b98989df1d55d7272eb03e61d43b031362d86195ca5" Namespace="calico-system" Pod="csi-node-driver-47j5b" WorkloadEndpoint="localhost-k8s-csi--node--driver--47j5b-eth0" Aug 19 08:10:10.673385 containerd[1576]: time="2025-08-19T08:10:10.673320283Z" level=info msg="connecting to shim 8b2b4d7de95b34cadaf52b98989df1d55d7272eb03e61d43b031362d86195ca5" address="unix:///run/containerd/s/724c374fe5777d26dc6b469837e72d34db3c7d4f184a663ef0b3a6a3a95c5589" namespace=k8s.io protocol=ttrpc version=3 Aug 19 08:10:10.709231 systemd[1]: Started cri-containerd-8b2b4d7de95b34cadaf52b98989df1d55d7272eb03e61d43b031362d86195ca5.scope - libcontainer container 8b2b4d7de95b34cadaf52b98989df1d55d7272eb03e61d43b031362d86195ca5. 
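The dataplane_linux.go entries show the plugin naming the host side of each veth (cali9b272bf6264, cali1822aad00ab, calic5313ab3a2b) and writing the MAC back into the WorkloadEndpoint. As a hedged aside, the interface state the log reports can be double-checked from the node with the vishvananda/netlink package; this is only a verification sketch, not the dataplane code Calico itself runs.

// Verification sketch (assumption: run on the node, with the
// github.com/vishvananda/netlink package available). Looks up the host-side
// veth named in the log and prints the attributes Calico recorded.
package main

import (
	"fmt"
	"log"

	"github.com/vishvananda/netlink"
)

func main() {
	link, err := netlink.LinkByName("calic5313ab3a2b")
	if err != nil {
		log.Fatal(err)
	}
	attrs := link.Attrs()
	fmt.Printf("name=%s mac=%s mtu=%d state=%s\n",
		attrs.Name, attrs.HardwareAddr, attrs.MTU, attrs.OperState)
}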
Aug 19 08:10:10.727767 systemd-networkd[1479]: cali964ff875498: Link UP Aug 19 08:10:10.728212 systemd-networkd[1479]: cali964ff875498: Gained carrier Aug 19 08:10:10.728768 systemd-resolved[1413]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 19 08:10:10.747592 containerd[1576]: 2025-08-19 08:10:10.467 [INFO][4345] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--zzdnh-eth0 coredns-668d6bf9bc- kube-system 488bf5c5-4283-4168-b2e0-bcfa8eee4b04 793 0 2025-08-19 08:09:31 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-zzdnh eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali964ff875498 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="1ec01cc9aed24e58e0d1267ad9301aa8339e06348afffdab422e7705cc31e209" Namespace="kube-system" Pod="coredns-668d6bf9bc-zzdnh" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--zzdnh-" Aug 19 08:10:10.747592 containerd[1576]: 2025-08-19 08:10:10.467 [INFO][4345] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1ec01cc9aed24e58e0d1267ad9301aa8339e06348afffdab422e7705cc31e209" Namespace="kube-system" Pod="coredns-668d6bf9bc-zzdnh" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--zzdnh-eth0" Aug 19 08:10:10.747592 containerd[1576]: 2025-08-19 08:10:10.509 [INFO][4381] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1ec01cc9aed24e58e0d1267ad9301aa8339e06348afffdab422e7705cc31e209" HandleID="k8s-pod-network.1ec01cc9aed24e58e0d1267ad9301aa8339e06348afffdab422e7705cc31e209" Workload="localhost-k8s-coredns--668d6bf9bc--zzdnh-eth0" Aug 19 08:10:10.747825 containerd[1576]: 2025-08-19 08:10:10.509 [INFO][4381] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1ec01cc9aed24e58e0d1267ad9301aa8339e06348afffdab422e7705cc31e209" HandleID="k8s-pod-network.1ec01cc9aed24e58e0d1267ad9301aa8339e06348afffdab422e7705cc31e209" Workload="localhost-k8s-coredns--668d6bf9bc--zzdnh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001bb780), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-zzdnh", "timestamp":"2025-08-19 08:10:10.509168936 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 19 08:10:10.747825 containerd[1576]: 2025-08-19 08:10:10.509 [INFO][4381] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 19 08:10:10.747825 containerd[1576]: 2025-08-19 08:10:10.627 [INFO][4381] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 19 08:10:10.747825 containerd[1576]: 2025-08-19 08:10:10.628 [INFO][4381] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 19 08:10:10.747825 containerd[1576]: 2025-08-19 08:10:10.688 [INFO][4381] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1ec01cc9aed24e58e0d1267ad9301aa8339e06348afffdab422e7705cc31e209" host="localhost" Aug 19 08:10:10.747825 containerd[1576]: 2025-08-19 08:10:10.694 [INFO][4381] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Aug 19 08:10:10.747825 containerd[1576]: 2025-08-19 08:10:10.700 [INFO][4381] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Aug 19 08:10:10.747825 containerd[1576]: 2025-08-19 08:10:10.702 [INFO][4381] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 19 08:10:10.747825 containerd[1576]: 2025-08-19 08:10:10.704 [INFO][4381] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 19 08:10:10.747825 containerd[1576]: 2025-08-19 08:10:10.704 [INFO][4381] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.1ec01cc9aed24e58e0d1267ad9301aa8339e06348afffdab422e7705cc31e209" host="localhost" Aug 19 08:10:10.748131 containerd[1576]: 2025-08-19 08:10:10.706 [INFO][4381] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.1ec01cc9aed24e58e0d1267ad9301aa8339e06348afffdab422e7705cc31e209 Aug 19 08:10:10.748131 containerd[1576]: 2025-08-19 08:10:10.711 [INFO][4381] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.1ec01cc9aed24e58e0d1267ad9301aa8339e06348afffdab422e7705cc31e209" host="localhost" Aug 19 08:10:10.748131 containerd[1576]: 2025-08-19 08:10:10.720 [INFO][4381] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.1ec01cc9aed24e58e0d1267ad9301aa8339e06348afffdab422e7705cc31e209" host="localhost" Aug 19 08:10:10.748131 containerd[1576]: 2025-08-19 08:10:10.720 [INFO][4381] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.1ec01cc9aed24e58e0d1267ad9301aa8339e06348afffdab422e7705cc31e209" host="localhost" Aug 19 08:10:10.748131 containerd[1576]: 2025-08-19 08:10:10.720 [INFO][4381] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 19 08:10:10.748131 containerd[1576]: 2025-08-19 08:10:10.720 [INFO][4381] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="1ec01cc9aed24e58e0d1267ad9301aa8339e06348afffdab422e7705cc31e209" HandleID="k8s-pod-network.1ec01cc9aed24e58e0d1267ad9301aa8339e06348afffdab422e7705cc31e209" Workload="localhost-k8s-coredns--668d6bf9bc--zzdnh-eth0" Aug 19 08:10:10.748266 containerd[1576]: 2025-08-19 08:10:10.724 [INFO][4345] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1ec01cc9aed24e58e0d1267ad9301aa8339e06348afffdab422e7705cc31e209" Namespace="kube-system" Pod="coredns-668d6bf9bc-zzdnh" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--zzdnh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--zzdnh-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"488bf5c5-4283-4168-b2e0-bcfa8eee4b04", ResourceVersion:"793", Generation:0, CreationTimestamp:time.Date(2025, time.August, 19, 8, 9, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-zzdnh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali964ff875498", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 19 08:10:10.748343 containerd[1576]: 2025-08-19 08:10:10.724 [INFO][4345] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="1ec01cc9aed24e58e0d1267ad9301aa8339e06348afffdab422e7705cc31e209" Namespace="kube-system" Pod="coredns-668d6bf9bc-zzdnh" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--zzdnh-eth0" Aug 19 08:10:10.748343 containerd[1576]: 2025-08-19 08:10:10.724 [INFO][4345] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali964ff875498 ContainerID="1ec01cc9aed24e58e0d1267ad9301aa8339e06348afffdab422e7705cc31e209" Namespace="kube-system" Pod="coredns-668d6bf9bc-zzdnh" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--zzdnh-eth0" Aug 19 08:10:10.748343 containerd[1576]: 2025-08-19 08:10:10.728 [INFO][4345] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1ec01cc9aed24e58e0d1267ad9301aa8339e06348afffdab422e7705cc31e209" Namespace="kube-system" Pod="coredns-668d6bf9bc-zzdnh" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--zzdnh-eth0" Aug 19 08:10:10.748405 
containerd[1576]: 2025-08-19 08:10:10.729 [INFO][4345] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1ec01cc9aed24e58e0d1267ad9301aa8339e06348afffdab422e7705cc31e209" Namespace="kube-system" Pod="coredns-668d6bf9bc-zzdnh" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--zzdnh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--zzdnh-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"488bf5c5-4283-4168-b2e0-bcfa8eee4b04", ResourceVersion:"793", Generation:0, CreationTimestamp:time.Date(2025, time.August, 19, 8, 9, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1ec01cc9aed24e58e0d1267ad9301aa8339e06348afffdab422e7705cc31e209", Pod:"coredns-668d6bf9bc-zzdnh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali964ff875498", MAC:"ca:a3:48:36:c1:3b", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 19 08:10:10.748405 containerd[1576]: 2025-08-19 08:10:10.743 [INFO][4345] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1ec01cc9aed24e58e0d1267ad9301aa8339e06348afffdab422e7705cc31e209" Namespace="kube-system" Pod="coredns-668d6bf9bc-zzdnh" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--zzdnh-eth0" Aug 19 08:10:10.756854 containerd[1576]: time="2025-08-19T08:10:10.756812300Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-47j5b,Uid:d91662d8-9a6c-4379-a701-086161e4c200,Namespace:calico-system,Attempt:0,} returns sandbox id \"8b2b4d7de95b34cadaf52b98989df1d55d7272eb03e61d43b031362d86195ca5\"" Aug 19 08:10:10.782821 containerd[1576]: time="2025-08-19T08:10:10.782770691Z" level=info msg="connecting to shim 1ec01cc9aed24e58e0d1267ad9301aa8339e06348afffdab422e7705cc31e209" address="unix:///run/containerd/s/9b227b29770151df415c0fa85cb15a2ef6f945e9e3ae1541fc5e63ac821973ab" namespace=k8s.io protocol=ttrpc version=3 Aug 19 08:10:10.816233 systemd[1]: Started cri-containerd-1ec01cc9aed24e58e0d1267ad9301aa8339e06348afffdab422e7705cc31e209.scope - libcontainer container 1ec01cc9aed24e58e0d1267ad9301aa8339e06348afffdab422e7705cc31e209. 
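In the coredns WorkloadEndpoint dumps above, the container ports appear as Go hex literals: Port:0x35 is 53 (the "dns" and "dns-tcp" ports) and Port:0x23c1 is 9153, CoreDNS's Prometheus metrics port. A trivial snippet confirming the conversion:

// Ports in the endpoint dumps above are printed as hex literals; decoded values below.
package main

import "fmt"

func main() {
	fmt.Println(0x35)   // 53   -> "dns" (UDP) and "dns-tcp" (TCP)
	fmt.Println(0x23c1) // 9153 -> "metrics" (CoreDNS Prometheus endpoint)
}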
Aug 19 08:10:10.831203 systemd-resolved[1413]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 19 08:10:10.872066 containerd[1576]: time="2025-08-19T08:10:10.872006409Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-zzdnh,Uid:488bf5c5-4283-4168-b2e0-bcfa8eee4b04,Namespace:kube-system,Attempt:0,} returns sandbox id \"1ec01cc9aed24e58e0d1267ad9301aa8339e06348afffdab422e7705cc31e209\"" Aug 19 08:10:10.875037 containerd[1576]: time="2025-08-19T08:10:10.875003563Z" level=info msg="CreateContainer within sandbox \"1ec01cc9aed24e58e0d1267ad9301aa8339e06348afffdab422e7705cc31e209\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 19 08:10:11.072041 containerd[1576]: time="2025-08-19T08:10:11.071975613Z" level=info msg="Container 6be76ba25d93c3108040ff9d42abfd04cbe1f7041900b301bc91b608559726f2: CDI devices from CRI Config.CDIDevices: []" Aug 19 08:10:11.093424 containerd[1576]: time="2025-08-19T08:10:11.093348446Z" level=info msg="CreateContainer within sandbox \"1ec01cc9aed24e58e0d1267ad9301aa8339e06348afffdab422e7705cc31e209\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6be76ba25d93c3108040ff9d42abfd04cbe1f7041900b301bc91b608559726f2\"" Aug 19 08:10:11.094168 containerd[1576]: time="2025-08-19T08:10:11.093967114Z" level=info msg="StartContainer for \"6be76ba25d93c3108040ff9d42abfd04cbe1f7041900b301bc91b608559726f2\"" Aug 19 08:10:11.103192 containerd[1576]: time="2025-08-19T08:10:11.103111847Z" level=info msg="connecting to shim 6be76ba25d93c3108040ff9d42abfd04cbe1f7041900b301bc91b608559726f2" address="unix:///run/containerd/s/9b227b29770151df415c0fa85cb15a2ef6f945e9e3ae1541fc5e63ac821973ab" protocol=ttrpc version=3 Aug 19 08:10:11.132249 systemd[1]: Started cri-containerd-6be76ba25d93c3108040ff9d42abfd04cbe1f7041900b301bc91b608559726f2.scope - libcontainer container 6be76ba25d93c3108040ff9d42abfd04cbe1f7041900b301bc91b608559726f2. 
Aug 19 08:10:11.169792 containerd[1576]: time="2025-08-19T08:10:11.169734074Z" level=info msg="StartContainer for \"6be76ba25d93c3108040ff9d42abfd04cbe1f7041900b301bc91b608559726f2\" returns successfully" Aug 19 08:10:11.413737 containerd[1576]: time="2025-08-19T08:10:11.413577297Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-m2hsn,Uid:229c0d08-aefa-4837-a92c-1bf8f9651595,Namespace:kube-system,Attempt:0,}" Aug 19 08:10:11.414214 containerd[1576]: time="2025-08-19T08:10:11.413599428Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-566fd69f-bjbqv,Uid:bfa76174-249d-4f90-a6f0-cc4f856fe9e2,Namespace:calico-apiserver,Attempt:0,}" Aug 19 08:10:11.551766 systemd-networkd[1479]: cali8e279035634: Link UP Aug 19 08:10:11.552082 systemd-networkd[1479]: cali8e279035634: Gained carrier Aug 19 08:10:11.567258 containerd[1576]: 2025-08-19 08:10:11.471 [INFO][4540] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--m2hsn-eth0 coredns-668d6bf9bc- kube-system 229c0d08-aefa-4837-a92c-1bf8f9651595 799 0 2025-08-19 08:09:31 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-m2hsn eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali8e279035634 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="f0ffdca0a2cbd0f4af29f35abab5c57a874c931e4e1998696cb7f82f58a32f76" Namespace="kube-system" Pod="coredns-668d6bf9bc-m2hsn" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--m2hsn-" Aug 19 08:10:11.567258 containerd[1576]: 2025-08-19 08:10:11.472 [INFO][4540] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f0ffdca0a2cbd0f4af29f35abab5c57a874c931e4e1998696cb7f82f58a32f76" Namespace="kube-system" Pod="coredns-668d6bf9bc-m2hsn" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--m2hsn-eth0" Aug 19 08:10:11.567258 containerd[1576]: 2025-08-19 08:10:11.508 [INFO][4568] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f0ffdca0a2cbd0f4af29f35abab5c57a874c931e4e1998696cb7f82f58a32f76" HandleID="k8s-pod-network.f0ffdca0a2cbd0f4af29f35abab5c57a874c931e4e1998696cb7f82f58a32f76" Workload="localhost-k8s-coredns--668d6bf9bc--m2hsn-eth0" Aug 19 08:10:11.567258 containerd[1576]: 2025-08-19 08:10:11.508 [INFO][4568] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f0ffdca0a2cbd0f4af29f35abab5c57a874c931e4e1998696cb7f82f58a32f76" HandleID="k8s-pod-network.f0ffdca0a2cbd0f4af29f35abab5c57a874c931e4e1998696cb7f82f58a32f76" Workload="localhost-k8s-coredns--668d6bf9bc--m2hsn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c7160), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-m2hsn", "timestamp":"2025-08-19 08:10:11.50803796 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 19 08:10:11.567258 containerd[1576]: 2025-08-19 08:10:11.508 [INFO][4568] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 19 08:10:11.567258 containerd[1576]: 2025-08-19 08:10:11.508 [INFO][4568] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 19 08:10:11.567258 containerd[1576]: 2025-08-19 08:10:11.508 [INFO][4568] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 19 08:10:11.567258 containerd[1576]: 2025-08-19 08:10:11.515 [INFO][4568] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f0ffdca0a2cbd0f4af29f35abab5c57a874c931e4e1998696cb7f82f58a32f76" host="localhost" Aug 19 08:10:11.567258 containerd[1576]: 2025-08-19 08:10:11.519 [INFO][4568] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Aug 19 08:10:11.567258 containerd[1576]: 2025-08-19 08:10:11.526 [INFO][4568] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Aug 19 08:10:11.567258 containerd[1576]: 2025-08-19 08:10:11.528 [INFO][4568] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 19 08:10:11.567258 containerd[1576]: 2025-08-19 08:10:11.531 [INFO][4568] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 19 08:10:11.567258 containerd[1576]: 2025-08-19 08:10:11.531 [INFO][4568] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f0ffdca0a2cbd0f4af29f35abab5c57a874c931e4e1998696cb7f82f58a32f76" host="localhost" Aug 19 08:10:11.567258 containerd[1576]: 2025-08-19 08:10:11.533 [INFO][4568] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.f0ffdca0a2cbd0f4af29f35abab5c57a874c931e4e1998696cb7f82f58a32f76 Aug 19 08:10:11.567258 containerd[1576]: 2025-08-19 08:10:11.538 [INFO][4568] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f0ffdca0a2cbd0f4af29f35abab5c57a874c931e4e1998696cb7f82f58a32f76" host="localhost" Aug 19 08:10:11.567258 containerd[1576]: 2025-08-19 08:10:11.544 [INFO][4568] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.f0ffdca0a2cbd0f4af29f35abab5c57a874c931e4e1998696cb7f82f58a32f76" host="localhost" Aug 19 08:10:11.567258 containerd[1576]: 2025-08-19 08:10:11.544 [INFO][4568] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.f0ffdca0a2cbd0f4af29f35abab5c57a874c931e4e1998696cb7f82f58a32f76" host="localhost" Aug 19 08:10:11.567258 containerd[1576]: 2025-08-19 08:10:11.544 [INFO][4568] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 19 08:10:11.567258 containerd[1576]: 2025-08-19 08:10:11.544 [INFO][4568] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="f0ffdca0a2cbd0f4af29f35abab5c57a874c931e4e1998696cb7f82f58a32f76" HandleID="k8s-pod-network.f0ffdca0a2cbd0f4af29f35abab5c57a874c931e4e1998696cb7f82f58a32f76" Workload="localhost-k8s-coredns--668d6bf9bc--m2hsn-eth0" Aug 19 08:10:11.569553 containerd[1576]: 2025-08-19 08:10:11.547 [INFO][4540] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f0ffdca0a2cbd0f4af29f35abab5c57a874c931e4e1998696cb7f82f58a32f76" Namespace="kube-system" Pod="coredns-668d6bf9bc-m2hsn" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--m2hsn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--m2hsn-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"229c0d08-aefa-4837-a92c-1bf8f9651595", ResourceVersion:"799", Generation:0, CreationTimestamp:time.Date(2025, time.August, 19, 8, 9, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-m2hsn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8e279035634", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 19 08:10:11.569553 containerd[1576]: 2025-08-19 08:10:11.547 [INFO][4540] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="f0ffdca0a2cbd0f4af29f35abab5c57a874c931e4e1998696cb7f82f58a32f76" Namespace="kube-system" Pod="coredns-668d6bf9bc-m2hsn" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--m2hsn-eth0" Aug 19 08:10:11.569553 containerd[1576]: 2025-08-19 08:10:11.548 [INFO][4540] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8e279035634 ContainerID="f0ffdca0a2cbd0f4af29f35abab5c57a874c931e4e1998696cb7f82f58a32f76" Namespace="kube-system" Pod="coredns-668d6bf9bc-m2hsn" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--m2hsn-eth0" Aug 19 08:10:11.569553 containerd[1576]: 2025-08-19 08:10:11.551 [INFO][4540] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f0ffdca0a2cbd0f4af29f35abab5c57a874c931e4e1998696cb7f82f58a32f76" Namespace="kube-system" Pod="coredns-668d6bf9bc-m2hsn" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--m2hsn-eth0" Aug 19 08:10:11.569553 
containerd[1576]: 2025-08-19 08:10:11.552 [INFO][4540] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f0ffdca0a2cbd0f4af29f35abab5c57a874c931e4e1998696cb7f82f58a32f76" Namespace="kube-system" Pod="coredns-668d6bf9bc-m2hsn" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--m2hsn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--m2hsn-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"229c0d08-aefa-4837-a92c-1bf8f9651595", ResourceVersion:"799", Generation:0, CreationTimestamp:time.Date(2025, time.August, 19, 8, 9, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f0ffdca0a2cbd0f4af29f35abab5c57a874c931e4e1998696cb7f82f58a32f76", Pod:"coredns-668d6bf9bc-m2hsn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8e279035634", MAC:"06:9f:a8:14:06:ed", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 19 08:10:11.569553 containerd[1576]: 2025-08-19 08:10:11.562 [INFO][4540] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f0ffdca0a2cbd0f4af29f35abab5c57a874c931e4e1998696cb7f82f58a32f76" Namespace="kube-system" Pod="coredns-668d6bf9bc-m2hsn" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--m2hsn-eth0" Aug 19 08:10:11.604223 containerd[1576]: time="2025-08-19T08:10:11.603539454Z" level=info msg="connecting to shim f0ffdca0a2cbd0f4af29f35abab5c57a874c931e4e1998696cb7f82f58a32f76" address="unix:///run/containerd/s/556e01c04114fcdcd3feb3c392f2f100c6d5529253a9908b13a323b034a21d34" namespace=k8s.io protocol=ttrpc version=3 Aug 19 08:10:11.625410 kubelet[2717]: I0819 08:10:11.624895 2717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-zzdnh" podStartSLOduration=40.624873523 podStartE2EDuration="40.624873523s" podCreationTimestamp="2025-08-19 08:09:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-19 08:10:11.606070055 +0000 UTC m=+46.275749700" watchObservedRunningTime="2025-08-19 08:10:11.624873523 +0000 UTC m=+46.294553138" Aug 19 08:10:11.658165 systemd[1]: Started cri-containerd-f0ffdca0a2cbd0f4af29f35abab5c57a874c931e4e1998696cb7f82f58a32f76.scope - libcontainer 
container f0ffdca0a2cbd0f4af29f35abab5c57a874c931e4e1998696cb7f82f58a32f76. Aug 19 08:10:11.673777 systemd-resolved[1413]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 19 08:10:11.687667 systemd-networkd[1479]: cali573bde36933: Link UP Aug 19 08:10:11.687914 systemd-networkd[1479]: cali573bde36933: Gained carrier Aug 19 08:10:11.706398 containerd[1576]: 2025-08-19 08:10:11.487 [INFO][4551] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--566fd69f--bjbqv-eth0 calico-apiserver-566fd69f- calico-apiserver bfa76174-249d-4f90-a6f0-cc4f856fe9e2 805 0 2025-08-19 08:09:41 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:566fd69f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-566fd69f-bjbqv eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali573bde36933 [] [] }} ContainerID="e997205804b29f310f6a15b27a24e3d4754bcb9d2a835aca84a223001c034c7a" Namespace="calico-apiserver" Pod="calico-apiserver-566fd69f-bjbqv" WorkloadEndpoint="localhost-k8s-calico--apiserver--566fd69f--bjbqv-" Aug 19 08:10:11.706398 containerd[1576]: 2025-08-19 08:10:11.488 [INFO][4551] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e997205804b29f310f6a15b27a24e3d4754bcb9d2a835aca84a223001c034c7a" Namespace="calico-apiserver" Pod="calico-apiserver-566fd69f-bjbqv" WorkloadEndpoint="localhost-k8s-calico--apiserver--566fd69f--bjbqv-eth0" Aug 19 08:10:11.706398 containerd[1576]: 2025-08-19 08:10:11.530 [INFO][4579] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e997205804b29f310f6a15b27a24e3d4754bcb9d2a835aca84a223001c034c7a" HandleID="k8s-pod-network.e997205804b29f310f6a15b27a24e3d4754bcb9d2a835aca84a223001c034c7a" Workload="localhost-k8s-calico--apiserver--566fd69f--bjbqv-eth0" Aug 19 08:10:11.706398 containerd[1576]: 2025-08-19 08:10:11.531 [INFO][4579] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e997205804b29f310f6a15b27a24e3d4754bcb9d2a835aca84a223001c034c7a" HandleID="k8s-pod-network.e997205804b29f310f6a15b27a24e3d4754bcb9d2a835aca84a223001c034c7a" Workload="localhost-k8s-calico--apiserver--566fd69f--bjbqv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002e12d0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-566fd69f-bjbqv", "timestamp":"2025-08-19 08:10:11.530917896 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 19 08:10:11.706398 containerd[1576]: 2025-08-19 08:10:11.531 [INFO][4579] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 19 08:10:11.706398 containerd[1576]: 2025-08-19 08:10:11.545 [INFO][4579] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 19 08:10:11.706398 containerd[1576]: 2025-08-19 08:10:11.545 [INFO][4579] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 19 08:10:11.706398 containerd[1576]: 2025-08-19 08:10:11.617 [INFO][4579] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e997205804b29f310f6a15b27a24e3d4754bcb9d2a835aca84a223001c034c7a" host="localhost" Aug 19 08:10:11.706398 containerd[1576]: 2025-08-19 08:10:11.642 [INFO][4579] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Aug 19 08:10:11.706398 containerd[1576]: 2025-08-19 08:10:11.656 [INFO][4579] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Aug 19 08:10:11.706398 containerd[1576]: 2025-08-19 08:10:11.658 [INFO][4579] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 19 08:10:11.706398 containerd[1576]: 2025-08-19 08:10:11.662 [INFO][4579] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 19 08:10:11.706398 containerd[1576]: 2025-08-19 08:10:11.662 [INFO][4579] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e997205804b29f310f6a15b27a24e3d4754bcb9d2a835aca84a223001c034c7a" host="localhost" Aug 19 08:10:11.706398 containerd[1576]: 2025-08-19 08:10:11.664 [INFO][4579] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.e997205804b29f310f6a15b27a24e3d4754bcb9d2a835aca84a223001c034c7a Aug 19 08:10:11.706398 containerd[1576]: 2025-08-19 08:10:11.669 [INFO][4579] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e997205804b29f310f6a15b27a24e3d4754bcb9d2a835aca84a223001c034c7a" host="localhost" Aug 19 08:10:11.706398 containerd[1576]: 2025-08-19 08:10:11.678 [INFO][4579] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.e997205804b29f310f6a15b27a24e3d4754bcb9d2a835aca84a223001c034c7a" host="localhost" Aug 19 08:10:11.706398 containerd[1576]: 2025-08-19 08:10:11.679 [INFO][4579] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.e997205804b29f310f6a15b27a24e3d4754bcb9d2a835aca84a223001c034c7a" host="localhost" Aug 19 08:10:11.706398 containerd[1576]: 2025-08-19 08:10:11.679 [INFO][4579] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 19 08:10:11.706398 containerd[1576]: 2025-08-19 08:10:11.679 [INFO][4579] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="e997205804b29f310f6a15b27a24e3d4754bcb9d2a835aca84a223001c034c7a" HandleID="k8s-pod-network.e997205804b29f310f6a15b27a24e3d4754bcb9d2a835aca84a223001c034c7a" Workload="localhost-k8s-calico--apiserver--566fd69f--bjbqv-eth0" Aug 19 08:10:11.707378 containerd[1576]: 2025-08-19 08:10:11.684 [INFO][4551] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e997205804b29f310f6a15b27a24e3d4754bcb9d2a835aca84a223001c034c7a" Namespace="calico-apiserver" Pod="calico-apiserver-566fd69f-bjbqv" WorkloadEndpoint="localhost-k8s-calico--apiserver--566fd69f--bjbqv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--566fd69f--bjbqv-eth0", GenerateName:"calico-apiserver-566fd69f-", Namespace:"calico-apiserver", SelfLink:"", UID:"bfa76174-249d-4f90-a6f0-cc4f856fe9e2", ResourceVersion:"805", Generation:0, CreationTimestamp:time.Date(2025, time.August, 19, 8, 9, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"566fd69f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-566fd69f-bjbqv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali573bde36933", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 19 08:10:11.707378 containerd[1576]: 2025-08-19 08:10:11.684 [INFO][4551] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="e997205804b29f310f6a15b27a24e3d4754bcb9d2a835aca84a223001c034c7a" Namespace="calico-apiserver" Pod="calico-apiserver-566fd69f-bjbqv" WorkloadEndpoint="localhost-k8s-calico--apiserver--566fd69f--bjbqv-eth0" Aug 19 08:10:11.707378 containerd[1576]: 2025-08-19 08:10:11.684 [INFO][4551] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali573bde36933 ContainerID="e997205804b29f310f6a15b27a24e3d4754bcb9d2a835aca84a223001c034c7a" Namespace="calico-apiserver" Pod="calico-apiserver-566fd69f-bjbqv" WorkloadEndpoint="localhost-k8s-calico--apiserver--566fd69f--bjbqv-eth0" Aug 19 08:10:11.707378 containerd[1576]: 2025-08-19 08:10:11.689 [INFO][4551] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e997205804b29f310f6a15b27a24e3d4754bcb9d2a835aca84a223001c034c7a" Namespace="calico-apiserver" Pod="calico-apiserver-566fd69f-bjbqv" WorkloadEndpoint="localhost-k8s-calico--apiserver--566fd69f--bjbqv-eth0" Aug 19 08:10:11.707378 containerd[1576]: 2025-08-19 08:10:11.689 [INFO][4551] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="e997205804b29f310f6a15b27a24e3d4754bcb9d2a835aca84a223001c034c7a" Namespace="calico-apiserver" Pod="calico-apiserver-566fd69f-bjbqv" WorkloadEndpoint="localhost-k8s-calico--apiserver--566fd69f--bjbqv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--566fd69f--bjbqv-eth0", GenerateName:"calico-apiserver-566fd69f-", Namespace:"calico-apiserver", SelfLink:"", UID:"bfa76174-249d-4f90-a6f0-cc4f856fe9e2", ResourceVersion:"805", Generation:0, CreationTimestamp:time.Date(2025, time.August, 19, 8, 9, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"566fd69f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e997205804b29f310f6a15b27a24e3d4754bcb9d2a835aca84a223001c034c7a", Pod:"calico-apiserver-566fd69f-bjbqv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali573bde36933", MAC:"a2:47:45:be:6a:e2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 19 08:10:11.707378 containerd[1576]: 2025-08-19 08:10:11.700 [INFO][4551] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e997205804b29f310f6a15b27a24e3d4754bcb9d2a835aca84a223001c034c7a" Namespace="calico-apiserver" Pod="calico-apiserver-566fd69f-bjbqv" WorkloadEndpoint="localhost-k8s-calico--apiserver--566fd69f--bjbqv-eth0" Aug 19 08:10:11.731236 containerd[1576]: time="2025-08-19T08:10:11.731183506Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-m2hsn,Uid:229c0d08-aefa-4837-a92c-1bf8f9651595,Namespace:kube-system,Attempt:0,} returns sandbox id \"f0ffdca0a2cbd0f4af29f35abab5c57a874c931e4e1998696cb7f82f58a32f76\"" Aug 19 08:10:11.741203 containerd[1576]: time="2025-08-19T08:10:11.741151531Z" level=info msg="CreateContainer within sandbox \"f0ffdca0a2cbd0f4af29f35abab5c57a874c931e4e1998696cb7f82f58a32f76\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 19 08:10:11.769806 containerd[1576]: time="2025-08-19T08:10:11.769739120Z" level=info msg="connecting to shim e997205804b29f310f6a15b27a24e3d4754bcb9d2a835aca84a223001c034c7a" address="unix:///run/containerd/s/b604789d9454c16a98ca5d52be6096e4a6ebcbd0697cd18fba300ea288d06db6" namespace=k8s.io protocol=ttrpc version=3 Aug 19 08:10:11.802484 containerd[1576]: time="2025-08-19T08:10:11.802422272Z" level=info msg="Container a50dc16e95a7b50452132f799ff92bb72c257246825a754778edbc5960122d2f: CDI devices from CRI Config.CDIDevices: []" Aug 19 08:10:11.808143 systemd[1]: Started cri-containerd-e997205804b29f310f6a15b27a24e3d4754bcb9d2a835aca84a223001c034c7a.scope - libcontainer container e997205804b29f310f6a15b27a24e3d4754bcb9d2a835aca84a223001c034c7a. 
Aug 19 08:10:11.823728 systemd-resolved[1413]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 19 08:10:11.943162 containerd[1576]: time="2025-08-19T08:10:11.942212069Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-566fd69f-bjbqv,Uid:bfa76174-249d-4f90-a6f0-cc4f856fe9e2,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"e997205804b29f310f6a15b27a24e3d4754bcb9d2a835aca84a223001c034c7a\"" Aug 19 08:10:11.943162 containerd[1576]: time="2025-08-19T08:10:11.942515839Z" level=info msg="CreateContainer within sandbox \"f0ffdca0a2cbd0f4af29f35abab5c57a874c931e4e1998696cb7f82f58a32f76\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a50dc16e95a7b50452132f799ff92bb72c257246825a754778edbc5960122d2f\"" Aug 19 08:10:11.943376 kubelet[2717]: I0819 08:10:11.942876 2717 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 19 08:10:11.945167 containerd[1576]: time="2025-08-19T08:10:11.945096734Z" level=info msg="StartContainer for \"a50dc16e95a7b50452132f799ff92bb72c257246825a754778edbc5960122d2f\"" Aug 19 08:10:11.946455 containerd[1576]: time="2025-08-19T08:10:11.946417369Z" level=info msg="connecting to shim a50dc16e95a7b50452132f799ff92bb72c257246825a754778edbc5960122d2f" address="unix:///run/containerd/s/556e01c04114fcdcd3feb3c392f2f100c6d5529253a9908b13a323b034a21d34" protocol=ttrpc version=3 Aug 19 08:10:11.947992 systemd-networkd[1479]: calic5313ab3a2b: Gained IPv6LL Aug 19 08:10:11.978114 systemd[1]: Started cri-containerd-a50dc16e95a7b50452132f799ff92bb72c257246825a754778edbc5960122d2f.scope - libcontainer container a50dc16e95a7b50452132f799ff92bb72c257246825a754778edbc5960122d2f. Aug 19 08:10:12.012366 systemd-networkd[1479]: cali964ff875498: Gained IPv6LL Aug 19 08:10:12.047239 containerd[1576]: time="2025-08-19T08:10:12.047165214Z" level=info msg="StartContainer for \"a50dc16e95a7b50452132f799ff92bb72c257246825a754778edbc5960122d2f\" returns successfully" Aug 19 08:10:12.128806 containerd[1576]: time="2025-08-19T08:10:12.128740091Z" level=info msg="TaskExit event in podsandbox handler container_id:\"84f66f4169a6ae6aeaa76c03346c46fb70983f8946ecf4c6db1975db4586526a\" id:\"845e13dcc21fa91b13e7a93f829e26336d7402a21a047309320edde3801080c2\" pid:4744 exited_at:{seconds:1755591012 nanos:128333689}" Aug 19 08:10:12.286795 containerd[1576]: time="2025-08-19T08:10:12.286606417Z" level=info msg="TaskExit event in podsandbox handler container_id:\"84f66f4169a6ae6aeaa76c03346c46fb70983f8946ecf4c6db1975db4586526a\" id:\"427abfd356ad7e6d82b91a90e2f5bb86b5958b742d7278af8b925c37fb06b09b\" pid:4778 exited_at:{seconds:1755591012 nanos:285999420}" Aug 19 08:10:12.414113 containerd[1576]: time="2025-08-19T08:10:12.414057997Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-566fd69f-m7tq2,Uid:aa8ce407-e0f9-465d-bd7b-80a38f51f364,Namespace:calico-apiserver,Attempt:0,}" Aug 19 08:10:12.546016 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3596385715.mount: Deactivated successfully. 
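The containerd entries here show the CRI-level lifecycle that produced the coredns and calico-apiserver pods: RunPodSandbox returns a sandbox ID, CreateContainer creates a container inside it, StartContainer runs it, and each shim is reached over a ttrpc socket under /run/containerd/s/. A hedged Go sketch of inspecting the same state follows; the socket path (/run/containerd/containerd.sock, containerd's default) is an assumption, and the client comes from the k8s.io/cri-api runtime v1 package.

```go
// Sketch: list the pod sandboxes created by the RunPodSandbox calls above by
// querying containerd's CRI endpoint directly. The socket path is an assumption
// (containerd's default); error handling is kept minimal for brevity.
package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	conn, err := grpc.DialContext(ctx, "unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	resp, err := rt.ListPodSandbox(ctx, &runtimeapi.ListPodSandboxRequest{})
	if err != nil {
		panic(err)
	}
	for _, sb := range resp.Items {
		// e.g. f0ffdca0a2cb... kube-system/coredns-668d6bf9bc-m2hsn SANDBOX_READY
		fmt.Printf("%s %s/%s %s\n", sb.Id, sb.Metadata.Namespace, sb.Metadata.Name, sb.State)
	}
}
```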
Aug 19 08:10:12.851628 kubelet[2717]: I0819 08:10:12.851107 2717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-m2hsn" podStartSLOduration=41.851084533 podStartE2EDuration="41.851084533s" podCreationTimestamp="2025-08-19 08:09:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-19 08:10:12.850999413 +0000 UTC m=+47.520679048" watchObservedRunningTime="2025-08-19 08:10:12.851084533 +0000 UTC m=+47.520764158" Aug 19 08:10:12.907254 systemd-networkd[1479]: cali573bde36933: Gained IPv6LL Aug 19 08:10:12.985177 systemd-networkd[1479]: cali8e279035634: Gained IPv6LL Aug 19 08:10:13.154284 systemd-networkd[1479]: cali27f741f8bfa: Link UP Aug 19 08:10:13.155372 systemd-networkd[1479]: cali27f741f8bfa: Gained carrier Aug 19 08:10:13.247379 containerd[1576]: 2025-08-19 08:10:12.928 [INFO][4795] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--566fd69f--m7tq2-eth0 calico-apiserver-566fd69f- calico-apiserver aa8ce407-e0f9-465d-bd7b-80a38f51f364 804 0 2025-08-19 08:09:41 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:566fd69f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-566fd69f-m7tq2 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali27f741f8bfa [] [] }} ContainerID="8337d7f1f4f9ae74cd29aad24a72a4392d046ebcc01e4121817f7cee1c5ffdc1" Namespace="calico-apiserver" Pod="calico-apiserver-566fd69f-m7tq2" WorkloadEndpoint="localhost-k8s-calico--apiserver--566fd69f--m7tq2-" Aug 19 08:10:13.247379 containerd[1576]: 2025-08-19 08:10:12.929 [INFO][4795] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8337d7f1f4f9ae74cd29aad24a72a4392d046ebcc01e4121817f7cee1c5ffdc1" Namespace="calico-apiserver" Pod="calico-apiserver-566fd69f-m7tq2" WorkloadEndpoint="localhost-k8s-calico--apiserver--566fd69f--m7tq2-eth0" Aug 19 08:10:13.247379 containerd[1576]: 2025-08-19 08:10:12.997 [INFO][4812] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8337d7f1f4f9ae74cd29aad24a72a4392d046ebcc01e4121817f7cee1c5ffdc1" HandleID="k8s-pod-network.8337d7f1f4f9ae74cd29aad24a72a4392d046ebcc01e4121817f7cee1c5ffdc1" Workload="localhost-k8s-calico--apiserver--566fd69f--m7tq2-eth0" Aug 19 08:10:13.247379 containerd[1576]: 2025-08-19 08:10:12.997 [INFO][4812] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8337d7f1f4f9ae74cd29aad24a72a4392d046ebcc01e4121817f7cee1c5ffdc1" HandleID="k8s-pod-network.8337d7f1f4f9ae74cd29aad24a72a4392d046ebcc01e4121817f7cee1c5ffdc1" Workload="localhost-k8s-calico--apiserver--566fd69f--m7tq2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00026c430), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-566fd69f-m7tq2", "timestamp":"2025-08-19 08:10:12.997392252 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 19 08:10:13.247379 containerd[1576]: 2025-08-19 08:10:12.997 [INFO][4812] ipam/ipam_plugin.go 353: About to acquire 
host-wide IPAM lock. Aug 19 08:10:13.247379 containerd[1576]: 2025-08-19 08:10:12.997 [INFO][4812] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 19 08:10:13.247379 containerd[1576]: 2025-08-19 08:10:12.997 [INFO][4812] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 19 08:10:13.247379 containerd[1576]: 2025-08-19 08:10:13.006 [INFO][4812] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8337d7f1f4f9ae74cd29aad24a72a4392d046ebcc01e4121817f7cee1c5ffdc1" host="localhost" Aug 19 08:10:13.247379 containerd[1576]: 2025-08-19 08:10:13.011 [INFO][4812] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Aug 19 08:10:13.247379 containerd[1576]: 2025-08-19 08:10:13.015 [INFO][4812] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Aug 19 08:10:13.247379 containerd[1576]: 2025-08-19 08:10:13.017 [INFO][4812] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 19 08:10:13.247379 containerd[1576]: 2025-08-19 08:10:13.019 [INFO][4812] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 19 08:10:13.247379 containerd[1576]: 2025-08-19 08:10:13.019 [INFO][4812] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8337d7f1f4f9ae74cd29aad24a72a4392d046ebcc01e4121817f7cee1c5ffdc1" host="localhost" Aug 19 08:10:13.247379 containerd[1576]: 2025-08-19 08:10:13.021 [INFO][4812] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.8337d7f1f4f9ae74cd29aad24a72a4392d046ebcc01e4121817f7cee1c5ffdc1 Aug 19 08:10:13.247379 containerd[1576]: 2025-08-19 08:10:13.050 [INFO][4812] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8337d7f1f4f9ae74cd29aad24a72a4392d046ebcc01e4121817f7cee1c5ffdc1" host="localhost" Aug 19 08:10:13.247379 containerd[1576]: 2025-08-19 08:10:13.146 [INFO][4812] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.8337d7f1f4f9ae74cd29aad24a72a4392d046ebcc01e4121817f7cee1c5ffdc1" host="localhost" Aug 19 08:10:13.247379 containerd[1576]: 2025-08-19 08:10:13.146 [INFO][4812] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.8337d7f1f4f9ae74cd29aad24a72a4392d046ebcc01e4121817f7cee1c5ffdc1" host="localhost" Aug 19 08:10:13.247379 containerd[1576]: 2025-08-19 08:10:13.146 [INFO][4812] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 19 08:10:13.247379 containerd[1576]: 2025-08-19 08:10:13.146 [INFO][4812] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="8337d7f1f4f9ae74cd29aad24a72a4392d046ebcc01e4121817f7cee1c5ffdc1" HandleID="k8s-pod-network.8337d7f1f4f9ae74cd29aad24a72a4392d046ebcc01e4121817f7cee1c5ffdc1" Workload="localhost-k8s-calico--apiserver--566fd69f--m7tq2-eth0" Aug 19 08:10:13.248005 containerd[1576]: 2025-08-19 08:10:13.149 [INFO][4795] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8337d7f1f4f9ae74cd29aad24a72a4392d046ebcc01e4121817f7cee1c5ffdc1" Namespace="calico-apiserver" Pod="calico-apiserver-566fd69f-m7tq2" WorkloadEndpoint="localhost-k8s-calico--apiserver--566fd69f--m7tq2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--566fd69f--m7tq2-eth0", GenerateName:"calico-apiserver-566fd69f-", Namespace:"calico-apiserver", SelfLink:"", UID:"aa8ce407-e0f9-465d-bd7b-80a38f51f364", ResourceVersion:"804", Generation:0, CreationTimestamp:time.Date(2025, time.August, 19, 8, 9, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"566fd69f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-566fd69f-m7tq2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali27f741f8bfa", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 19 08:10:13.248005 containerd[1576]: 2025-08-19 08:10:13.150 [INFO][4795] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="8337d7f1f4f9ae74cd29aad24a72a4392d046ebcc01e4121817f7cee1c5ffdc1" Namespace="calico-apiserver" Pod="calico-apiserver-566fd69f-m7tq2" WorkloadEndpoint="localhost-k8s-calico--apiserver--566fd69f--m7tq2-eth0" Aug 19 08:10:13.248005 containerd[1576]: 2025-08-19 08:10:13.150 [INFO][4795] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali27f741f8bfa ContainerID="8337d7f1f4f9ae74cd29aad24a72a4392d046ebcc01e4121817f7cee1c5ffdc1" Namespace="calico-apiserver" Pod="calico-apiserver-566fd69f-m7tq2" WorkloadEndpoint="localhost-k8s-calico--apiserver--566fd69f--m7tq2-eth0" Aug 19 08:10:13.248005 containerd[1576]: 2025-08-19 08:10:13.155 [INFO][4795] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8337d7f1f4f9ae74cd29aad24a72a4392d046ebcc01e4121817f7cee1c5ffdc1" Namespace="calico-apiserver" Pod="calico-apiserver-566fd69f-m7tq2" WorkloadEndpoint="localhost-k8s-calico--apiserver--566fd69f--m7tq2-eth0" Aug 19 08:10:13.248005 containerd[1576]: 2025-08-19 08:10:13.155 [INFO][4795] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="8337d7f1f4f9ae74cd29aad24a72a4392d046ebcc01e4121817f7cee1c5ffdc1" Namespace="calico-apiserver" Pod="calico-apiserver-566fd69f-m7tq2" WorkloadEndpoint="localhost-k8s-calico--apiserver--566fd69f--m7tq2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--566fd69f--m7tq2-eth0", GenerateName:"calico-apiserver-566fd69f-", Namespace:"calico-apiserver", SelfLink:"", UID:"aa8ce407-e0f9-465d-bd7b-80a38f51f364", ResourceVersion:"804", Generation:0, CreationTimestamp:time.Date(2025, time.August, 19, 8, 9, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"566fd69f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8337d7f1f4f9ae74cd29aad24a72a4392d046ebcc01e4121817f7cee1c5ffdc1", Pod:"calico-apiserver-566fd69f-m7tq2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali27f741f8bfa", MAC:"66:cb:e7:ae:a9:b9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 19 08:10:13.248005 containerd[1576]: 2025-08-19 08:10:13.242 [INFO][4795] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8337d7f1f4f9ae74cd29aad24a72a4392d046ebcc01e4121817f7cee1c5ffdc1" Namespace="calico-apiserver" Pod="calico-apiserver-566fd69f-m7tq2" WorkloadEndpoint="localhost-k8s-calico--apiserver--566fd69f--m7tq2-eth0" Aug 19 08:10:13.800890 containerd[1576]: time="2025-08-19T08:10:13.800782170Z" level=info msg="connecting to shim 8337d7f1f4f9ae74cd29aad24a72a4392d046ebcc01e4121817f7cee1c5ffdc1" address="unix:///run/containerd/s/9ee9c04a47fef4dd8390928194dfcc123689290a9e582060abe22539c5d7b912" namespace=k8s.io protocol=ttrpc version=3 Aug 19 08:10:13.803886 containerd[1576]: time="2025-08-19T08:10:13.803821986Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:10:13.805089 containerd[1576]: time="2025-08-19T08:10:13.805069173Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=66352308" Aug 19 08:10:13.807976 containerd[1576]: time="2025-08-19T08:10:13.806354240Z" level=info msg="ImageCreate event name:\"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:10:13.809356 containerd[1576]: time="2025-08-19T08:10:13.809260014Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:10:13.810837 containerd[1576]: time="2025-08-19T08:10:13.810617167Z" level=info msg="Pulled image 
\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"66352154\" in 3.697861636s" Aug 19 08:10:13.810837 containerd[1576]: time="2025-08-19T08:10:13.810683973Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\"" Aug 19 08:10:13.813346 containerd[1576]: time="2025-08-19T08:10:13.813313930Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Aug 19 08:10:13.814410 containerd[1576]: time="2025-08-19T08:10:13.814379937Z" level=info msg="CreateContainer within sandbox \"4b54674df08a1828cdc762f3ac150b464745ad878bd320a9894ddaf7abc56f3f\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Aug 19 08:10:13.827289 containerd[1576]: time="2025-08-19T08:10:13.827241957Z" level=info msg="Container 6ae51c4eb690876cdc31984ee254d4d54c2de493d97237a32d23980a3474f53b: CDI devices from CRI Config.CDIDevices: []" Aug 19 08:10:13.830192 systemd[1]: Started cri-containerd-8337d7f1f4f9ae74cd29aad24a72a4392d046ebcc01e4121817f7cee1c5ffdc1.scope - libcontainer container 8337d7f1f4f9ae74cd29aad24a72a4392d046ebcc01e4121817f7cee1c5ffdc1. Aug 19 08:10:13.839062 containerd[1576]: time="2025-08-19T08:10:13.839007050Z" level=info msg="CreateContainer within sandbox \"4b54674df08a1828cdc762f3ac150b464745ad878bd320a9894ddaf7abc56f3f\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"6ae51c4eb690876cdc31984ee254d4d54c2de493d97237a32d23980a3474f53b\"" Aug 19 08:10:13.841002 containerd[1576]: time="2025-08-19T08:10:13.840970319Z" level=info msg="StartContainer for \"6ae51c4eb690876cdc31984ee254d4d54c2de493d97237a32d23980a3474f53b\"" Aug 19 08:10:13.842673 containerd[1576]: time="2025-08-19T08:10:13.842632643Z" level=info msg="connecting to shim 6ae51c4eb690876cdc31984ee254d4d54c2de493d97237a32d23980a3474f53b" address="unix:///run/containerd/s/b39498a49de32664d521398e0f88b4d6718887ecb26d0aee14bcdbafc16e5a80" protocol=ttrpc version=3 Aug 19 08:10:13.853061 systemd-resolved[1413]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 19 08:10:13.867192 systemd[1]: Started cri-containerd-6ae51c4eb690876cdc31984ee254d4d54c2de493d97237a32d23980a3474f53b.scope - libcontainer container 6ae51c4eb690876cdc31984ee254d4d54c2de493d97237a32d23980a3474f53b. Aug 19 08:10:14.016290 containerd[1576]: time="2025-08-19T08:10:14.016238290Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-566fd69f-m7tq2,Uid:aa8ce407-e0f9-465d-bd7b-80a38f51f364,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"8337d7f1f4f9ae74cd29aad24a72a4392d046ebcc01e4121817f7cee1c5ffdc1\"" Aug 19 08:10:14.022993 containerd[1576]: time="2025-08-19T08:10:14.022910322Z" level=info msg="StartContainer for \"6ae51c4eb690876cdc31984ee254d4d54c2de493d97237a32d23980a3474f53b\" returns successfully" Aug 19 08:10:14.164013 systemd[1]: Started sshd@8-10.0.0.50:22-10.0.0.1:34762.service - OpenSSH per-connection server daemon (10.0.0.1:34762). 
Aug 19 08:10:14.231692 sshd[4927]: Accepted publickey for core from 10.0.0.1 port 34762 ssh2: RSA SHA256:kecLVWRG1G7MHrHN/yG6X078KPWjs/jTMbEJqAmOzyM Aug 19 08:10:14.233423 sshd-session[4927]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 08:10:14.237721 systemd-logind[1555]: New session 9 of user core. Aug 19 08:10:14.247066 systemd[1]: Started session-9.scope - Session 9 of User core. Aug 19 08:10:14.379798 sshd[4930]: Connection closed by 10.0.0.1 port 34762 Aug 19 08:10:14.380183 sshd-session[4927]: pam_unix(sshd:session): session closed for user core Aug 19 08:10:14.384327 systemd[1]: sshd@8-10.0.0.50:22-10.0.0.1:34762.service: Deactivated successfully. Aug 19 08:10:14.386230 systemd[1]: session-9.scope: Deactivated successfully. Aug 19 08:10:14.387049 systemd-logind[1555]: Session 9 logged out. Waiting for processes to exit. Aug 19 08:10:14.388512 systemd-logind[1555]: Removed session 9. Aug 19 08:10:14.891162 systemd-networkd[1479]: cali27f741f8bfa: Gained IPv6LL Aug 19 08:10:15.707515 containerd[1576]: time="2025-08-19T08:10:15.707467093Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6ae51c4eb690876cdc31984ee254d4d54c2de493d97237a32d23980a3474f53b\" id:\"8ca801bf6e0ee742c8f22a202fac9b2299edcf7422ad636d0eeb703ddd015949\" pid:4958 exit_status:1 exited_at:{seconds:1755591015 nanos:707020345}" Aug 19 08:10:16.789837 containerd[1576]: time="2025-08-19T08:10:16.789743039Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:10:16.790864 containerd[1576]: time="2025-08-19T08:10:16.790809568Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=51276688" Aug 19 08:10:16.802202 containerd[1576]: time="2025-08-19T08:10:16.801588425Z" level=info msg="ImageCreate event name:\"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:10:16.805540 containerd[1576]: time="2025-08-19T08:10:16.805465170Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:10:16.806220 containerd[1576]: time="2025-08-19T08:10:16.806177194Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"52769359\" in 2.991880553s" Aug 19 08:10:16.806279 containerd[1576]: time="2025-08-19T08:10:16.806225985Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\"" Aug 19 08:10:16.808920 containerd[1576]: time="2025-08-19T08:10:16.807757174Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Aug 19 08:10:16.817600 containerd[1576]: time="2025-08-19T08:10:16.817536920Z" level=info msg="CreateContainer within sandbox \"3b1fb9013cf2991d343ec27c50b32c0c13852cb7f55053b3bfb2c9a849cda2b2\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Aug 19 08:10:16.830992 
containerd[1576]: time="2025-08-19T08:10:16.830693533Z" level=info msg="Container 168e692c78a120d2706a00802ff7005412568c371b9907e26458d4b38d8c7926: CDI devices from CRI Config.CDIDevices: []" Aug 19 08:10:16.841788 containerd[1576]: time="2025-08-19T08:10:16.841727639Z" level=info msg="CreateContainer within sandbox \"3b1fb9013cf2991d343ec27c50b32c0c13852cb7f55053b3bfb2c9a849cda2b2\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"168e692c78a120d2706a00802ff7005412568c371b9907e26458d4b38d8c7926\"" Aug 19 08:10:16.842305 containerd[1576]: time="2025-08-19T08:10:16.842279243Z" level=info msg="StartContainer for \"168e692c78a120d2706a00802ff7005412568c371b9907e26458d4b38d8c7926\"" Aug 19 08:10:16.844574 containerd[1576]: time="2025-08-19T08:10:16.844522366Z" level=info msg="connecting to shim 168e692c78a120d2706a00802ff7005412568c371b9907e26458d4b38d8c7926" address="unix:///run/containerd/s/85755e95843b80934d17c10cee05ab9d0214a52619cf0e747897a4207cfe83ec" protocol=ttrpc version=3 Aug 19 08:10:16.882164 systemd[1]: Started cri-containerd-168e692c78a120d2706a00802ff7005412568c371b9907e26458d4b38d8c7926.scope - libcontainer container 168e692c78a120d2706a00802ff7005412568c371b9907e26458d4b38d8c7926. Aug 19 08:10:16.900882 containerd[1576]: time="2025-08-19T08:10:16.900782187Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6ae51c4eb690876cdc31984ee254d4d54c2de493d97237a32d23980a3474f53b\" id:\"a9c4af2ca3c50502214cd453a2be21358d7e27f9f496f1caf170d5013a25d479\" pid:4987 exited_at:{seconds:1755591016 nanos:900422393}" Aug 19 08:10:16.916983 kubelet[2717]: I0819 08:10:16.916227 2717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-768f4c5c69-nklwx" podStartSLOduration=29.638318716 podStartE2EDuration="34.916210256s" podCreationTimestamp="2025-08-19 08:09:42 +0000 UTC" firstStartedPulling="2025-08-19 08:10:08.534375057 +0000 UTC m=+43.204054682" lastFinishedPulling="2025-08-19 08:10:13.812266597 +0000 UTC m=+48.481946222" observedRunningTime="2025-08-19 08:10:14.618051773 +0000 UTC m=+49.287731398" watchObservedRunningTime="2025-08-19 08:10:16.916210256 +0000 UTC m=+51.585889871" Aug 19 08:10:16.947745 containerd[1576]: time="2025-08-19T08:10:16.947679003Z" level=info msg="StartContainer for \"168e692c78a120d2706a00802ff7005412568c371b9907e26458d4b38d8c7926\" returns successfully" Aug 19 08:10:17.682976 kubelet[2717]: I0819 08:10:17.682881 2717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-568b5cdb8-ksv62" podStartSLOduration=26.562951016 podStartE2EDuration="34.68285641s" podCreationTimestamp="2025-08-19 08:09:43 +0000 UTC" firstStartedPulling="2025-08-19 08:10:08.687443816 +0000 UTC m=+43.357123441" lastFinishedPulling="2025-08-19 08:10:16.80734921 +0000 UTC m=+51.477028835" observedRunningTime="2025-08-19 08:10:17.682303183 +0000 UTC m=+52.351982818" watchObservedRunningTime="2025-08-19 08:10:17.68285641 +0000 UTC m=+52.352536035" Aug 19 08:10:18.664638 containerd[1576]: time="2025-08-19T08:10:18.664582532Z" level=info msg="TaskExit event in podsandbox handler container_id:\"168e692c78a120d2706a00802ff7005412568c371b9907e26458d4b38d8c7926\" id:\"fd35a0efabd02b9de15666d40fd0e1f483b2b610204786ec7ff15f6bf4776346\" pid:5063 exited_at:{seconds:1755591018 nanos:664274866}" Aug 19 08:10:19.393117 systemd[1]: Started sshd@9-10.0.0.50:22-10.0.0.1:53396.service - OpenSSH per-connection server daemon (10.0.0.1:53396). 
Aug 19 08:10:19.463989 sshd[5074]: Accepted publickey for core from 10.0.0.1 port 53396 ssh2: RSA SHA256:kecLVWRG1G7MHrHN/yG6X078KPWjs/jTMbEJqAmOzyM Aug 19 08:10:19.465475 sshd-session[5074]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 08:10:19.470299 systemd-logind[1555]: New session 10 of user core. Aug 19 08:10:19.488087 systemd[1]: Started session-10.scope - Session 10 of User core. Aug 19 08:10:19.716129 sshd[5077]: Connection closed by 10.0.0.1 port 53396 Aug 19 08:10:19.716617 sshd-session[5074]: pam_unix(sshd:session): session closed for user core Aug 19 08:10:19.721715 systemd[1]: sshd@9-10.0.0.50:22-10.0.0.1:53396.service: Deactivated successfully. Aug 19 08:10:19.725045 systemd[1]: session-10.scope: Deactivated successfully. Aug 19 08:10:19.728343 systemd-logind[1555]: Session 10 logged out. Waiting for processes to exit. Aug 19 08:10:19.730839 systemd-logind[1555]: Removed session 10. Aug 19 08:10:21.517385 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3859126191.mount: Deactivated successfully. Aug 19 08:10:21.539126 containerd[1576]: time="2025-08-19T08:10:21.539064688Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:10:21.539903 containerd[1576]: time="2025-08-19T08:10:21.539863055Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=33083477" Aug 19 08:10:21.541222 containerd[1576]: time="2025-08-19T08:10:21.541165476Z" level=info msg="ImageCreate event name:\"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:10:21.543666 containerd[1576]: time="2025-08-19T08:10:21.543625947Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:10:21.544435 containerd[1576]: time="2025-08-19T08:10:21.544400298Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"33083307\" in 4.735490233s" Aug 19 08:10:21.544502 containerd[1576]: time="2025-08-19T08:10:21.544435644Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\"" Aug 19 08:10:21.545567 containerd[1576]: time="2025-08-19T08:10:21.545523402Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Aug 19 08:10:21.548339 containerd[1576]: time="2025-08-19T08:10:21.548273967Z" level=info msg="CreateContainer within sandbox \"e915a55b00758aff533c3d66a880169d0930d4b3cf317ac9f7dadee054342c6d\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Aug 19 08:10:21.558690 containerd[1576]: time="2025-08-19T08:10:21.558632771Z" level=info msg="Container 41aba59e038b25e105037d3a3b27f1741863647811f937bac1bd9a4832b7feab: CDI devices from CRI Config.CDIDevices: []" Aug 19 08:10:21.567651 containerd[1576]: time="2025-08-19T08:10:21.567604173Z" level=info msg="CreateContainer within sandbox 
\"e915a55b00758aff533c3d66a880169d0930d4b3cf317ac9f7dadee054342c6d\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"41aba59e038b25e105037d3a3b27f1741863647811f937bac1bd9a4832b7feab\"" Aug 19 08:10:21.568193 containerd[1576]: time="2025-08-19T08:10:21.568157049Z" level=info msg="StartContainer for \"41aba59e038b25e105037d3a3b27f1741863647811f937bac1bd9a4832b7feab\"" Aug 19 08:10:21.569287 containerd[1576]: time="2025-08-19T08:10:21.569230631Z" level=info msg="connecting to shim 41aba59e038b25e105037d3a3b27f1741863647811f937bac1bd9a4832b7feab" address="unix:///run/containerd/s/4c8a9c97ec2c493b428553d93c42b117fd41a0c05b26b258565b8f2ed45409ff" protocol=ttrpc version=3 Aug 19 08:10:21.595112 systemd[1]: Started cri-containerd-41aba59e038b25e105037d3a3b27f1741863647811f937bac1bd9a4832b7feab.scope - libcontainer container 41aba59e038b25e105037d3a3b27f1741863647811f937bac1bd9a4832b7feab. Aug 19 08:10:21.641820 containerd[1576]: time="2025-08-19T08:10:21.641762149Z" level=info msg="StartContainer for \"41aba59e038b25e105037d3a3b27f1741863647811f937bac1bd9a4832b7feab\" returns successfully" Aug 19 08:10:22.732833 kubelet[2717]: I0819 08:10:22.732537 2717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-779bcc4777-462r6" podStartSLOduration=3.564593928 podStartE2EDuration="16.732515652s" podCreationTimestamp="2025-08-19 08:10:06 +0000 UTC" firstStartedPulling="2025-08-19 08:10:08.377443152 +0000 UTC m=+43.047122777" lastFinishedPulling="2025-08-19 08:10:21.545364876 +0000 UTC m=+56.215044501" observedRunningTime="2025-08-19 08:10:22.730994601 +0000 UTC m=+57.400674247" watchObservedRunningTime="2025-08-19 08:10:22.732515652 +0000 UTC m=+57.402195287" Aug 19 08:10:24.101696 containerd[1576]: time="2025-08-19T08:10:24.101455154Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:10:24.103131 containerd[1576]: time="2025-08-19T08:10:24.103062476Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8759190" Aug 19 08:10:24.104785 containerd[1576]: time="2025-08-19T08:10:24.104689806Z" level=info msg="ImageCreate event name:\"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:10:24.107581 containerd[1576]: time="2025-08-19T08:10:24.107436604Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:10:24.108084 containerd[1576]: time="2025-08-19T08:10:24.108034034Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"10251893\" in 2.562475085s" Aug 19 08:10:24.108182 containerd[1576]: time="2025-08-19T08:10:24.108088937Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\"" Aug 19 08:10:24.109902 containerd[1576]: time="2025-08-19T08:10:24.109443595Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Aug 19 08:10:24.118100 
containerd[1576]: time="2025-08-19T08:10:24.118028335Z" level=info msg="CreateContainer within sandbox \"8b2b4d7de95b34cadaf52b98989df1d55d7272eb03e61d43b031362d86195ca5\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Aug 19 08:10:24.392795 containerd[1576]: time="2025-08-19T08:10:24.392332793Z" level=info msg="Container 33aa12609481d8e6ee124cb5c74b5cb56deaed726dae1eb8ea7e09bf8876c976: CDI devices from CRI Config.CDIDevices: []" Aug 19 08:10:24.406431 containerd[1576]: time="2025-08-19T08:10:24.406373197Z" level=info msg="CreateContainer within sandbox \"8b2b4d7de95b34cadaf52b98989df1d55d7272eb03e61d43b031362d86195ca5\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"33aa12609481d8e6ee124cb5c74b5cb56deaed726dae1eb8ea7e09bf8876c976\"" Aug 19 08:10:24.407233 containerd[1576]: time="2025-08-19T08:10:24.407004971Z" level=info msg="StartContainer for \"33aa12609481d8e6ee124cb5c74b5cb56deaed726dae1eb8ea7e09bf8876c976\"" Aug 19 08:10:24.408616 containerd[1576]: time="2025-08-19T08:10:24.408448005Z" level=info msg="connecting to shim 33aa12609481d8e6ee124cb5c74b5cb56deaed726dae1eb8ea7e09bf8876c976" address="unix:///run/containerd/s/724c374fe5777d26dc6b469837e72d34db3c7d4f184a663ef0b3a6a3a95c5589" protocol=ttrpc version=3 Aug 19 08:10:24.446316 systemd[1]: Started cri-containerd-33aa12609481d8e6ee124cb5c74b5cb56deaed726dae1eb8ea7e09bf8876c976.scope - libcontainer container 33aa12609481d8e6ee124cb5c74b5cb56deaed726dae1eb8ea7e09bf8876c976. Aug 19 08:10:24.590205 containerd[1576]: time="2025-08-19T08:10:24.589445844Z" level=info msg="StartContainer for \"33aa12609481d8e6ee124cb5c74b5cb56deaed726dae1eb8ea7e09bf8876c976\" returns successfully" Aug 19 08:10:24.731299 systemd[1]: Started sshd@10-10.0.0.50:22-10.0.0.1:53408.service - OpenSSH per-connection server daemon (10.0.0.1:53408). Aug 19 08:10:24.806789 sshd[5170]: Accepted publickey for core from 10.0.0.1 port 53408 ssh2: RSA SHA256:kecLVWRG1G7MHrHN/yG6X078KPWjs/jTMbEJqAmOzyM Aug 19 08:10:24.808908 sshd-session[5170]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 08:10:24.813694 systemd-logind[1555]: New session 11 of user core. Aug 19 08:10:24.828372 systemd[1]: Started session-11.scope - Session 11 of User core. Aug 19 08:10:24.960625 sshd[5173]: Connection closed by 10.0.0.1 port 53408 Aug 19 08:10:24.960927 sshd-session[5170]: pam_unix(sshd:session): session closed for user core Aug 19 08:10:24.973346 systemd[1]: sshd@10-10.0.0.50:22-10.0.0.1:53408.service: Deactivated successfully. Aug 19 08:10:24.975552 systemd[1]: session-11.scope: Deactivated successfully. Aug 19 08:10:24.976673 systemd-logind[1555]: Session 11 logged out. Waiting for processes to exit. Aug 19 08:10:24.980402 systemd[1]: Started sshd@11-10.0.0.50:22-10.0.0.1:53422.service - OpenSSH per-connection server daemon (10.0.0.1:53422). Aug 19 08:10:24.981240 systemd-logind[1555]: Removed session 11. Aug 19 08:10:25.043970 sshd[5187]: Accepted publickey for core from 10.0.0.1 port 53422 ssh2: RSA SHA256:kecLVWRG1G7MHrHN/yG6X078KPWjs/jTMbEJqAmOzyM Aug 19 08:10:25.045402 sshd-session[5187]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 08:10:25.050036 systemd-logind[1555]: New session 12 of user core. Aug 19 08:10:25.060090 systemd[1]: Started session-12.scope - Session 12 of User core. 
Aug 19 08:10:25.356017 sshd[5190]: Connection closed by 10.0.0.1 port 53422 Aug 19 08:10:25.357154 sshd-session[5187]: pam_unix(sshd:session): session closed for user core Aug 19 08:10:25.368367 systemd[1]: sshd@11-10.0.0.50:22-10.0.0.1:53422.service: Deactivated successfully. Aug 19 08:10:25.370494 systemd[1]: session-12.scope: Deactivated successfully. Aug 19 08:10:25.371648 systemd-logind[1555]: Session 12 logged out. Waiting for processes to exit. Aug 19 08:10:25.374633 systemd[1]: Started sshd@12-10.0.0.50:22-10.0.0.1:53434.service - OpenSSH per-connection server daemon (10.0.0.1:53434). Aug 19 08:10:25.375488 systemd-logind[1555]: Removed session 12. Aug 19 08:10:25.433987 sshd[5201]: Accepted publickey for core from 10.0.0.1 port 53434 ssh2: RSA SHA256:kecLVWRG1G7MHrHN/yG6X078KPWjs/jTMbEJqAmOzyM Aug 19 08:10:25.435641 sshd-session[5201]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 08:10:25.440400 systemd-logind[1555]: New session 13 of user core. Aug 19 08:10:25.447073 systemd[1]: Started session-13.scope - Session 13 of User core. Aug 19 08:10:25.616243 sshd[5206]: Connection closed by 10.0.0.1 port 53434 Aug 19 08:10:25.616531 sshd-session[5201]: pam_unix(sshd:session): session closed for user core Aug 19 08:10:25.621612 systemd[1]: sshd@12-10.0.0.50:22-10.0.0.1:53434.service: Deactivated successfully. Aug 19 08:10:25.623841 systemd[1]: session-13.scope: Deactivated successfully. Aug 19 08:10:25.624781 systemd-logind[1555]: Session 13 logged out. Waiting for processes to exit. Aug 19 08:10:25.626213 systemd-logind[1555]: Removed session 13. Aug 19 08:10:28.367560 containerd[1576]: time="2025-08-19T08:10:28.367482051Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:10:28.385609 containerd[1576]: time="2025-08-19T08:10:28.368254729Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=47317977" Aug 19 08:10:28.385609 containerd[1576]: time="2025-08-19T08:10:28.369579384Z" level=info msg="ImageCreate event name:\"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:10:28.385835 containerd[1576]: time="2025-08-19T08:10:28.372048057Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 4.262562392s" Aug 19 08:10:28.385835 containerd[1576]: time="2025-08-19T08:10:28.385705000Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Aug 19 08:10:28.386430 containerd[1576]: time="2025-08-19T08:10:28.386364437Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:10:28.396258 containerd[1576]: time="2025-08-19T08:10:28.396229967Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Aug 19 08:10:28.411050 containerd[1576]: time="2025-08-19T08:10:28.410997672Z" level=info msg="CreateContainer within 
sandbox \"e997205804b29f310f6a15b27a24e3d4754bcb9d2a835aca84a223001c034c7a\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Aug 19 08:10:28.428713 containerd[1576]: time="2025-08-19T08:10:28.428661136Z" level=info msg="Container 2a375f21092c8f80f1eb199eb6171952c15ec81197ecacd3feb37d0a15f58e72: CDI devices from CRI Config.CDIDevices: []" Aug 19 08:10:28.436456 containerd[1576]: time="2025-08-19T08:10:28.436412090Z" level=info msg="CreateContainer within sandbox \"e997205804b29f310f6a15b27a24e3d4754bcb9d2a835aca84a223001c034c7a\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"2a375f21092c8f80f1eb199eb6171952c15ec81197ecacd3feb37d0a15f58e72\"" Aug 19 08:10:28.437064 containerd[1576]: time="2025-08-19T08:10:28.436983096Z" level=info msg="StartContainer for \"2a375f21092c8f80f1eb199eb6171952c15ec81197ecacd3feb37d0a15f58e72\"" Aug 19 08:10:28.438193 containerd[1576]: time="2025-08-19T08:10:28.438162441Z" level=info msg="connecting to shim 2a375f21092c8f80f1eb199eb6171952c15ec81197ecacd3feb37d0a15f58e72" address="unix:///run/containerd/s/b604789d9454c16a98ca5d52be6096e4a6ebcbd0697cd18fba300ea288d06db6" protocol=ttrpc version=3 Aug 19 08:10:28.476114 systemd[1]: Started cri-containerd-2a375f21092c8f80f1eb199eb6171952c15ec81197ecacd3feb37d0a15f58e72.scope - libcontainer container 2a375f21092c8f80f1eb199eb6171952c15ec81197ecacd3feb37d0a15f58e72. Aug 19 08:10:28.527561 containerd[1576]: time="2025-08-19T08:10:28.527505438Z" level=info msg="StartContainer for \"2a375f21092c8f80f1eb199eb6171952c15ec81197ecacd3feb37d0a15f58e72\" returns successfully" Aug 19 08:10:28.768150 containerd[1576]: time="2025-08-19T08:10:28.768051057Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:10:28.768809 containerd[1576]: time="2025-08-19T08:10:28.768785841Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77" Aug 19 08:10:28.770558 containerd[1576]: time="2025-08-19T08:10:28.770529518Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 374.273341ms" Aug 19 08:10:28.770626 containerd[1576]: time="2025-08-19T08:10:28.770560589Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Aug 19 08:10:28.771602 containerd[1576]: time="2025-08-19T08:10:28.771562910Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Aug 19 08:10:28.772819 containerd[1576]: time="2025-08-19T08:10:28.772792491Z" level=info msg="CreateContainer within sandbox \"8337d7f1f4f9ae74cd29aad24a72a4392d046ebcc01e4121817f7cee1c5ffdc1\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Aug 19 08:10:28.786990 containerd[1576]: time="2025-08-19T08:10:28.786150596Z" level=info msg="Container c29bbd18aaea993b821266327d617673d3f8b1e38ac17b3e65f48f7bd8944881: CDI devices from CRI Config.CDIDevices: []" Aug 19 08:10:28.794607 containerd[1576]: time="2025-08-19T08:10:28.794542411Z" level=info msg="CreateContainer within sandbox 
\"8337d7f1f4f9ae74cd29aad24a72a4392d046ebcc01e4121817f7cee1c5ffdc1\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"c29bbd18aaea993b821266327d617673d3f8b1e38ac17b3e65f48f7bd8944881\"" Aug 19 08:10:28.795145 containerd[1576]: time="2025-08-19T08:10:28.795109059Z" level=info msg="StartContainer for \"c29bbd18aaea993b821266327d617673d3f8b1e38ac17b3e65f48f7bd8944881\"" Aug 19 08:10:28.796173 containerd[1576]: time="2025-08-19T08:10:28.796142380Z" level=info msg="connecting to shim c29bbd18aaea993b821266327d617673d3f8b1e38ac17b3e65f48f7bd8944881" address="unix:///run/containerd/s/9ee9c04a47fef4dd8390928194dfcc123689290a9e582060abe22539c5d7b912" protocol=ttrpc version=3 Aug 19 08:10:28.822126 systemd[1]: Started cri-containerd-c29bbd18aaea993b821266327d617673d3f8b1e38ac17b3e65f48f7bd8944881.scope - libcontainer container c29bbd18aaea993b821266327d617673d3f8b1e38ac17b3e65f48f7bd8944881. Aug 19 08:10:28.873525 containerd[1576]: time="2025-08-19T08:10:28.873486657Z" level=info msg="StartContainer for \"c29bbd18aaea993b821266327d617673d3f8b1e38ac17b3e65f48f7bd8944881\" returns successfully" Aug 19 08:10:29.577486 kubelet[2717]: I0819 08:10:29.577383 2717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-566fd69f-bjbqv" podStartSLOduration=32.129028617 podStartE2EDuration="48.577362576s" podCreationTimestamp="2025-08-19 08:09:41 +0000 UTC" firstStartedPulling="2025-08-19 08:10:11.947764051 +0000 UTC m=+46.617443677" lastFinishedPulling="2025-08-19 08:10:28.396098011 +0000 UTC m=+63.065777636" observedRunningTime="2025-08-19 08:10:28.679094678 +0000 UTC m=+63.348774303" watchObservedRunningTime="2025-08-19 08:10:29.577362576 +0000 UTC m=+64.247042202" Aug 19 08:10:30.634276 systemd[1]: Started sshd@13-10.0.0.50:22-10.0.0.1:39180.service - OpenSSH per-connection server daemon (10.0.0.1:39180). Aug 19 08:10:30.688195 kubelet[2717]: I0819 08:10:30.688116 2717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-566fd69f-m7tq2" podStartSLOduration=34.934178292 podStartE2EDuration="49.688079158s" podCreationTimestamp="2025-08-19 08:09:41 +0000 UTC" firstStartedPulling="2025-08-19 08:10:14.017559345 +0000 UTC m=+48.687238970" lastFinishedPulling="2025-08-19 08:10:28.771460221 +0000 UTC m=+63.441139836" observedRunningTime="2025-08-19 08:10:29.672018182 +0000 UTC m=+64.341697807" watchObservedRunningTime="2025-08-19 08:10:30.688079158 +0000 UTC m=+65.357758773" Aug 19 08:10:30.718244 sshd[5314]: Accepted publickey for core from 10.0.0.1 port 39180 ssh2: RSA SHA256:kecLVWRG1G7MHrHN/yG6X078KPWjs/jTMbEJqAmOzyM Aug 19 08:10:30.720181 sshd-session[5314]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 08:10:30.725566 systemd-logind[1555]: New session 14 of user core. Aug 19 08:10:30.735085 systemd[1]: Started session-14.scope - Session 14 of User core. Aug 19 08:10:30.895918 sshd[5319]: Connection closed by 10.0.0.1 port 39180 Aug 19 08:10:30.896250 sshd-session[5314]: pam_unix(sshd:session): session closed for user core Aug 19 08:10:30.901217 systemd[1]: sshd@13-10.0.0.50:22-10.0.0.1:39180.service: Deactivated successfully. Aug 19 08:10:30.903188 systemd[1]: session-14.scope: Deactivated successfully. Aug 19 08:10:30.904075 systemd-logind[1555]: Session 14 logged out. Waiting for processes to exit. Aug 19 08:10:30.905476 systemd-logind[1555]: Removed session 14. 
Aug 19 08:10:32.430355 containerd[1576]: time="2025-08-19T08:10:32.430305928Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:10:32.431178 containerd[1576]: time="2025-08-19T08:10:32.431117415Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=14703784" Aug 19 08:10:32.432660 containerd[1576]: time="2025-08-19T08:10:32.432610036Z" level=info msg="ImageCreate event name:\"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:10:32.434569 containerd[1576]: time="2025-08-19T08:10:32.434535193Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:10:32.435160 containerd[1576]: time="2025-08-19T08:10:32.435127646Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"16196439\" in 3.663518046s" Aug 19 08:10:32.435160 containerd[1576]: time="2025-08-19T08:10:32.435158847Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\"" Aug 19 08:10:32.447717 containerd[1576]: time="2025-08-19T08:10:32.447385389Z" level=info msg="CreateContainer within sandbox \"8b2b4d7de95b34cadaf52b98989df1d55d7272eb03e61d43b031362d86195ca5\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Aug 19 08:10:32.460352 containerd[1576]: time="2025-08-19T08:10:32.460291873Z" level=info msg="Container d09f5d1c98955336d78c62e1cb3db2687003ce587d3c8abba59cae1b9c8f2151: CDI devices from CRI Config.CDIDevices: []" Aug 19 08:10:32.470496 containerd[1576]: time="2025-08-19T08:10:32.470454792Z" level=info msg="CreateContainer within sandbox \"8b2b4d7de95b34cadaf52b98989df1d55d7272eb03e61d43b031362d86195ca5\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"d09f5d1c98955336d78c62e1cb3db2687003ce587d3c8abba59cae1b9c8f2151\"" Aug 19 08:10:32.471144 containerd[1576]: time="2025-08-19T08:10:32.471103765Z" level=info msg="StartContainer for \"d09f5d1c98955336d78c62e1cb3db2687003ce587d3c8abba59cae1b9c8f2151\"" Aug 19 08:10:32.472766 containerd[1576]: time="2025-08-19T08:10:32.472730776Z" level=info msg="connecting to shim d09f5d1c98955336d78c62e1cb3db2687003ce587d3c8abba59cae1b9c8f2151" address="unix:///run/containerd/s/724c374fe5777d26dc6b469837e72d34db3c7d4f184a663ef0b3a6a3a95c5589" protocol=ttrpc version=3 Aug 19 08:10:32.493080 systemd[1]: Started cri-containerd-d09f5d1c98955336d78c62e1cb3db2687003ce587d3c8abba59cae1b9c8f2151.scope - libcontainer container d09f5d1c98955336d78c62e1cb3db2687003ce587d3c8abba59cae1b9c8f2151. 
Aug 19 08:10:32.542103 containerd[1576]: time="2025-08-19T08:10:32.542050893Z" level=info msg="StartContainer for \"d09f5d1c98955336d78c62e1cb3db2687003ce587d3c8abba59cae1b9c8f2151\" returns successfully"
Aug 19 08:10:33.489116 kubelet[2717]: I0819 08:10:33.489059 2717 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Aug 19 08:10:33.489116 kubelet[2717]: I0819 08:10:33.489108 2717 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Aug 19 08:10:35.910725 systemd[1]: Started sshd@14-10.0.0.50:22-10.0.0.1:39192.service - OpenSSH per-connection server daemon (10.0.0.1:39192).
Aug 19 08:10:35.976648 sshd[5383]: Accepted publickey for core from 10.0.0.1 port 39192 ssh2: RSA SHA256:kecLVWRG1G7MHrHN/yG6X078KPWjs/jTMbEJqAmOzyM
Aug 19 08:10:35.978447 sshd-session[5383]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 19 08:10:35.983391 systemd-logind[1555]: New session 15 of user core.
Aug 19 08:10:35.993084 systemd[1]: Started session-15.scope - Session 15 of User core.
Aug 19 08:10:36.125242 sshd[5387]: Connection closed by 10.0.0.1 port 39192
Aug 19 08:10:36.125672 sshd-session[5383]: pam_unix(sshd:session): session closed for user core
Aug 19 08:10:36.131314 systemd[1]: sshd@14-10.0.0.50:22-10.0.0.1:39192.service: Deactivated successfully.
Aug 19 08:10:36.133827 systemd[1]: session-15.scope: Deactivated successfully.
Aug 19 08:10:36.134915 systemd-logind[1555]: Session 15 logged out. Waiting for processes to exit.
Aug 19 08:10:36.136793 systemd-logind[1555]: Removed session 15.
Aug 19 08:10:41.141826 systemd[1]: Started sshd@15-10.0.0.50:22-10.0.0.1:37562.service - OpenSSH per-connection server daemon (10.0.0.1:37562).
Aug 19 08:10:41.192055 sshd[5402]: Accepted publickey for core from 10.0.0.1 port 37562 ssh2: RSA SHA256:kecLVWRG1G7MHrHN/yG6X078KPWjs/jTMbEJqAmOzyM
Aug 19 08:10:41.193622 sshd-session[5402]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 19 08:10:41.197920 systemd-logind[1555]: New session 16 of user core.
Aug 19 08:10:41.208161 systemd[1]: Started session-16.scope - Session 16 of User core.
Aug 19 08:10:41.323691 sshd[5405]: Connection closed by 10.0.0.1 port 37562
Aug 19 08:10:41.324080 sshd-session[5402]: pam_unix(sshd:session): session closed for user core
Aug 19 08:10:41.328086 systemd[1]: sshd@15-10.0.0.50:22-10.0.0.1:37562.service: Deactivated successfully.
Aug 19 08:10:41.330196 systemd[1]: session-16.scope: Deactivated successfully.
Aug 19 08:10:41.331100 systemd-logind[1555]: Session 16 logged out. Waiting for processes to exit.
Aug 19 08:10:41.332299 systemd-logind[1555]: Removed session 16.
Aug 19 08:10:42.228522 containerd[1576]: time="2025-08-19T08:10:42.228470696Z" level=info msg="TaskExit event in podsandbox handler container_id:\"84f66f4169a6ae6aeaa76c03346c46fb70983f8946ecf4c6db1975db4586526a\" id:\"66cc5a046fc747cd8075d6aeecbf1a7db84f5d617eb2bc73025d58c03e17b8a1\" pid:5428 exited_at:{seconds:1755591042 nanos:227403058}"
Aug 19 08:10:46.337402 systemd[1]: Started sshd@16-10.0.0.50:22-10.0.0.1:37564.service - OpenSSH per-connection server daemon (10.0.0.1:37564).
Aug 19 08:10:46.411543 sshd[5441]: Accepted publickey for core from 10.0.0.1 port 37564 ssh2: RSA SHA256:kecLVWRG1G7MHrHN/yG6X078KPWjs/jTMbEJqAmOzyM
Aug 19 08:10:46.413523 sshd-session[5441]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 19 08:10:46.418305 systemd-logind[1555]: New session 17 of user core.
Aug 19 08:10:46.428097 systemd[1]: Started session-17.scope - Session 17 of User core.
Aug 19 08:10:46.581143 sshd[5444]: Connection closed by 10.0.0.1 port 37564
Aug 19 08:10:46.581645 sshd-session[5441]: pam_unix(sshd:session): session closed for user core
Aug 19 08:10:46.595118 systemd[1]: sshd@16-10.0.0.50:22-10.0.0.1:37564.service: Deactivated successfully.
Aug 19 08:10:46.597115 systemd[1]: session-17.scope: Deactivated successfully.
Aug 19 08:10:46.597931 systemd-logind[1555]: Session 17 logged out. Waiting for processes to exit.
Aug 19 08:10:46.600863 systemd[1]: Started sshd@17-10.0.0.50:22-10.0.0.1:37570.service - OpenSSH per-connection server daemon (10.0.0.1:37570).
Aug 19 08:10:46.601689 systemd-logind[1555]: Removed session 17.
Aug 19 08:10:46.658635 sshd[5457]: Accepted publickey for core from 10.0.0.1 port 37570 ssh2: RSA SHA256:kecLVWRG1G7MHrHN/yG6X078KPWjs/jTMbEJqAmOzyM
Aug 19 08:10:46.660390 sshd-session[5457]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 19 08:10:46.665746 systemd-logind[1555]: New session 18 of user core.
Aug 19 08:10:46.674169 systemd[1]: Started session-18.scope - Session 18 of User core.
Aug 19 08:10:46.715600 containerd[1576]: time="2025-08-19T08:10:46.715526162Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6ae51c4eb690876cdc31984ee254d4d54c2de493d97237a32d23980a3474f53b\" id:\"fa2898ec6c8e4b586de309837d9706cde15faa198b3c7edddf4650a074059390\" pid:5471 exited_at:{seconds:1755591046 nanos:715073677}"
Aug 19 08:10:47.033550 sshd[5484]: Connection closed by 10.0.0.1 port 37570
Aug 19 08:10:47.034027 sshd-session[5457]: pam_unix(sshd:session): session closed for user core
Aug 19 08:10:47.046101 systemd[1]: sshd@17-10.0.0.50:22-10.0.0.1:37570.service: Deactivated successfully.
Aug 19 08:10:47.048771 systemd[1]: session-18.scope: Deactivated successfully.
Aug 19 08:10:47.049701 systemd-logind[1555]: Session 18 logged out. Waiting for processes to exit.
Aug 19 08:10:47.052822 systemd[1]: Started sshd@18-10.0.0.50:22-10.0.0.1:37574.service - OpenSSH per-connection server daemon (10.0.0.1:37574).
Aug 19 08:10:47.054094 systemd-logind[1555]: Removed session 18.
Aug 19 08:10:47.113668 sshd[5496]: Accepted publickey for core from 10.0.0.1 port 37574 ssh2: RSA SHA256:kecLVWRG1G7MHrHN/yG6X078KPWjs/jTMbEJqAmOzyM
Aug 19 08:10:47.115602 sshd-session[5496]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 19 08:10:47.121727 systemd-logind[1555]: New session 19 of user core.
Aug 19 08:10:47.131106 systemd[1]: Started session-19.scope - Session 19 of User core.
Aug 19 08:10:47.949856 sshd[5499]: Connection closed by 10.0.0.1 port 37574
Aug 19 08:10:47.952188 sshd-session[5496]: pam_unix(sshd:session): session closed for user core
Aug 19 08:10:47.963077 systemd[1]: sshd@18-10.0.0.50:22-10.0.0.1:37574.service: Deactivated successfully.
Aug 19 08:10:47.966182 systemd[1]: session-19.scope: Deactivated successfully.
Aug 19 08:10:47.967219 systemd-logind[1555]: Session 19 logged out. Waiting for processes to exit.
Aug 19 08:10:47.973881 systemd[1]: Started sshd@19-10.0.0.50:22-10.0.0.1:56058.service - OpenSSH per-connection server daemon (10.0.0.1:56058).
Aug 19 08:10:47.976294 systemd-logind[1555]: Removed session 19.
Aug 19 08:10:48.037357 sshd[5523]: Accepted publickey for core from 10.0.0.1 port 56058 ssh2: RSA SHA256:kecLVWRG1G7MHrHN/yG6X078KPWjs/jTMbEJqAmOzyM
Aug 19 08:10:48.039458 sshd-session[5523]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 19 08:10:48.045431 systemd-logind[1555]: New session 20 of user core.
Aug 19 08:10:48.056250 systemd[1]: Started session-20.scope - Session 20 of User core.
Aug 19 08:10:48.338225 sshd[5526]: Connection closed by 10.0.0.1 port 56058
Aug 19 08:10:48.338058 sshd-session[5523]: pam_unix(sshd:session): session closed for user core
Aug 19 08:10:48.352188 systemd[1]: sshd@19-10.0.0.50:22-10.0.0.1:56058.service: Deactivated successfully.
Aug 19 08:10:48.354849 systemd[1]: session-20.scope: Deactivated successfully.
Aug 19 08:10:48.355919 systemd-logind[1555]: Session 20 logged out. Waiting for processes to exit.
Aug 19 08:10:48.359778 systemd[1]: Started sshd@20-10.0.0.50:22-10.0.0.1:56062.service - OpenSSH per-connection server daemon (10.0.0.1:56062).
Aug 19 08:10:48.360520 systemd-logind[1555]: Removed session 20.
Aug 19 08:10:48.419965 sshd[5544]: Accepted publickey for core from 10.0.0.1 port 56062 ssh2: RSA SHA256:kecLVWRG1G7MHrHN/yG6X078KPWjs/jTMbEJqAmOzyM
Aug 19 08:10:48.421829 sshd-session[5544]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 19 08:10:48.426775 systemd-logind[1555]: New session 21 of user core.
Aug 19 08:10:48.434178 systemd[1]: Started session-21.scope - Session 21 of User core.
Aug 19 08:10:48.555494 sshd[5547]: Connection closed by 10.0.0.1 port 56062
Aug 19 08:10:48.555912 sshd-session[5544]: pam_unix(sshd:session): session closed for user core
Aug 19 08:10:48.561716 systemd[1]: sshd@20-10.0.0.50:22-10.0.0.1:56062.service: Deactivated successfully.
Aug 19 08:10:48.564067 systemd[1]: session-21.scope: Deactivated successfully.
Aug 19 08:10:48.564838 systemd-logind[1555]: Session 21 logged out. Waiting for processes to exit.
Aug 19 08:10:48.566180 systemd-logind[1555]: Removed session 21.
Aug 19 08:10:48.716668 containerd[1576]: time="2025-08-19T08:10:48.716603723Z" level=info msg="TaskExit event in podsandbox handler container_id:\"168e692c78a120d2706a00802ff7005412568c371b9907e26458d4b38d8c7926\" id:\"99d0f19db9d5ab52b966093fbb15235db9284c784d37b8112b6097b58f80f2f9\" pid:5571 exited_at:{seconds:1755591048 nanos:716309402}"
Aug 19 08:10:53.569341 systemd[1]: Started sshd@21-10.0.0.50:22-10.0.0.1:56074.service - OpenSSH per-connection server daemon (10.0.0.1:56074).
Aug 19 08:10:53.632620 sshd[5584]: Accepted publickey for core from 10.0.0.1 port 56074 ssh2: RSA SHA256:kecLVWRG1G7MHrHN/yG6X078KPWjs/jTMbEJqAmOzyM
Aug 19 08:10:53.634927 sshd-session[5584]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 19 08:10:53.640026 systemd-logind[1555]: New session 22 of user core.
Aug 19 08:10:53.647110 systemd[1]: Started session-22.scope - Session 22 of User core.
Aug 19 08:10:53.764039 sshd[5587]: Connection closed by 10.0.0.1 port 56074
Aug 19 08:10:53.764406 sshd-session[5584]: pam_unix(sshd:session): session closed for user core
Aug 19 08:10:53.769780 systemd[1]: sshd@21-10.0.0.50:22-10.0.0.1:56074.service: Deactivated successfully.
Aug 19 08:10:53.772128 systemd[1]: session-22.scope: Deactivated successfully.
Aug 19 08:10:53.773080 systemd-logind[1555]: Session 22 logged out. Waiting for processes to exit.
Aug 19 08:10:53.774653 systemd-logind[1555]: Removed session 22.
Aug 19 08:10:58.781157 systemd[1]: Started sshd@22-10.0.0.50:22-10.0.0.1:34760.service - OpenSSH per-connection server daemon (10.0.0.1:34760).
Aug 19 08:10:58.832959 sshd[5602]: Accepted publickey for core from 10.0.0.1 port 34760 ssh2: RSA SHA256:kecLVWRG1G7MHrHN/yG6X078KPWjs/jTMbEJqAmOzyM
Aug 19 08:10:58.834542 sshd-session[5602]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 19 08:10:58.839464 systemd-logind[1555]: New session 23 of user core.
Aug 19 08:10:58.848082 systemd[1]: Started session-23.scope - Session 23 of User core.
Aug 19 08:10:58.956190 sshd[5605]: Connection closed by 10.0.0.1 port 34760
Aug 19 08:10:58.956512 sshd-session[5602]: pam_unix(sshd:session): session closed for user core
Aug 19 08:10:58.960553 systemd[1]: sshd@22-10.0.0.50:22-10.0.0.1:34760.service: Deactivated successfully.
Aug 19 08:10:58.962791 systemd[1]: session-23.scope: Deactivated successfully.
Aug 19 08:10:58.963708 systemd-logind[1555]: Session 23 logged out. Waiting for processes to exit.
Aug 19 08:10:58.964887 systemd-logind[1555]: Removed session 23.
Aug 19 08:11:03.968694 systemd[1]: Started sshd@23-10.0.0.50:22-10.0.0.1:34774.service - OpenSSH per-connection server daemon (10.0.0.1:34774).
Aug 19 08:11:04.039138 sshd[5620]: Accepted publickey for core from 10.0.0.1 port 34774 ssh2: RSA SHA256:kecLVWRG1G7MHrHN/yG6X078KPWjs/jTMbEJqAmOzyM
Aug 19 08:11:04.041027 sshd-session[5620]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 19 08:11:04.047004 systemd-logind[1555]: New session 24 of user core.
Aug 19 08:11:04.056132 systemd[1]: Started session-24.scope - Session 24 of User core.
Aug 19 08:11:04.223132 sshd[5623]: Connection closed by 10.0.0.1 port 34774
Aug 19 08:11:04.223418 sshd-session[5620]: pam_unix(sshd:session): session closed for user core
Aug 19 08:11:04.227869 systemd[1]: sshd@23-10.0.0.50:22-10.0.0.1:34774.service: Deactivated successfully.
Aug 19 08:11:04.230204 systemd[1]: session-24.scope: Deactivated successfully.
Aug 19 08:11:04.231327 systemd-logind[1555]: Session 24 logged out. Waiting for processes to exit.
Aug 19 08:11:04.232816 systemd-logind[1555]: Removed session 24.
Aug 19 08:11:07.199616 containerd[1576]: time="2025-08-19T08:11:07.199544929Z" level=info msg="TaskExit event in podsandbox handler container_id:\"168e692c78a120d2706a00802ff7005412568c371b9907e26458d4b38d8c7926\" id:\"c3686a8d4292d13d968c377cfe77c12f4373ab6d1c8f767290884c9f608b375f\" pid:5648 exited_at:{seconds:1755591067 nanos:199229320}"
Aug 19 08:11:07.389402 containerd[1576]: time="2025-08-19T08:11:07.389301637Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6ae51c4eb690876cdc31984ee254d4d54c2de493d97237a32d23980a3474f53b\" id:\"ebaaa88c94fee2e915b13d3ef9fa04cf17f6fefd027bf41f35c5805145e6ad3e\" pid:5670 exited_at:{seconds:1755591067 nanos:388847134}"
Aug 19 08:11:09.240340 systemd[1]: Started sshd@24-10.0.0.50:22-10.0.0.1:48930.service - OpenSSH per-connection server daemon (10.0.0.1:48930).
Aug 19 08:11:09.311303 sshd[5682]: Accepted publickey for core from 10.0.0.1 port 48930 ssh2: RSA SHA256:kecLVWRG1G7MHrHN/yG6X078KPWjs/jTMbEJqAmOzyM
Aug 19 08:11:09.314434 sshd-session[5682]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 19 08:11:09.319977 systemd-logind[1555]: New session 25 of user core.
Aug 19 08:11:09.329109 systemd[1]: Started session-25.scope - Session 25 of User core.
Aug 19 08:11:09.550216 sshd[5685]: Connection closed by 10.0.0.1 port 48930
Aug 19 08:11:09.550550 sshd-session[5682]: pam_unix(sshd:session): session closed for user core
Aug 19 08:11:09.555647 systemd[1]: sshd@24-10.0.0.50:22-10.0.0.1:48930.service: Deactivated successfully.
Aug 19 08:11:09.558213 systemd[1]: session-25.scope: Deactivated successfully.
Aug 19 08:11:09.559085 systemd-logind[1555]: Session 25 logged out. Waiting for processes to exit.
Aug 19 08:11:09.560435 systemd-logind[1555]: Removed session 25.