Sep 12 06:03:34.826209 kernel: Linux version 6.12.46-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Fri Sep 12 04:02:32 -00 2025
Sep 12 06:03:34.826234 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=d36684c42387dba16669740eb40ca6a094be0dfb03f64a303630b6ac6cfe48d3
Sep 12 06:03:34.826243 kernel: BIOS-provided physical RAM map:
Sep 12 06:03:34.826250 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Sep 12 06:03:34.826256 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Sep 12 06:03:34.826263 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Sep 12 06:03:34.826270 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Sep 12 06:03:34.826277 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Sep 12 06:03:34.826288 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Sep 12 06:03:34.826295 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Sep 12 06:03:34.826301 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Sep 12 06:03:34.826308 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Sep 12 06:03:34.826314 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Sep 12 06:03:34.826321 kernel: NX (Execute Disable) protection: active
Sep 12 06:03:34.826331 kernel: APIC: Static calls initialized
Sep 12 06:03:34.826338 kernel: SMBIOS 2.8 present.
Sep 12 06:03:34.826348 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Sep 12 06:03:34.826418 kernel: DMI: Memory slots populated: 1/1
Sep 12 06:03:34.826426 kernel: Hypervisor detected: KVM
Sep 12 06:03:34.826433 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Sep 12 06:03:34.826440 kernel: kvm-clock: using sched offset of 4994996916 cycles
Sep 12 06:03:34.826448 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Sep 12 06:03:34.826456 kernel: tsc: Detected 2794.748 MHz processor
Sep 12 06:03:34.826466 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 12 06:03:34.826474 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 12 06:03:34.826481 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Sep 12 06:03:34.826488 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Sep 12 06:03:34.826496 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 12 06:03:34.826503 kernel: Using GB pages for direct mapping
Sep 12 06:03:34.826510 kernel: ACPI: Early table checksum verification disabled
Sep 12 06:03:34.826517 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Sep 12 06:03:34.826525 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 06:03:34.826534 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 06:03:34.826541 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 06:03:34.826548 kernel: ACPI: FACS 0x000000009CFE0000 000040
Sep 12 06:03:34.826556 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 06:03:34.826563 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 06:03:34.826571 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 06:03:34.826580 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 06:03:34.826588 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Sep 12 06:03:34.826602 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Sep 12 06:03:34.826610 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Sep 12 06:03:34.826617 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Sep 12 06:03:34.826625 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Sep 12 06:03:34.826632 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Sep 12 06:03:34.826640 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Sep 12 06:03:34.826650 kernel: No NUMA configuration found
Sep 12 06:03:34.826657 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Sep 12 06:03:34.826665 kernel: NODE_DATA(0) allocated [mem 0x9cfd4dc0-0x9cfdbfff]
Sep 12 06:03:34.826672 kernel: Zone ranges:
Sep 12 06:03:34.826679 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 12 06:03:34.826687 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Sep 12 06:03:34.826694 kernel: Normal empty
Sep 12 06:03:34.826702 kernel: Device empty
Sep 12 06:03:34.826709 kernel: Movable zone start for each node
Sep 12 06:03:34.826716 kernel: Early memory node ranges
Sep 12 06:03:34.826726 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Sep 12 06:03:34.826733 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Sep 12 06:03:34.826741 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Sep 12 06:03:34.826748 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 12 06:03:34.826756 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Sep 12 06:03:34.826767 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Sep 12 06:03:34.826774 kernel: ACPI: PM-Timer IO Port: 0x608
Sep 12 06:03:34.826784 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Sep 12 06:03:34.826791 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Sep 12 06:03:34.826801 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Sep 12 06:03:34.826808 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Sep 12 06:03:34.826818 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 12 06:03:34.826826 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Sep 12 06:03:34.826833 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Sep 12 06:03:34.826841 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 12 06:03:34.826848 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Sep 12 06:03:34.826856 kernel: TSC deadline timer available
Sep 12 06:03:34.826863 kernel: CPU topo: Max. logical packages: 1
Sep 12 06:03:34.826873 kernel: CPU topo: Max. logical dies: 1
Sep 12 06:03:34.826880 kernel: CPU topo: Max. dies per package: 1
Sep 12 06:03:34.826887 kernel: CPU topo: Max. threads per core: 1
Sep 12 06:03:34.826895 kernel: CPU topo: Num. cores per package: 4
Sep 12 06:03:34.826902 kernel: CPU topo: Num. threads per package: 4
Sep 12 06:03:34.826909 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Sep 12 06:03:34.826917 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Sep 12 06:03:34.826924 kernel: kvm-guest: KVM setup pv remote TLB flush
Sep 12 06:03:34.826932 kernel: kvm-guest: setup PV sched yield
Sep 12 06:03:34.826941 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Sep 12 06:03:34.826948 kernel: Booting paravirtualized kernel on KVM
Sep 12 06:03:34.826957 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 12 06:03:34.826964 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Sep 12 06:03:34.826972 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Sep 12 06:03:34.826979 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Sep 12 06:03:34.826986 kernel: pcpu-alloc: [0] 0 1 2 3
Sep 12 06:03:34.826994 kernel: kvm-guest: PV spinlocks enabled
Sep 12 06:03:34.827001 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Sep 12 06:03:34.827012 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=d36684c42387dba16669740eb40ca6a094be0dfb03f64a303630b6ac6cfe48d3
Sep 12 06:03:34.827020 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 12 06:03:34.827027 kernel: random: crng init done
Sep 12 06:03:34.827035 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 12 06:03:34.827042 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 12 06:03:34.827050 kernel: Fallback order for Node 0: 0
Sep 12 06:03:34.827057 kernel: Built 1 zonelists, mobility grouping on. Total pages: 642938
Sep 12 06:03:34.827064 kernel: Policy zone: DMA32
Sep 12 06:03:34.827074 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 12 06:03:34.827081 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Sep 12 06:03:34.827089 kernel: ftrace: allocating 40123 entries in 157 pages
Sep 12 06:03:34.827096 kernel: ftrace: allocated 157 pages with 5 groups
Sep 12 06:03:34.827104 kernel: Dynamic Preempt: voluntary
Sep 12 06:03:34.827111 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 12 06:03:34.827119 kernel: rcu: RCU event tracing is enabled.
Sep 12 06:03:34.827127 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Sep 12 06:03:34.827135 kernel: Trampoline variant of Tasks RCU enabled.
Sep 12 06:03:34.827147 kernel: Rude variant of Tasks RCU enabled.
Sep 12 06:03:34.827154 kernel: Tracing variant of Tasks RCU enabled.
Sep 12 06:03:34.827162 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 12 06:03:34.827169 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Sep 12 06:03:34.827177 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 12 06:03:34.827184 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 12 06:03:34.827192 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 12 06:03:34.827199 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Sep 12 06:03:34.827207 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 12 06:03:34.827223 kernel: Console: colour VGA+ 80x25
Sep 12 06:03:34.827231 kernel: printk: legacy console [ttyS0] enabled
Sep 12 06:03:34.827239 kernel: ACPI: Core revision 20240827
Sep 12 06:03:34.827249 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Sep 12 06:03:34.827257 kernel: APIC: Switch to symmetric I/O mode setup
Sep 12 06:03:34.827277 kernel: x2apic enabled
Sep 12 06:03:34.827285 kernel: APIC: Switched APIC routing to: physical x2apic
Sep 12 06:03:34.827295 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Sep 12 06:03:34.827303 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Sep 12 06:03:34.827314 kernel: kvm-guest: setup PV IPIs
Sep 12 06:03:34.827322 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Sep 12 06:03:34.827330 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
Sep 12 06:03:34.827338 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Sep 12 06:03:34.827346 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Sep 12 06:03:34.827369 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Sep 12 06:03:34.827377 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Sep 12 06:03:34.827385 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 12 06:03:34.827395 kernel: Spectre V2 : Mitigation: Retpolines
Sep 12 06:03:34.827410 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Sep 12 06:03:34.827418 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Sep 12 06:03:34.827426 kernel: active return thunk: retbleed_return_thunk
Sep 12 06:03:34.827433 kernel: RETBleed: Mitigation: untrained return thunk
Sep 12 06:03:34.827441 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Sep 12 06:03:34.827450 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Sep 12 06:03:34.827458 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Sep 12 06:03:34.827469 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Sep 12 06:03:34.827477 kernel: active return thunk: srso_return_thunk
Sep 12 06:03:34.827484 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Sep 12 06:03:34.827492 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 12 06:03:34.827500 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 12 06:03:34.827508 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 12 06:03:34.827516 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 12 06:03:34.827524 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Sep 12 06:03:34.827531 kernel: Freeing SMP alternatives memory: 32K
Sep 12 06:03:34.827542 kernel: pid_max: default: 32768 minimum: 301
Sep 12 06:03:34.827549 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Sep 12 06:03:34.827557 kernel: landlock: Up and running.
Sep 12 06:03:34.827565 kernel: SELinux: Initializing.
Sep 12 06:03:34.827575 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 12 06:03:34.827583 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 12 06:03:34.827591 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Sep 12 06:03:34.827599 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Sep 12 06:03:34.827607 kernel: ... version: 0
Sep 12 06:03:34.827617 kernel: ... bit width: 48
Sep 12 06:03:34.827625 kernel: ... generic registers: 6
Sep 12 06:03:34.827632 kernel: ... value mask: 0000ffffffffffff
Sep 12 06:03:34.827640 kernel: ... max period: 00007fffffffffff
Sep 12 06:03:34.827648 kernel: ... fixed-purpose events: 0
Sep 12 06:03:34.827655 kernel: ... event mask: 000000000000003f
Sep 12 06:03:34.827663 kernel: signal: max sigframe size: 1776
Sep 12 06:03:34.827671 kernel: rcu: Hierarchical SRCU implementation.
Sep 12 06:03:34.827679 kernel: rcu: Max phase no-delay instances is 400.
Sep 12 06:03:34.827689 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Sep 12 06:03:34.827707 kernel: smp: Bringing up secondary CPUs ...
Sep 12 06:03:34.827724 kernel: smpboot: x86: Booting SMP configuration:
Sep 12 06:03:34.827741 kernel: .... node #0, CPUs: #1 #2 #3
Sep 12 06:03:34.827764 kernel: smp: Brought up 1 node, 4 CPUs
Sep 12 06:03:34.827773 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Sep 12 06:03:34.827781 kernel: Memory: 2428920K/2571752K available (14336K kernel code, 2432K rwdata, 9988K rodata, 54092K init, 2872K bss, 136904K reserved, 0K cma-reserved)
Sep 12 06:03:34.827789 kernel: devtmpfs: initialized
Sep 12 06:03:34.827797 kernel: x86/mm: Memory block size: 128MB
Sep 12 06:03:34.827808 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 12 06:03:34.827831 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Sep 12 06:03:34.827842 kernel: pinctrl core: initialized pinctrl subsystem
Sep 12 06:03:34.827852 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 12 06:03:34.827860 kernel: audit: initializing netlink subsys (disabled)
Sep 12 06:03:34.827868 kernel: audit: type=2000 audit(1757657012.103:1): state=initialized audit_enabled=0 res=1
Sep 12 06:03:34.827875 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 12 06:03:34.827883 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 12 06:03:34.827891 kernel: cpuidle: using governor menu
Sep 12 06:03:34.827902 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 12 06:03:34.827909 kernel: dca service started, version 1.12.1
Sep 12 06:03:34.827918 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Sep 12 06:03:34.827925 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Sep 12 06:03:34.827934 kernel: PCI: Using configuration type 1 for base access
Sep 12 06:03:34.827941 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 12 06:03:34.827949 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 12 06:03:34.827957 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Sep 12 06:03:34.827965 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 12 06:03:34.827975 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Sep 12 06:03:34.827983 kernel: ACPI: Added _OSI(Module Device)
Sep 12 06:03:34.827990 kernel: ACPI: Added _OSI(Processor Device)
Sep 12 06:03:34.827998 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 12 06:03:34.828006 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 12 06:03:34.828016 kernel: ACPI: Interpreter enabled
Sep 12 06:03:34.828024 kernel: ACPI: PM: (supports S0 S3 S5)
Sep 12 06:03:34.828032 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 12 06:03:34.828039 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 12 06:03:34.828049 kernel: PCI: Using E820 reservations for host bridge windows
Sep 12 06:03:34.828057 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Sep 12 06:03:34.828065 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 12 06:03:34.828278 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 12 06:03:34.828432 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Sep 12 06:03:34.828556 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Sep 12 06:03:34.828566 kernel: PCI host bridge to bus 0000:00
Sep 12 06:03:34.828705 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Sep 12 06:03:34.828826 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Sep 12 06:03:34.828937 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 12 06:03:34.829046 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Sep 12 06:03:34.829155 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Sep 12 06:03:34.829263 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Sep 12 06:03:34.829395 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 12 06:03:34.829582 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Sep 12 06:03:34.829722 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Sep 12 06:03:34.829843 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
Sep 12 06:03:34.829962 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
Sep 12 06:03:34.830081 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
Sep 12 06:03:34.830200 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Sep 12 06:03:34.830340 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Sep 12 06:03:34.830524 kernel: pci 0000:00:02.0: BAR 0 [io 0xc0c0-0xc0df]
Sep 12 06:03:34.830698 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
Sep 12 06:03:34.830869 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
Sep 12 06:03:34.831012 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Sep 12 06:03:34.831134 kernel: pci 0000:00:03.0: BAR 0 [io 0xc000-0xc07f]
Sep 12 06:03:34.831254 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
Sep 12 06:03:34.831406 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
Sep 12 06:03:34.831554 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Sep 12 06:03:34.831682 kernel: pci 0000:00:04.0: BAR 0 [io 0xc0e0-0xc0ff]
Sep 12 06:03:34.831801 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebd3000-0xfebd3fff]
Sep 12 06:03:34.831920 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe008000-0xfe00bfff 64bit pref]
Sep 12 06:03:34.832039 kernel: pci 0000:00:04.0: ROM [mem 0xfeb80000-0xfebbffff pref]
Sep 12 06:03:34.832222 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Sep 12 06:03:34.832427 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Sep 12 06:03:34.832566 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Sep 12 06:03:34.832687 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc100-0xc11f]
Sep 12 06:03:34.832806 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd4000-0xfebd4fff]
Sep 12 06:03:34.832950 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Sep 12 06:03:34.833072 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Sep 12 06:03:34.833082 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Sep 12 06:03:34.833095 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Sep 12 06:03:34.833103 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Sep 12 06:03:34.833111 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Sep 12 06:03:34.833119 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Sep 12 06:03:34.833127 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Sep 12 06:03:34.833134 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Sep 12 06:03:34.833142 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Sep 12 06:03:34.833150 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Sep 12 06:03:34.833158 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Sep 12 06:03:34.833169 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Sep 12 06:03:34.833176 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Sep 12 06:03:34.833184 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Sep 12 06:03:34.833205 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Sep 12 06:03:34.833222 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Sep 12 06:03:34.833231 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Sep 12 06:03:34.833239 kernel: iommu: Default domain type: Translated
Sep 12 06:03:34.833247 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 12 06:03:34.833254 kernel: PCI: Using ACPI for IRQ routing
Sep 12 06:03:34.833266 kernel: PCI: pci_cache_line_size set to 64 bytes
Sep 12 06:03:34.833274 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Sep 12 06:03:34.833282 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Sep 12 06:03:34.833441 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Sep 12 06:03:34.833564 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Sep 12 06:03:34.833682 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Sep 12 06:03:34.833692 kernel: vgaarb: loaded
Sep 12 06:03:34.833701 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Sep 12 06:03:34.833713 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Sep 12 06:03:34.833721 kernel: clocksource: Switched to clocksource kvm-clock
Sep 12 06:03:34.833729 kernel: VFS: Disk quotas dquot_6.6.0
Sep 12 06:03:34.833737 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 12 06:03:34.833745 kernel: pnp: PnP ACPI init
Sep 12 06:03:34.833902 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Sep 12 06:03:34.833915 kernel: pnp: PnP ACPI: found 6 devices
Sep 12 06:03:34.833923 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 12 06:03:34.833934 kernel: NET: Registered PF_INET protocol family
Sep 12 06:03:34.833942 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 12 06:03:34.833950 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 12 06:03:34.833958 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 12 06:03:34.833966 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 12 06:03:34.833974 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 12 06:03:34.833981 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 12 06:03:34.833989 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 12 06:03:34.833997 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 12 06:03:34.834007 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 12 06:03:34.834015 kernel: NET: Registered PF_XDP protocol family
Sep 12 06:03:34.834128 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Sep 12 06:03:34.834239 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Sep 12 06:03:34.834348 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Sep 12 06:03:34.834489 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Sep 12 06:03:34.834603 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Sep 12 06:03:34.834714 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Sep 12 06:03:34.834727 kernel: PCI: CLS 0 bytes, default 64
Sep 12 06:03:34.834736 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
Sep 12 06:03:34.834744 kernel: Initialise system trusted keyrings
Sep 12 06:03:34.834753 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 12 06:03:34.834761 kernel: Key type asymmetric registered
Sep 12 06:03:34.834769 kernel: Asymmetric key parser 'x509' registered
Sep 12 06:03:34.834776 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Sep 12 06:03:34.834784 kernel: io scheduler mq-deadline registered
Sep 12 06:03:34.834792 kernel: io scheduler kyber registered
Sep 12 06:03:34.834802 kernel: io scheduler bfq registered
Sep 12 06:03:34.834810 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 12 06:03:34.834818 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Sep 12 06:03:34.834826 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Sep 12 06:03:34.834834 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Sep 12 06:03:34.834842 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 12 06:03:34.834850 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 12 06:03:34.834858 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Sep 12 06:03:34.834866 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Sep 12 06:03:34.834875 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Sep 12 06:03:34.834883 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Sep 12 06:03:34.835026 kernel: rtc_cmos 00:04: RTC can wake from S4
Sep 12 06:03:34.835141 kernel: rtc_cmos 00:04: registered as rtc0
Sep 12 06:03:34.835265 kernel: rtc_cmos 00:04: setting system clock to 2025-09-12T06:03:34 UTC (1757657014)
Sep 12 06:03:34.835406 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Sep 12 06:03:34.835418 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Sep 12 06:03:34.835426 kernel: NET: Registered PF_INET6 protocol family
Sep 12 06:03:34.835438 kernel: Segment Routing with IPv6
Sep 12 06:03:34.835446 kernel: In-situ OAM (IOAM) with IPv6
Sep 12 06:03:34.835454 kernel: NET: Registered PF_PACKET protocol family
Sep 12 06:03:34.835461 kernel: Key type dns_resolver registered
Sep 12 06:03:34.835470 kernel: IPI shorthand broadcast: enabled
Sep 12 06:03:34.835478 kernel: sched_clock: Marking stable (2981002784, 108627619)->(3107367734, -17737331)
Sep 12 06:03:34.835486 kernel: registered taskstats version 1
Sep 12 06:03:34.835494 kernel: Loading compiled-in X.509 certificates
Sep 12 06:03:34.835502 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.46-flatcar: c974434132f0296e0aaf9b1358c8dc50eba5c8b9'
Sep 12 06:03:34.835512 kernel: Demotion targets for Node 0: null
Sep 12 06:03:34.835519 kernel: Key type .fscrypt registered
Sep 12 06:03:34.835527 kernel: Key type fscrypt-provisioning registered
Sep 12 06:03:34.835535 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 12 06:03:34.835543 kernel: ima: Allocated hash algorithm: sha1
Sep 12 06:03:34.835550 kernel: ima: No architecture policies found
Sep 12 06:03:34.835558 kernel: clk: Disabling unused clocks
Sep 12 06:03:34.835566 kernel: Warning: unable to open an initial console.
Sep 12 06:03:34.835574 kernel: Freeing unused kernel image (initmem) memory: 54092K
Sep 12 06:03:34.835584 kernel: Write protecting the kernel read-only data: 24576k
Sep 12 06:03:34.835592 kernel: Freeing unused kernel image (rodata/data gap) memory: 252K
Sep 12 06:03:34.835600 kernel: Run /init as init process
Sep 12 06:03:34.835608 kernel: with arguments:
Sep 12 06:03:34.835615 kernel: /init
Sep 12 06:03:34.835623 kernel: with environment:
Sep 12 06:03:34.835631 kernel: HOME=/
Sep 12 06:03:34.835638 kernel: TERM=linux
Sep 12 06:03:34.835646 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 12 06:03:34.835660 systemd[1]: Successfully made /usr/ read-only.
Sep 12 06:03:34.835682 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 12 06:03:34.835693 systemd[1]: Detected virtualization kvm.
Sep 12 06:03:34.835701 systemd[1]: Detected architecture x86-64.
Sep 12 06:03:34.835710 systemd[1]: Running in initrd.
Sep 12 06:03:34.835720 systemd[1]: No hostname configured, using default hostname.
Sep 12 06:03:34.835728 systemd[1]: Hostname set to .
Sep 12 06:03:34.835737 systemd[1]: Initializing machine ID from VM UUID.
Sep 12 06:03:34.835745 systemd[1]: Queued start job for default target initrd.target.
Sep 12 06:03:34.835754 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 12 06:03:34.835764 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 12 06:03:34.835774 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 12 06:03:34.835782 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 12 06:03:34.835793 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 12 06:03:34.835802 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 12 06:03:34.835812 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 12 06:03:34.835821 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 12 06:03:34.835829 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 12 06:03:34.835838 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 12 06:03:34.835846 systemd[1]: Reached target paths.target - Path Units.
Sep 12 06:03:34.835857 systemd[1]: Reached target slices.target - Slice Units.
Sep 12 06:03:34.835865 systemd[1]: Reached target swap.target - Swaps.
Sep 12 06:03:34.835874 systemd[1]: Reached target timers.target - Timer Units.
Sep 12 06:03:34.835882 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 12 06:03:34.835891 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 12 06:03:34.835899 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 12 06:03:34.835908 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Sep 12 06:03:34.835916 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 12 06:03:34.835925 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 12 06:03:34.835936 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 12 06:03:34.835944 systemd[1]: Reached target sockets.target - Socket Units.
Sep 12 06:03:34.835953 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 12 06:03:34.835961 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 12 06:03:34.835972 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 12 06:03:34.835983 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Sep 12 06:03:34.835992 systemd[1]: Starting systemd-fsck-usr.service...
Sep 12 06:03:34.836001 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 12 06:03:34.836009 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 12 06:03:34.836018 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 06:03:34.836026 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 12 06:03:34.836037 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 12 06:03:34.836077 systemd-journald[220]: Collecting audit messages is disabled.
Sep 12 06:03:34.836099 systemd[1]: Finished systemd-fsck-usr.service.
Sep 12 06:03:34.836109 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 12 06:03:34.836118 systemd-journald[220]: Journal started
Sep 12 06:03:34.836139 systemd-journald[220]: Runtime Journal (/run/log/journal/e62396e84a2f4990ae9ebdda04d8d458) is 6M, max 48.6M, 42.5M free.
Sep 12 06:03:34.830887 systemd-modules-load[221]: Inserted module 'overlay'
Sep 12 06:03:34.838376 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 12 06:03:34.843479 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 12 06:03:34.876213 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 12 06:03:34.876238 kernel: Bridge firewalling registered
Sep 12 06:03:34.863017 systemd-modules-load[221]: Inserted module 'br_netfilter'
Sep 12 06:03:34.883477 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 12 06:03:34.886089 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 06:03:34.888508 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 12 06:03:34.889914 systemd-tmpfiles[236]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Sep 12 06:03:34.894989 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 12 06:03:34.896929 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 12 06:03:34.898489 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 12 06:03:34.901810 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 12 06:03:34.917778 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 12 06:03:34.918292 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 12 06:03:34.920831 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 12 06:03:34.934618 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 12 06:03:34.936864 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 12 06:03:34.967688 systemd-resolved[251]: Positive Trust Anchors:
Sep 12 06:03:34.967706 systemd-resolved[251]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 12 06:03:34.967735 systemd-resolved[251]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 12 06:03:34.972046 systemd-resolved[251]: Defaulting to hostname 'linux'.
Sep 12 06:03:34.980158 dracut-cmdline[263]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=d36684c42387dba16669740eb40ca6a094be0dfb03f64a303630b6ac6cfe48d3
Sep 12 06:03:34.973575 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 12 06:03:34.977734 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 12 06:03:35.097418 kernel: SCSI subsystem initialized
Sep 12 06:03:35.106386 kernel: Loading iSCSI transport class v2.0-870.
Sep 12 06:03:35.116389 kernel: iscsi: registered transport (tcp)
Sep 12 06:03:35.137396 kernel: iscsi: registered transport (qla4xxx)
Sep 12 06:03:35.137420 kernel: QLogic iSCSI HBA Driver
Sep 12 06:03:35.158195 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 12 06:03:35.176045 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 12 06:03:35.178282 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 12 06:03:35.233038 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 12 06:03:35.235839 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 12 06:03:35.295393 kernel: raid6: avx2x4 gen() 30676 MB/s
Sep 12 06:03:35.312377 kernel: raid6: avx2x2 gen() 31519 MB/s
Sep 12 06:03:35.329407 kernel: raid6: avx2x1 gen() 26037 MB/s
Sep 12 06:03:35.329424 kernel: raid6: using algorithm avx2x2 gen() 31519 MB/s
Sep 12 06:03:35.347411 kernel: raid6: .... xor() 19969 MB/s, rmw enabled
Sep 12 06:03:35.347428 kernel: raid6: using avx2x2 recovery algorithm
Sep 12 06:03:35.367392 kernel: xor: automatically using best checksumming function avx
Sep 12 06:03:35.532399 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 12 06:03:35.540459 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 12 06:03:35.543589 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 12 06:03:35.585455 systemd-udevd[472]: Using default interface naming scheme 'v255'.
Sep 12 06:03:35.592074 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 12 06:03:35.594194 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 12 06:03:35.618189 dracut-pre-trigger[479]: rd.md=0: removing MD RAID activation
Sep 12 06:03:35.647154 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 12 06:03:35.648972 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 12 06:03:35.722138 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 12 06:03:35.724731 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 12 06:03:35.756387 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Sep 12 06:03:35.760087 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Sep 12 06:03:35.762733 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 12 06:03:35.762761 kernel: GPT:9289727 != 19775487
Sep 12 06:03:35.762773 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 12 06:03:35.764046 kernel: GPT:9289727 != 19775487
Sep 12 06:03:35.764064 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 12 06:03:35.765868 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 12 06:03:35.770431 kernel: cryptd: max_cpu_qlen set to 1000
Sep 12 06:03:35.775390 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
Sep 12 06:03:35.785395 kernel: libata version 3.00 loaded.
Sep 12 06:03:35.790586 kernel: AES CTR mode by8 optimization enabled
Sep 12 06:03:35.808194 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 12 06:03:35.811243 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 06:03:35.815078 kernel: ahci 0000:00:1f.2: version 3.0
Sep 12 06:03:35.820753 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Sep 12 06:03:35.813750 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 06:03:35.822109 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 06:03:35.825217 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Sep 12 06:03:35.835458 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Sep 12 06:03:35.835662 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Sep 12 06:03:35.835815 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Sep 12 06:03:35.846520 kernel: scsi host0: ahci
Sep 12 06:03:35.849583 kernel: scsi host1: ahci
Sep 12 06:03:35.849810 kernel: scsi host2: ahci
Sep 12 06:03:35.851420 kernel: scsi host3: ahci
Sep 12 06:03:35.851573 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Sep 12 06:03:35.853380 kernel: scsi host4: ahci
Sep 12 06:03:35.853552 kernel: scsi host5: ahci
Sep 12 06:03:35.855263 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 lpm-pol 1
Sep 12 06:03:35.855285 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 lpm-pol 1
Sep 12 06:03:35.858379 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 lpm-pol 1
Sep 12 06:03:35.858404 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 lpm-pol 1
Sep 12 06:03:35.858415 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 lpm-pol 1
Sep 12 06:03:35.859576 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 lpm-pol 1
Sep 12 06:03:35.860000 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Sep 12 06:03:35.894587 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 06:03:35.915386 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 12 06:03:35.922428 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Sep 12 06:03:35.922668 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Sep 12 06:03:35.923796 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 12 06:03:35.953016 disk-uuid[637]: Primary Header is updated.
Sep 12 06:03:35.953016 disk-uuid[637]: Secondary Entries is updated.
Sep 12 06:03:35.953016 disk-uuid[637]: Secondary Header is updated.
Sep 12 06:03:35.956409 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 12 06:03:35.961382 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 12 06:03:36.168474 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Sep 12 06:03:36.168526 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Sep 12 06:03:36.169388 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Sep 12 06:03:36.169414 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Sep 12 06:03:36.170393 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Sep 12 06:03:36.171397 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Sep 12 06:03:36.172401 kernel: ata3.00: LPM support broken, forcing max_power
Sep 12 06:03:36.172425 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Sep 12 06:03:36.172720 kernel: ata3.00: applying bridge limits
Sep 12 06:03:36.173881 kernel: ata3.00: LPM support broken, forcing max_power
Sep 12 06:03:36.173900 kernel: ata3.00: configured for UDMA/100
Sep 12 06:03:36.176396 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Sep 12 06:03:36.234912 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Sep 12 06:03:36.235127 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Sep 12 06:03:36.249444 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Sep 12 06:03:36.676391 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 12 06:03:36.678082 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 12 06:03:36.679665 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 12 06:03:36.680804 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 12 06:03:36.683762 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 12 06:03:36.722622 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 12 06:03:36.961403 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 12 06:03:36.962763 disk-uuid[638]: The operation has completed successfully.
Sep 12 06:03:36.989884 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 12 06:03:36.990007 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 12 06:03:37.027375 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 12 06:03:37.053719 sh[667]: Success
Sep 12 06:03:37.071400 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 12 06:03:37.071431 kernel: device-mapper: uevent: version 1.0.3
Sep 12 06:03:37.071443 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Sep 12 06:03:37.081376 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Sep 12 06:03:37.110455 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 12 06:03:37.112802 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 12 06:03:37.131343 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 12 06:03:37.138067 kernel: BTRFS: device fsid 29ae74b1-0ab1-4a84-96e7-98d98e1ec77f devid 1 transid 35 /dev/mapper/usr (253:0) scanned by mount (679)
Sep 12 06:03:37.138098 kernel: BTRFS info (device dm-0): first mount of filesystem 29ae74b1-0ab1-4a84-96e7-98d98e1ec77f
Sep 12 06:03:37.138110 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Sep 12 06:03:37.143439 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 12 06:03:37.143459 kernel: BTRFS info (device dm-0): enabling free space tree
Sep 12 06:03:37.144703 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 12 06:03:37.146829 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Sep 12 06:03:37.148976 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 12 06:03:37.151597 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 12 06:03:37.154043 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 12 06:03:37.182594 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (712)
Sep 12 06:03:37.182632 kernel: BTRFS info (device vda6): first mount of filesystem 88e8cff7-d302-45f0-bf99-3731957f99ae
Sep 12 06:03:37.182643 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 12 06:03:37.186412 kernel: BTRFS info (device vda6): turning on async discard
Sep 12 06:03:37.186436 kernel: BTRFS info (device vda6): enabling free space tree
Sep 12 06:03:37.191451 kernel: BTRFS info (device vda6): last unmount of filesystem 88e8cff7-d302-45f0-bf99-3731957f99ae
Sep 12 06:03:37.192083 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 12 06:03:37.194976 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 12 06:03:37.278475 ignition[755]: Ignition 2.22.0
Sep 12 06:03:37.278487 ignition[755]: Stage: fetch-offline
Sep 12 06:03:37.278555 ignition[755]: no configs at "/usr/lib/ignition/base.d"
Sep 12 06:03:37.278565 ignition[755]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 12 06:03:37.278682 ignition[755]: parsed url from cmdline: ""
Sep 12 06:03:37.278687 ignition[755]: no config URL provided
Sep 12 06:03:37.278692 ignition[755]: reading system config file "/usr/lib/ignition/user.ign"
Sep 12 06:03:37.278703 ignition[755]: no config at "/usr/lib/ignition/user.ign"
Sep 12 06:03:37.278753 ignition[755]: op(1): [started] loading QEMU firmware config module
Sep 12 06:03:37.278758 ignition[755]: op(1): executing: "modprobe" "qemu_fw_cfg"
Sep 12 06:03:37.289095 ignition[755]: op(1): [finished] loading QEMU firmware config module
Sep 12 06:03:37.296580 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 12 06:03:37.300463 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 12 06:03:37.330208 ignition[755]: parsing config with SHA512: 59b8892bab238bc6480b4d128e3040deaff4fe09a37d6e0234678d2d9befc51c7dbedb057e5b0c4e85a1d9b732454df8a2cc73a2019f9caf70056eba8688bfa9
Sep 12 06:03:37.334526 unknown[755]: fetched base config from "system"
Sep 12 06:03:37.334536 unknown[755]: fetched user config from "qemu"
Sep 12 06:03:37.336389 ignition[755]: fetch-offline: fetch-offline passed
Sep 12 06:03:37.337302 ignition[755]: Ignition finished successfully
Sep 12 06:03:37.340678 systemd-networkd[859]: lo: Link UP
Sep 12 06:03:37.340682 systemd-networkd[859]: lo: Gained carrier
Sep 12 06:03:37.341597 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 12 06:03:37.342254 systemd-networkd[859]: Enumeration completed
Sep 12 06:03:37.342683 systemd-networkd[859]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 12 06:03:37.342688 systemd-networkd[859]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 12 06:03:37.342842 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 12 06:03:37.344431 systemd-networkd[859]: eth0: Link UP
Sep 12 06:03:37.344586 systemd-networkd[859]: eth0: Gained carrier
Sep 12 06:03:37.344595 systemd-networkd[859]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 12 06:03:37.346199 systemd[1]: Reached target network.target - Network.
Sep 12 06:03:37.347706 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Sep 12 06:03:37.348510 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 12 06:03:37.365420 systemd-networkd[859]: eth0: DHCPv4 address 10.0.0.150/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 12 06:03:37.382631 ignition[863]: Ignition 2.22.0
Sep 12 06:03:37.382645 ignition[863]: Stage: kargs
Sep 12 06:03:37.382767 ignition[863]: no configs at "/usr/lib/ignition/base.d"
Sep 12 06:03:37.382778 ignition[863]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 12 06:03:37.383435 ignition[863]: kargs: kargs passed
Sep 12 06:03:37.383481 ignition[863]: Ignition finished successfully
Sep 12 06:03:37.389977 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 12 06:03:37.391139 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 12 06:03:37.420779 ignition[872]: Ignition 2.22.0
Sep 12 06:03:37.420793 ignition[872]: Stage: disks
Sep 12 06:03:37.420917 ignition[872]: no configs at "/usr/lib/ignition/base.d"
Sep 12 06:03:37.420928 ignition[872]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 12 06:03:37.422343 ignition[872]: disks: disks passed
Sep 12 06:03:37.422422 ignition[872]: Ignition finished successfully
Sep 12 06:03:37.425837 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 12 06:03:37.427915 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 12 06:03:37.428199 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 12 06:03:37.430216 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 12 06:03:37.432608 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 12 06:03:37.432915 systemd[1]: Reached target basic.target - Basic System.
Sep 12 06:03:37.434221 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 12 06:03:37.463366 systemd-fsck[883]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Sep 12 06:03:37.470730 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 12 06:03:37.473387 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 12 06:03:37.572387 kernel: EXT4-fs (vda9): mounted filesystem 2b8062f9-897a-46cb-bde4-2b62ba4cc712 r/w with ordered data mode. Quota mode: none.
Sep 12 06:03:37.573261 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 12 06:03:37.574624 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 12 06:03:37.577086 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 12 06:03:37.578692 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 12 06:03:37.579746 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep 12 06:03:37.579792 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 12 06:03:37.579815 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 12 06:03:37.598617 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 12 06:03:37.600320 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 12 06:03:37.604386 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (891)
Sep 12 06:03:37.604412 kernel: BTRFS info (device vda6): first mount of filesystem 88e8cff7-d302-45f0-bf99-3731957f99ae
Sep 12 06:03:37.606159 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 12 06:03:37.609606 kernel: BTRFS info (device vda6): turning on async discard
Sep 12 06:03:37.609630 kernel: BTRFS info (device vda6): enabling free space tree
Sep 12 06:03:37.611689 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 12 06:03:37.637230 initrd-setup-root[915]: cut: /sysroot/etc/passwd: No such file or directory
Sep 12 06:03:37.642139 initrd-setup-root[922]: cut: /sysroot/etc/group: No such file or directory
Sep 12 06:03:37.646794 initrd-setup-root[929]: cut: /sysroot/etc/shadow: No such file or directory
Sep 12 06:03:37.651085 initrd-setup-root[936]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 12 06:03:37.741432 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 12 06:03:37.743664 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 12 06:03:37.745239 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 12 06:03:37.761391 kernel: BTRFS info (device vda6): last unmount of filesystem 88e8cff7-d302-45f0-bf99-3731957f99ae
Sep 12 06:03:37.772823 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 12 06:03:37.790550 ignition[1004]: INFO : Ignition 2.22.0
Sep 12 06:03:37.790550 ignition[1004]: INFO : Stage: mount
Sep 12 06:03:37.792195 ignition[1004]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 12 06:03:37.792195 ignition[1004]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 12 06:03:37.792195 ignition[1004]: INFO : mount: mount passed
Sep 12 06:03:37.792195 ignition[1004]: INFO : Ignition finished successfully
Sep 12 06:03:37.795136 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 12 06:03:37.797991 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 12 06:03:38.137085 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 12 06:03:38.138649 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 12 06:03:38.170646 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1017)
Sep 12 06:03:38.170677 kernel: BTRFS info (device vda6): first mount of filesystem 88e8cff7-d302-45f0-bf99-3731957f99ae
Sep 12 06:03:38.170688 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 12 06:03:38.174472 kernel: BTRFS info (device vda6): turning on async discard
Sep 12 06:03:38.174494 kernel: BTRFS info (device vda6): enabling free space tree
Sep 12 06:03:38.176228 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 12 06:03:38.222760 ignition[1034]: INFO : Ignition 2.22.0
Sep 12 06:03:38.222760 ignition[1034]: INFO : Stage: files
Sep 12 06:03:38.224814 ignition[1034]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 12 06:03:38.224814 ignition[1034]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 12 06:03:38.224814 ignition[1034]: DEBUG : files: compiled without relabeling support, skipping
Sep 12 06:03:38.228229 ignition[1034]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 12 06:03:38.228229 ignition[1034]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 12 06:03:38.228229 ignition[1034]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 12 06:03:38.228229 ignition[1034]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 12 06:03:38.228229 ignition[1034]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 12 06:03:38.227856 unknown[1034]: wrote ssh authorized keys file for user: core
Sep 12 06:03:38.236248 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Sep 12 06:03:38.236248 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Sep 12 06:03:38.274589 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 12 06:03:38.483066 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Sep 12 06:03:38.483066 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Sep 12 06:03:38.486847 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Sep 12 06:03:38.486847 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 12 06:03:38.486847 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 12 06:03:38.486847 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 12 06:03:38.486847 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 12 06:03:38.486847 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 12 06:03:38.486847 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 12 06:03:38.498998 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 12 06:03:38.498998 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 12 06:03:38.498998 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Sep 12 06:03:38.498998 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Sep 12 06:03:38.498998 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Sep 12 06:03:38.498998 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1
Sep 12 06:03:38.969224 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Sep 12 06:03:39.299300 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Sep 12 06:03:39.299300 ignition[1034]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Sep 12 06:03:39.303345 ignition[1034]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 12 06:03:39.305310 ignition[1034]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 12 06:03:39.305310 ignition[1034]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Sep 12 06:03:39.305310 ignition[1034]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Sep 12 06:03:39.305310 ignition[1034]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 12 06:03:39.305310 ignition[1034]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 12 06:03:39.305310 ignition[1034]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Sep 12 06:03:39.305310 ignition[1034]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Sep 12 06:03:39.323305 ignition[1034]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Sep 12 06:03:39.328241 ignition[1034]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Sep 12 06:03:39.330045 ignition[1034]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Sep 12 06:03:39.330045 ignition[1034]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Sep 12 06:03:39.332834 ignition[1034]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Sep 12 06:03:39.332834 ignition[1034]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 12 06:03:39.332834 ignition[1034]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 12 06:03:39.332834 ignition[1034]: INFO : files: files passed
Sep 12 06:03:39.332834 ignition[1034]: INFO : Ignition finished successfully
Sep 12 06:03:39.338571 systemd-networkd[859]: eth0: Gained IPv6LL
Sep 12 06:03:39.338875 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 12 06:03:39.341247 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 12 06:03:39.342544 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 12 06:03:39.360804 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 12 06:03:39.360928 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 12 06:03:39.364639 initrd-setup-root-after-ignition[1064]: grep: /sysroot/oem/oem-release: No such file or directory
Sep 12 06:03:39.368096 initrd-setup-root-after-ignition[1066]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 12 06:03:39.369730 initrd-setup-root-after-ignition[1066]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 12 06:03:39.371208 initrd-setup-root-after-ignition[1070]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 12 06:03:39.374534 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 12 06:03:39.377090 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 12 06:03:39.378386 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 12 06:03:39.459939 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 12 06:03:39.460075 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 12 06:03:39.460777 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 12 06:03:39.463562 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 12 06:03:39.463925 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 12 06:03:39.468319 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 12 06:03:39.506931 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 12 06:03:39.510576 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 12 06:03:39.532991 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 12 06:03:39.535291 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 12 06:03:39.536575 systemd[1]: Stopped target timers.target - Timer Units.
Sep 12 06:03:39.538439 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 12 06:03:39.538599 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 12 06:03:39.540586 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 12 06:03:39.542201 systemd[1]: Stopped target basic.target - Basic System.
Sep 12 06:03:39.544132 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 12 06:03:39.546066 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 12 06:03:39.547999 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 12 06:03:39.550067 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Sep 12 06:03:39.552205 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 12 06:03:39.554185 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 12 06:03:39.556392 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 12 06:03:39.558295 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 12 06:03:39.560391 systemd[1]: Stopped target swap.target - Swaps.
Sep 12 06:03:39.562087 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 12 06:03:39.562247 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 12 06:03:39.564329 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 12 06:03:39.565888 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 12 06:03:39.567915 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 12 06:03:39.568020 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 12 06:03:39.570019 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 12 06:03:39.570138 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 12 06:03:39.572232 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 12 06:03:39.572349 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 12 06:03:39.574246 systemd[1]: Stopped target paths.target - Path Units.
Sep 12 06:03:39.575882 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 12 06:03:39.579459 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 12 06:03:39.581312 systemd[1]: Stopped target slices.target - Slice Units.
Sep 12 06:03:39.583247 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 12 06:03:39.584988 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 12 06:03:39.585135 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 12 06:03:39.586946 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 12 06:03:39.587062 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 12 06:03:39.589377 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 12 06:03:39.589569 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 12 06:03:39.591373 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 12 06:03:39.591542 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 12 06:03:39.594337 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 12 06:03:39.595592 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 12 06:03:39.595766 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 12 06:03:39.598749 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 12 06:03:39.600466 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 12 06:03:39.600653 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 12 06:03:39.602894 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 12 06:03:39.603096 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 12 06:03:39.610870 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 12 06:03:39.610996 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 12 06:03:39.620480 ignition[1090]: INFO : Ignition 2.22.0
Sep 12 06:03:39.620480 ignition[1090]: INFO : Stage: umount
Sep 12 06:03:39.622087 ignition[1090]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 12 06:03:39.622087 ignition[1090]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 12 06:03:39.622087 ignition[1090]: INFO : umount: umount passed
Sep 12 06:03:39.622087 ignition[1090]: INFO : Ignition finished successfully
Sep 12 06:03:39.623792 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 12 06:03:39.623924 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 12 06:03:39.626247 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 12 06:03:39.626761 systemd[1]: Stopped target network.target - Network.
Sep 12 06:03:39.627731 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 12 06:03:39.627786 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 12 06:03:39.629588 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 12 06:03:39.629638 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 12 06:03:39.630821 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 12 06:03:39.630870 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 12 06:03:39.631132 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 12 06:03:39.631173 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 12 06:03:39.632347 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 12 06:03:39.636816 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 12 06:03:39.644888 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 12 06:03:39.645078 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 12 06:03:39.650697 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Sep 12 06:03:39.651022 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 12 06:03:39.651081 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 12 06:03:39.654871 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Sep 12 06:03:39.657588 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 12 06:03:39.657719 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 12 06:03:39.661299 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Sep 12 06:03:39.661484 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Sep 12 06:03:39.661920 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 12 06:03:39.661956 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 12 06:03:39.666445 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 12 06:03:39.669046 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 12 06:03:39.670096 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 12 06:03:39.672584 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 12 06:03:39.672650 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 12 06:03:39.675730 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 12 06:03:39.675800 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 12 06:03:39.676220 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 12 06:03:39.677628 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Sep 12 06:03:39.700813 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 12 06:03:39.701003 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 12 06:03:39.701863 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 12 06:03:39.701957 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 12 06:03:39.704300 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 12 06:03:39.704339 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 12 06:03:39.706428 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 12 06:03:39.706490 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 12 06:03:39.709669 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 12 06:03:39.709719 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 12 06:03:39.713247 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 12 06:03:39.713310 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 12 06:03:39.716916 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 12 06:03:39.718105 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Sep 12 06:03:39.718166 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Sep 12 06:03:39.721536 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 12 06:03:39.721585 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 12 06:03:39.724888 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 12 06:03:39.724945 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 06:03:39.728681 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 12 06:03:39.738545 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 12 06:03:39.746478 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 12 06:03:39.746604 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 12 06:03:39.786778 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 12 06:03:39.786923 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 12 06:03:39.787574 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 12 06:03:39.787781 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 12 06:03:39.787833 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 12 06:03:39.788920 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 12 06:03:39.811096 systemd[1]: Switching root.
Sep 12 06:03:39.855024 systemd-journald[220]: Journal stopped
Sep 12 06:03:41.050477 systemd-journald[220]: Received SIGTERM from PID 1 (systemd).
Sep 12 06:03:41.050554 kernel: SELinux: policy capability network_peer_controls=1
Sep 12 06:03:41.050569 kernel: SELinux: policy capability open_perms=1
Sep 12 06:03:41.050593 kernel: SELinux: policy capability extended_socket_class=1
Sep 12 06:03:41.050605 kernel: SELinux: policy capability always_check_network=0
Sep 12 06:03:41.050616 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 12 06:03:41.050628 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 12 06:03:41.050639 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 12 06:03:41.050650 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 12 06:03:41.050665 kernel: SELinux: policy capability userspace_initial_context=0
Sep 12 06:03:41.050689 kernel: audit: type=1403 audit(1757657020.252:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 12 06:03:41.050703 systemd[1]: Successfully loaded SELinux policy in 63.125ms.
Sep 12 06:03:41.050724 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 8.779ms.
Sep 12 06:03:41.050738 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 12 06:03:41.050750 systemd[1]: Detected virtualization kvm.
Sep 12 06:03:41.050762 systemd[1]: Detected architecture x86-64.
Sep 12 06:03:41.050774 systemd[1]: Detected first boot.
Sep 12 06:03:41.050786 systemd[1]: Initializing machine ID from VM UUID.
Sep 12 06:03:41.050800 zram_generator::config[1135]: No configuration found.
Sep 12 06:03:41.050815 kernel: Guest personality initialized and is inactive
Sep 12 06:03:41.050826 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Sep 12 06:03:41.050837 kernel: Initialized host personality
Sep 12 06:03:41.050855 kernel: NET: Registered PF_VSOCK protocol family
Sep 12 06:03:41.050867 systemd[1]: Populated /etc with preset unit settings.
Sep 12 06:03:41.050880 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Sep 12 06:03:41.050891 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 12 06:03:41.050906 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 12 06:03:41.050918 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 12 06:03:41.050931 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 12 06:03:41.050944 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 12 06:03:41.050956 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 12 06:03:41.050978 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 12 06:03:41.050990 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 12 06:03:41.051002 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 12 06:03:41.051017 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 12 06:03:41.051029 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 12 06:03:41.051041 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 12 06:03:41.051053 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 12 06:03:41.051066 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 12 06:03:41.051078 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 12 06:03:41.051095 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 12 06:03:41.051107 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 12 06:03:41.051122 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Sep 12 06:03:41.051135 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 12 06:03:41.051147 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 12 06:03:41.051159 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 12 06:03:41.051171 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 12 06:03:41.051184 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 12 06:03:41.051196 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 12 06:03:41.051208 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 12 06:03:41.051220 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 12 06:03:41.051253 systemd[1]: Reached target slices.target - Slice Units.
Sep 12 06:03:41.051266 systemd[1]: Reached target swap.target - Swaps.
Sep 12 06:03:41.051278 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 12 06:03:41.051290 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 12 06:03:41.051302 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Sep 12 06:03:41.051314 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 12 06:03:41.051326 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 12 06:03:41.051338 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 12 06:03:41.051350 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 12 06:03:41.051376 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 12 06:03:41.051395 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 12 06:03:41.051408 systemd[1]: Mounting media.mount - External Media Directory...
Sep 12 06:03:41.051420 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 06:03:41.051432 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 12 06:03:41.051444 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 12 06:03:41.051456 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 12 06:03:41.051469 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 12 06:03:41.051481 systemd[1]: Reached target machines.target - Containers.
Sep 12 06:03:41.051496 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 12 06:03:41.051508 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 12 06:03:41.051520 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 12 06:03:41.051532 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 12 06:03:41.051545 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 12 06:03:41.051558 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 12 06:03:41.051570 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 12 06:03:41.051582 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 12 06:03:41.051596 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 12 06:03:41.051609 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 12 06:03:41.051621 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 12 06:03:41.051633 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 12 06:03:41.051645 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 12 06:03:41.051661 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 12 06:03:41.051674 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 12 06:03:41.051686 kernel: fuse: init (API version 7.41)
Sep 12 06:03:41.051698 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 12 06:03:41.051712 kernel: ACPI: bus type drm_connector registered
Sep 12 06:03:41.051723 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 12 06:03:41.051735 kernel: loop: module loaded
Sep 12 06:03:41.051747 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 12 06:03:41.051780 systemd-journald[1206]: Collecting audit messages is disabled.
Sep 12 06:03:41.051803 systemd-journald[1206]: Journal started
Sep 12 06:03:41.051828 systemd-journald[1206]: Runtime Journal (/run/log/journal/e62396e84a2f4990ae9ebdda04d8d458) is 6M, max 48.6M, 42.5M free.
Sep 12 06:03:40.781062 systemd[1]: Queued start job for default target multi-user.target.
Sep 12 06:03:40.805337 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Sep 12 06:03:40.805838 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 12 06:03:41.054486 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 12 06:03:41.057379 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Sep 12 06:03:41.064388 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 12 06:03:41.066419 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 12 06:03:41.067419 systemd[1]: Stopped verity-setup.service.
Sep 12 06:03:41.069383 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 06:03:41.074399 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 12 06:03:41.075803 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 12 06:03:41.077298 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 12 06:03:41.078540 systemd[1]: Mounted media.mount - External Media Directory.
Sep 12 06:03:41.079605 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 12 06:03:41.080815 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 12 06:03:41.082008 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 12 06:03:41.083305 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 12 06:03:41.084799 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 12 06:03:41.086312 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 12 06:03:41.086563 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 12 06:03:41.088071 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 12 06:03:41.088311 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 12 06:03:41.089719 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 12 06:03:41.089945 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 12 06:03:41.091281 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 12 06:03:41.091522 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 12 06:03:41.092999 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 12 06:03:41.093225 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 12 06:03:41.094596 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 12 06:03:41.094815 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 12 06:03:41.096219 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 12 06:03:41.097782 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 12 06:03:41.099330 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 12 06:03:41.100909 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Sep 12 06:03:41.116117 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 12 06:03:41.118705 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 12 06:03:41.120848 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 12 06:03:41.121959 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 12 06:03:41.122056 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 12 06:03:41.124011 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Sep 12 06:03:41.135470 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 12 06:03:41.136912 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 12 06:03:41.139482 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 12 06:03:41.142134 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 12 06:03:41.143527 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 12 06:03:41.144845 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 12 06:03:41.146012 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 12 06:03:41.150143 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 12 06:03:41.155651 systemd-journald[1206]: Time spent on flushing to /var/log/journal/e62396e84a2f4990ae9ebdda04d8d458 is 29.497ms for 978 entries.
Sep 12 06:03:41.155651 systemd-journald[1206]: System Journal (/var/log/journal/e62396e84a2f4990ae9ebdda04d8d458) is 8M, max 195.6M, 187.6M free.
Sep 12 06:03:41.210968 systemd-journald[1206]: Received client request to flush runtime journal.
Sep 12 06:03:41.211026 kernel: loop0: detected capacity change from 0 to 128016
Sep 12 06:03:41.153887 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 12 06:03:41.165745 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 12 06:03:41.171431 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 12 06:03:41.174513 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 12 06:03:41.175802 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 12 06:03:41.178746 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 12 06:03:41.184663 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 12 06:03:41.189417 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Sep 12 06:03:41.200757 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 12 06:03:41.213867 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 12 06:03:41.218384 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 12 06:03:41.225395 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 12 06:03:41.226944 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Sep 12 06:03:41.230601 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 12 06:03:41.240380 kernel: loop1: detected capacity change from 0 to 110984
Sep 12 06:03:41.258073 systemd-tmpfiles[1272]: ACLs are not supported, ignoring.
Sep 12 06:03:41.258092 systemd-tmpfiles[1272]: ACLs are not supported, ignoring.
Sep 12 06:03:41.264408 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 12 06:03:41.268379 kernel: loop2: detected capacity change from 0 to 229808
Sep 12 06:03:41.302381 kernel: loop3: detected capacity change from 0 to 128016
Sep 12 06:03:41.310400 kernel: loop4: detected capacity change from 0 to 110984
Sep 12 06:03:41.321450 kernel: loop5: detected capacity change from 0 to 229808
Sep 12 06:03:41.331019 (sd-merge)[1279]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Sep 12 06:03:41.331654 (sd-merge)[1279]: Merged extensions into '/usr'.
Sep 12 06:03:41.336048 systemd[1]: Reload requested from client PID 1255 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 12 06:03:41.336075 systemd[1]: Reloading...
Sep 12 06:03:41.390428 zram_generator::config[1304]: No configuration found.
Sep 12 06:03:41.490430 ldconfig[1250]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 12 06:03:41.590022 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 12 06:03:41.590657 systemd[1]: Reloading finished in 254 ms.
Sep 12 06:03:41.626309 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 12 06:03:41.627889 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 12 06:03:41.646899 systemd[1]: Starting ensure-sysext.service...
Sep 12 06:03:41.648903 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 12 06:03:41.659145 systemd[1]: Reload requested from client PID 1342 ('systemctl') (unit ensure-sysext.service)...
Sep 12 06:03:41.659161 systemd[1]: Reloading...
Sep 12 06:03:41.678780 systemd-tmpfiles[1343]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Sep 12 06:03:41.678820 systemd-tmpfiles[1343]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Sep 12 06:03:41.679138 systemd-tmpfiles[1343]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 12 06:03:41.679448 systemd-tmpfiles[1343]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 12 06:03:41.680392 systemd-tmpfiles[1343]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 12 06:03:41.680667 systemd-tmpfiles[1343]: ACLs are not supported, ignoring. Sep 12 06:03:41.680736 systemd-tmpfiles[1343]: ACLs are not supported, ignoring. Sep 12 06:03:41.685282 systemd-tmpfiles[1343]: Detected autofs mount point /boot during canonicalization of boot. Sep 12 06:03:41.685385 systemd-tmpfiles[1343]: Skipping /boot Sep 12 06:03:41.698670 systemd-tmpfiles[1343]: Detected autofs mount point /boot during canonicalization of boot. Sep 12 06:03:41.698743 systemd-tmpfiles[1343]: Skipping /boot Sep 12 06:03:41.708397 zram_generator::config[1368]: No configuration found. Sep 12 06:03:41.893948 systemd[1]: Reloading finished in 234 ms. Sep 12 06:03:41.922328 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 12 06:03:41.949015 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 12 06:03:41.958050 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 12 06:03:41.960638 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 12 06:03:41.974275 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 12 06:03:41.978105 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 12 06:03:41.982086 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 12 06:03:41.984941 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... 
Sep 12 06:03:41.991096 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 06:03:41.991337 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 12 06:03:41.999136 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 12 06:03:42.002067 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 12 06:03:42.006740 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 12 06:03:42.007937 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 12 06:03:42.008114 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 12 06:03:42.019433 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep 12 06:03:42.020543 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 06:03:42.022469 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep 12 06:03:42.024865 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 12 06:03:42.025325 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 12 06:03:42.027789 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 12 06:03:42.028169 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 12 06:03:42.030067 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 12 06:03:42.030453 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 12 06:03:42.040678 systemd-udevd[1414]: Using default interface naming scheme 'v255'.
Sep 12 06:03:42.043548 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 06:03:42.043746 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 12 06:03:42.045674 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 12 06:03:42.048560 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 12 06:03:42.051674 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 12 06:03:42.052919 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 12 06:03:42.053030 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 12 06:03:42.056399 augenrules[1444]: No rules
Sep 12 06:03:42.060655 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Sep 12 06:03:42.061772 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 06:03:42.063378 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 12 06:03:42.064413 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 12 06:03:42.065805 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 12 06:03:42.068554 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Sep 12 06:03:42.070406 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Sep 12 06:03:42.071590 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 12 06:03:42.071808 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 12 06:03:42.074711 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 12 06:03:42.081193 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 12 06:03:42.083199 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 12 06:03:42.084421 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 12 06:03:42.113305 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 06:03:42.118624 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 12 06:03:42.119873 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 12 06:03:42.122533 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 12 06:03:42.126643 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 12 06:03:42.129617 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 12 06:03:42.132522 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 12 06:03:42.134052 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 12 06:03:42.134162 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 12 06:03:42.147596 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 12 06:03:42.148673 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 12 06:03:42.148780 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 06:03:42.153071 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep 12 06:03:42.156185 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Sep 12 06:03:42.157855 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 12 06:03:42.158101 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 12 06:03:42.160921 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 12 06:03:42.161159 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 12 06:03:42.163025 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 12 06:03:42.164311 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 12 06:03:42.166092 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 12 06:03:42.166338 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 12 06:03:42.178417 systemd[1]: Finished ensure-sysext.service.
Sep 12 06:03:42.186633 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Sep 12 06:03:42.195772 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 12 06:03:42.197154 augenrules[1485]: /sbin/augenrules: No change
Sep 12 06:03:42.211906 augenrules[1526]: No rules
Sep 12 06:03:42.214907 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 12 06:03:42.215189 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 12 06:03:42.219392 kernel: mousedev: PS/2 mouse device common for all mice
Sep 12 06:03:42.221047 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Sep 12 06:03:42.222228 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 12 06:03:42.222289 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 12 06:03:42.224098 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Sep 12 06:03:42.241977 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Sep 12 06:03:42.244519 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Sep 12 06:03:42.248415 kernel: ACPI: button: Power Button [PWRF]
Sep 12 06:03:42.263391 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Sep 12 06:03:42.266372 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Sep 12 06:03:42.338423 systemd-networkd[1495]: lo: Link UP
Sep 12 06:03:42.338435 systemd-networkd[1495]: lo: Gained carrier
Sep 12 06:03:42.340103 systemd-networkd[1495]: Enumeration completed
Sep 12 06:03:42.340515 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 12 06:03:42.340521 systemd-networkd[1495]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 12 06:03:42.340527 systemd-networkd[1495]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 12 06:03:42.341284 systemd-networkd[1495]: eth0: Link UP
Sep 12 06:03:42.341487 systemd-networkd[1495]: eth0: Gained carrier
Sep 12 06:03:42.341501 systemd-networkd[1495]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 12 06:03:42.343588 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Sep 12 06:03:42.345959 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Sep 12 06:03:42.348248 systemd-resolved[1412]: Positive Trust Anchors:
Sep 12 06:03:42.348269 systemd-resolved[1412]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 12 06:03:42.348300 systemd-resolved[1412]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 12 06:03:42.351956 systemd-resolved[1412]: Defaulting to hostname 'linux'.
Sep 12 06:03:42.353598 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 12 06:03:42.354976 systemd[1]: Reached target network.target - Network.
Sep 12 06:03:42.355961 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 12 06:03:42.358480 systemd-networkd[1495]: eth0: DHCPv4 address 10.0.0.150/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 12 06:03:42.360120 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 06:03:42.373635 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Sep 12 06:03:42.430554 kernel: kvm_amd: TSC scaling supported
Sep 12 06:03:42.430641 kernel: kvm_amd: Nested Virtualization enabled
Sep 12 06:03:42.430665 kernel: kvm_amd: Nested Paging enabled
Sep 12 06:03:42.431542 kernel: kvm_amd: LBR virtualization supported
Sep 12 06:03:42.431566 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Sep 12 06:03:42.432701 kernel: kvm_amd: Virtual GIF supported
Sep 12 06:03:42.440582 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Sep 12 06:03:42.440940 systemd[1]: Reached target time-set.target - System Time Set.
Sep 12 06:03:43.034782 systemd-resolved[1412]: Clock change detected. Flushing caches.
Sep 12 06:03:43.034871 systemd-timesyncd[1535]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Sep 12 06:03:43.034924 systemd-timesyncd[1535]: Initial clock synchronization to Fri 2025-09-12 06:03:43.034657 UTC.
Sep 12 06:03:43.089720 kernel: EDAC MC: Ver: 3.0.0
Sep 12 06:03:43.098481 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 06:03:43.099995 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 12 06:03:43.101373 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Sep 12 06:03:43.102597 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Sep 12 06:03:43.103836 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Sep 12 06:03:43.105131 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Sep 12 06:03:43.106341 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Sep 12 06:03:43.107569 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Sep 12 06:03:43.108752 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 12 06:03:43.108783 systemd[1]: Reached target paths.target - Path Units.
Sep 12 06:03:43.109672 systemd[1]: Reached target timers.target - Timer Units.
Sep 12 06:03:43.111425 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Sep 12 06:03:43.114156 systemd[1]: Starting docker.socket - Docker Socket for the API...
Sep 12 06:03:43.117337 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Sep 12 06:03:43.118740 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Sep 12 06:03:43.120048 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Sep 12 06:03:43.128191 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Sep 12 06:03:43.129554 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Sep 12 06:03:43.131283 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Sep 12 06:03:43.133081 systemd[1]: Reached target sockets.target - Socket Units.
Sep 12 06:03:43.134034 systemd[1]: Reached target basic.target - Basic System.
Sep 12 06:03:43.134965 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Sep 12 06:03:43.134994 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Sep 12 06:03:43.136010 systemd[1]: Starting containerd.service - containerd container runtime...
Sep 12 06:03:43.138055 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Sep 12 06:03:43.140038 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Sep 12 06:03:43.142128 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Sep 12 06:03:43.144159 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Sep 12 06:03:43.145665 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Sep 12 06:03:43.147837 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Sep 12 06:03:43.152000 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Sep 12 06:03:43.155731 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Sep 12 06:03:43.157967 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Sep 12 06:03:43.159029 jq[1570]: false
Sep 12 06:03:43.160322 oslogin_cache_refresh[1572]: Refreshing passwd entry cache
Sep 12 06:03:43.160763 google_oslogin_nss_cache[1572]: oslogin_cache_refresh[1572]: Refreshing passwd entry cache
Sep 12 06:03:43.162279 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Sep 12 06:03:43.166505 systemd[1]: Starting systemd-logind.service - User Login Management...
Sep 12 06:03:43.169327 oslogin_cache_refresh[1572]: Failure getting users, quitting
Sep 12 06:03:43.170911 google_oslogin_nss_cache[1572]: oslogin_cache_refresh[1572]: Failure getting users, quitting
Sep 12 06:03:43.170911 google_oslogin_nss_cache[1572]: oslogin_cache_refresh[1572]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Sep 12 06:03:43.170911 google_oslogin_nss_cache[1572]: oslogin_cache_refresh[1572]: Refreshing group entry cache
Sep 12 06:03:43.170981 extend-filesystems[1571]: Found /dev/vda6
Sep 12 06:03:43.168412 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Sep 12 06:03:43.169342 oslogin_cache_refresh[1572]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Sep 12 06:03:43.168904 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Sep 12 06:03:43.169390 oslogin_cache_refresh[1572]: Refreshing group entry cache
Sep 12 06:03:43.171862 systemd[1]: Starting update-engine.service - Update Engine...
Sep 12 06:03:43.177684 extend-filesystems[1571]: Found /dev/vda9
Sep 12 06:03:43.178954 google_oslogin_nss_cache[1572]: oslogin_cache_refresh[1572]: Failure getting groups, quitting
Sep 12 06:03:43.178954 google_oslogin_nss_cache[1572]: oslogin_cache_refresh[1572]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Sep 12 06:03:43.178658 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Sep 12 06:03:43.178383 oslogin_cache_refresh[1572]: Failure getting groups, quitting
Sep 12 06:03:43.178395 oslogin_cache_refresh[1572]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Sep 12 06:03:43.181714 extend-filesystems[1571]: Checking size of /dev/vda9
Sep 12 06:03:43.187764 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Sep 12 06:03:43.189765 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Sep 12 06:03:43.190040 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Sep 12 06:03:43.193337 jq[1589]: true
Sep 12 06:03:43.190367 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Sep 12 06:03:43.190649 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Sep 12 06:03:43.192387 systemd[1]: motdgen.service: Deactivated successfully.
Sep 12 06:03:43.192669 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Sep 12 06:03:43.195596 extend-filesystems[1571]: Resized partition /dev/vda9
Sep 12 06:03:43.197311 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Sep 12 06:03:43.197587 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Sep 12 06:03:43.201679 extend-filesystems[1599]: resize2fs 1.47.3 (8-Jul-2025)
Sep 12 06:03:43.204148 update_engine[1582]: I20250912 06:03:43.201303 1582 main.cc:92] Flatcar Update Engine starting
Sep 12 06:03:43.206793 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Sep 12 06:03:43.224102 jq[1600]: true
Sep 12 06:03:43.232882 tar[1598]: linux-amd64/LICENSE
Sep 12 06:03:43.234986 (ntainerd)[1602]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Sep 12 06:03:43.237746 tar[1598]: linux-amd64/helm
Sep 12 06:03:43.242793 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Sep 12 06:03:43.268062 extend-filesystems[1599]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Sep 12 06:03:43.268062 extend-filesystems[1599]: old_desc_blocks = 1, new_desc_blocks = 1
Sep 12 06:03:43.268062 extend-filesystems[1599]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Sep 12 06:03:43.266803 systemd[1]: extend-filesystems.service: Deactivated successfully.
Sep 12 06:03:43.267664 dbus-daemon[1568]: [system] SELinux support is enabled
Sep 12 06:03:43.275360 update_engine[1582]: I20250912 06:03:43.274573 1582 update_check_scheduler.cc:74] Next update check in 11m21s
Sep 12 06:03:43.275390 extend-filesystems[1571]: Resized filesystem in /dev/vda9
Sep 12 06:03:43.269288 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Sep 12 06:03:43.270976 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Sep 12 06:03:43.283045 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Sep 12 06:03:43.283075 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Sep 12 06:03:43.283808 systemd-logind[1580]: Watching system buttons on /dev/input/event2 (Power Button)
Sep 12 06:03:43.283830 systemd-logind[1580]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Sep 12 06:03:43.285695 systemd-logind[1580]: New seat seat0.
Sep 12 06:03:43.285719 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Sep 12 06:03:43.285734 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Sep 12 06:03:43.287217 systemd[1]: Started update-engine.service - Update Engine.
Sep 12 06:03:43.288352 systemd[1]: Started systemd-logind.service - User Login Management.
Sep 12 06:03:43.292116 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Sep 12 06:03:43.294702 bash[1631]: Updated "/home/core/.ssh/authorized_keys"
Sep 12 06:03:43.296198 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Sep 12 06:03:43.298161 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Sep 12 06:03:43.342885 locksmithd[1632]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Sep 12 06:03:43.437214 containerd[1602]: time="2025-09-12T06:03:43Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Sep 12 06:03:43.438807 containerd[1602]: time="2025-09-12T06:03:43.438217254Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5
Sep 12 06:03:43.447851 containerd[1602]: time="2025-09-12T06:03:43.447821889Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="9.497µs"
Sep 12 06:03:43.447915 containerd[1602]: time="2025-09-12T06:03:43.447900256Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Sep 12 06:03:43.447967 containerd[1602]: time="2025-09-12T06:03:43.447955430Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Sep 12 06:03:43.448169 containerd[1602]: time="2025-09-12T06:03:43.448152860Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Sep 12 06:03:43.448237 containerd[1602]: time="2025-09-12T06:03:43.448224745Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Sep 12 06:03:43.448298 containerd[1602]: time="2025-09-12T06:03:43.448286381Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Sep 12 06:03:43.448406 containerd[1602]: time="2025-09-12T06:03:43.448390255Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Sep 12 06:03:43.448453 containerd[1602]: time="2025-09-12T06:03:43.448442574Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Sep 12 06:03:43.448751 containerd[1602]: time="2025-09-12T06:03:43.448732608Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Sep 12 06:03:43.448805 containerd[1602]: time="2025-09-12T06:03:43.448793572Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Sep 12 06:03:43.448867 containerd[1602]: time="2025-09-12T06:03:43.448853675Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Sep 12 06:03:43.448911 containerd[1602]: time="2025-09-12T06:03:43.448900903Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Sep 12 06:03:43.449049 containerd[1602]: time="2025-09-12T06:03:43.449034754Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Sep 12 06:03:43.449343 containerd[1602]: time="2025-09-12T06:03:43.449324959Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Sep 12 06:03:43.449418 containerd[1602]: time="2025-09-12T06:03:43.449403917Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Sep 12 06:03:43.449471 containerd[1602]: time="2025-09-12T06:03:43.449451947Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Sep 12 06:03:43.449544 containerd[1602]: time="2025-09-12T06:03:43.449531085Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Sep 12 06:03:43.449871 containerd[1602]: time="2025-09-12T06:03:43.449846096Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Sep 12 06:03:43.449990 containerd[1602]: time="2025-09-12T06:03:43.449975399Z" level=info msg="metadata content store policy set" policy=shared
Sep 12 06:03:43.455908 containerd[1602]: time="2025-09-12T06:03:43.455865978Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Sep 12 06:03:43.455961 containerd[1602]: time="2025-09-12T06:03:43.455911092Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Sep 12 06:03:43.455961 containerd[1602]: time="2025-09-12T06:03:43.455926020Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Sep 12 06:03:43.455961 containerd[1602]: time="2025-09-12T06:03:43.455937953Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Sep 12 06:03:43.455961 containerd[1602]: time="2025-09-12T06:03:43.455950376Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Sep 12 06:03:43.455961 containerd[1602]: time="2025-09-12T06:03:43.455960445Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Sep 12 06:03:43.456060 containerd[1602]: time="2025-09-12T06:03:43.455975724Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Sep 12 06:03:43.456060 containerd[1602]: time="2025-09-12T06:03:43.455995611Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Sep 12 06:03:43.456060 containerd[1602]: time="2025-09-12T06:03:43.456020558Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Sep 12 06:03:43.456060 containerd[1602]: time="2025-09-12T06:03:43.456035045Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Sep 12 06:03:43.456060 containerd[1602]: time="2025-09-12T06:03:43.456044933Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Sep 12 06:03:43.456060 containerd[1602]: time="2025-09-12T06:03:43.456058609Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Sep 12 06:03:43.456191 containerd[1602]: time="2025-09-12T06:03:43.456171811Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Sep 12 06:03:43.456214 containerd[1602]: time="2025-09-12T06:03:43.456193172Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Sep 12 06:03:43.456214 containerd[1602]: time="2025-09-12T06:03:43.456208761Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Sep 12 06:03:43.456255 containerd[1602]: time="2025-09-12T06:03:43.456218609Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Sep 12 06:03:43.456255 containerd[1602]: time="2025-09-12T06:03:43.456229399Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Sep 12 06:03:43.456255 containerd[1602]: time="2025-09-12T06:03:43.456239368Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Sep 12 06:03:43.456255 containerd[1602]: time="2025-09-12T06:03:43.456250880Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Sep 12 06:03:43.456334 containerd[1602]: time="2025-09-12T06:03:43.456261229Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Sep 12 06:03:43.456334 containerd[1602]: time="2025-09-12T06:03:43.456272290Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Sep 12 06:03:43.456334 containerd[1602]: time="2025-09-12T06:03:43.456282870Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Sep 12 06:03:43.456609 containerd[1602]: time="2025-09-12T06:03:43.456582131Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Sep 12 06:03:43.456723 containerd[1602]: time="2025-09-12T06:03:43.456698770Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Sep 12 06:03:43.456982 containerd[1602]: time="2025-09-12T06:03:43.456925605Z" level=info msg="Start snapshots syncer"
Sep 12 06:03:43.457159 containerd[1602]: time="2025-09-12T06:03:43.457142683Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Sep 12 06:03:43.457712 containerd[1602]: time="2025-09-12T06:03:43.457668539Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Sep 12 06:03:43.457831 containerd[1602]: time="2025-09-12T06:03:43.457732619Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Sep 12 06:03:43.459203 containerd[1602]: time="2025-09-12T06:03:43.459146561Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Sep 12 06:03:43.459346 containerd[1602]: time="2025-09-12T06:03:43.459292174Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Sep 12 06:03:43.459346 containerd[1602]: time="2025-09-12T06:03:43.459325557Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Sep 12 06:03:43.459346 containerd[1602]: time="2025-09-12T06:03:43.459341627Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Sep 12 06:03:43.459405 containerd[1602]: time="2025-09-12T06:03:43.459353149Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Sep 12 06:03:43.459405 containerd[1602]: time="2025-09-12T06:03:43.459368938Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Sep 12 06:03:43.459405 containerd[1602]: time="2025-09-12T06:03:43.459382965Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Sep 12 06:03:43.459405 containerd[1602]: time="2025-09-12T06:03:43.459396390Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Sep 12 06:03:43.459490 containerd[1602]: time="2025-09-12T06:03:43.459426576Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Sep 12 06:03:43.459490 containerd[1602]: time="2025-09-12T06:03:43.459441545Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Sep 12 06:03:43.459490 containerd[1602]: time="2025-09-12T06:03:43.459454539Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Sep 12 06:03:43.459551 containerd[1602]: time="2025-09-12T06:03:43.459515764Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Sep 12 06:03:43.459551 containerd[1602]: time="2025-09-12T06:03:43.459533918Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Sep 12 06:03:43.459551 containerd[1602]: time="2025-09-12T06:03:43.459542865Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Sep 12 06:03:43.459679 containerd[1602]: time="2025-09-12T06:03:43.459555378Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Sep 12 06:03:43.459679 containerd[1602]: time="2025-09-12T06:03:43.459567240Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Sep 12 06:03:43.459679 containerd[1602]: time="2025-09-12T06:03:43.459580335Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Sep 12 06:03:43.459679 containerd[1602]: time="2025-09-12T06:03:43.459592197Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Sep 12 06:03:43.459679 containerd[1602]: time="2025-09-12T06:03:43.459616002Z" level=info msg="runtime interface created"
Sep 12 06:03:43.459679 containerd[1602]: time="2025-09-12T06:03:43.459622454Z" level=info msg="created NRI interface"
Sep 12 06:03:43.459679 containerd[1602]: time="2025-09-12T06:03:43.459656458Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Sep 12 06:03:43.459679 containerd[1602]: time="2025-09-12T06:03:43.459676956Z" level=info msg="Connect containerd service"
Sep 12 06:03:43.459835 containerd[1602]: time="2025-09-12T06:03:43.459719125Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Sep 12 06:03:43.460854
containerd[1602]: time="2025-09-12T06:03:43.460816013Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 12 06:03:43.616804 sshd_keygen[1595]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 12 06:03:43.646876 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 12 06:03:43.652030 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 12 06:03:43.661583 containerd[1602]: time="2025-09-12T06:03:43.661552307Z" level=info msg="Start subscribing containerd event" Sep 12 06:03:43.661937 containerd[1602]: time="2025-09-12T06:03:43.661879100Z" level=info msg="Start recovering state" Sep 12 06:03:43.662059 containerd[1602]: time="2025-09-12T06:03:43.661978566Z" level=info msg="Start event monitor" Sep 12 06:03:43.662059 containerd[1602]: time="2025-09-12T06:03:43.661991751Z" level=info msg="Start cni network conf syncer for default" Sep 12 06:03:43.662059 containerd[1602]: time="2025-09-12T06:03:43.661998423Z" level=info msg="Start streaming server" Sep 12 06:03:43.662059 containerd[1602]: time="2025-09-12T06:03:43.662007681Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Sep 12 06:03:43.662059 containerd[1602]: time="2025-09-12T06:03:43.662014984Z" level=info msg="runtime interface starting up..." Sep 12 06:03:43.662059 containerd[1602]: time="2025-09-12T06:03:43.662020715Z" level=info msg="starting plugins..." Sep 12 06:03:43.662059 containerd[1602]: time="2025-09-12T06:03:43.662034491Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Sep 12 06:03:43.664679 containerd[1602]: time="2025-09-12T06:03:43.661839135Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 12 06:03:43.664679 containerd[1602]: time="2025-09-12T06:03:43.662166729Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Sep 12 06:03:43.664679 containerd[1602]: time="2025-09-12T06:03:43.662227884Z" level=info msg="containerd successfully booted in 0.225608s" Sep 12 06:03:43.662415 systemd[1]: Started containerd.service - containerd container runtime. Sep 12 06:03:43.672298 systemd[1]: issuegen.service: Deactivated successfully. Sep 12 06:03:43.672595 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 12 06:03:43.675348 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 12 06:03:43.686658 tar[1598]: linux-amd64/README.md Sep 12 06:03:43.698555 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 12 06:03:43.701509 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 12 06:03:43.703700 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Sep 12 06:03:43.704934 systemd[1]: Reached target getty.target - Login Prompts. Sep 12 06:03:43.705751 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 12 06:03:44.601895 systemd-networkd[1495]: eth0: Gained IPv6LL Sep 12 06:03:44.605993 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 12 06:03:44.607948 systemd[1]: Reached target network-online.target - Network is Online. Sep 12 06:03:44.610899 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Sep 12 06:03:44.614698 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 06:03:44.617164 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 12 06:03:44.658577 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 12 06:03:44.676419 systemd[1]: coreos-metadata.service: Deactivated successfully. Sep 12 06:03:44.676760 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. 
Sep 12 06:03:44.678414 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Sep 12 06:03:45.811416 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 06:03:45.813045 systemd[1]: Reached target multi-user.target - Multi-User System.
Sep 12 06:03:45.814550 systemd[1]: Startup finished in 3.040s (kernel) + 5.620s (initrd) + 5.031s (userspace) = 13.692s.
Sep 12 06:03:45.841001 (kubelet)[1702]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 12 06:03:46.489492 kubelet[1702]: E0912 06:03:46.489363 1702 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 12 06:03:46.493787 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 12 06:03:46.493984 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 12 06:03:46.494455 systemd[1]: kubelet.service: Consumed 1.668s CPU time, 266.1M memory peak.
Sep 12 06:03:48.240078 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Sep 12 06:03:48.241539 systemd[1]: Started sshd@0-10.0.0.150:22-10.0.0.1:46036.service - OpenSSH per-connection server daemon (10.0.0.1:46036).
Sep 12 06:03:48.304449 sshd[1715]: Accepted publickey for core from 10.0.0.1 port 46036 ssh2: RSA SHA256:1ltQeNwsGZzWPatjm39NOSOhM7BVT7DhGn6/LONO9qE
Sep 12 06:03:48.305869 sshd-session[1715]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 06:03:48.312498 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Sep 12 06:03:48.313723 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Sep 12 06:03:48.319873 systemd-logind[1580]: New session 1 of user core.
Sep 12 06:03:48.338665 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Sep 12 06:03:48.341531 systemd[1]: Starting user@500.service - User Manager for UID 500...
Sep 12 06:03:48.355977 (systemd)[1720]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Sep 12 06:03:48.358156 systemd-logind[1580]: New session c1 of user core.
Sep 12 06:03:48.504788 systemd[1720]: Queued start job for default target default.target.
Sep 12 06:03:48.520902 systemd[1720]: Created slice app.slice - User Application Slice.
Sep 12 06:03:48.520927 systemd[1720]: Reached target paths.target - Paths.
Sep 12 06:03:48.520968 systemd[1720]: Reached target timers.target - Timers.
Sep 12 06:03:48.522460 systemd[1720]: Starting dbus.socket - D-Bus User Message Bus Socket...
Sep 12 06:03:48.533901 systemd[1720]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Sep 12 06:03:48.534043 systemd[1720]: Reached target sockets.target - Sockets.
Sep 12 06:03:48.534086 systemd[1720]: Reached target basic.target - Basic System.
Sep 12 06:03:48.534126 systemd[1720]: Reached target default.target - Main User Target.
Sep 12 06:03:48.534157 systemd[1720]: Startup finished in 169ms.
Sep 12 06:03:48.534350 systemd[1]: Started user@500.service - User Manager for UID 500.
Sep 12 06:03:48.535922 systemd[1]: Started session-1.scope - Session 1 of User core.
Sep 12 06:03:48.605424 systemd[1]: Started sshd@1-10.0.0.150:22-10.0.0.1:46046.service - OpenSSH per-connection server daemon (10.0.0.1:46046).
Sep 12 06:03:48.664155 sshd[1731]: Accepted publickey for core from 10.0.0.1 port 46046 ssh2: RSA SHA256:1ltQeNwsGZzWPatjm39NOSOhM7BVT7DhGn6/LONO9qE
Sep 12 06:03:48.666928 sshd-session[1731]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 06:03:48.677908 systemd-logind[1580]: New session 2 of user core.
Sep 12 06:03:48.690868 systemd[1]: Started session-2.scope - Session 2 of User core.
Sep 12 06:03:48.750205 sshd[1734]: Connection closed by 10.0.0.1 port 46046
Sep 12 06:03:48.751156 sshd-session[1731]: pam_unix(sshd:session): session closed for user core
Sep 12 06:03:48.760215 systemd[1]: sshd@1-10.0.0.150:22-10.0.0.1:46046.service: Deactivated successfully.
Sep 12 06:03:48.763196 systemd[1]: session-2.scope: Deactivated successfully.
Sep 12 06:03:48.763926 systemd-logind[1580]: Session 2 logged out. Waiting for processes to exit.
Sep 12 06:03:48.766831 systemd[1]: Started sshd@2-10.0.0.150:22-10.0.0.1:46062.service - OpenSSH per-connection server daemon (10.0.0.1:46062).
Sep 12 06:03:48.767471 systemd-logind[1580]: Removed session 2.
Sep 12 06:03:48.817755 sshd[1740]: Accepted publickey for core from 10.0.0.1 port 46062 ssh2: RSA SHA256:1ltQeNwsGZzWPatjm39NOSOhM7BVT7DhGn6/LONO9qE
Sep 12 06:03:48.818998 sshd-session[1740]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 06:03:48.823696 systemd-logind[1580]: New session 3 of user core.
Sep 12 06:03:48.838770 systemd[1]: Started session-3.scope - Session 3 of User core.
Sep 12 06:03:48.888058 sshd[1743]: Connection closed by 10.0.0.1 port 46062
Sep 12 06:03:48.888403 sshd-session[1740]: pam_unix(sshd:session): session closed for user core
Sep 12 06:03:48.897229 systemd[1]: sshd@2-10.0.0.150:22-10.0.0.1:46062.service: Deactivated successfully.
Sep 12 06:03:48.899060 systemd[1]: session-3.scope: Deactivated successfully.
Sep 12 06:03:48.899782 systemd-logind[1580]: Session 3 logged out. Waiting for processes to exit.
Sep 12 06:03:48.902418 systemd[1]: Started sshd@3-10.0.0.150:22-10.0.0.1:46074.service - OpenSSH per-connection server daemon (10.0.0.1:46074).
Sep 12 06:03:48.902936 systemd-logind[1580]: Removed session 3.
Sep 12 06:03:48.954907 sshd[1749]: Accepted publickey for core from 10.0.0.1 port 46074 ssh2: RSA SHA256:1ltQeNwsGZzWPatjm39NOSOhM7BVT7DhGn6/LONO9qE
Sep 12 06:03:48.956214 sshd-session[1749]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 06:03:48.960891 systemd-logind[1580]: New session 4 of user core.
Sep 12 06:03:48.974765 systemd[1]: Started session-4.scope - Session 4 of User core.
Sep 12 06:03:49.028009 sshd[1752]: Connection closed by 10.0.0.1 port 46074
Sep 12 06:03:49.028518 sshd-session[1749]: pam_unix(sshd:session): session closed for user core
Sep 12 06:03:49.041245 systemd[1]: sshd@3-10.0.0.150:22-10.0.0.1:46074.service: Deactivated successfully.
Sep 12 06:03:49.043092 systemd[1]: session-4.scope: Deactivated successfully.
Sep 12 06:03:49.043862 systemd-logind[1580]: Session 4 logged out. Waiting for processes to exit.
Sep 12 06:03:49.046847 systemd[1]: Started sshd@4-10.0.0.150:22-10.0.0.1:46086.service - OpenSSH per-connection server daemon (10.0.0.1:46086).
Sep 12 06:03:49.047619 systemd-logind[1580]: Removed session 4.
Sep 12 06:03:49.097467 sshd[1758]: Accepted publickey for core from 10.0.0.1 port 46086 ssh2: RSA SHA256:1ltQeNwsGZzWPatjm39NOSOhM7BVT7DhGn6/LONO9qE
Sep 12 06:03:49.098627 sshd-session[1758]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 06:03:49.103055 systemd-logind[1580]: New session 5 of user core.
Sep 12 06:03:49.124808 systemd[1]: Started session-5.scope - Session 5 of User core.
Sep 12 06:03:49.182896 sudo[1762]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Sep 12 06:03:49.183213 sudo[1762]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 12 06:03:49.203622 sudo[1762]: pam_unix(sudo:session): session closed for user root
Sep 12 06:03:49.205310 sshd[1761]: Connection closed by 10.0.0.1 port 46086
Sep 12 06:03:49.205754 sshd-session[1758]: pam_unix(sshd:session): session closed for user core
Sep 12 06:03:49.219224 systemd[1]: sshd@4-10.0.0.150:22-10.0.0.1:46086.service: Deactivated successfully.
Sep 12 06:03:49.221082 systemd[1]: session-5.scope: Deactivated successfully.
Sep 12 06:03:49.221780 systemd-logind[1580]: Session 5 logged out. Waiting for processes to exit.
Sep 12 06:03:49.224526 systemd[1]: Started sshd@5-10.0.0.150:22-10.0.0.1:46100.service - OpenSSH per-connection server daemon (10.0.0.1:46100).
Sep 12 06:03:49.225129 systemd-logind[1580]: Removed session 5.
Sep 12 06:03:49.278750 sshd[1768]: Accepted publickey for core from 10.0.0.1 port 46100 ssh2: RSA SHA256:1ltQeNwsGZzWPatjm39NOSOhM7BVT7DhGn6/LONO9qE
Sep 12 06:03:49.279927 sshd-session[1768]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 06:03:49.284333 systemd-logind[1580]: New session 6 of user core.
Sep 12 06:03:49.293771 systemd[1]: Started session-6.scope - Session 6 of User core.
Sep 12 06:03:49.347201 sudo[1773]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Sep 12 06:03:49.347535 sudo[1773]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 12 06:03:49.481870 sudo[1773]: pam_unix(sudo:session): session closed for user root
Sep 12 06:03:49.488481 sudo[1772]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Sep 12 06:03:49.488811 sudo[1772]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 12 06:03:49.499529 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 12 06:03:49.544739 augenrules[1795]: No rules
Sep 12 06:03:49.546483 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 12 06:03:49.546823 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 12 06:03:49.547906 sudo[1772]: pam_unix(sudo:session): session closed for user root
Sep 12 06:03:49.549498 sshd[1771]: Connection closed by 10.0.0.1 port 46100
Sep 12 06:03:49.549856 sshd-session[1768]: pam_unix(sshd:session): session closed for user core
Sep 12 06:03:49.558186 systemd[1]: sshd@5-10.0.0.150:22-10.0.0.1:46100.service: Deactivated successfully.
Sep 12 06:03:49.560040 systemd[1]: session-6.scope: Deactivated successfully.
Sep 12 06:03:49.560863 systemd-logind[1580]: Session 6 logged out. Waiting for processes to exit.
Sep 12 06:03:49.563481 systemd[1]: Started sshd@6-10.0.0.150:22-10.0.0.1:46108.service - OpenSSH per-connection server daemon (10.0.0.1:46108).
Sep 12 06:03:49.564260 systemd-logind[1580]: Removed session 6.
Sep 12 06:03:49.620479 sshd[1804]: Accepted publickey for core from 10.0.0.1 port 46108 ssh2: RSA SHA256:1ltQeNwsGZzWPatjm39NOSOhM7BVT7DhGn6/LONO9qE
Sep 12 06:03:49.621956 sshd-session[1804]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 06:03:49.626474 systemd-logind[1580]: New session 7 of user core.
Sep 12 06:03:49.640765 systemd[1]: Started session-7.scope - Session 7 of User core.
Sep 12 06:03:49.693935 sudo[1808]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Sep 12 06:03:49.694253 sudo[1808]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 12 06:03:50.338450 systemd[1]: Starting docker.service - Docker Application Container Engine...
Sep 12 06:03:50.362003 (dockerd)[1829]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Sep 12 06:03:50.997903 dockerd[1829]: time="2025-09-12T06:03:50.997806963Z" level=info msg="Starting up"
Sep 12 06:03:50.999190 dockerd[1829]: time="2025-09-12T06:03:50.999146716Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Sep 12 06:03:51.016472 dockerd[1829]: time="2025-09-12T06:03:51.016392555Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Sep 12 06:03:51.764713 dockerd[1829]: time="2025-09-12T06:03:51.764620196Z" level=info msg="Loading containers: start."
Sep 12 06:03:51.777669 kernel: Initializing XFRM netlink socket
Sep 12 06:03:52.053783 systemd-networkd[1495]: docker0: Link UP
Sep 12 06:03:52.059348 dockerd[1829]: time="2025-09-12T06:03:52.059293489Z" level=info msg="Loading containers: done."
Sep 12 06:03:52.077339 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck581925111-merged.mount: Deactivated successfully.
Sep 12 06:03:52.078749 dockerd[1829]: time="2025-09-12T06:03:52.078701372Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Sep 12 06:03:52.078823 dockerd[1829]: time="2025-09-12T06:03:52.078791351Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Sep 12 06:03:52.078919 dockerd[1829]: time="2025-09-12T06:03:52.078885949Z" level=info msg="Initializing buildkit"
Sep 12 06:03:52.109385 dockerd[1829]: time="2025-09-12T06:03:52.109328931Z" level=info msg="Completed buildkit initialization"
Sep 12 06:03:52.116584 dockerd[1829]: time="2025-09-12T06:03:52.116549234Z" level=info msg="Daemon has completed initialization"
Sep 12 06:03:52.116713 dockerd[1829]: time="2025-09-12T06:03:52.116655343Z" level=info msg="API listen on /run/docker.sock"
Sep 12 06:03:52.116764 systemd[1]: Started docker.service - Docker Application Container Engine.
Sep 12 06:03:52.955567 containerd[1602]: time="2025-09-12T06:03:52.955505747Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\""
Sep 12 06:03:54.024996 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2196638541.mount: Deactivated successfully.
Sep 12 06:03:55.245737 containerd[1602]: time="2025-09-12T06:03:55.245658338Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 06:03:55.246477 containerd[1602]: time="2025-09-12T06:03:55.246402384Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.5: active requests=0, bytes read=30114893"
Sep 12 06:03:55.247553 containerd[1602]: time="2025-09-12T06:03:55.247505403Z" level=info msg="ImageCreate event name:\"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 06:03:55.250002 containerd[1602]: time="2025-09-12T06:03:55.249965628Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 06:03:55.251012 containerd[1602]: time="2025-09-12T06:03:55.250969961Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.5\" with image id \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\", size \"30111492\" in 2.295417727s"
Sep 12 06:03:55.251012 containerd[1602]: time="2025-09-12T06:03:55.251004526Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\" returns image reference \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\""
Sep 12 06:03:55.251849 containerd[1602]: time="2025-09-12T06:03:55.251821268Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\""
Sep 12 06:03:56.658702 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Sep 12 06:03:56.660978 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 06:03:56.690514 containerd[1602]: time="2025-09-12T06:03:56.690475001Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 06:03:56.691321 containerd[1602]: time="2025-09-12T06:03:56.691272107Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.5: active requests=0, bytes read=26020844"
Sep 12 06:03:56.693301 containerd[1602]: time="2025-09-12T06:03:56.693271146Z" level=info msg="ImageCreate event name:\"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 06:03:56.696490 containerd[1602]: time="2025-09-12T06:03:56.696462663Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 06:03:56.697955 containerd[1602]: time="2025-09-12T06:03:56.697928763Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.5\" with image id \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\", size \"27681301\" in 1.446074663s"
Sep 12 06:03:56.698009 containerd[1602]: time="2025-09-12T06:03:56.697957947Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\" returns image reference \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\""
Sep 12 06:03:56.698500 containerd[1602]: time="2025-09-12T06:03:56.698460680Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\""
Sep 12 06:03:56.949594 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 06:03:56.976071 (kubelet)[2112]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 12 06:03:57.096904 kubelet[2112]: E0912 06:03:57.096654 2112 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 12 06:03:57.103896 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 12 06:03:57.104145 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 12 06:03:57.104530 systemd[1]: kubelet.service: Consumed 549ms CPU time, 111.4M memory peak.
Sep 12 06:03:59.424541 containerd[1602]: time="2025-09-12T06:03:59.424472740Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 06:03:59.425509 containerd[1602]: time="2025-09-12T06:03:59.425480199Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.5: active requests=0, bytes read=20155568"
Sep 12 06:03:59.426743 containerd[1602]: time="2025-09-12T06:03:59.426695990Z" level=info msg="ImageCreate event name:\"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 06:03:59.429156 containerd[1602]: time="2025-09-12T06:03:59.429105530Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 06:03:59.430146 containerd[1602]: time="2025-09-12T06:03:59.430106176Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.5\" with image id \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\", size \"21816043\" in 2.73161019s"
Sep 12 06:03:59.430197 containerd[1602]: time="2025-09-12T06:03:59.430146071Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\" returns image reference \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\""
Sep 12 06:03:59.430767 containerd[1602]: time="2025-09-12T06:03:59.430720038Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\""
Sep 12 06:04:00.495224 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount484342880.mount: Deactivated successfully.
Sep 12 06:04:01.549225 containerd[1602]: time="2025-09-12T06:04:01.549162330Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 06:04:01.630819 containerd[1602]: time="2025-09-12T06:04:01.630740568Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.5: active requests=0, bytes read=31929469"
Sep 12 06:04:01.634767 containerd[1602]: time="2025-09-12T06:04:01.634697500Z" level=info msg="ImageCreate event name:\"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 06:04:01.636525 containerd[1602]: time="2025-09-12T06:04:01.636476446Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 06:04:01.637013 containerd[1602]: time="2025-09-12T06:04:01.636963350Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.5\" with image id \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\", repo tag \"registry.k8s.io/kube-proxy:v1.33.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\", size \"31928488\" in 2.206214498s"
Sep 12 06:04:01.637052 containerd[1602]: time="2025-09-12T06:04:01.637012251Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\" returns image reference \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\""
Sep 12 06:04:01.637543 containerd[1602]: time="2025-09-12T06:04:01.637506158Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Sep 12 06:04:02.288368 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4038644509.mount: Deactivated successfully.
Sep 12 06:04:03.279101 containerd[1602]: time="2025-09-12T06:04:03.279031991Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 06:04:03.279711 containerd[1602]: time="2025-09-12T06:04:03.279661131Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238"
Sep 12 06:04:03.280879 containerd[1602]: time="2025-09-12T06:04:03.280848789Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 06:04:03.283270 containerd[1602]: time="2025-09-12T06:04:03.283213354Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 06:04:03.284274 containerd[1602]: time="2025-09-12T06:04:03.284227647Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.646639866s"
Sep 12 06:04:03.284274 containerd[1602]: time="2025-09-12T06:04:03.284269625Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\""
Sep 12 06:04:03.284956 containerd[1602]: time="2025-09-12T06:04:03.284917230Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Sep 12 06:04:04.299487 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2412762567.mount: Deactivated successfully.
Sep 12 06:04:04.305727 containerd[1602]: time="2025-09-12T06:04:04.305681101Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 12 06:04:04.306435 containerd[1602]: time="2025-09-12T06:04:04.306402334Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
Sep 12 06:04:04.308048 containerd[1602]: time="2025-09-12T06:04:04.308004399Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 12 06:04:04.309822 containerd[1602]: time="2025-09-12T06:04:04.309789397Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 12 06:04:04.310381 containerd[1602]: time="2025-09-12T06:04:04.310349408Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 1.025394016s"
Sep 12 06:04:04.310381 containerd[1602]: time="2025-09-12T06:04:04.310377761Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Sep 12 06:04:04.310917 containerd[1602]: time="2025-09-12T06:04:04.310867299Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\""
Sep 12 06:04:04.806063 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1142267637.mount: Deactivated successfully.
Sep 12 06:04:06.736980 containerd[1602]: time="2025-09-12T06:04:06.736911465Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 06:04:06.738101 containerd[1602]: time="2025-09-12T06:04:06.738038378Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58378433"
Sep 12 06:04:06.739311 containerd[1602]: time="2025-09-12T06:04:06.739253728Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 06:04:06.741953 containerd[1602]: time="2025-09-12T06:04:06.741910902Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 06:04:06.742933 containerd[1602]: time="2025-09-12T06:04:06.742899256Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 2.431982524s"
Sep 12 06:04:06.742933 containerd[1602]: time="2025-09-12T06:04:06.742929112Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\""
Sep 12 06:04:07.151126 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Sep 12 06:04:07.152839 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 06:04:07.353590 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 06:04:07.368013 (kubelet)[2276]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 12 06:04:07.405467 kubelet[2276]: E0912 06:04:07.405285 2276 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 12 06:04:07.409586 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 12 06:04:07.409844 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 12 06:04:07.410249 systemd[1]: kubelet.service: Consumed 210ms CPU time, 110.8M memory peak.
Sep 12 06:04:09.699277 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 06:04:09.699449 systemd[1]: kubelet.service: Consumed 210ms CPU time, 110.8M memory peak.
Sep 12 06:04:09.701620 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 06:04:09.729318 systemd[1]: Reload requested from client PID 2292 ('systemctl') (unit session-7.scope)...
Sep 12 06:04:09.729336 systemd[1]: Reloading...
Sep 12 06:04:09.796698 zram_generator::config[2335]: No configuration found. Sep 12 06:04:10.410970 systemd[1]: Reloading finished in 681 ms. Sep 12 06:04:10.479329 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 12 06:04:10.479429 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 12 06:04:10.479757 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 06:04:10.479803 systemd[1]: kubelet.service: Consumed 148ms CPU time, 98.3M memory peak. Sep 12 06:04:10.481333 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 06:04:10.651581 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 06:04:10.656046 (kubelet)[2383]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 12 06:04:10.692897 kubelet[2383]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 06:04:10.692897 kubelet[2383]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 12 06:04:10.692897 kubelet[2383]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 12 06:04:10.693219 kubelet[2383]: I0912 06:04:10.692893 2383 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 12 06:04:11.033600 kubelet[2383]: I0912 06:04:11.033429 2383 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 12 06:04:11.033600 kubelet[2383]: I0912 06:04:11.033486 2383 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 12 06:04:11.033789 kubelet[2383]: I0912 06:04:11.033761 2383 server.go:956] "Client rotation is on, will bootstrap in background" Sep 12 06:04:11.061663 kubelet[2383]: E0912 06:04:11.061186 2383 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.150:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.150:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Sep 12 06:04:11.064415 kubelet[2383]: I0912 06:04:11.064382 2383 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 12 06:04:11.076132 kubelet[2383]: I0912 06:04:11.076054 2383 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 12 06:04:11.084653 kubelet[2383]: I0912 06:04:11.084592 2383 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 12 06:04:11.085173 kubelet[2383]: I0912 06:04:11.085101 2383 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 12 06:04:11.085437 kubelet[2383]: I0912 06:04:11.085166 2383 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 12 06:04:11.085589 kubelet[2383]: I0912 06:04:11.085464 2383 topology_manager.go:138] "Creating topology manager with none policy" Sep 12 06:04:11.085589 
kubelet[2383]: I0912 06:04:11.085486 2383 container_manager_linux.go:303] "Creating device plugin manager" Sep 12 06:04:11.085760 kubelet[2383]: I0912 06:04:11.085739 2383 state_mem.go:36] "Initialized new in-memory state store" Sep 12 06:04:11.088334 kubelet[2383]: I0912 06:04:11.088299 2383 kubelet.go:480] "Attempting to sync node with API server" Sep 12 06:04:11.088334 kubelet[2383]: I0912 06:04:11.088324 2383 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 12 06:04:11.088522 kubelet[2383]: I0912 06:04:11.088360 2383 kubelet.go:386] "Adding apiserver pod source" Sep 12 06:04:11.088522 kubelet[2383]: I0912 06:04:11.088381 2383 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 12 06:04:11.091946 kubelet[2383]: E0912 06:04:11.091424 2383 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.150:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.150:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Sep 12 06:04:11.091946 kubelet[2383]: E0912 06:04:11.091524 2383 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.150:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.150:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Sep 12 06:04:11.093403 kubelet[2383]: I0912 06:04:11.093369 2383 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 12 06:04:11.098538 kubelet[2383]: I0912 06:04:11.098153 2383 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Sep 12 06:04:11.098999 kubelet[2383]: W0912 
06:04:11.098976 2383 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 12 06:04:11.103248 kubelet[2383]: I0912 06:04:11.103188 2383 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 12 06:04:11.103434 kubelet[2383]: I0912 06:04:11.103285 2383 server.go:1289] "Started kubelet" Sep 12 06:04:11.104256 kubelet[2383]: I0912 06:04:11.104171 2383 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Sep 12 06:04:11.105844 kubelet[2383]: I0912 06:04:11.105779 2383 server.go:317] "Adding debug handlers to kubelet server" Sep 12 06:04:11.105918 kubelet[2383]: I0912 06:04:11.105869 2383 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 12 06:04:11.107662 kubelet[2383]: I0912 06:04:11.106547 2383 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 12 06:04:11.107662 kubelet[2383]: I0912 06:04:11.107396 2383 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 12 06:04:11.111298 kubelet[2383]: E0912 06:04:11.111244 2383 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 06:04:11.111597 kubelet[2383]: I0912 06:04:11.111347 2383 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 12 06:04:11.111948 kubelet[2383]: I0912 06:04:11.111922 2383 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 12 06:04:11.112072 kubelet[2383]: I0912 06:04:11.112045 2383 reconciler.go:26] "Reconciler: start to sync state" Sep 12 06:04:11.112711 kubelet[2383]: I0912 06:04:11.112687 2383 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 12 06:04:11.113277 kubelet[2383]: E0912 06:04:11.113244 2383 reflector.go:200] "Failed to watch" err="failed to list 
*v1.CSIDriver: Get \"https://10.0.0.150:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.150:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Sep 12 06:04:11.113392 kubelet[2383]: E0912 06:04:11.113347 2383 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.150:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.150:6443: connect: connection refused" interval="200ms" Sep 12 06:04:11.114027 kubelet[2383]: I0912 06:04:11.114002 2383 factory.go:223] Registration of the systemd container factory successfully Sep 12 06:04:11.114112 kubelet[2383]: I0912 06:04:11.114086 2383 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 12 06:04:11.114492 kubelet[2383]: E0912 06:04:11.113505 2383 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.150:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.150:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.186473c687f8077e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-12 06:04:11.103233918 +0000 UTC m=+0.443218232,LastTimestamp:2025-09-12 06:04:11.103233918 +0000 UTC m=+0.443218232,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 12 06:04:11.115574 kubelet[2383]: E0912 06:04:11.115513 2383 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 12 06:04:11.115798 kubelet[2383]: I0912 06:04:11.115762 2383 factory.go:223] Registration of the containerd container factory successfully Sep 12 06:04:11.130057 kubelet[2383]: I0912 06:04:11.129991 2383 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 12 06:04:11.130057 kubelet[2383]: I0912 06:04:11.130016 2383 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 12 06:04:11.130057 kubelet[2383]: I0912 06:04:11.130035 2383 state_mem.go:36] "Initialized new in-memory state store" Sep 12 06:04:11.212281 kubelet[2383]: E0912 06:04:11.212187 2383 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 06:04:11.312754 kubelet[2383]: E0912 06:04:11.312592 2383 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 06:04:11.314091 kubelet[2383]: E0912 06:04:11.314044 2383 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.150:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.150:6443: connect: connection refused" interval="400ms" Sep 12 06:04:11.413481 kubelet[2383]: E0912 06:04:11.413438 2383 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 06:04:11.514095 kubelet[2383]: E0912 06:04:11.514021 2383 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 06:04:11.614244 kubelet[2383]: E0912 06:04:11.614135 2383 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 06:04:11.714664 kubelet[2383]: E0912 06:04:11.714584 2383 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 06:04:11.715225 kubelet[2383]: 
E0912 06:04:11.715130 2383 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.150:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.150:6443: connect: connection refused" interval="800ms" Sep 12 06:04:11.815731 kubelet[2383]: E0912 06:04:11.815677 2383 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 06:04:11.827262 kubelet[2383]: I0912 06:04:11.827199 2383 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Sep 12 06:04:11.828900 kubelet[2383]: I0912 06:04:11.828862 2383 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Sep 12 06:04:11.829013 kubelet[2383]: I0912 06:04:11.828913 2383 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 12 06:04:11.829013 kubelet[2383]: I0912 06:04:11.828941 2383 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Sep 12 06:04:11.829013 kubelet[2383]: I0912 06:04:11.828955 2383 kubelet.go:2436] "Starting kubelet main sync loop" Sep 12 06:04:11.829144 kubelet[2383]: E0912 06:04:11.829017 2383 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 12 06:04:11.830267 kubelet[2383]: E0912 06:04:11.829893 2383 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.150:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.150:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 12 06:04:11.861651 kubelet[2383]: I0912 06:04:11.861590 2383 policy_none.go:49] "None policy: Start" Sep 12 06:04:11.861751 kubelet[2383]: I0912 06:04:11.861701 2383 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 12 06:04:11.861751 kubelet[2383]: I0912 06:04:11.861730 2383 state_mem.go:35] "Initializing new in-memory state store" Sep 12 06:04:11.915863 kubelet[2383]: E0912 06:04:11.915802 2383 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 06:04:11.930131 kubelet[2383]: E0912 06:04:11.930065 2383 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 12 06:04:12.016704 kubelet[2383]: E0912 06:04:12.016605 2383 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 06:04:12.038537 kubelet[2383]: E0912 06:04:12.038481 2383 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.150:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.150:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" 
type="*v1.CSIDriver" Sep 12 06:04:12.117267 kubelet[2383]: E0912 06:04:12.117220 2383 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 06:04:12.127530 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 12 06:04:12.130995 kubelet[2383]: E0912 06:04:12.130959 2383 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 12 06:04:12.141668 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 12 06:04:12.145679 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 12 06:04:12.163838 kubelet[2383]: E0912 06:04:12.163802 2383 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 12 06:04:12.164104 kubelet[2383]: I0912 06:04:12.164071 2383 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 12 06:04:12.164220 kubelet[2383]: I0912 06:04:12.164092 2383 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 12 06:04:12.164844 kubelet[2383]: I0912 06:04:12.164376 2383 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 12 06:04:12.165573 kubelet[2383]: E0912 06:04:12.165550 2383 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Sep 12 06:04:12.165659 kubelet[2383]: E0912 06:04:12.165596 2383 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 12 06:04:12.268037 kubelet[2383]: I0912 06:04:12.267880 2383 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 12 06:04:12.268475 kubelet[2383]: E0912 06:04:12.268424 2383 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.150:6443/api/v1/nodes\": dial tcp 10.0.0.150:6443: connect: connection refused" node="localhost" Sep 12 06:04:12.320530 kubelet[2383]: E0912 06:04:12.320493 2383 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.150:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.150:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Sep 12 06:04:12.472512 kubelet[2383]: I0912 06:04:12.472427 2383 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 12 06:04:12.473146 kubelet[2383]: E0912 06:04:12.473100 2383 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.150:6443/api/v1/nodes\": dial tcp 10.0.0.150:6443: connect: connection refused" node="localhost" Sep 12 06:04:12.512213 kubelet[2383]: E0912 06:04:12.512129 2383 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.150:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.150:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Sep 12 06:04:12.516077 kubelet[2383]: E0912 06:04:12.516008 2383 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.0.0.150:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.150:6443: connect: connection refused" interval="1.6s" Sep 12 06:04:12.620720 kubelet[2383]: I0912 06:04:12.620519 2383 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 06:04:12.620720 kubelet[2383]: I0912 06:04:12.620607 2383 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 06:04:12.620887 kubelet[2383]: I0912 06:04:12.620718 2383 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 06:04:12.620887 kubelet[2383]: I0912 06:04:12.620768 2383 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 06:04:12.620887 kubelet[2383]: I0912 06:04:12.620795 2383 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 06:04:12.844720 systemd[1]: Created slice kubepods-burstable-podb678d5c6713e936e66aa5bb73166297e.slice - libcontainer container kubepods-burstable-podb678d5c6713e936e66aa5bb73166297e.slice. Sep 12 06:04:12.854063 kubelet[2383]: E0912 06:04:12.854001 2383 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 12 06:04:12.856045 containerd[1602]: time="2025-09-12T06:04:12.855985909Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b678d5c6713e936e66aa5bb73166297e,Namespace:kube-system,Attempt:0,}" Sep 12 06:04:12.862114 systemd[1]: Created slice kubepods-burstable-pod7b968cf906b2d9d713a362c43868bef2.slice - libcontainer container kubepods-burstable-pod7b968cf906b2d9d713a362c43868bef2.slice. Sep 12 06:04:12.864716 kubelet[2383]: E0912 06:04:12.864684 2383 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 12 06:04:12.867419 systemd[1]: Created slice kubepods-burstable-pod3e02a8d4ea24e1797074fd2a57060163.slice - libcontainer container kubepods-burstable-pod3e02a8d4ea24e1797074fd2a57060163.slice. 
Sep 12 06:04:12.869347 kubelet[2383]: E0912 06:04:12.869313 2383 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 12 06:04:12.875123 kubelet[2383]: I0912 06:04:12.875015 2383 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 12 06:04:12.875652 kubelet[2383]: E0912 06:04:12.875600 2383 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.150:6443/api/v1/nodes\": dial tcp 10.0.0.150:6443: connect: connection refused" node="localhost" Sep 12 06:04:12.879499 containerd[1602]: time="2025-09-12T06:04:12.879442347Z" level=info msg="connecting to shim 2a959969e31f1a7ebd5350b47690539d6d68a4f58a7ed982952a0eaf3637323a" address="unix:///run/containerd/s/131c52c116238139c43e9e4f6077b4da3d136f8f7b7995e94af78ae8cef7e871" namespace=k8s.io protocol=ttrpc version=3 Sep 12 06:04:12.907808 systemd[1]: Started cri-containerd-2a959969e31f1a7ebd5350b47690539d6d68a4f58a7ed982952a0eaf3637323a.scope - libcontainer container 2a959969e31f1a7ebd5350b47690539d6d68a4f58a7ed982952a0eaf3637323a. 
Sep 12 06:04:12.922229 kubelet[2383]: I0912 06:04:12.922192 2383 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3e02a8d4ea24e1797074fd2a57060163-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"3e02a8d4ea24e1797074fd2a57060163\") " pod="kube-system/kube-apiserver-localhost" Sep 12 06:04:12.922229 kubelet[2383]: I0912 06:04:12.922224 2383 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3e02a8d4ea24e1797074fd2a57060163-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"3e02a8d4ea24e1797074fd2a57060163\") " pod="kube-system/kube-apiserver-localhost" Sep 12 06:04:12.922360 kubelet[2383]: I0912 06:04:12.922242 2383 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3e02a8d4ea24e1797074fd2a57060163-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"3e02a8d4ea24e1797074fd2a57060163\") " pod="kube-system/kube-apiserver-localhost" Sep 12 06:04:12.922360 kubelet[2383]: I0912 06:04:12.922260 2383 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7b968cf906b2d9d713a362c43868bef2-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"7b968cf906b2d9d713a362c43868bef2\") " pod="kube-system/kube-scheduler-localhost" Sep 12 06:04:12.951376 containerd[1602]: time="2025-09-12T06:04:12.951319959Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b678d5c6713e936e66aa5bb73166297e,Namespace:kube-system,Attempt:0,} returns sandbox id \"2a959969e31f1a7ebd5350b47690539d6d68a4f58a7ed982952a0eaf3637323a\"" Sep 12 06:04:12.957464 containerd[1602]: time="2025-09-12T06:04:12.957411555Z" level=info msg="CreateContainer 
within sandbox \"2a959969e31f1a7ebd5350b47690539d6d68a4f58a7ed982952a0eaf3637323a\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 12 06:04:12.967646 containerd[1602]: time="2025-09-12T06:04:12.967578746Z" level=info msg="Container 2f51b2358f5925cfce4c073aa6c06aec08cfb802f0561ad8278dd5b3028ab2e4: CDI devices from CRI Config.CDIDevices: []" Sep 12 06:04:12.974642 containerd[1602]: time="2025-09-12T06:04:12.974614192Z" level=info msg="CreateContainer within sandbox \"2a959969e31f1a7ebd5350b47690539d6d68a4f58a7ed982952a0eaf3637323a\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"2f51b2358f5925cfce4c073aa6c06aec08cfb802f0561ad8278dd5b3028ab2e4\"" Sep 12 06:04:12.975157 containerd[1602]: time="2025-09-12T06:04:12.975133256Z" level=info msg="StartContainer for \"2f51b2358f5925cfce4c073aa6c06aec08cfb802f0561ad8278dd5b3028ab2e4\"" Sep 12 06:04:12.976168 containerd[1602]: time="2025-09-12T06:04:12.976131548Z" level=info msg="connecting to shim 2f51b2358f5925cfce4c073aa6c06aec08cfb802f0561ad8278dd5b3028ab2e4" address="unix:///run/containerd/s/131c52c116238139c43e9e4f6077b4da3d136f8f7b7995e94af78ae8cef7e871" protocol=ttrpc version=3 Sep 12 06:04:13.005906 systemd[1]: Started cri-containerd-2f51b2358f5925cfce4c073aa6c06aec08cfb802f0561ad8278dd5b3028ab2e4.scope - libcontainer container 2f51b2358f5925cfce4c073aa6c06aec08cfb802f0561ad8278dd5b3028ab2e4. 
Sep 12 06:04:13.006060 kubelet[2383]: E0912 06:04:13.005982 2383 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.150:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.150:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 12 06:04:13.051292 containerd[1602]: time="2025-09-12T06:04:13.051252153Z" level=info msg="StartContainer for \"2f51b2358f5925cfce4c073aa6c06aec08cfb802f0561ad8278dd5b3028ab2e4\" returns successfully" Sep 12 06:04:13.166166 containerd[1602]: time="2025-09-12T06:04:13.166122347Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:7b968cf906b2d9d713a362c43868bef2,Namespace:kube-system,Attempt:0,}" Sep 12 06:04:13.171105 containerd[1602]: time="2025-09-12T06:04:13.171069185Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:3e02a8d4ea24e1797074fd2a57060163,Namespace:kube-system,Attempt:0,}" Sep 12 06:04:13.190244 containerd[1602]: time="2025-09-12T06:04:13.190070066Z" level=info msg="connecting to shim ddec90c67bd9c08428c6f346edf940068d8096b1d0c1b464d95306c5e5f871c4" address="unix:///run/containerd/s/dbaf97bbec2be5a5ffa00fb99ac0953c9581ebd34022b17731a0fe4a63c9ac1c" namespace=k8s.io protocol=ttrpc version=3 Sep 12 06:04:13.195709 kubelet[2383]: E0912 06:04:13.195472 2383 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.150:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.150:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Sep 12 06:04:13.202231 containerd[1602]: time="2025-09-12T06:04:13.202189358Z" level=info msg="connecting to shim 79cccf17e8aff88317a910be87fbec356bbfca7bc7f1638ee87f546576a03b27" 
address="unix:///run/containerd/s/8fb054b2374a59a0c80c9e14871dff64e2c9a0b14e269a64e7f415b9bd0a3cac" namespace=k8s.io protocol=ttrpc version=3 Sep 12 06:04:13.219783 systemd[1]: Started cri-containerd-ddec90c67bd9c08428c6f346edf940068d8096b1d0c1b464d95306c5e5f871c4.scope - libcontainer container ddec90c67bd9c08428c6f346edf940068d8096b1d0c1b464d95306c5e5f871c4. Sep 12 06:04:13.225683 systemd[1]: Started cri-containerd-79cccf17e8aff88317a910be87fbec356bbfca7bc7f1638ee87f546576a03b27.scope - libcontainer container 79cccf17e8aff88317a910be87fbec356bbfca7bc7f1638ee87f546576a03b27. Sep 12 06:04:13.272866 containerd[1602]: time="2025-09-12T06:04:13.272811655Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:3e02a8d4ea24e1797074fd2a57060163,Namespace:kube-system,Attempt:0,} returns sandbox id \"79cccf17e8aff88317a910be87fbec356bbfca7bc7f1638ee87f546576a03b27\"" Sep 12 06:04:13.276900 containerd[1602]: time="2025-09-12T06:04:13.276831135Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:7b968cf906b2d9d713a362c43868bef2,Namespace:kube-system,Attempt:0,} returns sandbox id \"ddec90c67bd9c08428c6f346edf940068d8096b1d0c1b464d95306c5e5f871c4\"" Sep 12 06:04:13.279451 containerd[1602]: time="2025-09-12T06:04:13.279426603Z" level=info msg="CreateContainer within sandbox \"79cccf17e8aff88317a910be87fbec356bbfca7bc7f1638ee87f546576a03b27\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 12 06:04:13.282735 containerd[1602]: time="2025-09-12T06:04:13.282620784Z" level=info msg="CreateContainer within sandbox \"ddec90c67bd9c08428c6f346edf940068d8096b1d0c1b464d95306c5e5f871c4\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 12 06:04:13.291339 containerd[1602]: time="2025-09-12T06:04:13.291299603Z" level=info msg="Container d981d3c60e26546fdd566abe3a32aad58d67a89cdbab7cfa797fe36fb5425c0c: CDI devices from CRI Config.CDIDevices: []" Sep 12 06:04:13.294756 
containerd[1602]: time="2025-09-12T06:04:13.294731400Z" level=info msg="Container 1b0bd83b5df0391364cbb8971284ce2b97fa6ef15251fc9a5edee4641008f57b: CDI devices from CRI Config.CDIDevices: []" Sep 12 06:04:13.299859 containerd[1602]: time="2025-09-12T06:04:13.299820356Z" level=info msg="CreateContainer within sandbox \"79cccf17e8aff88317a910be87fbec356bbfca7bc7f1638ee87f546576a03b27\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"d981d3c60e26546fdd566abe3a32aad58d67a89cdbab7cfa797fe36fb5425c0c\"" Sep 12 06:04:13.300289 containerd[1602]: time="2025-09-12T06:04:13.300243209Z" level=info msg="StartContainer for \"d981d3c60e26546fdd566abe3a32aad58d67a89cdbab7cfa797fe36fb5425c0c\"" Sep 12 06:04:13.301243 containerd[1602]: time="2025-09-12T06:04:13.301212647Z" level=info msg="connecting to shim d981d3c60e26546fdd566abe3a32aad58d67a89cdbab7cfa797fe36fb5425c0c" address="unix:///run/containerd/s/8fb054b2374a59a0c80c9e14871dff64e2c9a0b14e269a64e7f415b9bd0a3cac" protocol=ttrpc version=3 Sep 12 06:04:13.305887 containerd[1602]: time="2025-09-12T06:04:13.305858832Z" level=info msg="CreateContainer within sandbox \"ddec90c67bd9c08428c6f346edf940068d8096b1d0c1b464d95306c5e5f871c4\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"1b0bd83b5df0391364cbb8971284ce2b97fa6ef15251fc9a5edee4641008f57b\"" Sep 12 06:04:13.306289 containerd[1602]: time="2025-09-12T06:04:13.306250216Z" level=info msg="StartContainer for \"1b0bd83b5df0391364cbb8971284ce2b97fa6ef15251fc9a5edee4641008f57b\"" Sep 12 06:04:13.307403 containerd[1602]: time="2025-09-12T06:04:13.307357904Z" level=info msg="connecting to shim 1b0bd83b5df0391364cbb8971284ce2b97fa6ef15251fc9a5edee4641008f57b" address="unix:///run/containerd/s/dbaf97bbec2be5a5ffa00fb99ac0953c9581ebd34022b17731a0fe4a63c9ac1c" protocol=ttrpc version=3 Sep 12 06:04:13.322775 systemd[1]: Started cri-containerd-d981d3c60e26546fdd566abe3a32aad58d67a89cdbab7cfa797fe36fb5425c0c.scope - libcontainer 
container d981d3c60e26546fdd566abe3a32aad58d67a89cdbab7cfa797fe36fb5425c0c. Sep 12 06:04:13.327104 systemd[1]: Started cri-containerd-1b0bd83b5df0391364cbb8971284ce2b97fa6ef15251fc9a5edee4641008f57b.scope - libcontainer container 1b0bd83b5df0391364cbb8971284ce2b97fa6ef15251fc9a5edee4641008f57b. Sep 12 06:04:13.378962 containerd[1602]: time="2025-09-12T06:04:13.378909765Z" level=info msg="StartContainer for \"d981d3c60e26546fdd566abe3a32aad58d67a89cdbab7cfa797fe36fb5425c0c\" returns successfully" Sep 12 06:04:13.381281 containerd[1602]: time="2025-09-12T06:04:13.381230578Z" level=info msg="StartContainer for \"1b0bd83b5df0391364cbb8971284ce2b97fa6ef15251fc9a5edee4641008f57b\" returns successfully" Sep 12 06:04:13.677528 kubelet[2383]: I0912 06:04:13.677224 2383 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 12 06:04:13.839945 kubelet[2383]: E0912 06:04:13.839898 2383 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 12 06:04:13.845751 kubelet[2383]: E0912 06:04:13.845723 2383 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 12 06:04:13.846476 kubelet[2383]: E0912 06:04:13.846450 2383 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 12 06:04:14.591628 kubelet[2383]: E0912 06:04:14.591538 2383 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Sep 12 06:04:14.822214 kubelet[2383]: I0912 06:04:14.822157 2383 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 12 06:04:14.822214 kubelet[2383]: E0912 06:04:14.822197 2383 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node 
\"localhost\": node \"localhost\" not found" Sep 12 06:04:14.833215 kubelet[2383]: E0912 06:04:14.833184 2383 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 06:04:14.850545 kubelet[2383]: E0912 06:04:14.850420 2383 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 12 06:04:14.854452 kubelet[2383]: E0912 06:04:14.854420 2383 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 12 06:04:14.933531 kubelet[2383]: E0912 06:04:14.933465 2383 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 06:04:15.034186 kubelet[2383]: E0912 06:04:15.034148 2383 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 06:04:15.090730 kubelet[2383]: I0912 06:04:15.090705 2383 apiserver.go:52] "Watching apiserver" Sep 12 06:04:15.111661 kubelet[2383]: I0912 06:04:15.111552 2383 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 12 06:04:15.112769 kubelet[2383]: I0912 06:04:15.112722 2383 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 12 06:04:15.165134 kubelet[2383]: E0912 06:04:15.165089 2383 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Sep 12 06:04:15.165134 kubelet[2383]: I0912 06:04:15.165121 2383 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 12 06:04:15.166934 kubelet[2383]: E0912 06:04:15.166739 2383 kubelet.go:3311] "Failed creating a mirror pod" err="pods 
\"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Sep 12 06:04:15.166934 kubelet[2383]: I0912 06:04:15.166765 2383 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 12 06:04:15.168017 kubelet[2383]: E0912 06:04:15.167986 2383 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Sep 12 06:04:16.154322 kubelet[2383]: I0912 06:04:16.154259 2383 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 12 06:04:16.684615 systemd[1]: Reload requested from client PID 2670 ('systemctl') (unit session-7.scope)... Sep 12 06:04:16.684647 systemd[1]: Reloading... Sep 12 06:04:16.767680 zram_generator::config[2716]: No configuration found. Sep 12 06:04:17.032225 systemd[1]: Reloading finished in 347 ms. Sep 12 06:04:17.058332 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 06:04:17.058466 kubelet[2383]: I0912 06:04:17.058315 2383 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 12 06:04:17.078215 systemd[1]: kubelet.service: Deactivated successfully. Sep 12 06:04:17.078501 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 06:04:17.078555 systemd[1]: kubelet.service: Consumed 950ms CPU time, 132.1M memory peak. Sep 12 06:04:17.081401 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 06:04:17.347442 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 12 06:04:17.353134 (kubelet)[2758]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 12 06:04:17.409665 kubelet[2758]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 06:04:17.409665 kubelet[2758]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 12 06:04:17.409665 kubelet[2758]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 06:04:17.409665 kubelet[2758]: I0912 06:04:17.409402 2758 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 12 06:04:17.418348 kubelet[2758]: I0912 06:04:17.418287 2758 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 12 06:04:17.418348 kubelet[2758]: I0912 06:04:17.418328 2758 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 12 06:04:17.419064 kubelet[2758]: I0912 06:04:17.418915 2758 server.go:956] "Client rotation is on, will bootstrap in background" Sep 12 06:04:17.420460 kubelet[2758]: I0912 06:04:17.420428 2758 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Sep 12 06:04:17.422916 kubelet[2758]: I0912 06:04:17.422877 2758 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 12 06:04:17.426678 kubelet[2758]: I0912 06:04:17.426628 2758 server.go:1446] "Using cgroup driver setting received from the CRI runtime" 
cgroupDriver="systemd" Sep 12 06:04:17.432508 kubelet[2758]: I0912 06:04:17.432480 2758 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 12 06:04:17.432784 kubelet[2758]: I0912 06:04:17.432752 2758 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 12 06:04:17.432937 kubelet[2758]: I0912 06:04:17.432781 2758 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVers
ion":2} Sep 12 06:04:17.433036 kubelet[2758]: I0912 06:04:17.432942 2758 topology_manager.go:138] "Creating topology manager with none policy" Sep 12 06:04:17.433036 kubelet[2758]: I0912 06:04:17.432951 2758 container_manager_linux.go:303] "Creating device plugin manager" Sep 12 06:04:17.433036 kubelet[2758]: I0912 06:04:17.433012 2758 state_mem.go:36] "Initialized new in-memory state store" Sep 12 06:04:17.433204 kubelet[2758]: I0912 06:04:17.433186 2758 kubelet.go:480] "Attempting to sync node with API server" Sep 12 06:04:17.433239 kubelet[2758]: I0912 06:04:17.433207 2758 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 12 06:04:17.433239 kubelet[2758]: I0912 06:04:17.433231 2758 kubelet.go:386] "Adding apiserver pod source" Sep 12 06:04:17.433300 kubelet[2758]: I0912 06:04:17.433248 2758 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 12 06:04:17.436491 kubelet[2758]: I0912 06:04:17.436461 2758 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 12 06:04:17.440652 kubelet[2758]: I0912 06:04:17.439437 2758 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Sep 12 06:04:17.443496 kubelet[2758]: I0912 06:04:17.443477 2758 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 12 06:04:17.443737 kubelet[2758]: I0912 06:04:17.443701 2758 server.go:1289] "Started kubelet" Sep 12 06:04:17.446553 kubelet[2758]: I0912 06:04:17.446488 2758 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Sep 12 06:04:17.446771 kubelet[2758]: I0912 06:04:17.446679 2758 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 12 06:04:17.447069 kubelet[2758]: I0912 06:04:17.446873 2758 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 12 06:04:17.448651 kubelet[2758]: I0912 
06:04:17.448592 2758 server.go:317] "Adding debug handlers to kubelet server" Sep 12 06:04:17.449054 kubelet[2758]: I0912 06:04:17.449023 2758 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 12 06:04:17.449537 kubelet[2758]: I0912 06:04:17.449509 2758 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 12 06:04:17.449770 kubelet[2758]: I0912 06:04:17.449720 2758 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 12 06:04:17.451550 kubelet[2758]: I0912 06:04:17.451521 2758 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 12 06:04:17.451698 kubelet[2758]: I0912 06:04:17.451676 2758 reconciler.go:26] "Reconciler: start to sync state" Sep 12 06:04:17.455335 kubelet[2758]: E0912 06:04:17.455080 2758 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 12 06:04:17.456311 kubelet[2758]: I0912 06:04:17.456282 2758 factory.go:223] Registration of the systemd container factory successfully Sep 12 06:04:17.456425 kubelet[2758]: I0912 06:04:17.456400 2758 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 12 06:04:17.457986 kubelet[2758]: I0912 06:04:17.457961 2758 factory.go:223] Registration of the containerd container factory successfully Sep 12 06:04:17.458382 kubelet[2758]: I0912 06:04:17.458347 2758 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Sep 12 06:04:17.470485 kubelet[2758]: I0912 06:04:17.470117 2758 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv4" Sep 12 06:04:17.470485 kubelet[2758]: I0912 06:04:17.470150 2758 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 12 06:04:17.470485 kubelet[2758]: I0912 06:04:17.470171 2758 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 12 06:04:17.470485 kubelet[2758]: I0912 06:04:17.470179 2758 kubelet.go:2436] "Starting kubelet main sync loop" Sep 12 06:04:17.470485 kubelet[2758]: E0912 06:04:17.470223 2758 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 12 06:04:17.504665 kubelet[2758]: I0912 06:04:17.504592 2758 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 12 06:04:17.504665 kubelet[2758]: I0912 06:04:17.504621 2758 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 12 06:04:17.504665 kubelet[2758]: I0912 06:04:17.504666 2758 state_mem.go:36] "Initialized new in-memory state store" Sep 12 06:04:17.504842 kubelet[2758]: I0912 06:04:17.504812 2758 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 12 06:04:17.504842 kubelet[2758]: I0912 06:04:17.504825 2758 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 12 06:04:17.504886 kubelet[2758]: I0912 06:04:17.504855 2758 policy_none.go:49] "None policy: Start" Sep 12 06:04:17.504886 kubelet[2758]: I0912 06:04:17.504872 2758 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 12 06:04:17.504933 kubelet[2758]: I0912 06:04:17.504887 2758 state_mem.go:35] "Initializing new in-memory state store" Sep 12 06:04:17.504996 kubelet[2758]: I0912 06:04:17.504976 2758 state_mem.go:75] "Updated machine memory state" Sep 12 06:04:17.508997 kubelet[2758]: E0912 06:04:17.508900 2758 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 12 06:04:17.509110 kubelet[2758]: I0912 
06:04:17.509086 2758 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 12 06:04:17.509145 kubelet[2758]: I0912 06:04:17.509116 2758 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 12 06:04:17.509462 kubelet[2758]: I0912 06:04:17.509378 2758 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 12 06:04:17.510553 kubelet[2758]: E0912 06:04:17.510530 2758 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 12 06:04:17.571096 kubelet[2758]: I0912 06:04:17.571046 2758 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 12 06:04:17.571096 kubelet[2758]: I0912 06:04:17.571083 2758 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 12 06:04:17.571248 kubelet[2758]: I0912 06:04:17.571128 2758 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 12 06:04:17.613914 kubelet[2758]: I0912 06:04:17.613797 2758 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 12 06:04:17.653440 kubelet[2758]: I0912 06:04:17.653410 2758 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3e02a8d4ea24e1797074fd2a57060163-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"3e02a8d4ea24e1797074fd2a57060163\") " pod="kube-system/kube-apiserver-localhost" Sep 12 06:04:17.653440 kubelet[2758]: I0912 06:04:17.653439 2758 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3e02a8d4ea24e1797074fd2a57060163-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"3e02a8d4ea24e1797074fd2a57060163\") " 
pod="kube-system/kube-apiserver-localhost" Sep 12 06:04:17.653581 kubelet[2758]: I0912 06:04:17.653460 2758 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3e02a8d4ea24e1797074fd2a57060163-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"3e02a8d4ea24e1797074fd2a57060163\") " pod="kube-system/kube-apiserver-localhost" Sep 12 06:04:17.653581 kubelet[2758]: I0912 06:04:17.653479 2758 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 06:04:17.653581 kubelet[2758]: I0912 06:04:17.653494 2758 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7b968cf906b2d9d713a362c43868bef2-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"7b968cf906b2d9d713a362c43868bef2\") " pod="kube-system/kube-scheduler-localhost" Sep 12 06:04:17.653581 kubelet[2758]: I0912 06:04:17.653525 2758 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 06:04:17.653581 kubelet[2758]: I0912 06:04:17.653558 2758 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " 
pod="kube-system/kube-controller-manager-localhost" Sep 12 06:04:17.653746 kubelet[2758]: I0912 06:04:17.653574 2758 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 06:04:17.653746 kubelet[2758]: I0912 06:04:17.653589 2758 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 06:04:17.787571 kubelet[2758]: E0912 06:04:17.787519 2758 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 12 06:04:17.789162 kubelet[2758]: I0912 06:04:17.789133 2758 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Sep 12 06:04:17.789743 kubelet[2758]: I0912 06:04:17.789204 2758 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 12 06:04:18.434219 kubelet[2758]: I0912 06:04:18.434175 2758 apiserver.go:52] "Watching apiserver" Sep 12 06:04:18.452218 kubelet[2758]: I0912 06:04:18.452178 2758 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 12 06:04:18.485503 kubelet[2758]: I0912 06:04:18.485464 2758 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 12 06:04:18.485617 kubelet[2758]: I0912 06:04:18.485544 2758 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 12 06:04:19.014661 
kubelet[2758]: I0912 06:04:19.014211 2758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.01418102 podStartE2EDuration="2.01418102s" podCreationTimestamp="2025-09-12 06:04:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 06:04:18.459855152 +0000 UTC m=+1.101022879" watchObservedRunningTime="2025-09-12 06:04:19.01418102 +0000 UTC m=+1.655348727" Sep 12 06:04:19.039939 kubelet[2758]: E0912 06:04:19.039884 2758 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 12 06:04:19.040167 kubelet[2758]: E0912 06:04:19.040127 2758 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Sep 12 06:04:19.709497 kubelet[2758]: I0912 06:04:19.709437 2758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.7094203930000003 podStartE2EDuration="2.709420393s" podCreationTimestamp="2025-09-12 06:04:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 06:04:19.013876688 +0000 UTC m=+1.655044405" watchObservedRunningTime="2025-09-12 06:04:19.709420393 +0000 UTC m=+2.350588101" Sep 12 06:04:19.768673 kubelet[2758]: I0912 06:04:19.768561 2758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.768543397 podStartE2EDuration="3.768543397s" podCreationTimestamp="2025-09-12 06:04:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 06:04:19.709662416 
+0000 UTC m=+2.350830134" watchObservedRunningTime="2025-09-12 06:04:19.768543397 +0000 UTC m=+2.409711114" Sep 12 06:04:21.879647 kubelet[2758]: I0912 06:04:21.879611 2758 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 12 06:04:21.880161 containerd[1602]: time="2025-09-12T06:04:21.879952398Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 12 06:04:21.880470 kubelet[2758]: I0912 06:04:21.880184 2758 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 12 06:04:22.934599 systemd[1]: Created slice kubepods-besteffort-pod378b728a_f96f_465c_8479_e71a3510a6a9.slice - libcontainer container kubepods-besteffort-pod378b728a_f96f_465c_8479_e71a3510a6a9.slice. Sep 12 06:04:22.990086 kubelet[2758]: I0912 06:04:22.990022 2758 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/378b728a-f96f-465c-8479-e71a3510a6a9-xtables-lock\") pod \"kube-proxy-smccj\" (UID: \"378b728a-f96f-465c-8479-e71a3510a6a9\") " pod="kube-system/kube-proxy-smccj" Sep 12 06:04:22.990086 kubelet[2758]: I0912 06:04:22.990070 2758 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/378b728a-f96f-465c-8479-e71a3510a6a9-lib-modules\") pod \"kube-proxy-smccj\" (UID: \"378b728a-f96f-465c-8479-e71a3510a6a9\") " pod="kube-system/kube-proxy-smccj" Sep 12 06:04:22.990506 kubelet[2758]: I0912 06:04:22.990098 2758 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dzkxp\" (UniqueName: \"kubernetes.io/projected/378b728a-f96f-465c-8479-e71a3510a6a9-kube-api-access-dzkxp\") pod \"kube-proxy-smccj\" (UID: \"378b728a-f96f-465c-8479-e71a3510a6a9\") " pod="kube-system/kube-proxy-smccj" Sep 12 06:04:22.990506 
kubelet[2758]: I0912 06:04:22.990120 2758 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/378b728a-f96f-465c-8479-e71a3510a6a9-kube-proxy\") pod \"kube-proxy-smccj\" (UID: \"378b728a-f96f-465c-8479-e71a3510a6a9\") " pod="kube-system/kube-proxy-smccj" Sep 12 06:04:23.922578 systemd[1]: Created slice kubepods-besteffort-pod6123ff03_e752_4d0f_b75d_19d1dc62b5ae.slice - libcontainer container kubepods-besteffort-pod6123ff03_e752_4d0f_b75d_19d1dc62b5ae.slice. Sep 12 06:04:23.998250 kubelet[2758]: I0912 06:04:23.998183 2758 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/6123ff03-e752-4d0f-b75d-19d1dc62b5ae-var-lib-calico\") pod \"tigera-operator-755d956888-65mkk\" (UID: \"6123ff03-e752-4d0f-b75d-19d1dc62b5ae\") " pod="tigera-operator/tigera-operator-755d956888-65mkk" Sep 12 06:04:23.998250 kubelet[2758]: I0912 06:04:23.998234 2758 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2r792\" (UniqueName: \"kubernetes.io/projected/6123ff03-e752-4d0f-b75d-19d1dc62b5ae-kube-api-access-2r792\") pod \"tigera-operator-755d956888-65mkk\" (UID: \"6123ff03-e752-4d0f-b75d-19d1dc62b5ae\") " pod="tigera-operator/tigera-operator-755d956888-65mkk" Sep 12 06:04:24.148747 containerd[1602]: time="2025-09-12T06:04:24.148689099Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-smccj,Uid:378b728a-f96f-465c-8479-e71a3510a6a9,Namespace:kube-system,Attempt:0,}" Sep 12 06:04:24.168061 containerd[1602]: time="2025-09-12T06:04:24.168017210Z" level=info msg="connecting to shim e311b356e27c647577bf605b11a5a5df09ce0aa56d888152316b2d219a37bbb0" address="unix:///run/containerd/s/143ef529ddbc77171fe29e1f680f73f05aa0897c20b2ea5c0d6e566b7b101c47" namespace=k8s.io protocol=ttrpc version=3 Sep 12 06:04:24.203769 systemd[1]: 
Started cri-containerd-e311b356e27c647577bf605b11a5a5df09ce0aa56d888152316b2d219a37bbb0.scope - libcontainer container e311b356e27c647577bf605b11a5a5df09ce0aa56d888152316b2d219a37bbb0.
Sep 12 06:04:24.226674 containerd[1602]: time="2025-09-12T06:04:24.226610622Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-65mkk,Uid:6123ff03-e752-4d0f-b75d-19d1dc62b5ae,Namespace:tigera-operator,Attempt:0,}"
Sep 12 06:04:24.229457 containerd[1602]: time="2025-09-12T06:04:24.229430548Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-smccj,Uid:378b728a-f96f-465c-8479-e71a3510a6a9,Namespace:kube-system,Attempt:0,} returns sandbox id \"e311b356e27c647577bf605b11a5a5df09ce0aa56d888152316b2d219a37bbb0\""
Sep 12 06:04:24.234768 containerd[1602]: time="2025-09-12T06:04:24.234731636Z" level=info msg="CreateContainer within sandbox \"e311b356e27c647577bf605b11a5a5df09ce0aa56d888152316b2d219a37bbb0\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Sep 12 06:04:24.250610 containerd[1602]: time="2025-09-12T06:04:24.250573001Z" level=info msg="Container 4eb4e571358a3c93598c924a80e6c66137b951fa8e51c5f4af428e4e32efff83: CDI devices from CRI Config.CDIDevices: []"
Sep 12 06:04:24.253044 containerd[1602]: time="2025-09-12T06:04:24.252891574Z" level=info msg="connecting to shim 4d967e3acf2ed585b160b369d447a32da57da8c91d23b1c640f6b78d4df8cbc5" address="unix:///run/containerd/s/24ffbaeefef81308083a52d184b5998b7094d519d3899ea25d3c9754bf6317ca" namespace=k8s.io protocol=ttrpc version=3
Sep 12 06:04:24.259260 containerd[1602]: time="2025-09-12T06:04:24.259211480Z" level=info msg="CreateContainer within sandbox \"e311b356e27c647577bf605b11a5a5df09ce0aa56d888152316b2d219a37bbb0\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"4eb4e571358a3c93598c924a80e6c66137b951fa8e51c5f4af428e4e32efff83\""
Sep 12 06:04:24.260909 containerd[1602]: time="2025-09-12T06:04:24.260873553Z" level=info msg="StartContainer for \"4eb4e571358a3c93598c924a80e6c66137b951fa8e51c5f4af428e4e32efff83\""
Sep 12 06:04:24.262548 containerd[1602]: time="2025-09-12T06:04:24.262498114Z" level=info msg="connecting to shim 4eb4e571358a3c93598c924a80e6c66137b951fa8e51c5f4af428e4e32efff83" address="unix:///run/containerd/s/143ef529ddbc77171fe29e1f680f73f05aa0897c20b2ea5c0d6e566b7b101c47" protocol=ttrpc version=3
Sep 12 06:04:24.278838 systemd[1]: Started cri-containerd-4d967e3acf2ed585b160b369d447a32da57da8c91d23b1c640f6b78d4df8cbc5.scope - libcontainer container 4d967e3acf2ed585b160b369d447a32da57da8c91d23b1c640f6b78d4df8cbc5.
Sep 12 06:04:24.282275 systemd[1]: Started cri-containerd-4eb4e571358a3c93598c924a80e6c66137b951fa8e51c5f4af428e4e32efff83.scope - libcontainer container 4eb4e571358a3c93598c924a80e6c66137b951fa8e51c5f4af428e4e32efff83.
Sep 12 06:04:24.327629 containerd[1602]: time="2025-09-12T06:04:24.327574814Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-65mkk,Uid:6123ff03-e752-4d0f-b75d-19d1dc62b5ae,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"4d967e3acf2ed585b160b369d447a32da57da8c91d23b1c640f6b78d4df8cbc5\""
Sep 12 06:04:24.329979 containerd[1602]: time="2025-09-12T06:04:24.329946668Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\""
Sep 12 06:04:24.340678 containerd[1602]: time="2025-09-12T06:04:24.339522009Z" level=info msg="StartContainer for \"4eb4e571358a3c93598c924a80e6c66137b951fa8e51c5f4af428e4e32efff83\" returns successfully"
Sep 12 06:04:24.512690 kubelet[2758]: I0912 06:04:24.512512 2758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-smccj" podStartSLOduration=2.5124930599999997 podStartE2EDuration="2.51249306s" podCreationTimestamp="2025-09-12 06:04:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 06:04:24.512385745 +0000 UTC m=+7.153553452" watchObservedRunningTime="2025-09-12 06:04:24.51249306 +0000 UTC m=+7.153660777"
Sep 12 06:04:25.767577 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1003093325.mount: Deactivated successfully.
Sep 12 06:04:26.107380 containerd[1602]: time="2025-09-12T06:04:26.107251754Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 06:04:26.108065 containerd[1602]: time="2025-09-12T06:04:26.108016888Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=25062609"
Sep 12 06:04:26.109175 containerd[1602]: time="2025-09-12T06:04:26.109142547Z" level=info msg="ImageCreate event name:\"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 06:04:26.111071 containerd[1602]: time="2025-09-12T06:04:26.111041123Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 06:04:26.111654 containerd[1602]: time="2025-09-12T06:04:26.111597741Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"25058604\" in 1.781620405s"
Sep 12 06:04:26.111692 containerd[1602]: time="2025-09-12T06:04:26.111655861Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\""
Sep 12 06:04:26.116790 containerd[1602]: time="2025-09-12T06:04:26.116757773Z" level=info msg="CreateContainer within sandbox \"4d967e3acf2ed585b160b369d447a32da57da8c91d23b1c640f6b78d4df8cbc5\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Sep 12 06:04:26.124846 containerd[1602]: time="2025-09-12T06:04:26.124817243Z" level=info msg="Container e4f86d4eb7ee39558f4c0764e5cbbf1364733102c170d12fa4fee10661e29586: CDI devices from CRI Config.CDIDevices: []"
Sep 12 06:04:26.134812 containerd[1602]: time="2025-09-12T06:04:26.134749452Z" level=info msg="CreateContainer within sandbox \"4d967e3acf2ed585b160b369d447a32da57da8c91d23b1c640f6b78d4df8cbc5\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"e4f86d4eb7ee39558f4c0764e5cbbf1364733102c170d12fa4fee10661e29586\""
Sep 12 06:04:26.137170 containerd[1602]: time="2025-09-12T06:04:26.136978596Z" level=info msg="StartContainer for \"e4f86d4eb7ee39558f4c0764e5cbbf1364733102c170d12fa4fee10661e29586\""
Sep 12 06:04:26.138336 containerd[1602]: time="2025-09-12T06:04:26.138286170Z" level=info msg="connecting to shim e4f86d4eb7ee39558f4c0764e5cbbf1364733102c170d12fa4fee10661e29586" address="unix:///run/containerd/s/24ffbaeefef81308083a52d184b5998b7094d519d3899ea25d3c9754bf6317ca" protocol=ttrpc version=3
Sep 12 06:04:26.200787 systemd[1]: Started cri-containerd-e4f86d4eb7ee39558f4c0764e5cbbf1364733102c170d12fa4fee10661e29586.scope - libcontainer container e4f86d4eb7ee39558f4c0764e5cbbf1364733102c170d12fa4fee10661e29586.
Sep 12 06:04:26.231489 containerd[1602]: time="2025-09-12T06:04:26.231451083Z" level=info msg="StartContainer for \"e4f86d4eb7ee39558f4c0764e5cbbf1364733102c170d12fa4fee10661e29586\" returns successfully"
Sep 12 06:04:26.521692 kubelet[2758]: I0912 06:04:26.521394 2758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-755d956888-65mkk" podStartSLOduration=2.738026314 podStartE2EDuration="4.521377119s" podCreationTimestamp="2025-09-12 06:04:22 +0000 UTC" firstStartedPulling="2025-09-12 06:04:24.328913601 +0000 UTC m=+6.970081319" lastFinishedPulling="2025-09-12 06:04:26.112264417 +0000 UTC m=+8.753432124" observedRunningTime="2025-09-12 06:04:26.521196816 +0000 UTC m=+9.162364533" watchObservedRunningTime="2025-09-12 06:04:26.521377119 +0000 UTC m=+9.162544836"
Sep 12 06:04:28.122787 update_engine[1582]: I20250912 06:04:28.122699 1582 update_attempter.cc:509] Updating boot flags...
Sep 12 06:04:31.526083 sudo[1808]: pam_unix(sudo:session): session closed for user root
Sep 12 06:04:31.528151 sshd[1807]: Connection closed by 10.0.0.1 port 46108
Sep 12 06:04:31.529682 sshd-session[1804]: pam_unix(sshd:session): session closed for user core
Sep 12 06:04:31.536895 systemd-logind[1580]: Session 7 logged out. Waiting for processes to exit.
Sep 12 06:04:31.537099 systemd[1]: sshd@6-10.0.0.150:22-10.0.0.1:46108.service: Deactivated successfully.
Sep 12 06:04:31.541475 systemd[1]: session-7.scope: Deactivated successfully.
Sep 12 06:04:31.541765 systemd[1]: session-7.scope: Consumed 5.582s CPU time, 229.6M memory peak.
Sep 12 06:04:31.544792 systemd-logind[1580]: Removed session 7.
Sep 12 06:04:34.154806 systemd[1]: Created slice kubepods-besteffort-pod7d7f165f_f517_465c_9a6a_843c99cf8e71.slice - libcontainer container kubepods-besteffort-pod7d7f165f_f517_465c_9a6a_843c99cf8e71.slice.
Sep 12 06:04:34.171186 kubelet[2758]: I0912 06:04:34.170935 2758 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7d7f165f-f517-465c-9a6a-843c99cf8e71-tigera-ca-bundle\") pod \"calico-typha-847cb5c78-mlc22\" (UID: \"7d7f165f-f517-465c-9a6a-843c99cf8e71\") " pod="calico-system/calico-typha-847cb5c78-mlc22"
Sep 12 06:04:34.171676 kubelet[2758]: I0912 06:04:34.171200 2758 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28w8r\" (UniqueName: \"kubernetes.io/projected/7d7f165f-f517-465c-9a6a-843c99cf8e71-kube-api-access-28w8r\") pod \"calico-typha-847cb5c78-mlc22\" (UID: \"7d7f165f-f517-465c-9a6a-843c99cf8e71\") " pod="calico-system/calico-typha-847cb5c78-mlc22"
Sep 12 06:04:34.171676 kubelet[2758]: I0912 06:04:34.171226 2758 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/7d7f165f-f517-465c-9a6a-843c99cf8e71-typha-certs\") pod \"calico-typha-847cb5c78-mlc22\" (UID: \"7d7f165f-f517-465c-9a6a-843c99cf8e71\") " pod="calico-system/calico-typha-847cb5c78-mlc22"
Sep 12 06:04:34.399842 systemd[1]: Created slice kubepods-besteffort-pod36fd7be4_628b_4cd8_b8cb_5947b73fa72e.slice - libcontainer container kubepods-besteffort-pod36fd7be4_628b_4cd8_b8cb_5947b73fa72e.slice.
Sep 12 06:04:34.462669 containerd[1602]: time="2025-09-12T06:04:34.462040140Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-847cb5c78-mlc22,Uid:7d7f165f-f517-465c-9a6a-843c99cf8e71,Namespace:calico-system,Attempt:0,}"
Sep 12 06:04:34.473351 kubelet[2758]: I0912 06:04:34.473313 2758 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/36fd7be4-628b-4cd8-b8cb-5947b73fa72e-cni-bin-dir\") pod \"calico-node-blrdx\" (UID: \"36fd7be4-628b-4cd8-b8cb-5947b73fa72e\") " pod="calico-system/calico-node-blrdx"
Sep 12 06:04:34.473351 kubelet[2758]: I0912 06:04:34.473355 2758 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/36fd7be4-628b-4cd8-b8cb-5947b73fa72e-lib-modules\") pod \"calico-node-blrdx\" (UID: \"36fd7be4-628b-4cd8-b8cb-5947b73fa72e\") " pod="calico-system/calico-node-blrdx"
Sep 12 06:04:34.473474 kubelet[2758]: I0912 06:04:34.473373 2758 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/36fd7be4-628b-4cd8-b8cb-5947b73fa72e-var-lib-calico\") pod \"calico-node-blrdx\" (UID: \"36fd7be4-628b-4cd8-b8cb-5947b73fa72e\") " pod="calico-system/calico-node-blrdx"
Sep 12 06:04:34.473474 kubelet[2758]: I0912 06:04:34.473388 2758 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/36fd7be4-628b-4cd8-b8cb-5947b73fa72e-policysync\") pod \"calico-node-blrdx\" (UID: \"36fd7be4-628b-4cd8-b8cb-5947b73fa72e\") " pod="calico-system/calico-node-blrdx"
Sep 12 06:04:34.473474 kubelet[2758]: I0912 06:04:34.473401 2758 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/36fd7be4-628b-4cd8-b8cb-5947b73fa72e-tigera-ca-bundle\") pod \"calico-node-blrdx\" (UID: \"36fd7be4-628b-4cd8-b8cb-5947b73fa72e\") " pod="calico-system/calico-node-blrdx"
Sep 12 06:04:34.473474 kubelet[2758]: I0912 06:04:34.473415 2758 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sx9xs\" (UniqueName: \"kubernetes.io/projected/36fd7be4-628b-4cd8-b8cb-5947b73fa72e-kube-api-access-sx9xs\") pod \"calico-node-blrdx\" (UID: \"36fd7be4-628b-4cd8-b8cb-5947b73fa72e\") " pod="calico-system/calico-node-blrdx"
Sep 12 06:04:34.473474 kubelet[2758]: I0912 06:04:34.473431 2758 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/36fd7be4-628b-4cd8-b8cb-5947b73fa72e-cni-net-dir\") pod \"calico-node-blrdx\" (UID: \"36fd7be4-628b-4cd8-b8cb-5947b73fa72e\") " pod="calico-system/calico-node-blrdx"
Sep 12 06:04:34.473590 kubelet[2758]: I0912 06:04:34.473449 2758 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/36fd7be4-628b-4cd8-b8cb-5947b73fa72e-var-run-calico\") pod \"calico-node-blrdx\" (UID: \"36fd7be4-628b-4cd8-b8cb-5947b73fa72e\") " pod="calico-system/calico-node-blrdx"
Sep 12 06:04:34.473590 kubelet[2758]: I0912 06:04:34.473463 2758 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/36fd7be4-628b-4cd8-b8cb-5947b73fa72e-xtables-lock\") pod \"calico-node-blrdx\" (UID: \"36fd7be4-628b-4cd8-b8cb-5947b73fa72e\") " pod="calico-system/calico-node-blrdx"
Sep 12 06:04:34.473590 kubelet[2758]: I0912 06:04:34.473477 2758 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/36fd7be4-628b-4cd8-b8cb-5947b73fa72e-cni-log-dir\") pod \"calico-node-blrdx\" (UID: \"36fd7be4-628b-4cd8-b8cb-5947b73fa72e\") " pod="calico-system/calico-node-blrdx"
Sep 12 06:04:34.473590 kubelet[2758]: I0912 06:04:34.473492 2758 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/36fd7be4-628b-4cd8-b8cb-5947b73fa72e-flexvol-driver-host\") pod \"calico-node-blrdx\" (UID: \"36fd7be4-628b-4cd8-b8cb-5947b73fa72e\") " pod="calico-system/calico-node-blrdx"
Sep 12 06:04:34.473590 kubelet[2758]: I0912 06:04:34.473506 2758 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/36fd7be4-628b-4cd8-b8cb-5947b73fa72e-node-certs\") pod \"calico-node-blrdx\" (UID: \"36fd7be4-628b-4cd8-b8cb-5947b73fa72e\") " pod="calico-system/calico-node-blrdx"
Sep 12 06:04:34.499012 containerd[1602]: time="2025-09-12T06:04:34.498969587Z" level=info msg="connecting to shim 5fe433377cd7d96bbd006ec997ebdffae994927c250da54a529b58259d44b40d" address="unix:///run/containerd/s/8b0ed57581332b6123e972ee1760a6f695ec1f24094ad49043e4e2d80c451ac4" namespace=k8s.io protocol=ttrpc version=3
Sep 12 06:04:34.535766 systemd[1]: Started cri-containerd-5fe433377cd7d96bbd006ec997ebdffae994927c250da54a529b58259d44b40d.scope - libcontainer container 5fe433377cd7d96bbd006ec997ebdffae994927c250da54a529b58259d44b40d.
Sep 12 06:04:34.580883 kubelet[2758]: E0912 06:04:34.580838 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 06:04:34.580883 kubelet[2758]: W0912 06:04:34.580860 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 06:04:34.581699 kubelet[2758]: E0912 06:04:34.581677 2758 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 06:04:34.583451 kubelet[2758]: E0912 06:04:34.583430 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 06:04:34.583451 kubelet[2758]: W0912 06:04:34.583448 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 06:04:34.583567 kubelet[2758]: E0912 06:04:34.583468 2758 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 06:04:34.703614 containerd[1602]: time="2025-09-12T06:04:34.703537104Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-blrdx,Uid:36fd7be4-628b-4cd8-b8cb-5947b73fa72e,Namespace:calico-system,Attempt:0,}"
Sep 12 06:04:34.706933 containerd[1602]: time="2025-09-12T06:04:34.706898898Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-847cb5c78-mlc22,Uid:7d7f165f-f517-465c-9a6a-843c99cf8e71,Namespace:calico-system,Attempt:0,} returns sandbox id \"5fe433377cd7d96bbd006ec997ebdffae994927c250da54a529b58259d44b40d\""
Sep 12 06:04:34.708063 containerd[1602]: time="2025-09-12T06:04:34.708035025Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\""
Sep 12 06:04:35.053674 kubelet[2758]: E0912 06:04:35.052314 2758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jggbx" podUID="f79f1c00-3e42-40ed-bb81-e1034be446c4"
Sep 12 06:04:35.056121 containerd[1602]: time="2025-09-12T06:04:35.056065286Z" level=info msg="connecting to shim 1b3b4c515f7b23b2343d829e7830dcde286576bc6d7adebea6f2df23ab80dd3a" address="unix:///run/containerd/s/0c58d70909f3d9761095e3107990d9ea84ea242128efe52767af6fa6654e625a" namespace=k8s.io protocol=ttrpc version=3
Sep 12 06:04:35.062772 kubelet[2758]: E0912 06:04:35.062729 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 06:04:35.062852 kubelet[2758]: W0912 06:04:35.062759 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 06:04:35.062892 kubelet[2758]: E0912 06:04:35.062839 2758 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 06:04:35.063287 kubelet[2758]: E0912 06:04:35.063258 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 06:04:35.063287 kubelet[2758]: W0912 06:04:35.063272 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 06:04:35.063287 kubelet[2758]: E0912 06:04:35.063283 2758 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 06:04:35.063660 kubelet[2758]: E0912 06:04:35.063620 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 06:04:35.063660 kubelet[2758]: W0912 06:04:35.063651 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 06:04:35.063729 kubelet[2758]: E0912 06:04:35.063663 2758 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 06:04:35.064348 kubelet[2758]: E0912 06:04:35.064322 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 06:04:35.064653 kubelet[2758]: W0912 06:04:35.064410 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 06:04:35.064653 kubelet[2758]: E0912 06:04:35.064445 2758 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 06:04:35.065904 kubelet[2758]: E0912 06:04:35.065856 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 06:04:35.065904 kubelet[2758]: W0912 06:04:35.065877 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 06:04:35.065904 kubelet[2758]: E0912 06:04:35.065891 2758 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 06:04:35.067460 kubelet[2758]: E0912 06:04:35.067216 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 06:04:35.067460 kubelet[2758]: W0912 06:04:35.067233 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 06:04:35.067460 kubelet[2758]: E0912 06:04:35.067243 2758 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 06:04:35.068393 kubelet[2758]: E0912 06:04:35.068360 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 06:04:35.068393 kubelet[2758]: W0912 06:04:35.068383 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 06:04:35.068393 kubelet[2758]: E0912 06:04:35.068396 2758 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 06:04:35.069693 kubelet[2758]: E0912 06:04:35.068972 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 06:04:35.069693 kubelet[2758]: W0912 06:04:35.068990 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 06:04:35.069693 kubelet[2758]: E0912 06:04:35.069002 2758 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 06:04:35.069956 kubelet[2758]: E0912 06:04:35.069919 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 06:04:35.069956 kubelet[2758]: W0912 06:04:35.069946 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 06:04:35.070035 kubelet[2758]: E0912 06:04:35.069971 2758 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 06:04:35.071760 kubelet[2758]: E0912 06:04:35.071727 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 06:04:35.071804 kubelet[2758]: W0912 06:04:35.071785 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 06:04:35.071804 kubelet[2758]: E0912 06:04:35.071798 2758 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 06:04:35.072492 kubelet[2758]: E0912 06:04:35.072456 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 06:04:35.072492 kubelet[2758]: W0912 06:04:35.072473 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 06:04:35.072492 kubelet[2758]: E0912 06:04:35.072482 2758 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 06:04:35.074374 kubelet[2758]: E0912 06:04:35.073885 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 06:04:35.074374 kubelet[2758]: W0912 06:04:35.073901 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 06:04:35.074374 kubelet[2758]: E0912 06:04:35.073911 2758 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 06:04:35.075718 kubelet[2758]: E0912 06:04:35.075647 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 06:04:35.075718 kubelet[2758]: W0912 06:04:35.075664 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 06:04:35.075718 kubelet[2758]: E0912 06:04:35.075675 2758 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 06:04:35.075921 kubelet[2758]: E0912 06:04:35.075893 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 06:04:35.075921 kubelet[2758]: W0912 06:04:35.075909 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 06:04:35.075921 kubelet[2758]: E0912 06:04:35.075918 2758 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 06:04:35.076100 kubelet[2758]: E0912 06:04:35.076084 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 06:04:35.076100 kubelet[2758]: W0912 06:04:35.076098 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 06:04:35.076155 kubelet[2758]: E0912 06:04:35.076107 2758 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 06:04:35.076380 kubelet[2758]: E0912 06:04:35.076351 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 06:04:35.076380 kubelet[2758]: W0912 06:04:35.076366 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 06:04:35.076380 kubelet[2758]: E0912 06:04:35.076375 2758 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 06:04:35.076705 kubelet[2758]: E0912 06:04:35.076686 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 06:04:35.076705 kubelet[2758]: W0912 06:04:35.076699 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 06:04:35.076757 kubelet[2758]: E0912 06:04:35.076709 2758 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 06:04:35.076893 kubelet[2758]: E0912 06:04:35.076875 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 06:04:35.076893 kubelet[2758]: W0912 06:04:35.076886 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 06:04:35.076939 kubelet[2758]: E0912 06:04:35.076895 2758 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 06:04:35.077412 kubelet[2758]: E0912 06:04:35.077385 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 06:04:35.077412 kubelet[2758]: W0912 06:04:35.077400 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 06:04:35.077412 kubelet[2758]: E0912 06:04:35.077410 2758 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 06:04:35.078557 kubelet[2758]: E0912 06:04:35.077798 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 06:04:35.078557 kubelet[2758]: W0912 06:04:35.077813 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 06:04:35.078557 kubelet[2758]: E0912 06:04:35.077823 2758 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 06:04:35.080044 kubelet[2758]: E0912 06:04:35.080017 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 06:04:35.080044 kubelet[2758]: W0912 06:04:35.080034 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 06:04:35.080044 kubelet[2758]: E0912 06:04:35.080044 2758 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 06:04:35.080674 kubelet[2758]: I0912 06:04:35.080621 2758 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/f79f1c00-3e42-40ed-bb81-e1034be446c4-registration-dir\") pod \"csi-node-driver-jggbx\" (UID: \"f79f1c00-3e42-40ed-bb81-e1034be446c4\") " pod="calico-system/csi-node-driver-jggbx"
Sep 12 06:04:35.082033 kubelet[2758]: E0912 06:04:35.081831 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 06:04:35.082033 kubelet[2758]: W0912 06:04:35.081861 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 06:04:35.082033 kubelet[2758]: E0912 06:04:35.081891 2758 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 06:04:35.082033 kubelet[2758]: I0912 06:04:35.081934 2758 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/f79f1c00-3e42-40ed-bb81-e1034be446c4-socket-dir\") pod \"csi-node-driver-jggbx\" (UID: \"f79f1c00-3e42-40ed-bb81-e1034be446c4\") " pod="calico-system/csi-node-driver-jggbx"
Sep 12 06:04:35.082744 kubelet[2758]: E0912 06:04:35.082680 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 06:04:35.083241 kubelet[2758]: W0912 06:04:35.083223 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 06:04:35.083313 kubelet[2758]: E0912 06:04:35.083302 2758 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 06:04:35.083436 kubelet[2758]: I0912 06:04:35.083406 2758 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f79f1c00-3e42-40ed-bb81-e1034be446c4-kubelet-dir\") pod \"csi-node-driver-jggbx\" (UID: \"f79f1c00-3e42-40ed-bb81-e1034be446c4\") " pod="calico-system/csi-node-driver-jggbx"
Sep 12 06:04:35.083830 kubelet[2758]: E0912 06:04:35.083703 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 06:04:35.083916 kubelet[2758]: W0912 06:04:35.083876 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 06:04:35.083916 kubelet[2758]: E0912 06:04:35.083898 2758 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 06:04:35.084455 kubelet[2758]: E0912 06:04:35.084431 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 06:04:35.084540 kubelet[2758]: W0912 06:04:35.084514 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 06:04:35.084540 kubelet[2758]: E0912 06:04:35.084528 2758 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Sep 12 06:04:35.084930 kubelet[2758]: E0912 06:04:35.084910 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 06:04:35.085042 kubelet[2758]: W0912 06:04:35.085002 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 06:04:35.085042 kubelet[2758]: E0912 06:04:35.085015 2758 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 06:04:35.085268 kubelet[2758]: I0912 06:04:35.085250 2758 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/f79f1c00-3e42-40ed-bb81-e1034be446c4-varrun\") pod \"csi-node-driver-jggbx\" (UID: \"f79f1c00-3e42-40ed-bb81-e1034be446c4\") " pod="calico-system/csi-node-driver-jggbx" Sep 12 06:04:35.085878 kubelet[2758]: E0912 06:04:35.085864 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 06:04:35.085939 kubelet[2758]: W0912 06:04:35.085928 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 06:04:35.086019 kubelet[2758]: E0912 06:04:35.085990 2758 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 06:04:35.086821 kubelet[2758]: E0912 06:04:35.086806 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 06:04:35.086916 kubelet[2758]: W0912 06:04:35.086885 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 06:04:35.086916 kubelet[2758]: E0912 06:04:35.086901 2758 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 06:04:35.087978 kubelet[2758]: E0912 06:04:35.087925 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 06:04:35.087978 kubelet[2758]: W0912 06:04:35.087937 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 06:04:35.087978 kubelet[2758]: E0912 06:04:35.087946 2758 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 06:04:35.088561 kubelet[2758]: E0912 06:04:35.088501 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 06:04:35.088561 kubelet[2758]: W0912 06:04:35.088519 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 06:04:35.088561 kubelet[2758]: E0912 06:04:35.088528 2758 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 06:04:35.089212 kubelet[2758]: E0912 06:04:35.089191 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 06:04:35.089345 kubelet[2758]: W0912 06:04:35.089267 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 06:04:35.089345 kubelet[2758]: E0912 06:04:35.089332 2758 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 06:04:35.090105 kubelet[2758]: E0912 06:04:35.089924 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 06:04:35.090105 kubelet[2758]: W0912 06:04:35.089936 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 06:04:35.090105 kubelet[2758]: E0912 06:04:35.089947 2758 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 06:04:35.090105 kubelet[2758]: I0912 06:04:35.090084 2758 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6sj7c\" (UniqueName: \"kubernetes.io/projected/f79f1c00-3e42-40ed-bb81-e1034be446c4-kube-api-access-6sj7c\") pod \"csi-node-driver-jggbx\" (UID: \"f79f1c00-3e42-40ed-bb81-e1034be446c4\") " pod="calico-system/csi-node-driver-jggbx" Sep 12 06:04:35.090446 kubelet[2758]: E0912 06:04:35.090410 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 06:04:35.090446 kubelet[2758]: W0912 06:04:35.090422 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 06:04:35.090446 kubelet[2758]: E0912 06:04:35.090432 2758 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 06:04:35.091308 kubelet[2758]: E0912 06:04:35.091187 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 06:04:35.091308 kubelet[2758]: W0912 06:04:35.091199 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 06:04:35.091308 kubelet[2758]: E0912 06:04:35.091210 2758 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 06:04:35.091507 kubelet[2758]: E0912 06:04:35.091469 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 06:04:35.091507 kubelet[2758]: W0912 06:04:35.091480 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 06:04:35.091507 kubelet[2758]: E0912 06:04:35.091490 2758 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 06:04:35.091932 systemd[1]: Started cri-containerd-1b3b4c515f7b23b2343d829e7830dcde286576bc6d7adebea6f2df23ab80dd3a.scope - libcontainer container 1b3b4c515f7b23b2343d829e7830dcde286576bc6d7adebea6f2df23ab80dd3a. 
Sep 12 06:04:35.128950 containerd[1602]: time="2025-09-12T06:04:35.128899989Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-blrdx,Uid:36fd7be4-628b-4cd8-b8cb-5947b73fa72e,Namespace:calico-system,Attempt:0,} returns sandbox id \"1b3b4c515f7b23b2343d829e7830dcde286576bc6d7adebea6f2df23ab80dd3a\"" Sep 12 06:04:35.192129 kubelet[2758]: E0912 06:04:35.192089 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 06:04:35.192129 kubelet[2758]: W0912 06:04:35.192115 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 06:04:35.192129 kubelet[2758]: E0912 06:04:35.192139 2758 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 06:04:35.192677 kubelet[2758]: E0912 06:04:35.192424 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 06:04:35.192677 kubelet[2758]: W0912 06:04:35.192447 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 06:04:35.192677 kubelet[2758]: E0912 06:04:35.192474 2758 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 06:04:35.192876 kubelet[2758]: E0912 06:04:35.192862 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 06:04:35.192876 kubelet[2758]: W0912 06:04:35.192873 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 06:04:35.192929 kubelet[2758]: E0912 06:04:35.192882 2758 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 06:04:35.193091 kubelet[2758]: E0912 06:04:35.193078 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 06:04:35.193091 kubelet[2758]: W0912 06:04:35.193088 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 06:04:35.193143 kubelet[2758]: E0912 06:04:35.193096 2758 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 06:04:35.193368 kubelet[2758]: E0912 06:04:35.193339 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 06:04:35.193368 kubelet[2758]: W0912 06:04:35.193356 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 06:04:35.193415 kubelet[2758]: E0912 06:04:35.193370 2758 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 06:04:35.193604 kubelet[2758]: E0912 06:04:35.193579 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 06:04:35.193604 kubelet[2758]: W0912 06:04:35.193598 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 06:04:35.193661 kubelet[2758]: E0912 06:04:35.193607 2758 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 06:04:35.193829 kubelet[2758]: E0912 06:04:35.193804 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 06:04:35.193829 kubelet[2758]: W0912 06:04:35.193825 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 06:04:35.193881 kubelet[2758]: E0912 06:04:35.193835 2758 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 06:04:35.194238 kubelet[2758]: E0912 06:04:35.194198 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 06:04:35.194238 kubelet[2758]: W0912 06:04:35.194228 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 06:04:35.194290 kubelet[2758]: E0912 06:04:35.194250 2758 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 06:04:35.194462 kubelet[2758]: E0912 06:04:35.194446 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 06:04:35.194462 kubelet[2758]: W0912 06:04:35.194458 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 06:04:35.194526 kubelet[2758]: E0912 06:04:35.194466 2758 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 06:04:35.194706 kubelet[2758]: E0912 06:04:35.194676 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 06:04:35.194706 kubelet[2758]: W0912 06:04:35.194688 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 06:04:35.194706 kubelet[2758]: E0912 06:04:35.194696 2758 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 06:04:35.195093 kubelet[2758]: E0912 06:04:35.194860 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 06:04:35.195093 kubelet[2758]: W0912 06:04:35.194868 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 06:04:35.195093 kubelet[2758]: E0912 06:04:35.194875 2758 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 06:04:35.195093 kubelet[2758]: E0912 06:04:35.195061 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 06:04:35.195093 kubelet[2758]: W0912 06:04:35.195068 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 06:04:35.195093 kubelet[2758]: E0912 06:04:35.195076 2758 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 06:04:35.196623 kubelet[2758]: E0912 06:04:35.196590 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 06:04:35.196623 kubelet[2758]: W0912 06:04:35.196604 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 06:04:35.196623 kubelet[2758]: E0912 06:04:35.196614 2758 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 06:04:35.196817 kubelet[2758]: E0912 06:04:35.196800 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 06:04:35.196817 kubelet[2758]: W0912 06:04:35.196811 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 06:04:35.196865 kubelet[2758]: E0912 06:04:35.196819 2758 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 06:04:35.196996 kubelet[2758]: E0912 06:04:35.196978 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 06:04:35.196996 kubelet[2758]: W0912 06:04:35.196989 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 06:04:35.197054 kubelet[2758]: E0912 06:04:35.196997 2758 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 06:04:35.197208 kubelet[2758]: E0912 06:04:35.197182 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 06:04:35.197208 kubelet[2758]: W0912 06:04:35.197193 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 06:04:35.197208 kubelet[2758]: E0912 06:04:35.197203 2758 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 06:04:35.197398 kubelet[2758]: E0912 06:04:35.197380 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 06:04:35.197398 kubelet[2758]: W0912 06:04:35.197390 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 06:04:35.197398 kubelet[2758]: E0912 06:04:35.197398 2758 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 06:04:35.197613 kubelet[2758]: E0912 06:04:35.197594 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 06:04:35.197613 kubelet[2758]: W0912 06:04:35.197605 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 06:04:35.197613 kubelet[2758]: E0912 06:04:35.197614 2758 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 06:04:35.197835 kubelet[2758]: E0912 06:04:35.197818 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 06:04:35.197835 kubelet[2758]: W0912 06:04:35.197829 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 06:04:35.197835 kubelet[2758]: E0912 06:04:35.197837 2758 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 06:04:35.198056 kubelet[2758]: E0912 06:04:35.198039 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 06:04:35.198056 kubelet[2758]: W0912 06:04:35.198051 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 06:04:35.198106 kubelet[2758]: E0912 06:04:35.198059 2758 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 06:04:35.198256 kubelet[2758]: E0912 06:04:35.198241 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 06:04:35.198256 kubelet[2758]: W0912 06:04:35.198251 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 06:04:35.198256 kubelet[2758]: E0912 06:04:35.198259 2758 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 06:04:35.198481 kubelet[2758]: E0912 06:04:35.198462 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 06:04:35.198481 kubelet[2758]: W0912 06:04:35.198473 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 06:04:35.198481 kubelet[2758]: E0912 06:04:35.198481 2758 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 06:04:35.198765 kubelet[2758]: E0912 06:04:35.198745 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 06:04:35.198765 kubelet[2758]: W0912 06:04:35.198758 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 06:04:35.198765 kubelet[2758]: E0912 06:04:35.198768 2758 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 06:04:35.198974 kubelet[2758]: E0912 06:04:35.198958 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 06:04:35.198974 kubelet[2758]: W0912 06:04:35.198967 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 06:04:35.199030 kubelet[2758]: E0912 06:04:35.198976 2758 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 06:04:35.199403 kubelet[2758]: E0912 06:04:35.199358 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 06:04:35.199403 kubelet[2758]: W0912 06:04:35.199374 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 06:04:35.199403 kubelet[2758]: E0912 06:04:35.199384 2758 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 06:04:35.206816 kubelet[2758]: E0912 06:04:35.206795 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 06:04:35.206816 kubelet[2758]: W0912 06:04:35.206808 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 06:04:35.206816 kubelet[2758]: E0912 06:04:35.206818 2758 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 06:04:36.236535 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2870649100.mount: Deactivated successfully. 
Sep 12 06:04:36.470783 kubelet[2758]: E0912 06:04:36.470737 2758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jggbx" podUID="f79f1c00-3e42-40ed-bb81-e1034be446c4" Sep 12 06:04:36.598893 containerd[1602]: time="2025-09-12T06:04:36.598762724Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 06:04:36.599795 containerd[1602]: time="2025-09-12T06:04:36.599764866Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=35237389" Sep 12 06:04:36.601268 containerd[1602]: time="2025-09-12T06:04:36.601222769Z" level=info msg="ImageCreate event name:\"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 06:04:36.603531 containerd[1602]: time="2025-09-12T06:04:36.603499328Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 06:04:36.604547 containerd[1602]: time="2025-09-12T06:04:36.604078943Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"35237243\" in 1.896014634s" Sep 12 06:04:36.604547 containerd[1602]: time="2025-09-12T06:04:36.604110762Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference 
\"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\"" Sep 12 06:04:36.605018 containerd[1602]: time="2025-09-12T06:04:36.604967350Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\"" Sep 12 06:04:36.618497 containerd[1602]: time="2025-09-12T06:04:36.618443207Z" level=info msg="CreateContainer within sandbox \"5fe433377cd7d96bbd006ec997ebdffae994927c250da54a529b58259d44b40d\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 12 06:04:36.625808 containerd[1602]: time="2025-09-12T06:04:36.625760403Z" level=info msg="Container 03e954ca958a829aad137a3799778b899873c6b14803cd8740398e2e4fee9fa5: CDI devices from CRI Config.CDIDevices: []" Sep 12 06:04:36.634652 containerd[1602]: time="2025-09-12T06:04:36.634596288Z" level=info msg="CreateContainer within sandbox \"5fe433377cd7d96bbd006ec997ebdffae994927c250da54a529b58259d44b40d\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"03e954ca958a829aad137a3799778b899873c6b14803cd8740398e2e4fee9fa5\"" Sep 12 06:04:36.635177 containerd[1602]: time="2025-09-12T06:04:36.635148330Z" level=info msg="StartContainer for \"03e954ca958a829aad137a3799778b899873c6b14803cd8740398e2e4fee9fa5\"" Sep 12 06:04:36.636158 containerd[1602]: time="2025-09-12T06:04:36.636129082Z" level=info msg="connecting to shim 03e954ca958a829aad137a3799778b899873c6b14803cd8740398e2e4fee9fa5" address="unix:///run/containerd/s/8b0ed57581332b6123e972ee1760a6f695ec1f24094ad49043e4e2d80c451ac4" protocol=ttrpc version=3 Sep 12 06:04:36.661807 systemd[1]: Started cri-containerd-03e954ca958a829aad137a3799778b899873c6b14803cd8740398e2e4fee9fa5.scope - libcontainer container 03e954ca958a829aad137a3799778b899873c6b14803cd8740398e2e4fee9fa5. 
Sep 12 06:04:36.908016 containerd[1602]: time="2025-09-12T06:04:36.907833938Z" level=info msg="StartContainer for \"03e954ca958a829aad137a3799778b899873c6b14803cd8740398e2e4fee9fa5\" returns successfully" Sep 12 06:04:37.548285 kubelet[2758]: I0912 06:04:37.548207 2758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-847cb5c78-mlc22" podStartSLOduration=1.651183224 podStartE2EDuration="3.548191272s" podCreationTimestamp="2025-09-12 06:04:34 +0000 UTC" firstStartedPulling="2025-09-12 06:04:34.707838834 +0000 UTC m=+17.349006551" lastFinishedPulling="2025-09-12 06:04:36.604846882 +0000 UTC m=+19.246014599" observedRunningTime="2025-09-12 06:04:37.547828097 +0000 UTC m=+20.188995814" watchObservedRunningTime="2025-09-12 06:04:37.548191272 +0000 UTC m=+20.189358989" Sep 12 06:04:37.593894 kubelet[2758]: E0912 06:04:37.593857 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 06:04:37.593894 kubelet[2758]: W0912 06:04:37.593877 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 06:04:37.593894 kubelet[2758]: E0912 06:04:37.593897 2758 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 06:04:37.594138 kubelet[2758]: E0912 06:04:37.594112 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 06:04:37.594138 kubelet[2758]: W0912 06:04:37.594123 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 06:04:37.594138 kubelet[2758]: E0912 06:04:37.594132 2758 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 06:04:37.594339 kubelet[2758]: E0912 06:04:37.594314 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 06:04:37.594339 kubelet[2758]: W0912 06:04:37.594326 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 06:04:37.594339 kubelet[2758]: E0912 06:04:37.594333 2758 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 06:04:38.470866 kubelet[2758]: E0912 06:04:38.470798 2758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jggbx" podUID="f79f1c00-3e42-40ed-bb81-e1034be446c4" Sep 12 06:04:38.522558 containerd[1602]: time="2025-09-12T06:04:38.522478081Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 06:04:38.523767 containerd[1602]: time="2025-09-12T06:04:38.523727178Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4446660" Sep 12 06:04:38.525160 containerd[1602]: time="2025-09-12T06:04:38.525088145Z" level=info msg="ImageCreate event name:\"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 06:04:38.527200 containerd[1602]: time="2025-09-12T06:04:38.527143172Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 06:04:38.529669 containerd[1602]: time="2025-09-12T06:04:38.528131908Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5939323\" in 1.923113932s" Sep 12 06:04:38.529669 containerd[1602]: time="2025-09-12T06:04:38.528189787Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\"" Sep 12 06:04:38.536362 containerd[1602]: time="2025-09-12T06:04:38.536312132Z" level=info msg="CreateContainer within sandbox \"1b3b4c515f7b23b2343d829e7830dcde286576bc6d7adebea6f2df23ab80dd3a\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 12 06:04:38.538428 kubelet[2758]: I0912 06:04:38.538384 2758 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 06:04:38.546039 containerd[1602]: time="2025-09-12T06:04:38.545989471Z" level=info msg="Container ab1908b1952adfa96b347e20648c310f79fd7bdc015383ff7959f287ebeba221: CDI devices from CRI Config.CDIDevices: []" Sep 12 06:04:38.554146 containerd[1602]: time="2025-09-12T06:04:38.554091067Z" level=info msg="CreateContainer within sandbox \"1b3b4c515f7b23b2343d829e7830dcde286576bc6d7adebea6f2df23ab80dd3a\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"ab1908b1952adfa96b347e20648c310f79fd7bdc015383ff7959f287ebeba221\"" Sep 12 06:04:38.554736 containerd[1602]: time="2025-09-12T06:04:38.554688464Z" level=info msg="StartContainer for \"ab1908b1952adfa96b347e20648c310f79fd7bdc015383ff7959f287ebeba221\"" Sep 12 06:04:38.556250 containerd[1602]: time="2025-09-12T06:04:38.556209063Z" level=info msg="connecting to shim ab1908b1952adfa96b347e20648c310f79fd7bdc015383ff7959f287ebeba221" address="unix:///run/containerd/s/0c58d70909f3d9761095e3107990d9ea84ea242128efe52767af6fa6654e625a" protocol=ttrpc version=3 Sep 12 06:04:38.584999 systemd[1]: Started cri-containerd-ab1908b1952adfa96b347e20648c310f79fd7bdc015383ff7959f287ebeba221.scope - libcontainer container ab1908b1952adfa96b347e20648c310f79fd7bdc015383ff7959f287ebeba221. 
Sep 12 06:04:38.603091 kubelet[2758]: E0912 06:04:38.603040 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 06:04:38.603091 kubelet[2758]: W0912 06:04:38.603066 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 06:04:38.603618 kubelet[2758]: E0912 06:04:38.603091 2758 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 06:04:38.603618 kubelet[2758]: E0912 06:04:38.603340 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 06:04:38.603618 kubelet[2758]: W0912 06:04:38.603372 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 06:04:38.603618 kubelet[2758]: E0912 06:04:38.603384 2758 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 06:04:38.603773 kubelet[2758]: E0912 06:04:38.603622 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 06:04:38.603773 kubelet[2758]: W0912 06:04:38.603646 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 06:04:38.603773 kubelet[2758]: E0912 06:04:38.603655 2758 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 06:04:38.606269 kubelet[2758]: E0912 06:04:38.606255 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 06:04:38.606269 kubelet[2758]: W0912 06:04:38.606265 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 06:04:38.606322 kubelet[2758]: E0912 06:04:38.606273 2758 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 06:04:38.606470 kubelet[2758]: E0912 06:04:38.606458 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 06:04:38.606470 kubelet[2758]: W0912 06:04:38.606467 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 06:04:38.606517 kubelet[2758]: E0912 06:04:38.606475 2758 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 06:04:38.606701 kubelet[2758]: E0912 06:04:38.606671 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 06:04:38.606701 kubelet[2758]: W0912 06:04:38.606696 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 06:04:38.606770 kubelet[2758]: E0912 06:04:38.606705 2758 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 06:04:38.606930 kubelet[2758]: E0912 06:04:38.606917 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 06:04:38.606930 kubelet[2758]: W0912 06:04:38.606927 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 06:04:38.606978 kubelet[2758]: E0912 06:04:38.606937 2758 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 06:04:38.622988 kubelet[2758]: E0912 06:04:38.622953 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 06:04:38.622988 kubelet[2758]: W0912 06:04:38.622974 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 06:04:38.622988 kubelet[2758]: E0912 06:04:38.622997 2758 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 06:04:38.623247 kubelet[2758]: E0912 06:04:38.623230 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 06:04:38.623247 kubelet[2758]: W0912 06:04:38.623242 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 06:04:38.623304 kubelet[2758]: E0912 06:04:38.623251 2758 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 06:04:38.623478 kubelet[2758]: E0912 06:04:38.623458 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 06:04:38.623478 kubelet[2758]: W0912 06:04:38.623471 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 06:04:38.623478 kubelet[2758]: E0912 06:04:38.623480 2758 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 06:04:38.623721 kubelet[2758]: E0912 06:04:38.623704 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 06:04:38.623721 kubelet[2758]: W0912 06:04:38.623716 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 06:04:38.623801 kubelet[2758]: E0912 06:04:38.623725 2758 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 06:04:38.623954 kubelet[2758]: E0912 06:04:38.623930 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 06:04:38.623954 kubelet[2758]: W0912 06:04:38.623948 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 06:04:38.623954 kubelet[2758]: E0912 06:04:38.623957 2758 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 06:04:38.624229 kubelet[2758]: E0912 06:04:38.624141 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 06:04:38.624229 kubelet[2758]: W0912 06:04:38.624164 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 06:04:38.624229 kubelet[2758]: E0912 06:04:38.624174 2758 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 06:04:38.624561 kubelet[2758]: E0912 06:04:38.624531 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 06:04:38.624561 kubelet[2758]: W0912 06:04:38.624556 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 06:04:38.624561 kubelet[2758]: E0912 06:04:38.624567 2758 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 06:04:38.625727 kubelet[2758]: E0912 06:04:38.625688 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 06:04:38.625727 kubelet[2758]: W0912 06:04:38.625703 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 06:04:38.625727 kubelet[2758]: E0912 06:04:38.625715 2758 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 06:04:38.625981 kubelet[2758]: E0912 06:04:38.625965 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 06:04:38.625981 kubelet[2758]: W0912 06:04:38.625976 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 06:04:38.626069 kubelet[2758]: E0912 06:04:38.625985 2758 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 06:04:38.626195 kubelet[2758]: E0912 06:04:38.626177 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 06:04:38.626195 kubelet[2758]: W0912 06:04:38.626191 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 06:04:38.626291 kubelet[2758]: E0912 06:04:38.626201 2758 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 06:04:38.626374 kubelet[2758]: E0912 06:04:38.626358 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 06:04:38.626374 kubelet[2758]: W0912 06:04:38.626369 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 06:04:38.626463 kubelet[2758]: E0912 06:04:38.626377 2758 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 06:04:38.626589 kubelet[2758]: E0912 06:04:38.626559 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 06:04:38.626589 kubelet[2758]: W0912 06:04:38.626570 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 06:04:38.626589 kubelet[2758]: E0912 06:04:38.626580 2758 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 06:04:38.626979 kubelet[2758]: E0912 06:04:38.626939 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 06:04:38.626979 kubelet[2758]: W0912 06:04:38.626964 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 06:04:38.626979 kubelet[2758]: E0912 06:04:38.626989 2758 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 06:04:38.627817 kubelet[2758]: E0912 06:04:38.627782 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 06:04:38.627817 kubelet[2758]: W0912 06:04:38.627805 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 06:04:38.627891 kubelet[2758]: E0912 06:04:38.627830 2758 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 06:04:38.628114 kubelet[2758]: E0912 06:04:38.628097 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 06:04:38.628114 kubelet[2758]: W0912 06:04:38.628109 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 06:04:38.628179 kubelet[2758]: E0912 06:04:38.628119 2758 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 06:04:38.628307 kubelet[2758]: E0912 06:04:38.628292 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 06:04:38.628307 kubelet[2758]: W0912 06:04:38.628303 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 06:04:38.628357 kubelet[2758]: E0912 06:04:38.628311 2758 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 06:04:38.641298 containerd[1602]: time="2025-09-12T06:04:38.641251287Z" level=info msg="StartContainer for \"ab1908b1952adfa96b347e20648c310f79fd7bdc015383ff7959f287ebeba221\" returns successfully" Sep 12 06:04:38.654864 systemd[1]: cri-containerd-ab1908b1952adfa96b347e20648c310f79fd7bdc015383ff7959f287ebeba221.scope: Deactivated successfully. 
Sep 12 06:04:38.658452 containerd[1602]: time="2025-09-12T06:04:38.658393761Z" level=info msg="received exit event container_id:\"ab1908b1952adfa96b347e20648c310f79fd7bdc015383ff7959f287ebeba221\" id:\"ab1908b1952adfa96b347e20648c310f79fd7bdc015383ff7959f287ebeba221\" pid:3464 exited_at:{seconds:1757657078 nanos:657980912}" Sep 12 06:04:38.658555 containerd[1602]: time="2025-09-12T06:04:38.658473822Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ab1908b1952adfa96b347e20648c310f79fd7bdc015383ff7959f287ebeba221\" id:\"ab1908b1952adfa96b347e20648c310f79fd7bdc015383ff7959f287ebeba221\" pid:3464 exited_at:{seconds:1757657078 nanos:657980912}" Sep 12 06:04:38.685489 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ab1908b1952adfa96b347e20648c310f79fd7bdc015383ff7959f287ebeba221-rootfs.mount: Deactivated successfully. Sep 12 06:04:39.543910 containerd[1602]: time="2025-09-12T06:04:39.543848406Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Sep 12 06:04:40.470954 kubelet[2758]: E0912 06:04:40.470895 2758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jggbx" podUID="f79f1c00-3e42-40ed-bb81-e1034be446c4" Sep 12 06:04:42.471441 kubelet[2758]: E0912 06:04:42.471387 2758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jggbx" podUID="f79f1c00-3e42-40ed-bb81-e1034be446c4" Sep 12 06:04:43.125958 containerd[1602]: time="2025-09-12T06:04:43.125899906Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 
06:04:43.126796 containerd[1602]: time="2025-09-12T06:04:43.126747252Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=70440613" Sep 12 06:04:43.128090 containerd[1602]: time="2025-09-12T06:04:43.128052911Z" level=info msg="ImageCreate event name:\"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 06:04:43.132649 containerd[1602]: time="2025-09-12T06:04:43.131149795Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 06:04:43.132649 containerd[1602]: time="2025-09-12T06:04:43.132070730Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"71933316\" in 3.588178602s" Sep 12 06:04:43.132649 containerd[1602]: time="2025-09-12T06:04:43.132093433Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\"" Sep 12 06:04:43.138804 containerd[1602]: time="2025-09-12T06:04:43.138751965Z" level=info msg="CreateContainer within sandbox \"1b3b4c515f7b23b2343d829e7830dcde286576bc6d7adebea6f2df23ab80dd3a\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 12 06:04:43.147776 containerd[1602]: time="2025-09-12T06:04:43.147731149Z" level=info msg="Container 2f54855a49e8ccbe6a1ec6c40c86a38cd2674ec11c573d524a2e7128995c05d8: CDI devices from CRI Config.CDIDevices: []" Sep 12 06:04:43.158173 containerd[1602]: time="2025-09-12T06:04:43.158142863Z" level=info msg="CreateContainer within sandbox 
\"1b3b4c515f7b23b2343d829e7830dcde286576bc6d7adebea6f2df23ab80dd3a\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"2f54855a49e8ccbe6a1ec6c40c86a38cd2674ec11c573d524a2e7128995c05d8\"" Sep 12 06:04:43.158625 containerd[1602]: time="2025-09-12T06:04:43.158593090Z" level=info msg="StartContainer for \"2f54855a49e8ccbe6a1ec6c40c86a38cd2674ec11c573d524a2e7128995c05d8\"" Sep 12 06:04:43.160003 containerd[1602]: time="2025-09-12T06:04:43.159960706Z" level=info msg="connecting to shim 2f54855a49e8ccbe6a1ec6c40c86a38cd2674ec11c573d524a2e7128995c05d8" address="unix:///run/containerd/s/0c58d70909f3d9761095e3107990d9ea84ea242128efe52767af6fa6654e625a" protocol=ttrpc version=3 Sep 12 06:04:43.189293 systemd[1]: Started cri-containerd-2f54855a49e8ccbe6a1ec6c40c86a38cd2674ec11c573d524a2e7128995c05d8.scope - libcontainer container 2f54855a49e8ccbe6a1ec6c40c86a38cd2674ec11c573d524a2e7128995c05d8. Sep 12 06:04:43.234002 containerd[1602]: time="2025-09-12T06:04:43.233956403Z" level=info msg="StartContainer for \"2f54855a49e8ccbe6a1ec6c40c86a38cd2674ec11c573d524a2e7128995c05d8\" returns successfully" Sep 12 06:04:44.471605 kubelet[2758]: E0912 06:04:44.470929 2758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jggbx" podUID="f79f1c00-3e42-40ed-bb81-e1034be446c4" Sep 12 06:04:44.590508 systemd[1]: cri-containerd-2f54855a49e8ccbe6a1ec6c40c86a38cd2674ec11c573d524a2e7128995c05d8.scope: Deactivated successfully. Sep 12 06:04:44.590903 systemd[1]: cri-containerd-2f54855a49e8ccbe6a1ec6c40c86a38cd2674ec11c573d524a2e7128995c05d8.scope: Consumed 610ms CPU time, 177.2M memory peak, 4.4M read from disk, 171.3M written to disk. 
Sep 12 06:04:44.593344 containerd[1602]: time="2025-09-12T06:04:44.593304009Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2f54855a49e8ccbe6a1ec6c40c86a38cd2674ec11c573d524a2e7128995c05d8\" id:\"2f54855a49e8ccbe6a1ec6c40c86a38cd2674ec11c573d524a2e7128995c05d8\" pid:3561 exited_at:{seconds:1757657084 nanos:592978836}" Sep 12 06:04:44.594050 containerd[1602]: time="2025-09-12T06:04:44.593850307Z" level=info msg="received exit event container_id:\"2f54855a49e8ccbe6a1ec6c40c86a38cd2674ec11c573d524a2e7128995c05d8\" id:\"2f54855a49e8ccbe6a1ec6c40c86a38cd2674ec11c573d524a2e7128995c05d8\" pid:3561 exited_at:{seconds:1757657084 nanos:592978836}" Sep 12 06:04:44.622230 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2f54855a49e8ccbe6a1ec6c40c86a38cd2674ec11c573d524a2e7128995c05d8-rootfs.mount: Deactivated successfully. Sep 12 06:04:44.638254 kubelet[2758]: I0912 06:04:44.637710 2758 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 12 06:04:44.861843 systemd[1]: Created slice kubepods-burstable-pod1060453b_4e80_449a_b65f_58e2fab02113.slice - libcontainer container kubepods-burstable-pod1060453b_4e80_449a_b65f_58e2fab02113.slice. 
Sep 12 06:04:44.864966 kubelet[2758]: I0912 06:04:44.864736 2758 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w7lqq\" (UniqueName: \"kubernetes.io/projected/1060453b-4e80-449a-b65f-58e2fab02113-kube-api-access-w7lqq\") pod \"coredns-674b8bbfcf-dsqq8\" (UID: \"1060453b-4e80-449a-b65f-58e2fab02113\") " pod="kube-system/coredns-674b8bbfcf-dsqq8" Sep 12 06:04:44.864966 kubelet[2758]: I0912 06:04:44.864777 2758 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/9c6c3a0d-475d-4673-9f60-910cc63aa4d9-calico-apiserver-certs\") pod \"calico-apiserver-84959cb5d5-7f7ms\" (UID: \"9c6c3a0d-475d-4673-9f60-910cc63aa4d9\") " pod="calico-apiserver/calico-apiserver-84959cb5d5-7f7ms" Sep 12 06:04:44.864966 kubelet[2758]: I0912 06:04:44.864793 2758 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ntfv8\" (UniqueName: \"kubernetes.io/projected/9c6c3a0d-475d-4673-9f60-910cc63aa4d9-kube-api-access-ntfv8\") pod \"calico-apiserver-84959cb5d5-7f7ms\" (UID: \"9c6c3a0d-475d-4673-9f60-910cc63aa4d9\") " pod="calico-apiserver/calico-apiserver-84959cb5d5-7f7ms" Sep 12 06:04:44.864966 kubelet[2758]: I0912 06:04:44.864807 2758 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1060453b-4e80-449a-b65f-58e2fab02113-config-volume\") pod \"coredns-674b8bbfcf-dsqq8\" (UID: \"1060453b-4e80-449a-b65f-58e2fab02113\") " pod="kube-system/coredns-674b8bbfcf-dsqq8" Sep 12 06:04:44.871934 systemd[1]: Created slice kubepods-besteffort-pod9c6c3a0d_475d_4673_9f60_910cc63aa4d9.slice - libcontainer container kubepods-besteffort-pod9c6c3a0d_475d_4673_9f60_910cc63aa4d9.slice. 
Sep 12 06:04:44.878773 systemd[1]: Created slice kubepods-besteffort-podf725c81c_f9bd_4915_ab55_8f8f38b50c43.slice - libcontainer container kubepods-besteffort-podf725c81c_f9bd_4915_ab55_8f8f38b50c43.slice. Sep 12 06:04:44.884523 systemd[1]: Created slice kubepods-burstable-pod565b8e82_48fc_4998_a083_ad30ae1f437e.slice - libcontainer container kubepods-burstable-pod565b8e82_48fc_4998_a083_ad30ae1f437e.slice. Sep 12 06:04:44.891792 systemd[1]: Created slice kubepods-besteffort-pod2b197ab3_c2aa_4a25_a200_7c47de4ee662.slice - libcontainer container kubepods-besteffort-pod2b197ab3_c2aa_4a25_a200_7c47de4ee662.slice. Sep 12 06:04:44.898896 systemd[1]: Created slice kubepods-besteffort-pod6f47717f_d4cf_4f49_acac_5116f5479f26.slice - libcontainer container kubepods-besteffort-pod6f47717f_d4cf_4f49_acac_5116f5479f26.slice. Sep 12 06:04:44.903219 systemd[1]: Created slice kubepods-besteffort-pod5306b6c3_b830_4bb1_afab_d71396b947ec.slice - libcontainer container kubepods-besteffort-pod5306b6c3_b830_4bb1_afab_d71396b947ec.slice. 
Sep 12 06:04:44.944333 containerd[1602]: time="2025-09-12T06:04:44.944285953Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 12 06:04:44.965673 kubelet[2758]: I0912 06:04:44.965573 2758 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2b197ab3-c2aa-4a25-a200-7c47de4ee662-tigera-ca-bundle\") pod \"calico-kube-controllers-6b5c45568f-dl5wm\" (UID: \"2b197ab3-c2aa-4a25-a200-7c47de4ee662\") " pod="calico-system/calico-kube-controllers-6b5c45568f-dl5wm" Sep 12 06:04:44.965673 kubelet[2758]: I0912 06:04:44.965623 2758 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/5306b6c3-b830-4bb1-afab-d71396b947ec-goldmane-key-pair\") pod \"goldmane-54d579b49d-5fzv6\" (UID: \"5306b6c3-b830-4bb1-afab-d71396b947ec\") " pod="calico-system/goldmane-54d579b49d-5fzv6" Sep 12 06:04:44.965884 kubelet[2758]: I0912 06:04:44.965815 2758 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2kpbd\" (UniqueName: \"kubernetes.io/projected/565b8e82-48fc-4998-a083-ad30ae1f437e-kube-api-access-2kpbd\") pod \"coredns-674b8bbfcf-m2pzx\" (UID: \"565b8e82-48fc-4998-a083-ad30ae1f437e\") " pod="kube-system/coredns-674b8bbfcf-m2pzx" Sep 12 06:04:44.965884 kubelet[2758]: I0912 06:04:44.965847 2758 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7s7lx\" (UniqueName: \"kubernetes.io/projected/2b197ab3-c2aa-4a25-a200-7c47de4ee662-kube-api-access-7s7lx\") pod \"calico-kube-controllers-6b5c45568f-dl5wm\" (UID: \"2b197ab3-c2aa-4a25-a200-7c47de4ee662\") " pod="calico-system/calico-kube-controllers-6b5c45568f-dl5wm" Sep 12 06:04:44.965884 kubelet[2758]: I0912 06:04:44.965865 2758 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/f725c81c-f9bd-4915-ab55-8f8f38b50c43-calico-apiserver-certs\") pod \"calico-apiserver-84959cb5d5-xxzxr\" (UID: \"f725c81c-f9bd-4915-ab55-8f8f38b50c43\") " pod="calico-apiserver/calico-apiserver-84959cb5d5-xxzxr" Sep 12 06:04:44.965884 kubelet[2758]: I0912 06:04:44.965882 2758 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/6f47717f-d4cf-4f49-acac-5116f5479f26-whisker-backend-key-pair\") pod \"whisker-5785df657-sqt28\" (UID: \"6f47717f-d4cf-4f49-acac-5116f5479f26\") " pod="calico-system/whisker-5785df657-sqt28" Sep 12 06:04:44.965993 kubelet[2758]: I0912 06:04:44.965903 2758 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/565b8e82-48fc-4998-a083-ad30ae1f437e-config-volume\") pod \"coredns-674b8bbfcf-m2pzx\" (UID: \"565b8e82-48fc-4998-a083-ad30ae1f437e\") " pod="kube-system/coredns-674b8bbfcf-m2pzx" Sep 12 06:04:44.965993 kubelet[2758]: I0912 06:04:44.965920 2758 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5306b6c3-b830-4bb1-afab-d71396b947ec-config\") pod \"goldmane-54d579b49d-5fzv6\" (UID: \"5306b6c3-b830-4bb1-afab-d71396b947ec\") " pod="calico-system/goldmane-54d579b49d-5fzv6" Sep 12 06:04:44.965993 kubelet[2758]: I0912 06:04:44.965933 2758 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5306b6c3-b830-4bb1-afab-d71396b947ec-goldmane-ca-bundle\") pod \"goldmane-54d579b49d-5fzv6\" (UID: \"5306b6c3-b830-4bb1-afab-d71396b947ec\") " pod="calico-system/goldmane-54d579b49d-5fzv6" Sep 12 06:04:44.965993 kubelet[2758]: I0912 06:04:44.965946 2758 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29fbf\" (UniqueName: \"kubernetes.io/projected/5306b6c3-b830-4bb1-afab-d71396b947ec-kube-api-access-29fbf\") pod \"goldmane-54d579b49d-5fzv6\" (UID: \"5306b6c3-b830-4bb1-afab-d71396b947ec\") " pod="calico-system/goldmane-54d579b49d-5fzv6" Sep 12 06:04:44.966170 kubelet[2758]: I0912 06:04:44.966103 2758 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p68w6\" (UniqueName: \"kubernetes.io/projected/f725c81c-f9bd-4915-ab55-8f8f38b50c43-kube-api-access-p68w6\") pod \"calico-apiserver-84959cb5d5-xxzxr\" (UID: \"f725c81c-f9bd-4915-ab55-8f8f38b50c43\") " pod="calico-apiserver/calico-apiserver-84959cb5d5-xxzxr" Sep 12 06:04:44.966170 kubelet[2758]: I0912 06:04:44.966155 2758 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6f47717f-d4cf-4f49-acac-5116f5479f26-whisker-ca-bundle\") pod \"whisker-5785df657-sqt28\" (UID: \"6f47717f-d4cf-4f49-acac-5116f5479f26\") " pod="calico-system/whisker-5785df657-sqt28" Sep 12 06:04:44.966809 kubelet[2758]: I0912 06:04:44.966173 2758 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g88bg\" (UniqueName: \"kubernetes.io/projected/6f47717f-d4cf-4f49-acac-5116f5479f26-kube-api-access-g88bg\") pod \"whisker-5785df657-sqt28\" (UID: \"6f47717f-d4cf-4f49-acac-5116f5479f26\") " pod="calico-system/whisker-5785df657-sqt28" Sep 12 06:04:45.165999 kubelet[2758]: E0912 06:04:45.165959 2758 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 06:04:45.166513 containerd[1602]: time="2025-09-12T06:04:45.166471407Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-674b8bbfcf-dsqq8,Uid:1060453b-4e80-449a-b65f-58e2fab02113,Namespace:kube-system,Attempt:0,}" Sep 12 06:04:45.177060 containerd[1602]: time="2025-09-12T06:04:45.177007323Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84959cb5d5-7f7ms,Uid:9c6c3a0d-475d-4673-9f60-910cc63aa4d9,Namespace:calico-apiserver,Attempt:0,}" Sep 12 06:04:45.182676 containerd[1602]: time="2025-09-12T06:04:45.182624910Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84959cb5d5-xxzxr,Uid:f725c81c-f9bd-4915-ab55-8f8f38b50c43,Namespace:calico-apiserver,Attempt:0,}" Sep 12 06:04:45.189352 kubelet[2758]: E0912 06:04:45.189320 2758 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 06:04:45.191028 containerd[1602]: time="2025-09-12T06:04:45.190810431Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-m2pzx,Uid:565b8e82-48fc-4998-a083-ad30ae1f437e,Namespace:kube-system,Attempt:0,}" Sep 12 06:04:45.195267 containerd[1602]: time="2025-09-12T06:04:45.195242877Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6b5c45568f-dl5wm,Uid:2b197ab3-c2aa-4a25-a200-7c47de4ee662,Namespace:calico-system,Attempt:0,}" Sep 12 06:04:45.204110 containerd[1602]: time="2025-09-12T06:04:45.204066549Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5785df657-sqt28,Uid:6f47717f-d4cf-4f49-acac-5116f5479f26,Namespace:calico-system,Attempt:0,}" Sep 12 06:04:45.222823 containerd[1602]: time="2025-09-12T06:04:45.222796623Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-5fzv6,Uid:5306b6c3-b830-4bb1-afab-d71396b947ec,Namespace:calico-system,Attempt:0,}" Sep 12 06:04:45.305329 containerd[1602]: time="2025-09-12T06:04:45.305234392Z" level=error msg="Failed to destroy network for sandbox 
\"aa91a77a729a8ab7d3978fa050c44b030a9ebc90e7ed99b10d0ecf321f0c6f65\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 06:04:45.307952 containerd[1602]: time="2025-09-12T06:04:45.307911432Z" level=error msg="Failed to destroy network for sandbox \"9c4f96574c4c490b8ead2e149670be616bd0f6e924ff1cba046ec3e88ee6ca04\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 06:04:45.310165 containerd[1602]: time="2025-09-12T06:04:45.310136963Z" level=error msg="Failed to destroy network for sandbox \"567c5908e231c5d003e157d35ea027c02e0a2aa3f45fd9c9e446b302a174096e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 06:04:45.316709 containerd[1602]: time="2025-09-12T06:04:45.316664082Z" level=error msg="Failed to destroy network for sandbox \"e4d24f76f9bfd6755b10424adebbdae151eecce5360f17e4c705d484a15a4a10\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 06:04:45.327877 containerd[1602]: time="2025-09-12T06:04:45.327834583Z" level=error msg="Failed to destroy network for sandbox \"a65b078dd9721ea9965384375d2c3fb3b65df2a69234657868eec05f4449338f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 06:04:45.397769 containerd[1602]: time="2025-09-12T06:04:45.397687046Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-674b8bbfcf-dsqq8,Uid:1060453b-4e80-449a-b65f-58e2fab02113,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"aa91a77a729a8ab7d3978fa050c44b030a9ebc90e7ed99b10d0ecf321f0c6f65\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 06:04:45.398985 containerd[1602]: time="2025-09-12T06:04:45.398939244Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-m2pzx,Uid:565b8e82-48fc-4998-a083-ad30ae1f437e,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9c4f96574c4c490b8ead2e149670be616bd0f6e924ff1cba046ec3e88ee6ca04\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 06:04:45.401731 containerd[1602]: time="2025-09-12T06:04:45.401692108Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84959cb5d5-7f7ms,Uid:9c6c3a0d-475d-4673-9f60-910cc63aa4d9,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"567c5908e231c5d003e157d35ea027c02e0a2aa3f45fd9c9e446b302a174096e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 06:04:45.402744 containerd[1602]: time="2025-09-12T06:04:45.402714092Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84959cb5d5-xxzxr,Uid:f725c81c-f9bd-4915-ab55-8f8f38b50c43,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"e4d24f76f9bfd6755b10424adebbdae151eecce5360f17e4c705d484a15a4a10\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 06:04:45.404504 containerd[1602]: time="2025-09-12T06:04:45.403717060Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5785df657-sqt28,Uid:6f47717f-d4cf-4f49-acac-5116f5479f26,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a65b078dd9721ea9965384375d2c3fb3b65df2a69234657868eec05f4449338f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 06:04:45.440586 kubelet[2758]: E0912 06:04:45.440011 2758 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a65b078dd9721ea9965384375d2c3fb3b65df2a69234657868eec05f4449338f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 06:04:45.440586 kubelet[2758]: E0912 06:04:45.440102 2758 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a65b078dd9721ea9965384375d2c3fb3b65df2a69234657868eec05f4449338f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5785df657-sqt28" Sep 12 06:04:45.440586 kubelet[2758]: E0912 06:04:45.440128 2758 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"a65b078dd9721ea9965384375d2c3fb3b65df2a69234657868eec05f4449338f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5785df657-sqt28" Sep 12 06:04:45.440895 kubelet[2758]: E0912 06:04:45.440175 2758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5785df657-sqt28_calico-system(6f47717f-d4cf-4f49-acac-5116f5479f26)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-5785df657-sqt28_calico-system(6f47717f-d4cf-4f49-acac-5116f5479f26)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a65b078dd9721ea9965384375d2c3fb3b65df2a69234657868eec05f4449338f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5785df657-sqt28" podUID="6f47717f-d4cf-4f49-acac-5116f5479f26" Sep 12 06:04:45.440895 kubelet[2758]: E0912 06:04:45.440453 2758 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9c4f96574c4c490b8ead2e149670be616bd0f6e924ff1cba046ec3e88ee6ca04\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 06:04:45.440895 kubelet[2758]: E0912 06:04:45.440475 2758 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9c4f96574c4c490b8ead2e149670be616bd0f6e924ff1cba046ec3e88ee6ca04\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-674b8bbfcf-m2pzx" Sep 12 06:04:45.441000 kubelet[2758]: E0912 06:04:45.440490 2758 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9c4f96574c4c490b8ead2e149670be616bd0f6e924ff1cba046ec3e88ee6ca04\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-m2pzx" Sep 12 06:04:45.441000 kubelet[2758]: E0912 06:04:45.440024 2758 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aa91a77a729a8ab7d3978fa050c44b030a9ebc90e7ed99b10d0ecf321f0c6f65\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 06:04:45.441000 kubelet[2758]: E0912 06:04:45.440521 2758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-m2pzx_kube-system(565b8e82-48fc-4998-a083-ad30ae1f437e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-m2pzx_kube-system(565b8e82-48fc-4998-a083-ad30ae1f437e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9c4f96574c4c490b8ead2e149670be616bd0f6e924ff1cba046ec3e88ee6ca04\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-m2pzx" podUID="565b8e82-48fc-4998-a083-ad30ae1f437e" Sep 12 06:04:45.441000 kubelet[2758]: E0912 06:04:45.440545 2758 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"567c5908e231c5d003e157d35ea027c02e0a2aa3f45fd9c9e446b302a174096e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 06:04:45.441113 kubelet[2758]: E0912 06:04:45.440561 2758 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"567c5908e231c5d003e157d35ea027c02e0a2aa3f45fd9c9e446b302a174096e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-84959cb5d5-7f7ms" Sep 12 06:04:45.441113 kubelet[2758]: E0912 06:04:45.440572 2758 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"567c5908e231c5d003e157d35ea027c02e0a2aa3f45fd9c9e446b302a174096e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-84959cb5d5-7f7ms" Sep 12 06:04:45.441113 kubelet[2758]: E0912 06:04:45.440595 2758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-84959cb5d5-7f7ms_calico-apiserver(9c6c3a0d-475d-4673-9f60-910cc63aa4d9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-84959cb5d5-7f7ms_calico-apiserver(9c6c3a0d-475d-4673-9f60-910cc63aa4d9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"567c5908e231c5d003e157d35ea027c02e0a2aa3f45fd9c9e446b302a174096e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-apiserver/calico-apiserver-84959cb5d5-7f7ms" podUID="9c6c3a0d-475d-4673-9f60-910cc63aa4d9" Sep 12 06:04:45.441198 kubelet[2758]: E0912 06:04:45.440621 2758 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e4d24f76f9bfd6755b10424adebbdae151eecce5360f17e4c705d484a15a4a10\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 06:04:45.441198 kubelet[2758]: E0912 06:04:45.440665 2758 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e4d24f76f9bfd6755b10424adebbdae151eecce5360f17e4c705d484a15a4a10\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-84959cb5d5-xxzxr" Sep 12 06:04:45.441198 kubelet[2758]: E0912 06:04:45.440677 2758 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e4d24f76f9bfd6755b10424adebbdae151eecce5360f17e4c705d484a15a4a10\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-84959cb5d5-xxzxr" Sep 12 06:04:45.441198 kubelet[2758]: E0912 06:04:45.440560 2758 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aa91a77a729a8ab7d3978fa050c44b030a9ebc90e7ed99b10d0ecf321f0c6f65\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-dsqq8" Sep 12 06:04:45.441300 kubelet[2758]: E0912 06:04:45.440727 2758 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aa91a77a729a8ab7d3978fa050c44b030a9ebc90e7ed99b10d0ecf321f0c6f65\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-dsqq8" Sep 12 06:04:45.441300 kubelet[2758]: E0912 06:04:45.440700 2758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-84959cb5d5-xxzxr_calico-apiserver(f725c81c-f9bd-4915-ab55-8f8f38b50c43)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-84959cb5d5-xxzxr_calico-apiserver(f725c81c-f9bd-4915-ab55-8f8f38b50c43)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e4d24f76f9bfd6755b10424adebbdae151eecce5360f17e4c705d484a15a4a10\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-84959cb5d5-xxzxr" podUID="f725c81c-f9bd-4915-ab55-8f8f38b50c43" Sep 12 06:04:45.441624 kubelet[2758]: E0912 06:04:45.441556 2758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-dsqq8_kube-system(1060453b-4e80-449a-b65f-58e2fab02113)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-dsqq8_kube-system(1060453b-4e80-449a-b65f-58e2fab02113)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"aa91a77a729a8ab7d3978fa050c44b030a9ebc90e7ed99b10d0ecf321f0c6f65\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-dsqq8" podUID="1060453b-4e80-449a-b65f-58e2fab02113" Sep 12 06:04:45.457948 containerd[1602]: time="2025-09-12T06:04:45.457896236Z" level=error msg="Failed to destroy network for sandbox \"7e14bc00148f818b0436d969756bc2dd2ca86e9ab4626c5373407d1519c428f2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 06:04:45.459190 containerd[1602]: time="2025-09-12T06:04:45.459146750Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6b5c45568f-dl5wm,Uid:2b197ab3-c2aa-4a25-a200-7c47de4ee662,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7e14bc00148f818b0436d969756bc2dd2ca86e9ab4626c5373407d1519c428f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 06:04:45.459460 kubelet[2758]: E0912 06:04:45.459394 2758 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7e14bc00148f818b0436d969756bc2dd2ca86e9ab4626c5373407d1519c428f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 06:04:45.459511 kubelet[2758]: E0912 06:04:45.459478 2758 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7e14bc00148f818b0436d969756bc2dd2ca86e9ab4626c5373407d1519c428f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6b5c45568f-dl5wm" Sep 12 06:04:45.459540 kubelet[2758]: E0912 06:04:45.459523 2758 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7e14bc00148f818b0436d969756bc2dd2ca86e9ab4626c5373407d1519c428f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6b5c45568f-dl5wm" Sep 12 06:04:45.459678 kubelet[2758]: E0912 06:04:45.459627 2758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6b5c45568f-dl5wm_calico-system(2b197ab3-c2aa-4a25-a200-7c47de4ee662)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6b5c45568f-dl5wm_calico-system(2b197ab3-c2aa-4a25-a200-7c47de4ee662)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7e14bc00148f818b0436d969756bc2dd2ca86e9ab4626c5373407d1519c428f2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6b5c45568f-dl5wm" podUID="2b197ab3-c2aa-4a25-a200-7c47de4ee662" Sep 12 06:04:45.501553 containerd[1602]: time="2025-09-12T06:04:45.501489300Z" level=error msg="Failed to destroy network for sandbox \"221c6ae6397371842090783b350308cfb12b32ebf246dcb15827e66b3b951063\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 06:04:45.502758 containerd[1602]: time="2025-09-12T06:04:45.502722772Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:goldmane-54d579b49d-5fzv6,Uid:5306b6c3-b830-4bb1-afab-d71396b947ec,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"221c6ae6397371842090783b350308cfb12b32ebf246dcb15827e66b3b951063\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 06:04:45.502991 kubelet[2758]: E0912 06:04:45.502942 2758 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"221c6ae6397371842090783b350308cfb12b32ebf246dcb15827e66b3b951063\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 06:04:45.503304 kubelet[2758]: E0912 06:04:45.503017 2758 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"221c6ae6397371842090783b350308cfb12b32ebf246dcb15827e66b3b951063\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-5fzv6" Sep 12 06:04:45.503304 kubelet[2758]: E0912 06:04:45.503041 2758 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"221c6ae6397371842090783b350308cfb12b32ebf246dcb15827e66b3b951063\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-5fzv6" Sep 12 06:04:45.503304 kubelet[2758]: E0912 06:04:45.503099 2758 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-54d579b49d-5fzv6_calico-system(5306b6c3-b830-4bb1-afab-d71396b947ec)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-54d579b49d-5fzv6_calico-system(5306b6c3-b830-4bb1-afab-d71396b947ec)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"221c6ae6397371842090783b350308cfb12b32ebf246dcb15827e66b3b951063\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-5fzv6" podUID="5306b6c3-b830-4bb1-afab-d71396b947ec" Sep 12 06:04:46.476991 systemd[1]: Created slice kubepods-besteffort-podf79f1c00_3e42_40ed_bb81_e1034be446c4.slice - libcontainer container kubepods-besteffort-podf79f1c00_3e42_40ed_bb81_e1034be446c4.slice. Sep 12 06:04:46.479369 containerd[1602]: time="2025-09-12T06:04:46.479332649Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jggbx,Uid:f79f1c00-3e42-40ed-bb81-e1034be446c4,Namespace:calico-system,Attempt:0,}" Sep 12 06:04:46.529851 containerd[1602]: time="2025-09-12T06:04:46.529799634Z" level=error msg="Failed to destroy network for sandbox \"632c38d602219d5d42b1bef379a02d2d2bc27761e440ac6f4cdcffff62ba01ca\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 06:04:46.532163 systemd[1]: run-netns-cni\x2d38b3c478\x2dddd4\x2dc1ae\x2d5449\x2d5351c3e0f8c6.mount: Deactivated successfully. 
Sep 12 06:04:46.533115 containerd[1602]: time="2025-09-12T06:04:46.533049711Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jggbx,Uid:f79f1c00-3e42-40ed-bb81-e1034be446c4,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"632c38d602219d5d42b1bef379a02d2d2bc27761e440ac6f4cdcffff62ba01ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 06:04:46.533315 kubelet[2758]: E0912 06:04:46.533272 2758 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"632c38d602219d5d42b1bef379a02d2d2bc27761e440ac6f4cdcffff62ba01ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 06:04:46.533609 kubelet[2758]: E0912 06:04:46.533337 2758 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"632c38d602219d5d42b1bef379a02d2d2bc27761e440ac6f4cdcffff62ba01ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jggbx" Sep 12 06:04:46.533609 kubelet[2758]: E0912 06:04:46.533359 2758 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"632c38d602219d5d42b1bef379a02d2d2bc27761e440ac6f4cdcffff62ba01ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jggbx" 
Sep 12 06:04:46.533609 kubelet[2758]: E0912 06:04:46.533420 2758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-jggbx_calico-system(f79f1c00-3e42-40ed-bb81-e1034be446c4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-jggbx_calico-system(f79f1c00-3e42-40ed-bb81-e1034be446c4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"632c38d602219d5d42b1bef379a02d2d2bc27761e440ac6f4cdcffff62ba01ca\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-jggbx" podUID="f79f1c00-3e42-40ed-bb81-e1034be446c4" Sep 12 06:04:51.137373 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1174632669.mount: Deactivated successfully. Sep 12 06:04:52.419945 containerd[1602]: time="2025-09-12T06:04:52.419875769Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 06:04:52.420685 containerd[1602]: time="2025-09-12T06:04:52.420661907Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=157078339" Sep 12 06:04:52.421917 containerd[1602]: time="2025-09-12T06:04:52.421879427Z" level=info msg="ImageCreate event name:\"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 06:04:52.423682 containerd[1602]: time="2025-09-12T06:04:52.423623735Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 06:04:52.424199 containerd[1602]: time="2025-09-12T06:04:52.424161076Z" level=info msg="Pulled image 
\"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"157078201\" in 7.479840429s" Sep 12 06:04:52.424199 containerd[1602]: time="2025-09-12T06:04:52.424190531Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\"" Sep 12 06:04:52.445233 containerd[1602]: time="2025-09-12T06:04:52.445194052Z" level=info msg="CreateContainer within sandbox \"1b3b4c515f7b23b2343d829e7830dcde286576bc6d7adebea6f2df23ab80dd3a\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 12 06:04:52.463792 containerd[1602]: time="2025-09-12T06:04:52.463740182Z" level=info msg="Container 161783db8b0b90af42a794808305f9b6a3b7e53f77a4ab77bec0067be40c506f: CDI devices from CRI Config.CDIDevices: []" Sep 12 06:04:52.464834 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2391051828.mount: Deactivated successfully. 
Sep 12 06:04:52.474135 containerd[1602]: time="2025-09-12T06:04:52.474095212Z" level=info msg="CreateContainer within sandbox \"1b3b4c515f7b23b2343d829e7830dcde286576bc6d7adebea6f2df23ab80dd3a\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"161783db8b0b90af42a794808305f9b6a3b7e53f77a4ab77bec0067be40c506f\"" Sep 12 06:04:52.474732 containerd[1602]: time="2025-09-12T06:04:52.474699227Z" level=info msg="StartContainer for \"161783db8b0b90af42a794808305f9b6a3b7e53f77a4ab77bec0067be40c506f\"" Sep 12 06:04:52.476331 containerd[1602]: time="2025-09-12T06:04:52.476288896Z" level=info msg="connecting to shim 161783db8b0b90af42a794808305f9b6a3b7e53f77a4ab77bec0067be40c506f" address="unix:///run/containerd/s/0c58d70909f3d9761095e3107990d9ea84ea242128efe52767af6fa6654e625a" protocol=ttrpc version=3 Sep 12 06:04:52.498783 systemd[1]: Started cri-containerd-161783db8b0b90af42a794808305f9b6a3b7e53f77a4ab77bec0067be40c506f.scope - libcontainer container 161783db8b0b90af42a794808305f9b6a3b7e53f77a4ab77bec0067be40c506f. Sep 12 06:04:52.555508 containerd[1602]: time="2025-09-12T06:04:52.555458690Z" level=info msg="StartContainer for \"161783db8b0b90af42a794808305f9b6a3b7e53f77a4ab77bec0067be40c506f\" returns successfully" Sep 12 06:04:52.627463 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 12 06:04:52.628799 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. 
Sep 12 06:04:52.808162 kubelet[2758]: I0912 06:04:52.808022 2758 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/6f47717f-d4cf-4f49-acac-5116f5479f26-whisker-backend-key-pair\") pod \"6f47717f-d4cf-4f49-acac-5116f5479f26\" (UID: \"6f47717f-d4cf-4f49-acac-5116f5479f26\") " Sep 12 06:04:52.808756 kubelet[2758]: I0912 06:04:52.808725 2758 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6f47717f-d4cf-4f49-acac-5116f5479f26-whisker-ca-bundle\") pod \"6f47717f-d4cf-4f49-acac-5116f5479f26\" (UID: \"6f47717f-d4cf-4f49-acac-5116f5479f26\") " Sep 12 06:04:52.808802 kubelet[2758]: I0912 06:04:52.808758 2758 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g88bg\" (UniqueName: \"kubernetes.io/projected/6f47717f-d4cf-4f49-acac-5116f5479f26-kube-api-access-g88bg\") pod \"6f47717f-d4cf-4f49-acac-5116f5479f26\" (UID: \"6f47717f-d4cf-4f49-acac-5116f5479f26\") " Sep 12 06:04:52.809555 kubelet[2758]: I0912 06:04:52.809522 2758 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6f47717f-d4cf-4f49-acac-5116f5479f26-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "6f47717f-d4cf-4f49-acac-5116f5479f26" (UID: "6f47717f-d4cf-4f49-acac-5116f5479f26"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 12 06:04:52.813022 kubelet[2758]: I0912 06:04:52.812856 2758 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f47717f-d4cf-4f49-acac-5116f5479f26-kube-api-access-g88bg" (OuterVolumeSpecName: "kube-api-access-g88bg") pod "6f47717f-d4cf-4f49-acac-5116f5479f26" (UID: "6f47717f-d4cf-4f49-acac-5116f5479f26"). InnerVolumeSpecName "kube-api-access-g88bg". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 12 06:04:52.813022 kubelet[2758]: I0912 06:04:52.812939 2758 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f47717f-d4cf-4f49-acac-5116f5479f26-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "6f47717f-d4cf-4f49-acac-5116f5479f26" (UID: "6f47717f-d4cf-4f49-acac-5116f5479f26"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 12 06:04:52.909045 kubelet[2758]: I0912 06:04:52.908989 2758 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/6f47717f-d4cf-4f49-acac-5116f5479f26-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Sep 12 06:04:52.909045 kubelet[2758]: I0912 06:04:52.909026 2758 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6f47717f-d4cf-4f49-acac-5116f5479f26-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Sep 12 06:04:52.909045 kubelet[2758]: I0912 06:04:52.909035 2758 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-g88bg\" (UniqueName: \"kubernetes.io/projected/6f47717f-d4cf-4f49-acac-5116f5479f26-kube-api-access-g88bg\") on node \"localhost\" DevicePath \"\"" Sep 12 06:04:52.975314 systemd[1]: Removed slice kubepods-besteffort-pod6f47717f_d4cf_4f49_acac_5116f5479f26.slice - libcontainer container kubepods-besteffort-pod6f47717f_d4cf_4f49_acac_5116f5479f26.slice. 
Sep 12 06:04:52.986378 kubelet[2758]: I0912 06:04:52.986073 2758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-blrdx" podStartSLOduration=1.691415457 podStartE2EDuration="18.98605867s" podCreationTimestamp="2025-09-12 06:04:34 +0000 UTC" firstStartedPulling="2025-09-12 06:04:35.130215184 +0000 UTC m=+17.771382901" lastFinishedPulling="2025-09-12 06:04:52.424858397 +0000 UTC m=+35.066026114" observedRunningTime="2025-09-12 06:04:52.984991613 +0000 UTC m=+35.626159350" watchObservedRunningTime="2025-09-12 06:04:52.98605867 +0000 UTC m=+35.627226387" Sep 12 06:04:53.258409 systemd[1]: Created slice kubepods-besteffort-pod43f88186_6516_44d4_859c_debdd05dd0b4.slice - libcontainer container kubepods-besteffort-pod43f88186_6516_44d4_859c_debdd05dd0b4.slice. Sep 12 06:04:53.312742 kubelet[2758]: I0912 06:04:53.312676 2758 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/43f88186-6516-44d4-859c-debdd05dd0b4-whisker-backend-key-pair\") pod \"whisker-76d5fc668d-rwlw2\" (UID: \"43f88186-6516-44d4-859c-debdd05dd0b4\") " pod="calico-system/whisker-76d5fc668d-rwlw2" Sep 12 06:04:53.312742 kubelet[2758]: I0912 06:04:53.312728 2758 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43f88186-6516-44d4-859c-debdd05dd0b4-whisker-ca-bundle\") pod \"whisker-76d5fc668d-rwlw2\" (UID: \"43f88186-6516-44d4-859c-debdd05dd0b4\") " pod="calico-system/whisker-76d5fc668d-rwlw2" Sep 12 06:04:53.312742 kubelet[2758]: I0912 06:04:53.312748 2758 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6qwf\" (UniqueName: \"kubernetes.io/projected/43f88186-6516-44d4-859c-debdd05dd0b4-kube-api-access-z6qwf\") pod \"whisker-76d5fc668d-rwlw2\" (UID: 
\"43f88186-6516-44d4-859c-debdd05dd0b4\") " pod="calico-system/whisker-76d5fc668d-rwlw2" Sep 12 06:04:53.343398 containerd[1602]: time="2025-09-12T06:04:53.343349288Z" level=info msg="TaskExit event in podsandbox handler container_id:\"161783db8b0b90af42a794808305f9b6a3b7e53f77a4ab77bec0067be40c506f\" id:\"59bd7933309c435950ecfb58808131f45d02690675bffad73840a1cb80360ef3\" pid:3947 exit_status:1 exited_at:{seconds:1757657093 nanos:342956549}" Sep 12 06:04:53.433370 systemd[1]: var-lib-kubelet-pods-6f47717f\x2dd4cf\x2d4f49\x2dacac\x2d5116f5479f26-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dg88bg.mount: Deactivated successfully. Sep 12 06:04:53.433505 systemd[1]: var-lib-kubelet-pods-6f47717f\x2dd4cf\x2d4f49\x2dacac\x2d5116f5479f26-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Sep 12 06:04:53.474537 kubelet[2758]: I0912 06:04:53.474495 2758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f47717f-d4cf-4f49-acac-5116f5479f26" path="/var/lib/kubelet/pods/6f47717f-d4cf-4f49-acac-5116f5479f26/volumes" Sep 12 06:04:53.561845 containerd[1602]: time="2025-09-12T06:04:53.561714184Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-76d5fc668d-rwlw2,Uid:43f88186-6516-44d4-859c-debdd05dd0b4,Namespace:calico-system,Attempt:0,}" Sep 12 06:04:53.703297 systemd-networkd[1495]: calidb1f09a6522: Link UP Sep 12 06:04:53.704006 systemd-networkd[1495]: calidb1f09a6522: Gained carrier Sep 12 06:04:53.717103 containerd[1602]: 2025-09-12 06:04:53.585 [INFO][3962] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 12 06:04:53.717103 containerd[1602]: 2025-09-12 06:04:53.601 [INFO][3962] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--76d5fc668d--rwlw2-eth0 whisker-76d5fc668d- calico-system 43f88186-6516-44d4-859c-debdd05dd0b4 909 0 2025-09-12 06:04:53 +0000 UTC 
map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:76d5fc668d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-76d5fc668d-rwlw2 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calidb1f09a6522 [] [] }} ContainerID="853c78afe09bbfeae4c3ef99017cae6a971e3dab7e1e6f30a9dbddbb44ea765d" Namespace="calico-system" Pod="whisker-76d5fc668d-rwlw2" WorkloadEndpoint="localhost-k8s-whisker--76d5fc668d--rwlw2-" Sep 12 06:04:53.717103 containerd[1602]: 2025-09-12 06:04:53.601 [INFO][3962] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="853c78afe09bbfeae4c3ef99017cae6a971e3dab7e1e6f30a9dbddbb44ea765d" Namespace="calico-system" Pod="whisker-76d5fc668d-rwlw2" WorkloadEndpoint="localhost-k8s-whisker--76d5fc668d--rwlw2-eth0" Sep 12 06:04:53.717103 containerd[1602]: 2025-09-12 06:04:53.659 [INFO][3978] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="853c78afe09bbfeae4c3ef99017cae6a971e3dab7e1e6f30a9dbddbb44ea765d" HandleID="k8s-pod-network.853c78afe09bbfeae4c3ef99017cae6a971e3dab7e1e6f30a9dbddbb44ea765d" Workload="localhost-k8s-whisker--76d5fc668d--rwlw2-eth0" Sep 12 06:04:53.717340 containerd[1602]: 2025-09-12 06:04:53.660 [INFO][3978] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="853c78afe09bbfeae4c3ef99017cae6a971e3dab7e1e6f30a9dbddbb44ea765d" HandleID="k8s-pod-network.853c78afe09bbfeae4c3ef99017cae6a971e3dab7e1e6f30a9dbddbb44ea765d" Workload="localhost-k8s-whisker--76d5fc668d--rwlw2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002fd490), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-76d5fc668d-rwlw2", "timestamp":"2025-09-12 06:04:53.659815636 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 06:04:53.717340 containerd[1602]: 2025-09-12 06:04:53.660 [INFO][3978] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 06:04:53.717340 containerd[1602]: 2025-09-12 06:04:53.660 [INFO][3978] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 06:04:53.717340 containerd[1602]: 2025-09-12 06:04:53.661 [INFO][3978] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 12 06:04:53.717340 containerd[1602]: 2025-09-12 06:04:53.668 [INFO][3978] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.853c78afe09bbfeae4c3ef99017cae6a971e3dab7e1e6f30a9dbddbb44ea765d" host="localhost" Sep 12 06:04:53.717340 containerd[1602]: 2025-09-12 06:04:53.675 [INFO][3978] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 12 06:04:53.717340 containerd[1602]: 2025-09-12 06:04:53.679 [INFO][3978] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 12 06:04:53.717340 containerd[1602]: 2025-09-12 06:04:53.680 [INFO][3978] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 12 06:04:53.717340 containerd[1602]: 2025-09-12 06:04:53.682 [INFO][3978] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 12 06:04:53.717340 containerd[1602]: 2025-09-12 06:04:53.682 [INFO][3978] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.853c78afe09bbfeae4c3ef99017cae6a971e3dab7e1e6f30a9dbddbb44ea765d" host="localhost" Sep 12 06:04:53.717562 containerd[1602]: 2025-09-12 06:04:53.683 [INFO][3978] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.853c78afe09bbfeae4c3ef99017cae6a971e3dab7e1e6f30a9dbddbb44ea765d Sep 12 06:04:53.717562 containerd[1602]: 2025-09-12 06:04:53.688 [INFO][3978] ipam/ipam.go 1243: Writing block in order to claim IPs 
block=192.168.88.128/26 handle="k8s-pod-network.853c78afe09bbfeae4c3ef99017cae6a971e3dab7e1e6f30a9dbddbb44ea765d" host="localhost" Sep 12 06:04:53.717562 containerd[1602]: 2025-09-12 06:04:53.692 [INFO][3978] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.853c78afe09bbfeae4c3ef99017cae6a971e3dab7e1e6f30a9dbddbb44ea765d" host="localhost" Sep 12 06:04:53.717562 containerd[1602]: 2025-09-12 06:04:53.692 [INFO][3978] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.853c78afe09bbfeae4c3ef99017cae6a971e3dab7e1e6f30a9dbddbb44ea765d" host="localhost" Sep 12 06:04:53.717562 containerd[1602]: 2025-09-12 06:04:53.692 [INFO][3978] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 06:04:53.717562 containerd[1602]: 2025-09-12 06:04:53.692 [INFO][3978] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="853c78afe09bbfeae4c3ef99017cae6a971e3dab7e1e6f30a9dbddbb44ea765d" HandleID="k8s-pod-network.853c78afe09bbfeae4c3ef99017cae6a971e3dab7e1e6f30a9dbddbb44ea765d" Workload="localhost-k8s-whisker--76d5fc668d--rwlw2-eth0" Sep 12 06:04:53.717725 containerd[1602]: 2025-09-12 06:04:53.696 [INFO][3962] cni-plugin/k8s.go 418: Populated endpoint ContainerID="853c78afe09bbfeae4c3ef99017cae6a971e3dab7e1e6f30a9dbddbb44ea765d" Namespace="calico-system" Pod="whisker-76d5fc668d-rwlw2" WorkloadEndpoint="localhost-k8s-whisker--76d5fc668d--rwlw2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--76d5fc668d--rwlw2-eth0", GenerateName:"whisker-76d5fc668d-", Namespace:"calico-system", SelfLink:"", UID:"43f88186-6516-44d4-859c-debdd05dd0b4", ResourceVersion:"909", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 6, 4, 53, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"76d5fc668d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-76d5fc668d-rwlw2", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calidb1f09a6522", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 06:04:53.717725 containerd[1602]: 2025-09-12 06:04:53.696 [INFO][3962] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="853c78afe09bbfeae4c3ef99017cae6a971e3dab7e1e6f30a9dbddbb44ea765d" Namespace="calico-system" Pod="whisker-76d5fc668d-rwlw2" WorkloadEndpoint="localhost-k8s-whisker--76d5fc668d--rwlw2-eth0" Sep 12 06:04:53.717800 containerd[1602]: 2025-09-12 06:04:53.696 [INFO][3962] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidb1f09a6522 ContainerID="853c78afe09bbfeae4c3ef99017cae6a971e3dab7e1e6f30a9dbddbb44ea765d" Namespace="calico-system" Pod="whisker-76d5fc668d-rwlw2" WorkloadEndpoint="localhost-k8s-whisker--76d5fc668d--rwlw2-eth0" Sep 12 06:04:53.717800 containerd[1602]: 2025-09-12 06:04:53.703 [INFO][3962] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="853c78afe09bbfeae4c3ef99017cae6a971e3dab7e1e6f30a9dbddbb44ea765d" Namespace="calico-system" Pod="whisker-76d5fc668d-rwlw2" WorkloadEndpoint="localhost-k8s-whisker--76d5fc668d--rwlw2-eth0" Sep 12 
06:04:53.717845 containerd[1602]: 2025-09-12 06:04:53.704 [INFO][3962] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="853c78afe09bbfeae4c3ef99017cae6a971e3dab7e1e6f30a9dbddbb44ea765d" Namespace="calico-system" Pod="whisker-76d5fc668d-rwlw2" WorkloadEndpoint="localhost-k8s-whisker--76d5fc668d--rwlw2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--76d5fc668d--rwlw2-eth0", GenerateName:"whisker-76d5fc668d-", Namespace:"calico-system", SelfLink:"", UID:"43f88186-6516-44d4-859c-debdd05dd0b4", ResourceVersion:"909", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 6, 4, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"76d5fc668d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"853c78afe09bbfeae4c3ef99017cae6a971e3dab7e1e6f30a9dbddbb44ea765d", Pod:"whisker-76d5fc668d-rwlw2", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calidb1f09a6522", MAC:"96:15:95:57:39:54", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 06:04:53.717893 containerd[1602]: 2025-09-12 06:04:53.713 [INFO][3962] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="853c78afe09bbfeae4c3ef99017cae6a971e3dab7e1e6f30a9dbddbb44ea765d" Namespace="calico-system" Pod="whisker-76d5fc668d-rwlw2" WorkloadEndpoint="localhost-k8s-whisker--76d5fc668d--rwlw2-eth0" Sep 12 06:04:53.844591 containerd[1602]: time="2025-09-12T06:04:53.844470430Z" level=info msg="connecting to shim 853c78afe09bbfeae4c3ef99017cae6a971e3dab7e1e6f30a9dbddbb44ea765d" address="unix:///run/containerd/s/a72d278bc5aa9c9f06237a8b69b5412c14556e69f5ece0121e5ece9e5fef3849" namespace=k8s.io protocol=ttrpc version=3 Sep 12 06:04:53.869783 systemd[1]: Started cri-containerd-853c78afe09bbfeae4c3ef99017cae6a971e3dab7e1e6f30a9dbddbb44ea765d.scope - libcontainer container 853c78afe09bbfeae4c3ef99017cae6a971e3dab7e1e6f30a9dbddbb44ea765d. Sep 12 06:04:53.883367 systemd-resolved[1412]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 06:04:53.939303 containerd[1602]: time="2025-09-12T06:04:53.939257412Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-76d5fc668d-rwlw2,Uid:43f88186-6516-44d4-859c-debdd05dd0b4,Namespace:calico-system,Attempt:0,} returns sandbox id \"853c78afe09bbfeae4c3ef99017cae6a971e3dab7e1e6f30a9dbddbb44ea765d\"" Sep 12 06:04:53.940913 containerd[1602]: time="2025-09-12T06:04:53.940867789Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 12 06:04:54.055687 containerd[1602]: time="2025-09-12T06:04:54.055605210Z" level=info msg="TaskExit event in podsandbox handler container_id:\"161783db8b0b90af42a794808305f9b6a3b7e53f77a4ab77bec0067be40c506f\" id:\"0fa4e3ad46565bc1c3d520df8516748ac676a1df715777d1201b702b4b24077b\" pid:4049 exit_status:1 exited_at:{seconds:1757657094 nanos:55244823}" Sep 12 06:04:54.745830 systemd-networkd[1495]: calidb1f09a6522: Gained IPv6LL Sep 12 06:04:55.669372 containerd[1602]: time="2025-09-12T06:04:55.669303429Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" Sep 12 06:04:55.670101 containerd[1602]: time="2025-09-12T06:04:55.670057066Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4661291" Sep 12 06:04:55.671295 containerd[1602]: time="2025-09-12T06:04:55.671254737Z" level=info msg="ImageCreate event name:\"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 06:04:55.673145 containerd[1602]: time="2025-09-12T06:04:55.673105374Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 06:04:55.673741 containerd[1602]: time="2025-09-12T06:04:55.673692789Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"6153986\" in 1.732790505s" Sep 12 06:04:55.673741 containerd[1602]: time="2025-09-12T06:04:55.673734437Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\"" Sep 12 06:04:55.678062 containerd[1602]: time="2025-09-12T06:04:55.678012638Z" level=info msg="CreateContainer within sandbox \"853c78afe09bbfeae4c3ef99017cae6a971e3dab7e1e6f30a9dbddbb44ea765d\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 12 06:04:55.685754 containerd[1602]: time="2025-09-12T06:04:55.685705065Z" level=info msg="Container eecd031cc81390dfa9b66074d1cb7b2d7a702b8f89c0bd2bd1e3ff3a5c0bf695: CDI devices from CRI Config.CDIDevices: []" Sep 12 06:04:55.693345 containerd[1602]: time="2025-09-12T06:04:55.693307012Z" 
level=info msg="CreateContainer within sandbox \"853c78afe09bbfeae4c3ef99017cae6a971e3dab7e1e6f30a9dbddbb44ea765d\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"eecd031cc81390dfa9b66074d1cb7b2d7a702b8f89c0bd2bd1e3ff3a5c0bf695\"" Sep 12 06:04:55.693812 containerd[1602]: time="2025-09-12T06:04:55.693778829Z" level=info msg="StartContainer for \"eecd031cc81390dfa9b66074d1cb7b2d7a702b8f89c0bd2bd1e3ff3a5c0bf695\"" Sep 12 06:04:55.694691 containerd[1602]: time="2025-09-12T06:04:55.694660315Z" level=info msg="connecting to shim eecd031cc81390dfa9b66074d1cb7b2d7a702b8f89c0bd2bd1e3ff3a5c0bf695" address="unix:///run/containerd/s/a72d278bc5aa9c9f06237a8b69b5412c14556e69f5ece0121e5ece9e5fef3849" protocol=ttrpc version=3 Sep 12 06:04:55.721762 systemd[1]: Started cri-containerd-eecd031cc81390dfa9b66074d1cb7b2d7a702b8f89c0bd2bd1e3ff3a5c0bf695.scope - libcontainer container eecd031cc81390dfa9b66074d1cb7b2d7a702b8f89c0bd2bd1e3ff3a5c0bf695. Sep 12 06:04:55.768237 containerd[1602]: time="2025-09-12T06:04:55.768199265Z" level=info msg="StartContainer for \"eecd031cc81390dfa9b66074d1cb7b2d7a702b8f89c0bd2bd1e3ff3a5c0bf695\" returns successfully" Sep 12 06:04:55.769580 containerd[1602]: time="2025-09-12T06:04:55.769556626Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 12 06:04:56.472197 kubelet[2758]: E0912 06:04:56.471739 2758 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 06:04:56.480726 containerd[1602]: time="2025-09-12T06:04:56.480480858Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84959cb5d5-xxzxr,Uid:f725c81c-f9bd-4915-ab55-8f8f38b50c43,Namespace:calico-apiserver,Attempt:0,}" Sep 12 06:04:56.480726 containerd[1602]: time="2025-09-12T06:04:56.480508569Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-674b8bbfcf-dsqq8,Uid:1060453b-4e80-449a-b65f-58e2fab02113,Namespace:kube-system,Attempt:0,}" Sep 12 06:04:56.480726 containerd[1602]: time="2025-09-12T06:04:56.480512367Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84959cb5d5-7f7ms,Uid:9c6c3a0d-475d-4673-9f60-910cc63aa4d9,Namespace:calico-apiserver,Attempt:0,}" Sep 12 06:04:56.889292 systemd-networkd[1495]: cali21069915775: Link UP Sep 12 06:04:56.889510 systemd-networkd[1495]: cali21069915775: Gained carrier Sep 12 06:04:56.902559 containerd[1602]: 2025-09-12 06:04:56.528 [INFO][4252] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 12 06:04:56.902559 containerd[1602]: 2025-09-12 06:04:56.539 [INFO][4252] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--84959cb5d5--xxzxr-eth0 calico-apiserver-84959cb5d5- calico-apiserver f725c81c-f9bd-4915-ab55-8f8f38b50c43 838 0 2025-09-12 06:04:31 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:84959cb5d5 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-84959cb5d5-xxzxr eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali21069915775 [] [] }} ContainerID="edbd04187ec35f4294f8a1d6159460d96e4f1c72ce4862a1827dcdaef81517df" Namespace="calico-apiserver" Pod="calico-apiserver-84959cb5d5-xxzxr" WorkloadEndpoint="localhost-k8s-calico--apiserver--84959cb5d5--xxzxr-" Sep 12 06:04:56.902559 containerd[1602]: 2025-09-12 06:04:56.539 [INFO][4252] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="edbd04187ec35f4294f8a1d6159460d96e4f1c72ce4862a1827dcdaef81517df" Namespace="calico-apiserver" Pod="calico-apiserver-84959cb5d5-xxzxr" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--84959cb5d5--xxzxr-eth0" Sep 12 06:04:56.902559 containerd[1602]: 2025-09-12 06:04:56.571 [INFO][4300] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="edbd04187ec35f4294f8a1d6159460d96e4f1c72ce4862a1827dcdaef81517df" HandleID="k8s-pod-network.edbd04187ec35f4294f8a1d6159460d96e4f1c72ce4862a1827dcdaef81517df" Workload="localhost-k8s-calico--apiserver--84959cb5d5--xxzxr-eth0" Sep 12 06:04:56.903408 containerd[1602]: 2025-09-12 06:04:56.572 [INFO][4300] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="edbd04187ec35f4294f8a1d6159460d96e4f1c72ce4862a1827dcdaef81517df" HandleID="k8s-pod-network.edbd04187ec35f4294f8a1d6159460d96e4f1c72ce4862a1827dcdaef81517df" Workload="localhost-k8s-calico--apiserver--84959cb5d5--xxzxr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001a35f0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-84959cb5d5-xxzxr", "timestamp":"2025-09-12 06:04:56.571905339 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 06:04:56.903408 containerd[1602]: 2025-09-12 06:04:56.572 [INFO][4300] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 06:04:56.903408 containerd[1602]: 2025-09-12 06:04:56.572 [INFO][4300] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 06:04:56.903408 containerd[1602]: 2025-09-12 06:04:56.572 [INFO][4300] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 12 06:04:56.903408 containerd[1602]: 2025-09-12 06:04:56.579 [INFO][4300] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.edbd04187ec35f4294f8a1d6159460d96e4f1c72ce4862a1827dcdaef81517df" host="localhost" Sep 12 06:04:56.903408 containerd[1602]: 2025-09-12 06:04:56.583 [INFO][4300] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 12 06:04:56.903408 containerd[1602]: 2025-09-12 06:04:56.586 [INFO][4300] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 12 06:04:56.903408 containerd[1602]: 2025-09-12 06:04:56.588 [INFO][4300] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 12 06:04:56.903408 containerd[1602]: 2025-09-12 06:04:56.590 [INFO][4300] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 12 06:04:56.903408 containerd[1602]: 2025-09-12 06:04:56.590 [INFO][4300] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.edbd04187ec35f4294f8a1d6159460d96e4f1c72ce4862a1827dcdaef81517df" host="localhost" Sep 12 06:04:56.904112 containerd[1602]: 2025-09-12 06:04:56.591 [INFO][4300] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.edbd04187ec35f4294f8a1d6159460d96e4f1c72ce4862a1827dcdaef81517df Sep 12 06:04:56.904112 containerd[1602]: 2025-09-12 06:04:56.877 [INFO][4300] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.edbd04187ec35f4294f8a1d6159460d96e4f1c72ce4862a1827dcdaef81517df" host="localhost" Sep 12 06:04:56.904112 containerd[1602]: 2025-09-12 06:04:56.884 [INFO][4300] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 
handle="k8s-pod-network.edbd04187ec35f4294f8a1d6159460d96e4f1c72ce4862a1827dcdaef81517df" host="localhost" Sep 12 06:04:56.904112 containerd[1602]: 2025-09-12 06:04:56.884 [INFO][4300] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.edbd04187ec35f4294f8a1d6159460d96e4f1c72ce4862a1827dcdaef81517df" host="localhost" Sep 12 06:04:56.904112 containerd[1602]: 2025-09-12 06:04:56.884 [INFO][4300] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 06:04:56.904112 containerd[1602]: 2025-09-12 06:04:56.884 [INFO][4300] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="edbd04187ec35f4294f8a1d6159460d96e4f1c72ce4862a1827dcdaef81517df" HandleID="k8s-pod-network.edbd04187ec35f4294f8a1d6159460d96e4f1c72ce4862a1827dcdaef81517df" Workload="localhost-k8s-calico--apiserver--84959cb5d5--xxzxr-eth0" Sep 12 06:04:56.904293 containerd[1602]: 2025-09-12 06:04:56.887 [INFO][4252] cni-plugin/k8s.go 418: Populated endpoint ContainerID="edbd04187ec35f4294f8a1d6159460d96e4f1c72ce4862a1827dcdaef81517df" Namespace="calico-apiserver" Pod="calico-apiserver-84959cb5d5-xxzxr" WorkloadEndpoint="localhost-k8s-calico--apiserver--84959cb5d5--xxzxr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--84959cb5d5--xxzxr-eth0", GenerateName:"calico-apiserver-84959cb5d5-", Namespace:"calico-apiserver", SelfLink:"", UID:"f725c81c-f9bd-4915-ab55-8f8f38b50c43", ResourceVersion:"838", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 6, 4, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"84959cb5d5", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-84959cb5d5-xxzxr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali21069915775", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 06:04:56.904351 containerd[1602]: 2025-09-12 06:04:56.887 [INFO][4252] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="edbd04187ec35f4294f8a1d6159460d96e4f1c72ce4862a1827dcdaef81517df" Namespace="calico-apiserver" Pod="calico-apiserver-84959cb5d5-xxzxr" WorkloadEndpoint="localhost-k8s-calico--apiserver--84959cb5d5--xxzxr-eth0" Sep 12 06:04:56.904351 containerd[1602]: 2025-09-12 06:04:56.887 [INFO][4252] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali21069915775 ContainerID="edbd04187ec35f4294f8a1d6159460d96e4f1c72ce4862a1827dcdaef81517df" Namespace="calico-apiserver" Pod="calico-apiserver-84959cb5d5-xxzxr" WorkloadEndpoint="localhost-k8s-calico--apiserver--84959cb5d5--xxzxr-eth0" Sep 12 06:04:56.904351 containerd[1602]: 2025-09-12 06:04:56.890 [INFO][4252] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="edbd04187ec35f4294f8a1d6159460d96e4f1c72ce4862a1827dcdaef81517df" Namespace="calico-apiserver" Pod="calico-apiserver-84959cb5d5-xxzxr" WorkloadEndpoint="localhost-k8s-calico--apiserver--84959cb5d5--xxzxr-eth0" Sep 12 06:04:56.904415 containerd[1602]: 2025-09-12 06:04:56.890 [INFO][4252] cni-plugin/k8s.go 446: Added 
Mac, interface name, and active container ID to endpoint ContainerID="edbd04187ec35f4294f8a1d6159460d96e4f1c72ce4862a1827dcdaef81517df" Namespace="calico-apiserver" Pod="calico-apiserver-84959cb5d5-xxzxr" WorkloadEndpoint="localhost-k8s-calico--apiserver--84959cb5d5--xxzxr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--84959cb5d5--xxzxr-eth0", GenerateName:"calico-apiserver-84959cb5d5-", Namespace:"calico-apiserver", SelfLink:"", UID:"f725c81c-f9bd-4915-ab55-8f8f38b50c43", ResourceVersion:"838", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 6, 4, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"84959cb5d5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"edbd04187ec35f4294f8a1d6159460d96e4f1c72ce4862a1827dcdaef81517df", Pod:"calico-apiserver-84959cb5d5-xxzxr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali21069915775", MAC:"8e:41:d3:87:ff:70", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 06:04:56.904469 containerd[1602]: 2025-09-12 06:04:56.897 [INFO][4252] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="edbd04187ec35f4294f8a1d6159460d96e4f1c72ce4862a1827dcdaef81517df" Namespace="calico-apiserver" Pod="calico-apiserver-84959cb5d5-xxzxr" WorkloadEndpoint="localhost-k8s-calico--apiserver--84959cb5d5--xxzxr-eth0" Sep 12 06:04:56.929719 containerd[1602]: time="2025-09-12T06:04:56.929668047Z" level=info msg="connecting to shim edbd04187ec35f4294f8a1d6159460d96e4f1c72ce4862a1827dcdaef81517df" address="unix:///run/containerd/s/46f334b466c19cea15e3bd861d0559b982aefe834a63126d0b3110408c5f7877" namespace=k8s.io protocol=ttrpc version=3 Sep 12 06:04:56.932015 systemd-networkd[1495]: cali84c7e7b956d: Link UP Sep 12 06:04:56.932868 systemd-networkd[1495]: cali84c7e7b956d: Gained carrier Sep 12 06:04:56.946456 containerd[1602]: 2025-09-12 06:04:56.525 [INFO][4264] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 12 06:04:56.946456 containerd[1602]: 2025-09-12 06:04:56.539 [INFO][4264] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--dsqq8-eth0 coredns-674b8bbfcf- kube-system 1060453b-4e80-449a-b65f-58e2fab02113 831 0 2025-09-12 06:04:22 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-dsqq8 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali84c7e7b956d [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="e64cc71f0b5c45abae64ae25f07ea9121e0813858c87203c7e32c1b5d5474fdf" Namespace="kube-system" Pod="coredns-674b8bbfcf-dsqq8" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--dsqq8-" Sep 12 06:04:56.946456 containerd[1602]: 2025-09-12 06:04:56.540 [INFO][4264] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e64cc71f0b5c45abae64ae25f07ea9121e0813858c87203c7e32c1b5d5474fdf" Namespace="kube-system" 
Pod="coredns-674b8bbfcf-dsqq8" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--dsqq8-eth0" Sep 12 06:04:56.946456 containerd[1602]: 2025-09-12 06:04:56.578 [INFO][4302] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e64cc71f0b5c45abae64ae25f07ea9121e0813858c87203c7e32c1b5d5474fdf" HandleID="k8s-pod-network.e64cc71f0b5c45abae64ae25f07ea9121e0813858c87203c7e32c1b5d5474fdf" Workload="localhost-k8s-coredns--674b8bbfcf--dsqq8-eth0" Sep 12 06:04:56.946685 containerd[1602]: 2025-09-12 06:04:56.578 [INFO][4302] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e64cc71f0b5c45abae64ae25f07ea9121e0813858c87203c7e32c1b5d5474fdf" HandleID="k8s-pod-network.e64cc71f0b5c45abae64ae25f07ea9121e0813858c87203c7e32c1b5d5474fdf" Workload="localhost-k8s-coredns--674b8bbfcf--dsqq8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000139760), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-dsqq8", "timestamp":"2025-09-12 06:04:56.578309754 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 06:04:56.946685 containerd[1602]: 2025-09-12 06:04:56.578 [INFO][4302] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 06:04:56.946685 containerd[1602]: 2025-09-12 06:04:56.886 [INFO][4302] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 06:04:56.946685 containerd[1602]: 2025-09-12 06:04:56.886 [INFO][4302] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 12 06:04:56.946685 containerd[1602]: 2025-09-12 06:04:56.898 [INFO][4302] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e64cc71f0b5c45abae64ae25f07ea9121e0813858c87203c7e32c1b5d5474fdf" host="localhost" Sep 12 06:04:56.946685 containerd[1602]: 2025-09-12 06:04:56.903 [INFO][4302] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 12 06:04:56.946685 containerd[1602]: 2025-09-12 06:04:56.910 [INFO][4302] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 12 06:04:56.946685 containerd[1602]: 2025-09-12 06:04:56.911 [INFO][4302] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 12 06:04:56.946685 containerd[1602]: 2025-09-12 06:04:56.913 [INFO][4302] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 12 06:04:56.946685 containerd[1602]: 2025-09-12 06:04:56.913 [INFO][4302] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e64cc71f0b5c45abae64ae25f07ea9121e0813858c87203c7e32c1b5d5474fdf" host="localhost" Sep 12 06:04:56.946901 containerd[1602]: 2025-09-12 06:04:56.914 [INFO][4302] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.e64cc71f0b5c45abae64ae25f07ea9121e0813858c87203c7e32c1b5d5474fdf Sep 12 06:04:56.946901 containerd[1602]: 2025-09-12 06:04:56.921 [INFO][4302] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e64cc71f0b5c45abae64ae25f07ea9121e0813858c87203c7e32c1b5d5474fdf" host="localhost" Sep 12 06:04:56.946901 containerd[1602]: 2025-09-12 06:04:56.926 [INFO][4302] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.e64cc71f0b5c45abae64ae25f07ea9121e0813858c87203c7e32c1b5d5474fdf" host="localhost" Sep 12 06:04:56.946901 containerd[1602]: 2025-09-12 06:04:56.927 [INFO][4302] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.e64cc71f0b5c45abae64ae25f07ea9121e0813858c87203c7e32c1b5d5474fdf" host="localhost" Sep 12 06:04:56.946901 containerd[1602]: 2025-09-12 06:04:56.927 [INFO][4302] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 06:04:56.946901 containerd[1602]: 2025-09-12 06:04:56.927 [INFO][4302] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="e64cc71f0b5c45abae64ae25f07ea9121e0813858c87203c7e32c1b5d5474fdf" HandleID="k8s-pod-network.e64cc71f0b5c45abae64ae25f07ea9121e0813858c87203c7e32c1b5d5474fdf" Workload="localhost-k8s-coredns--674b8bbfcf--dsqq8-eth0" Sep 12 06:04:56.947032 containerd[1602]: 2025-09-12 06:04:56.929 [INFO][4264] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e64cc71f0b5c45abae64ae25f07ea9121e0813858c87203c7e32c1b5d5474fdf" Namespace="kube-system" Pod="coredns-674b8bbfcf-dsqq8" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--dsqq8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--dsqq8-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"1060453b-4e80-449a-b65f-58e2fab02113", ResourceVersion:"831", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 6, 4, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-dsqq8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali84c7e7b956d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 06:04:56.947173 containerd[1602]: 2025-09-12 06:04:56.930 [INFO][4264] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="e64cc71f0b5c45abae64ae25f07ea9121e0813858c87203c7e32c1b5d5474fdf" Namespace="kube-system" Pod="coredns-674b8bbfcf-dsqq8" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--dsqq8-eth0" Sep 12 06:04:56.947173 containerd[1602]: 2025-09-12 06:04:56.930 [INFO][4264] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali84c7e7b956d ContainerID="e64cc71f0b5c45abae64ae25f07ea9121e0813858c87203c7e32c1b5d5474fdf" Namespace="kube-system" Pod="coredns-674b8bbfcf-dsqq8" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--dsqq8-eth0" Sep 12 06:04:56.947173 containerd[1602]: 2025-09-12 06:04:56.933 [INFO][4264] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e64cc71f0b5c45abae64ae25f07ea9121e0813858c87203c7e32c1b5d5474fdf" Namespace="kube-system" Pod="coredns-674b8bbfcf-dsqq8" 
WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--dsqq8-eth0" Sep 12 06:04:56.947235 containerd[1602]: 2025-09-12 06:04:56.934 [INFO][4264] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e64cc71f0b5c45abae64ae25f07ea9121e0813858c87203c7e32c1b5d5474fdf" Namespace="kube-system" Pod="coredns-674b8bbfcf-dsqq8" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--dsqq8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--dsqq8-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"1060453b-4e80-449a-b65f-58e2fab02113", ResourceVersion:"831", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 6, 4, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e64cc71f0b5c45abae64ae25f07ea9121e0813858c87203c7e32c1b5d5474fdf", Pod:"coredns-674b8bbfcf-dsqq8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali84c7e7b956d", MAC:"96:fe:3e:19:1e:3e", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 06:04:56.947235 containerd[1602]: 2025-09-12 06:04:56.943 [INFO][4264] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e64cc71f0b5c45abae64ae25f07ea9121e0813858c87203c7e32c1b5d5474fdf" Namespace="kube-system" Pod="coredns-674b8bbfcf-dsqq8" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--dsqq8-eth0" Sep 12 06:04:56.964776 systemd[1]: Started cri-containerd-edbd04187ec35f4294f8a1d6159460d96e4f1c72ce4862a1827dcdaef81517df.scope - libcontainer container edbd04187ec35f4294f8a1d6159460d96e4f1c72ce4862a1827dcdaef81517df. Sep 12 06:04:56.984011 containerd[1602]: time="2025-09-12T06:04:56.983952597Z" level=info msg="connecting to shim e64cc71f0b5c45abae64ae25f07ea9121e0813858c87203c7e32c1b5d5474fdf" address="unix:///run/containerd/s/bcedf9e737e278733393809c49ac86fb708d4fe5c09760cad971bd8ea9b1200a" namespace=k8s.io protocol=ttrpc version=3 Sep 12 06:04:56.985713 systemd-resolved[1412]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 06:04:57.020820 systemd[1]: Started cri-containerd-e64cc71f0b5c45abae64ae25f07ea9121e0813858c87203c7e32c1b5d5474fdf.scope - libcontainer container e64cc71f0b5c45abae64ae25f07ea9121e0813858c87203c7e32c1b5d5474fdf. 
Sep 12 06:04:57.032504 containerd[1602]: time="2025-09-12T06:04:57.032458010Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84959cb5d5-xxzxr,Uid:f725c81c-f9bd-4915-ab55-8f8f38b50c43,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"edbd04187ec35f4294f8a1d6159460d96e4f1c72ce4862a1827dcdaef81517df\"" Sep 12 06:04:57.036595 systemd-networkd[1495]: cali8b1636c4f58: Link UP Sep 12 06:04:57.037704 systemd-networkd[1495]: cali8b1636c4f58: Gained carrier Sep 12 06:04:57.041611 systemd-resolved[1412]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 06:04:57.051802 containerd[1602]: 2025-09-12 06:04:56.526 [INFO][4270] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 12 06:04:57.051802 containerd[1602]: 2025-09-12 06:04:56.538 [INFO][4270] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--84959cb5d5--7f7ms-eth0 calico-apiserver-84959cb5d5- calico-apiserver 9c6c3a0d-475d-4673-9f60-910cc63aa4d9 836 0 2025-09-12 06:04:31 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:84959cb5d5 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-84959cb5d5-7f7ms eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali8b1636c4f58 [] [] }} ContainerID="406d982fbd050f9502e28f67cf57be315bed98187711dd85c6398abf47306443" Namespace="calico-apiserver" Pod="calico-apiserver-84959cb5d5-7f7ms" WorkloadEndpoint="localhost-k8s-calico--apiserver--84959cb5d5--7f7ms-" Sep 12 06:04:57.051802 containerd[1602]: 2025-09-12 06:04:56.538 [INFO][4270] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="406d982fbd050f9502e28f67cf57be315bed98187711dd85c6398abf47306443" 
Namespace="calico-apiserver" Pod="calico-apiserver-84959cb5d5-7f7ms" WorkloadEndpoint="localhost-k8s-calico--apiserver--84959cb5d5--7f7ms-eth0" Sep 12 06:04:57.051802 containerd[1602]: 2025-09-12 06:04:56.604 [INFO][4298] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="406d982fbd050f9502e28f67cf57be315bed98187711dd85c6398abf47306443" HandleID="k8s-pod-network.406d982fbd050f9502e28f67cf57be315bed98187711dd85c6398abf47306443" Workload="localhost-k8s-calico--apiserver--84959cb5d5--7f7ms-eth0" Sep 12 06:04:57.051802 containerd[1602]: 2025-09-12 06:04:56.604 [INFO][4298] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="406d982fbd050f9502e28f67cf57be315bed98187711dd85c6398abf47306443" HandleID="k8s-pod-network.406d982fbd050f9502e28f67cf57be315bed98187711dd85c6398abf47306443" Workload="localhost-k8s-calico--apiserver--84959cb5d5--7f7ms-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004833e0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-84959cb5d5-7f7ms", "timestamp":"2025-09-12 06:04:56.604365237 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 06:04:57.051802 containerd[1602]: 2025-09-12 06:04:56.604 [INFO][4298] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 06:04:57.051802 containerd[1602]: 2025-09-12 06:04:56.927 [INFO][4298] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 06:04:57.051802 containerd[1602]: 2025-09-12 06:04:56.927 [INFO][4298] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 12 06:04:57.051802 containerd[1602]: 2025-09-12 06:04:56.998 [INFO][4298] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.406d982fbd050f9502e28f67cf57be315bed98187711dd85c6398abf47306443" host="localhost" Sep 12 06:04:57.051802 containerd[1602]: 2025-09-12 06:04:57.005 [INFO][4298] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 12 06:04:57.051802 containerd[1602]: 2025-09-12 06:04:57.011 [INFO][4298] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 12 06:04:57.051802 containerd[1602]: 2025-09-12 06:04:57.014 [INFO][4298] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 12 06:04:57.051802 containerd[1602]: 2025-09-12 06:04:57.016 [INFO][4298] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 12 06:04:57.051802 containerd[1602]: 2025-09-12 06:04:57.016 [INFO][4298] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.406d982fbd050f9502e28f67cf57be315bed98187711dd85c6398abf47306443" host="localhost" Sep 12 06:04:57.051802 containerd[1602]: 2025-09-12 06:04:57.017 [INFO][4298] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.406d982fbd050f9502e28f67cf57be315bed98187711dd85c6398abf47306443 Sep 12 06:04:57.051802 containerd[1602]: 2025-09-12 06:04:57.022 [INFO][4298] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.406d982fbd050f9502e28f67cf57be315bed98187711dd85c6398abf47306443" host="localhost" Sep 12 06:04:57.051802 containerd[1602]: 2025-09-12 06:04:57.027 [INFO][4298] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.406d982fbd050f9502e28f67cf57be315bed98187711dd85c6398abf47306443" host="localhost" Sep 12 06:04:57.051802 containerd[1602]: 2025-09-12 06:04:57.027 [INFO][4298] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.406d982fbd050f9502e28f67cf57be315bed98187711dd85c6398abf47306443" host="localhost" Sep 12 06:04:57.051802 containerd[1602]: 2025-09-12 06:04:57.027 [INFO][4298] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 06:04:57.051802 containerd[1602]: 2025-09-12 06:04:57.027 [INFO][4298] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="406d982fbd050f9502e28f67cf57be315bed98187711dd85c6398abf47306443" HandleID="k8s-pod-network.406d982fbd050f9502e28f67cf57be315bed98187711dd85c6398abf47306443" Workload="localhost-k8s-calico--apiserver--84959cb5d5--7f7ms-eth0" Sep 12 06:04:57.052329 containerd[1602]: 2025-09-12 06:04:57.032 [INFO][4270] cni-plugin/k8s.go 418: Populated endpoint ContainerID="406d982fbd050f9502e28f67cf57be315bed98187711dd85c6398abf47306443" Namespace="calico-apiserver" Pod="calico-apiserver-84959cb5d5-7f7ms" WorkloadEndpoint="localhost-k8s-calico--apiserver--84959cb5d5--7f7ms-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--84959cb5d5--7f7ms-eth0", GenerateName:"calico-apiserver-84959cb5d5-", Namespace:"calico-apiserver", SelfLink:"", UID:"9c6c3a0d-475d-4673-9f60-910cc63aa4d9", ResourceVersion:"836", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 6, 4, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"84959cb5d5", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-84959cb5d5-7f7ms", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8b1636c4f58", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 06:04:57.052329 containerd[1602]: 2025-09-12 06:04:57.032 [INFO][4270] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="406d982fbd050f9502e28f67cf57be315bed98187711dd85c6398abf47306443" Namespace="calico-apiserver" Pod="calico-apiserver-84959cb5d5-7f7ms" WorkloadEndpoint="localhost-k8s-calico--apiserver--84959cb5d5--7f7ms-eth0" Sep 12 06:04:57.052329 containerd[1602]: 2025-09-12 06:04:57.033 [INFO][4270] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8b1636c4f58 ContainerID="406d982fbd050f9502e28f67cf57be315bed98187711dd85c6398abf47306443" Namespace="calico-apiserver" Pod="calico-apiserver-84959cb5d5-7f7ms" WorkloadEndpoint="localhost-k8s-calico--apiserver--84959cb5d5--7f7ms-eth0" Sep 12 06:04:57.052329 containerd[1602]: 2025-09-12 06:04:57.038 [INFO][4270] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="406d982fbd050f9502e28f67cf57be315bed98187711dd85c6398abf47306443" Namespace="calico-apiserver" Pod="calico-apiserver-84959cb5d5-7f7ms" WorkloadEndpoint="localhost-k8s-calico--apiserver--84959cb5d5--7f7ms-eth0" Sep 12 06:04:57.052329 containerd[1602]: 2025-09-12 06:04:57.038 [INFO][4270] cni-plugin/k8s.go 446: Added 
Mac, interface name, and active container ID to endpoint ContainerID="406d982fbd050f9502e28f67cf57be315bed98187711dd85c6398abf47306443" Namespace="calico-apiserver" Pod="calico-apiserver-84959cb5d5-7f7ms" WorkloadEndpoint="localhost-k8s-calico--apiserver--84959cb5d5--7f7ms-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--84959cb5d5--7f7ms-eth0", GenerateName:"calico-apiserver-84959cb5d5-", Namespace:"calico-apiserver", SelfLink:"", UID:"9c6c3a0d-475d-4673-9f60-910cc63aa4d9", ResourceVersion:"836", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 6, 4, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"84959cb5d5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"406d982fbd050f9502e28f67cf57be315bed98187711dd85c6398abf47306443", Pod:"calico-apiserver-84959cb5d5-7f7ms", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8b1636c4f58", MAC:"ca:52:94:30:52:c6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 06:04:57.052329 containerd[1602]: 2025-09-12 06:04:57.048 [INFO][4270] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="406d982fbd050f9502e28f67cf57be315bed98187711dd85c6398abf47306443" Namespace="calico-apiserver" Pod="calico-apiserver-84959cb5d5-7f7ms" WorkloadEndpoint="localhost-k8s-calico--apiserver--84959cb5d5--7f7ms-eth0" Sep 12 06:04:57.074723 containerd[1602]: time="2025-09-12T06:04:57.074673161Z" level=info msg="connecting to shim 406d982fbd050f9502e28f67cf57be315bed98187711dd85c6398abf47306443" address="unix:///run/containerd/s/3496e98c2ef5e04dc0f8e5ad4b8de61bc208f6ecac8264efde033f75905644d1" namespace=k8s.io protocol=ttrpc version=3 Sep 12 06:04:57.075573 containerd[1602]: time="2025-09-12T06:04:57.075540490Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-dsqq8,Uid:1060453b-4e80-449a-b65f-58e2fab02113,Namespace:kube-system,Attempt:0,} returns sandbox id \"e64cc71f0b5c45abae64ae25f07ea9121e0813858c87203c7e32c1b5d5474fdf\"" Sep 12 06:04:57.076561 kubelet[2758]: E0912 06:04:57.076522 2758 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 06:04:57.082575 containerd[1602]: time="2025-09-12T06:04:57.082522429Z" level=info msg="CreateContainer within sandbox \"e64cc71f0b5c45abae64ae25f07ea9121e0813858c87203c7e32c1b5d5474fdf\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 12 06:04:57.096841 containerd[1602]: time="2025-09-12T06:04:57.095676545Z" level=info msg="Container 23378a43393dc3680db3ca120fcb4b99bba1e3c70ef855983ef540bb4b420d2f: CDI devices from CRI Config.CDIDevices: []" Sep 12 06:04:57.103610 containerd[1602]: time="2025-09-12T06:04:57.103585436Z" level=info msg="CreateContainer within sandbox \"e64cc71f0b5c45abae64ae25f07ea9121e0813858c87203c7e32c1b5d5474fdf\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"23378a43393dc3680db3ca120fcb4b99bba1e3c70ef855983ef540bb4b420d2f\"" Sep 12 06:04:57.104196 containerd[1602]: time="2025-09-12T06:04:57.104177378Z" 
level=info msg="StartContainer for \"23378a43393dc3680db3ca120fcb4b99bba1e3c70ef855983ef540bb4b420d2f\"" Sep 12 06:04:57.105772 systemd[1]: Started cri-containerd-406d982fbd050f9502e28f67cf57be315bed98187711dd85c6398abf47306443.scope - libcontainer container 406d982fbd050f9502e28f67cf57be315bed98187711dd85c6398abf47306443. Sep 12 06:04:57.107378 containerd[1602]: time="2025-09-12T06:04:57.107356621Z" level=info msg="connecting to shim 23378a43393dc3680db3ca120fcb4b99bba1e3c70ef855983ef540bb4b420d2f" address="unix:///run/containerd/s/bcedf9e737e278733393809c49ac86fb708d4fe5c09760cad971bd8ea9b1200a" protocol=ttrpc version=3 Sep 12 06:04:57.119484 systemd-resolved[1412]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 06:04:57.127755 systemd[1]: Started cri-containerd-23378a43393dc3680db3ca120fcb4b99bba1e3c70ef855983ef540bb4b420d2f.scope - libcontainer container 23378a43393dc3680db3ca120fcb4b99bba1e3c70ef855983ef540bb4b420d2f. Sep 12 06:04:57.165476 containerd[1602]: time="2025-09-12T06:04:57.165434366Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84959cb5d5-7f7ms,Uid:9c6c3a0d-475d-4673-9f60-910cc63aa4d9,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"406d982fbd050f9502e28f67cf57be315bed98187711dd85c6398abf47306443\"" Sep 12 06:04:57.180234 containerd[1602]: time="2025-09-12T06:04:57.180182958Z" level=info msg="StartContainer for \"23378a43393dc3680db3ca120fcb4b99bba1e3c70ef855983ef540bb4b420d2f\" returns successfully" Sep 12 06:04:57.421792 systemd[1]: Started sshd@7-10.0.0.150:22-10.0.0.1:37026.service - OpenSSH per-connection server daemon (10.0.0.1:37026). 
Sep 12 06:04:57.669831 sshd[4538]: Accepted publickey for core from 10.0.0.1 port 37026 ssh2: RSA SHA256:1ltQeNwsGZzWPatjm39NOSOhM7BVT7DhGn6/LONO9qE Sep 12 06:04:57.671899 sshd-session[4538]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 06:04:57.678517 systemd-logind[1580]: New session 8 of user core. Sep 12 06:04:57.683754 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 12 06:04:57.842829 sshd[4549]: Connection closed by 10.0.0.1 port 37026 Sep 12 06:04:57.843354 sshd-session[4538]: pam_unix(sshd:session): session closed for user core Sep 12 06:04:57.850050 systemd[1]: sshd@7-10.0.0.150:22-10.0.0.1:37026.service: Deactivated successfully. Sep 12 06:04:57.852377 systemd[1]: session-8.scope: Deactivated successfully. Sep 12 06:04:57.853469 systemd-logind[1580]: Session 8 logged out. Waiting for processes to exit. Sep 12 06:04:57.855564 systemd-logind[1580]: Removed session 8. Sep 12 06:04:57.960830 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount155497381.mount: Deactivated successfully. 
Sep 12 06:04:57.980343 containerd[1602]: time="2025-09-12T06:04:57.980291496Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 06:04:57.981057 containerd[1602]: time="2025-09-12T06:04:57.981010076Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=33085545" Sep 12 06:04:57.982366 containerd[1602]: time="2025-09-12T06:04:57.982316942Z" level=info msg="ImageCreate event name:\"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 06:04:57.983402 kubelet[2758]: E0912 06:04:57.983370 2758 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 06:04:57.986326 containerd[1602]: time="2025-09-12T06:04:57.986096874Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 06:04:57.986727 containerd[1602]: time="2025-09-12T06:04:57.986690480Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"33085375\" in 2.217107145s" Sep 12 06:04:57.986727 containerd[1602]: time="2025-09-12T06:04:57.986721318Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\"" Sep 12 06:04:57.988098 containerd[1602]: 
time="2025-09-12T06:04:57.988063008Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 12 06:04:57.992100 containerd[1602]: time="2025-09-12T06:04:57.992073694Z" level=info msg="CreateContainer within sandbox \"853c78afe09bbfeae4c3ef99017cae6a971e3dab7e1e6f30a9dbddbb44ea765d\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 12 06:04:57.995103 kubelet[2758]: I0912 06:04:57.995033 2758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-dsqq8" podStartSLOduration=35.995013669 podStartE2EDuration="35.995013669s" podCreationTimestamp="2025-09-12 06:04:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 06:04:57.994188197 +0000 UTC m=+40.635355914" watchObservedRunningTime="2025-09-12 06:04:57.995013669 +0000 UTC m=+40.636181376" Sep 12 06:04:58.004148 containerd[1602]: time="2025-09-12T06:04:58.004105111Z" level=info msg="Container f670f3af6c051cf70aeb44bd017fbeaa4ad882b320f06310b70421df782f917d: CDI devices from CRI Config.CDIDevices: []" Sep 12 06:04:58.015282 containerd[1602]: time="2025-09-12T06:04:58.015242645Z" level=info msg="CreateContainer within sandbox \"853c78afe09bbfeae4c3ef99017cae6a971e3dab7e1e6f30a9dbddbb44ea765d\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"f670f3af6c051cf70aeb44bd017fbeaa4ad882b320f06310b70421df782f917d\"" Sep 12 06:04:58.015982 containerd[1602]: time="2025-09-12T06:04:58.015940937Z" level=info msg="StartContainer for \"f670f3af6c051cf70aeb44bd017fbeaa4ad882b320f06310b70421df782f917d\"" Sep 12 06:04:58.017009 containerd[1602]: time="2025-09-12T06:04:58.016975891Z" level=info msg="connecting to shim f670f3af6c051cf70aeb44bd017fbeaa4ad882b320f06310b70421df782f917d" address="unix:///run/containerd/s/a72d278bc5aa9c9f06237a8b69b5412c14556e69f5ece0121e5ece9e5fef3849" protocol=ttrpc version=3 Sep 12 
06:04:58.067769 systemd[1]: Started cri-containerd-f670f3af6c051cf70aeb44bd017fbeaa4ad882b320f06310b70421df782f917d.scope - libcontainer container f670f3af6c051cf70aeb44bd017fbeaa4ad882b320f06310b70421df782f917d. Sep 12 06:04:58.118779 containerd[1602]: time="2025-09-12T06:04:58.118735453Z" level=info msg="StartContainer for \"f670f3af6c051cf70aeb44bd017fbeaa4ad882b320f06310b70421df782f917d\" returns successfully" Sep 12 06:04:58.201807 systemd-networkd[1495]: cali84c7e7b956d: Gained IPv6LL Sep 12 06:04:58.475040 kubelet[2758]: E0912 06:04:58.474970 2758 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 06:04:58.475594 containerd[1602]: time="2025-09-12T06:04:58.475549979Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-5fzv6,Uid:5306b6c3-b830-4bb1-afab-d71396b947ec,Namespace:calico-system,Attempt:0,}" Sep 12 06:04:58.475671 containerd[1602]: time="2025-09-12T06:04:58.475586117Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-m2pzx,Uid:565b8e82-48fc-4998-a083-ad30ae1f437e,Namespace:kube-system,Attempt:0,}" Sep 12 06:04:58.521807 systemd-networkd[1495]: cali21069915775: Gained IPv6LL Sep 12 06:04:58.522715 systemd-networkd[1495]: cali8b1636c4f58: Gained IPv6LL Sep 12 06:04:58.615215 systemd-networkd[1495]: cali7806593c705: Link UP Sep 12 06:04:58.615507 systemd-networkd[1495]: cali7806593c705: Gained carrier Sep 12 06:04:58.628315 containerd[1602]: 2025-09-12 06:04:58.513 [INFO][4618] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 12 06:04:58.628315 containerd[1602]: 2025-09-12 06:04:58.535 [INFO][4618] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--54d579b49d--5fzv6-eth0 goldmane-54d579b49d- calico-system 5306b6c3-b830-4bb1-afab-d71396b947ec 842 0 2025-09-12 06:04:34 +0000 
UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:54d579b49d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-54d579b49d-5fzv6 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali7806593c705 [] [] }} ContainerID="920ba2a60791bed1f1cda2e8ad540449b4af949947a0b84f7fc2ede37c13c4d9" Namespace="calico-system" Pod="goldmane-54d579b49d-5fzv6" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--5fzv6-" Sep 12 06:04:58.628315 containerd[1602]: 2025-09-12 06:04:58.535 [INFO][4618] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="920ba2a60791bed1f1cda2e8ad540449b4af949947a0b84f7fc2ede37c13c4d9" Namespace="calico-system" Pod="goldmane-54d579b49d-5fzv6" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--5fzv6-eth0" Sep 12 06:04:58.628315 containerd[1602]: 2025-09-12 06:04:58.579 [INFO][4660] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="920ba2a60791bed1f1cda2e8ad540449b4af949947a0b84f7fc2ede37c13c4d9" HandleID="k8s-pod-network.920ba2a60791bed1f1cda2e8ad540449b4af949947a0b84f7fc2ede37c13c4d9" Workload="localhost-k8s-goldmane--54d579b49d--5fzv6-eth0" Sep 12 06:04:58.628315 containerd[1602]: 2025-09-12 06:04:58.580 [INFO][4660] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="920ba2a60791bed1f1cda2e8ad540449b4af949947a0b84f7fc2ede37c13c4d9" HandleID="k8s-pod-network.920ba2a60791bed1f1cda2e8ad540449b4af949947a0b84f7fc2ede37c13c4d9" Workload="localhost-k8s-goldmane--54d579b49d--5fzv6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000143480), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-54d579b49d-5fzv6", "timestamp":"2025-09-12 06:04:58.579975809 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 06:04:58.628315 containerd[1602]: 2025-09-12 06:04:58.580 [INFO][4660] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 06:04:58.628315 containerd[1602]: 2025-09-12 06:04:58.580 [INFO][4660] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 06:04:58.628315 containerd[1602]: 2025-09-12 06:04:58.580 [INFO][4660] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 12 06:04:58.628315 containerd[1602]: 2025-09-12 06:04:58.587 [INFO][4660] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.920ba2a60791bed1f1cda2e8ad540449b4af949947a0b84f7fc2ede37c13c4d9" host="localhost" Sep 12 06:04:58.628315 containerd[1602]: 2025-09-12 06:04:58.591 [INFO][4660] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 12 06:04:58.628315 containerd[1602]: 2025-09-12 06:04:58.596 [INFO][4660] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 12 06:04:58.628315 containerd[1602]: 2025-09-12 06:04:58.598 [INFO][4660] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 12 06:04:58.628315 containerd[1602]: 2025-09-12 06:04:58.600 [INFO][4660] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 12 06:04:58.628315 containerd[1602]: 2025-09-12 06:04:58.600 [INFO][4660] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.920ba2a60791bed1f1cda2e8ad540449b4af949947a0b84f7fc2ede37c13c4d9" host="localhost" Sep 12 06:04:58.628315 containerd[1602]: 2025-09-12 06:04:58.601 [INFO][4660] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.920ba2a60791bed1f1cda2e8ad540449b4af949947a0b84f7fc2ede37c13c4d9 Sep 12 06:04:58.628315 containerd[1602]: 2025-09-12 06:04:58.604 [INFO][4660] 
ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.920ba2a60791bed1f1cda2e8ad540449b4af949947a0b84f7fc2ede37c13c4d9" host="localhost" Sep 12 06:04:58.628315 containerd[1602]: 2025-09-12 06:04:58.609 [INFO][4660] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.920ba2a60791bed1f1cda2e8ad540449b4af949947a0b84f7fc2ede37c13c4d9" host="localhost" Sep 12 06:04:58.628315 containerd[1602]: 2025-09-12 06:04:58.609 [INFO][4660] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.920ba2a60791bed1f1cda2e8ad540449b4af949947a0b84f7fc2ede37c13c4d9" host="localhost" Sep 12 06:04:58.628315 containerd[1602]: 2025-09-12 06:04:58.609 [INFO][4660] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 06:04:58.628315 containerd[1602]: 2025-09-12 06:04:58.609 [INFO][4660] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="920ba2a60791bed1f1cda2e8ad540449b4af949947a0b84f7fc2ede37c13c4d9" HandleID="k8s-pod-network.920ba2a60791bed1f1cda2e8ad540449b4af949947a0b84f7fc2ede37c13c4d9" Workload="localhost-k8s-goldmane--54d579b49d--5fzv6-eth0" Sep 12 06:04:58.628907 containerd[1602]: 2025-09-12 06:04:58.613 [INFO][4618] cni-plugin/k8s.go 418: Populated endpoint ContainerID="920ba2a60791bed1f1cda2e8ad540449b4af949947a0b84f7fc2ede37c13c4d9" Namespace="calico-system" Pod="goldmane-54d579b49d-5fzv6" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--5fzv6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--5fzv6-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"5306b6c3-b830-4bb1-afab-d71396b947ec", ResourceVersion:"842", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 6, 4, 34, 0, 
time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-54d579b49d-5fzv6", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali7806593c705", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 06:04:58.628907 containerd[1602]: 2025-09-12 06:04:58.613 [INFO][4618] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="920ba2a60791bed1f1cda2e8ad540449b4af949947a0b84f7fc2ede37c13c4d9" Namespace="calico-system" Pod="goldmane-54d579b49d-5fzv6" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--5fzv6-eth0" Sep 12 06:04:58.628907 containerd[1602]: 2025-09-12 06:04:58.613 [INFO][4618] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7806593c705 ContainerID="920ba2a60791bed1f1cda2e8ad540449b4af949947a0b84f7fc2ede37c13c4d9" Namespace="calico-system" Pod="goldmane-54d579b49d-5fzv6" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--5fzv6-eth0" Sep 12 06:04:58.628907 containerd[1602]: 2025-09-12 06:04:58.615 [INFO][4618] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="920ba2a60791bed1f1cda2e8ad540449b4af949947a0b84f7fc2ede37c13c4d9" Namespace="calico-system" Pod="goldmane-54d579b49d-5fzv6" 
WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--5fzv6-eth0" Sep 12 06:04:58.628907 containerd[1602]: 2025-09-12 06:04:58.615 [INFO][4618] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="920ba2a60791bed1f1cda2e8ad540449b4af949947a0b84f7fc2ede37c13c4d9" Namespace="calico-system" Pod="goldmane-54d579b49d-5fzv6" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--5fzv6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--5fzv6-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"5306b6c3-b830-4bb1-afab-d71396b947ec", ResourceVersion:"842", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 6, 4, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"920ba2a60791bed1f1cda2e8ad540449b4af949947a0b84f7fc2ede37c13c4d9", Pod:"goldmane-54d579b49d-5fzv6", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali7806593c705", MAC:"7a:14:bf:3d:ca:18", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 06:04:58.628907 containerd[1602]: 2025-09-12 06:04:58.624 [INFO][4618] 
cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="920ba2a60791bed1f1cda2e8ad540449b4af949947a0b84f7fc2ede37c13c4d9" Namespace="calico-system" Pod="goldmane-54d579b49d-5fzv6" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--5fzv6-eth0" Sep 12 06:04:58.651578 containerd[1602]: time="2025-09-12T06:04:58.651534460Z" level=info msg="connecting to shim 920ba2a60791bed1f1cda2e8ad540449b4af949947a0b84f7fc2ede37c13c4d9" address="unix:///run/containerd/s/0802f8e4b67204bd1e7c3be92352bd7d812636501ce8ac2a62c7aecd3fbd136f" namespace=k8s.io protocol=ttrpc version=3 Sep 12 06:04:58.681773 systemd[1]: Started cri-containerd-920ba2a60791bed1f1cda2e8ad540449b4af949947a0b84f7fc2ede37c13c4d9.scope - libcontainer container 920ba2a60791bed1f1cda2e8ad540449b4af949947a0b84f7fc2ede37c13c4d9. Sep 12 06:04:58.698698 systemd-resolved[1412]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 06:04:58.702628 kubelet[2758]: I0912 06:04:58.702591 2758 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 06:04:58.703096 kubelet[2758]: E0912 06:04:58.703068 2758 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 06:04:58.733212 systemd-networkd[1495]: cali040bb3a9d5a: Link UP Sep 12 06:04:58.735049 systemd-networkd[1495]: cali040bb3a9d5a: Gained carrier Sep 12 06:04:58.743175 containerd[1602]: time="2025-09-12T06:04:58.743102163Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-5fzv6,Uid:5306b6c3-b830-4bb1-afab-d71396b947ec,Namespace:calico-system,Attempt:0,} returns sandbox id \"920ba2a60791bed1f1cda2e8ad540449b4af949947a0b84f7fc2ede37c13c4d9\"" Sep 12 06:04:58.754360 containerd[1602]: 2025-09-12 06:04:58.521 [INFO][4625] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 12 06:04:58.754360 containerd[1602]: 2025-09-12 
06:04:58.539 [INFO][4625] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--m2pzx-eth0 coredns-674b8bbfcf- kube-system 565b8e82-48fc-4998-a083-ad30ae1f437e 839 0 2025-09-12 06:04:23 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-m2pzx eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali040bb3a9d5a [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="77ca8ed2699abe0e20847da30e866551bc4e7199a819529a4eb7cc34855469e1" Namespace="kube-system" Pod="coredns-674b8bbfcf-m2pzx" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--m2pzx-" Sep 12 06:04:58.754360 containerd[1602]: 2025-09-12 06:04:58.540 [INFO][4625] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="77ca8ed2699abe0e20847da30e866551bc4e7199a819529a4eb7cc34855469e1" Namespace="kube-system" Pod="coredns-674b8bbfcf-m2pzx" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--m2pzx-eth0" Sep 12 06:04:58.754360 containerd[1602]: 2025-09-12 06:04:58.581 [INFO][4666] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="77ca8ed2699abe0e20847da30e866551bc4e7199a819529a4eb7cc34855469e1" HandleID="k8s-pod-network.77ca8ed2699abe0e20847da30e866551bc4e7199a819529a4eb7cc34855469e1" Workload="localhost-k8s-coredns--674b8bbfcf--m2pzx-eth0" Sep 12 06:04:58.754360 containerd[1602]: 2025-09-12 06:04:58.581 [INFO][4666] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="77ca8ed2699abe0e20847da30e866551bc4e7199a819529a4eb7cc34855469e1" HandleID="k8s-pod-network.77ca8ed2699abe0e20847da30e866551bc4e7199a819529a4eb7cc34855469e1" Workload="localhost-k8s-coredns--674b8bbfcf--m2pzx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f770), 
Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-m2pzx", "timestamp":"2025-09-12 06:04:58.581811648 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 06:04:58.754360 containerd[1602]: 2025-09-12 06:04:58.582 [INFO][4666] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 06:04:58.754360 containerd[1602]: 2025-09-12 06:04:58.609 [INFO][4666] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 06:04:58.754360 containerd[1602]: 2025-09-12 06:04:58.609 [INFO][4666] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 12 06:04:58.754360 containerd[1602]: 2025-09-12 06:04:58.688 [INFO][4666] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.77ca8ed2699abe0e20847da30e866551bc4e7199a819529a4eb7cc34855469e1" host="localhost" Sep 12 06:04:58.754360 containerd[1602]: 2025-09-12 06:04:58.696 [INFO][4666] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 12 06:04:58.754360 containerd[1602]: 2025-09-12 06:04:58.700 [INFO][4666] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 12 06:04:58.754360 containerd[1602]: 2025-09-12 06:04:58.703 [INFO][4666] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 12 06:04:58.754360 containerd[1602]: 2025-09-12 06:04:58.707 [INFO][4666] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 12 06:04:58.754360 containerd[1602]: 2025-09-12 06:04:58.707 [INFO][4666] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.77ca8ed2699abe0e20847da30e866551bc4e7199a819529a4eb7cc34855469e1" host="localhost" Sep 12 06:04:58.754360 
containerd[1602]: 2025-09-12 06:04:58.709 [INFO][4666] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.77ca8ed2699abe0e20847da30e866551bc4e7199a819529a4eb7cc34855469e1 Sep 12 06:04:58.754360 containerd[1602]: 2025-09-12 06:04:58.716 [INFO][4666] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.77ca8ed2699abe0e20847da30e866551bc4e7199a819529a4eb7cc34855469e1" host="localhost" Sep 12 06:04:58.754360 containerd[1602]: 2025-09-12 06:04:58.723 [INFO][4666] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.77ca8ed2699abe0e20847da30e866551bc4e7199a819529a4eb7cc34855469e1" host="localhost" Sep 12 06:04:58.754360 containerd[1602]: 2025-09-12 06:04:58.723 [INFO][4666] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.77ca8ed2699abe0e20847da30e866551bc4e7199a819529a4eb7cc34855469e1" host="localhost" Sep 12 06:04:58.754360 containerd[1602]: 2025-09-12 06:04:58.723 [INFO][4666] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 12 06:04:58.754360 containerd[1602]: 2025-09-12 06:04:58.723 [INFO][4666] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="77ca8ed2699abe0e20847da30e866551bc4e7199a819529a4eb7cc34855469e1" HandleID="k8s-pod-network.77ca8ed2699abe0e20847da30e866551bc4e7199a819529a4eb7cc34855469e1" Workload="localhost-k8s-coredns--674b8bbfcf--m2pzx-eth0" Sep 12 06:04:58.754952 containerd[1602]: 2025-09-12 06:04:58.729 [INFO][4625] cni-plugin/k8s.go 418: Populated endpoint ContainerID="77ca8ed2699abe0e20847da30e866551bc4e7199a819529a4eb7cc34855469e1" Namespace="kube-system" Pod="coredns-674b8bbfcf-m2pzx" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--m2pzx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--m2pzx-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"565b8e82-48fc-4998-a083-ad30ae1f437e", ResourceVersion:"839", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 6, 4, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-m2pzx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali040bb3a9d5a", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 06:04:58.754952 containerd[1602]: 2025-09-12 06:04:58.729 [INFO][4625] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="77ca8ed2699abe0e20847da30e866551bc4e7199a819529a4eb7cc34855469e1" Namespace="kube-system" Pod="coredns-674b8bbfcf-m2pzx" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--m2pzx-eth0" Sep 12 06:04:58.754952 containerd[1602]: 2025-09-12 06:04:58.729 [INFO][4625] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali040bb3a9d5a ContainerID="77ca8ed2699abe0e20847da30e866551bc4e7199a819529a4eb7cc34855469e1" Namespace="kube-system" Pod="coredns-674b8bbfcf-m2pzx" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--m2pzx-eth0" Sep 12 06:04:58.754952 containerd[1602]: 2025-09-12 06:04:58.735 [INFO][4625] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="77ca8ed2699abe0e20847da30e866551bc4e7199a819529a4eb7cc34855469e1" Namespace="kube-system" Pod="coredns-674b8bbfcf-m2pzx" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--m2pzx-eth0" Sep 12 06:04:58.754952 containerd[1602]: 2025-09-12 06:04:58.737 [INFO][4625] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="77ca8ed2699abe0e20847da30e866551bc4e7199a819529a4eb7cc34855469e1" Namespace="kube-system" Pod="coredns-674b8bbfcf-m2pzx" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--m2pzx-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--m2pzx-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"565b8e82-48fc-4998-a083-ad30ae1f437e", ResourceVersion:"839", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 6, 4, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"77ca8ed2699abe0e20847da30e866551bc4e7199a819529a4eb7cc34855469e1", Pod:"coredns-674b8bbfcf-m2pzx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali040bb3a9d5a", MAC:"66:0b:d1:18:1a:76", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 06:04:58.754952 containerd[1602]: 2025-09-12 06:04:58.750 [INFO][4625] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="77ca8ed2699abe0e20847da30e866551bc4e7199a819529a4eb7cc34855469e1" Namespace="kube-system" Pod="coredns-674b8bbfcf-m2pzx" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--m2pzx-eth0" Sep 12 06:04:58.775257 containerd[1602]: time="2025-09-12T06:04:58.775219303Z" level=info msg="connecting to shim 77ca8ed2699abe0e20847da30e866551bc4e7199a819529a4eb7cc34855469e1" address="unix:///run/containerd/s/7cfa925874b91be9efb3d54002fd9d86c54bba3685953f336e04c8b90fd2b672" namespace=k8s.io protocol=ttrpc version=3 Sep 12 06:04:58.806782 systemd[1]: Started cri-containerd-77ca8ed2699abe0e20847da30e866551bc4e7199a819529a4eb7cc34855469e1.scope - libcontainer container 77ca8ed2699abe0e20847da30e866551bc4e7199a819529a4eb7cc34855469e1. Sep 12 06:04:58.820193 systemd-resolved[1412]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 06:04:58.849781 containerd[1602]: time="2025-09-12T06:04:58.849745710Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-m2pzx,Uid:565b8e82-48fc-4998-a083-ad30ae1f437e,Namespace:kube-system,Attempt:0,} returns sandbox id \"77ca8ed2699abe0e20847da30e866551bc4e7199a819529a4eb7cc34855469e1\"" Sep 12 06:04:58.850614 kubelet[2758]: E0912 06:04:58.850577 2758 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 06:04:58.854927 containerd[1602]: time="2025-09-12T06:04:58.854893061Z" level=info msg="CreateContainer within sandbox \"77ca8ed2699abe0e20847da30e866551bc4e7199a819529a4eb7cc34855469e1\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 12 06:04:58.863352 containerd[1602]: time="2025-09-12T06:04:58.863305966Z" level=info msg="Container 018a3f9b2906e9e8cb63a35b9833e714a0fc8bf5a3ea7f44316a5c0631494c1d: CDI devices from CRI Config.CDIDevices: []" Sep 12 06:04:58.869416 containerd[1602]: time="2025-09-12T06:04:58.869374578Z" 
level=info msg="CreateContainer within sandbox \"77ca8ed2699abe0e20847da30e866551bc4e7199a819529a4eb7cc34855469e1\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"018a3f9b2906e9e8cb63a35b9833e714a0fc8bf5a3ea7f44316a5c0631494c1d\"" Sep 12 06:04:58.869785 containerd[1602]: time="2025-09-12T06:04:58.869751265Z" level=info msg="StartContainer for \"018a3f9b2906e9e8cb63a35b9833e714a0fc8bf5a3ea7f44316a5c0631494c1d\"" Sep 12 06:04:58.870654 containerd[1602]: time="2025-09-12T06:04:58.870599108Z" level=info msg="connecting to shim 018a3f9b2906e9e8cb63a35b9833e714a0fc8bf5a3ea7f44316a5c0631494c1d" address="unix:///run/containerd/s/7cfa925874b91be9efb3d54002fd9d86c54bba3685953f336e04c8b90fd2b672" protocol=ttrpc version=3 Sep 12 06:04:58.893891 systemd[1]: Started cri-containerd-018a3f9b2906e9e8cb63a35b9833e714a0fc8bf5a3ea7f44316a5c0631494c1d.scope - libcontainer container 018a3f9b2906e9e8cb63a35b9833e714a0fc8bf5a3ea7f44316a5c0631494c1d. Sep 12 06:04:58.927756 containerd[1602]: time="2025-09-12T06:04:58.927717359Z" level=info msg="StartContainer for \"018a3f9b2906e9e8cb63a35b9833e714a0fc8bf5a3ea7f44316a5c0631494c1d\" returns successfully" Sep 12 06:04:58.987971 kubelet[2758]: E0912 06:04:58.987627 2758 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 06:04:58.993539 kubelet[2758]: E0912 06:04:58.993460 2758 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 06:04:58.993972 kubelet[2758]: E0912 06:04:58.993928 2758 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 06:04:58.998201 kubelet[2758]: I0912 06:04:58.998049 2758 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="kube-system/coredns-674b8bbfcf-m2pzx" podStartSLOduration=35.998037894 podStartE2EDuration="35.998037894s" podCreationTimestamp="2025-09-12 06:04:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 06:04:58.997665705 +0000 UTC m=+41.638833422" watchObservedRunningTime="2025-09-12 06:04:58.998037894 +0000 UTC m=+41.639205611" Sep 12 06:04:59.021068 kubelet[2758]: I0912 06:04:59.020992 2758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-76d5fc668d-rwlw2" podStartSLOduration=1.973902563 podStartE2EDuration="6.020975013s" podCreationTimestamp="2025-09-12 06:04:53 +0000 UTC" firstStartedPulling="2025-09-12 06:04:53.940594124 +0000 UTC m=+36.581761841" lastFinishedPulling="2025-09-12 06:04:57.987666584 +0000 UTC m=+40.628834291" observedRunningTime="2025-09-12 06:04:59.020126169 +0000 UTC m=+41.661293886" watchObservedRunningTime="2025-09-12 06:04:59.020975013 +0000 UTC m=+41.662142730" Sep 12 06:04:59.673825 systemd-networkd[1495]: cali7806593c705: Gained IPv6LL Sep 12 06:04:59.867509 systemd-networkd[1495]: cali040bb3a9d5a: Gained IPv6LL Sep 12 06:04:59.924290 systemd-networkd[1495]: vxlan.calico: Link UP Sep 12 06:04:59.924300 systemd-networkd[1495]: vxlan.calico: Gained carrier Sep 12 06:04:59.995980 kubelet[2758]: E0912 06:04:59.995943 2758 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 06:04:59.996488 kubelet[2758]: E0912 06:04:59.996466 2758 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 06:05:00.471430 containerd[1602]: time="2025-09-12T06:05:00.471373273Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-6b5c45568f-dl5wm,Uid:2b197ab3-c2aa-4a25-a200-7c47de4ee662,Namespace:calico-system,Attempt:0,}" Sep 12 06:05:00.997365 kubelet[2758]: E0912 06:05:00.997323 2758 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 06:05:01.145897 systemd-networkd[1495]: vxlan.calico: Gained IPv6LL Sep 12 06:05:01.175661 containerd[1602]: time="2025-09-12T06:05:01.175412741Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 06:05:01.176106 containerd[1602]: time="2025-09-12T06:05:01.176087299Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=47333864" Sep 12 06:05:01.177302 containerd[1602]: time="2025-09-12T06:05:01.177280710Z" level=info msg="ImageCreate event name:\"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 06:05:01.182491 containerd[1602]: time="2025-09-12T06:05:01.182438538Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 06:05:01.183448 containerd[1602]: time="2025-09-12T06:05:01.183267394Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 3.195164852s" Sep 12 06:05:01.183448 containerd[1602]: time="2025-09-12T06:05:01.183325764Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 12 06:05:01.185385 containerd[1602]: time="2025-09-12T06:05:01.185331651Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 12 06:05:01.189500 containerd[1602]: time="2025-09-12T06:05:01.189457100Z" level=info msg="CreateContainer within sandbox \"edbd04187ec35f4294f8a1d6159460d96e4f1c72ce4862a1827dcdaef81517df\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 12 06:05:01.209873 containerd[1602]: time="2025-09-12T06:05:01.209823432Z" level=info msg="Container f201c54c113eeca507ff69500834f3a0ebaae32e222ee298a017db4617e6a890: CDI devices from CRI Config.CDIDevices: []" Sep 12 06:05:01.222108 containerd[1602]: time="2025-09-12T06:05:01.222061438Z" level=info msg="CreateContainer within sandbox \"edbd04187ec35f4294f8a1d6159460d96e4f1c72ce4862a1827dcdaef81517df\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"f201c54c113eeca507ff69500834f3a0ebaae32e222ee298a017db4617e6a890\"" Sep 12 06:05:01.224286 containerd[1602]: time="2025-09-12T06:05:01.224234029Z" level=info msg="StartContainer for \"f201c54c113eeca507ff69500834f3a0ebaae32e222ee298a017db4617e6a890\"" Sep 12 06:05:01.225333 containerd[1602]: time="2025-09-12T06:05:01.225307254Z" level=info msg="connecting to shim f201c54c113eeca507ff69500834f3a0ebaae32e222ee298a017db4617e6a890" address="unix:///run/containerd/s/46f334b466c19cea15e3bd861d0559b982aefe834a63126d0b3110408c5f7877" protocol=ttrpc version=3 Sep 12 06:05:01.268608 systemd-networkd[1495]: cali532044e34e0: Link UP Sep 12 06:05:01.268837 systemd-networkd[1495]: cali532044e34e0: Gained carrier Sep 12 06:05:01.268885 systemd[1]: Started cri-containerd-f201c54c113eeca507ff69500834f3a0ebaae32e222ee298a017db4617e6a890.scope - libcontainer container f201c54c113eeca507ff69500834f3a0ebaae32e222ee298a017db4617e6a890. 
Sep 12 06:05:01.295612 containerd[1602]: 2025-09-12 06:05:01.188 [INFO][4970] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--6b5c45568f--dl5wm-eth0 calico-kube-controllers-6b5c45568f- calico-system 2b197ab3-c2aa-4a25-a200-7c47de4ee662 840 0 2025-09-12 06:04:35 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6b5c45568f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-6b5c45568f-dl5wm eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali532044e34e0 [] [] }} ContainerID="71ed8ec3b96b1b4a1ecf87050ed9e53ee08762fbfd6507d1974fcfd9690ae307" Namespace="calico-system" Pod="calico-kube-controllers-6b5c45568f-dl5wm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6b5c45568f--dl5wm-" Sep 12 06:05:01.295612 containerd[1602]: 2025-09-12 06:05:01.189 [INFO][4970] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="71ed8ec3b96b1b4a1ecf87050ed9e53ee08762fbfd6507d1974fcfd9690ae307" Namespace="calico-system" Pod="calico-kube-controllers-6b5c45568f-dl5wm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6b5c45568f--dl5wm-eth0" Sep 12 06:05:01.295612 containerd[1602]: 2025-09-12 06:05:01.228 [INFO][4991] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="71ed8ec3b96b1b4a1ecf87050ed9e53ee08762fbfd6507d1974fcfd9690ae307" HandleID="k8s-pod-network.71ed8ec3b96b1b4a1ecf87050ed9e53ee08762fbfd6507d1974fcfd9690ae307" Workload="localhost-k8s-calico--kube--controllers--6b5c45568f--dl5wm-eth0" Sep 12 06:05:01.295612 containerd[1602]: 2025-09-12 06:05:01.228 [INFO][4991] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="71ed8ec3b96b1b4a1ecf87050ed9e53ee08762fbfd6507d1974fcfd9690ae307" HandleID="k8s-pod-network.71ed8ec3b96b1b4a1ecf87050ed9e53ee08762fbfd6507d1974fcfd9690ae307" Workload="localhost-k8s-calico--kube--controllers--6b5c45568f--dl5wm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000dd640), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-6b5c45568f-dl5wm", "timestamp":"2025-09-12 06:05:01.228154241 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 06:05:01.295612 containerd[1602]: 2025-09-12 06:05:01.228 [INFO][4991] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 06:05:01.295612 containerd[1602]: 2025-09-12 06:05:01.228 [INFO][4991] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 06:05:01.295612 containerd[1602]: 2025-09-12 06:05:01.228 [INFO][4991] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 12 06:05:01.295612 containerd[1602]: 2025-09-12 06:05:01.235 [INFO][4991] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.71ed8ec3b96b1b4a1ecf87050ed9e53ee08762fbfd6507d1974fcfd9690ae307" host="localhost" Sep 12 06:05:01.295612 containerd[1602]: 2025-09-12 06:05:01.239 [INFO][4991] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 12 06:05:01.295612 containerd[1602]: 2025-09-12 06:05:01.244 [INFO][4991] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 12 06:05:01.295612 containerd[1602]: 2025-09-12 06:05:01.245 [INFO][4991] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 12 06:05:01.295612 containerd[1602]: 2025-09-12 06:05:01.247 [INFO][4991] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 12 06:05:01.295612 containerd[1602]: 2025-09-12 06:05:01.247 [INFO][4991] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.71ed8ec3b96b1b4a1ecf87050ed9e53ee08762fbfd6507d1974fcfd9690ae307" host="localhost" Sep 12 06:05:01.295612 containerd[1602]: 2025-09-12 06:05:01.248 [INFO][4991] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.71ed8ec3b96b1b4a1ecf87050ed9e53ee08762fbfd6507d1974fcfd9690ae307 Sep 12 06:05:01.295612 containerd[1602]: 2025-09-12 06:05:01.252 [INFO][4991] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.71ed8ec3b96b1b4a1ecf87050ed9e53ee08762fbfd6507d1974fcfd9690ae307" host="localhost" Sep 12 06:05:01.295612 containerd[1602]: 2025-09-12 06:05:01.259 [INFO][4991] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 
handle="k8s-pod-network.71ed8ec3b96b1b4a1ecf87050ed9e53ee08762fbfd6507d1974fcfd9690ae307" host="localhost" Sep 12 06:05:01.295612 containerd[1602]: 2025-09-12 06:05:01.259 [INFO][4991] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.71ed8ec3b96b1b4a1ecf87050ed9e53ee08762fbfd6507d1974fcfd9690ae307" host="localhost" Sep 12 06:05:01.295612 containerd[1602]: 2025-09-12 06:05:01.259 [INFO][4991] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 06:05:01.295612 containerd[1602]: 2025-09-12 06:05:01.259 [INFO][4991] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="71ed8ec3b96b1b4a1ecf87050ed9e53ee08762fbfd6507d1974fcfd9690ae307" HandleID="k8s-pod-network.71ed8ec3b96b1b4a1ecf87050ed9e53ee08762fbfd6507d1974fcfd9690ae307" Workload="localhost-k8s-calico--kube--controllers--6b5c45568f--dl5wm-eth0" Sep 12 06:05:01.296183 containerd[1602]: 2025-09-12 06:05:01.266 [INFO][4970] cni-plugin/k8s.go 418: Populated endpoint ContainerID="71ed8ec3b96b1b4a1ecf87050ed9e53ee08762fbfd6507d1974fcfd9690ae307" Namespace="calico-system" Pod="calico-kube-controllers-6b5c45568f-dl5wm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6b5c45568f--dl5wm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6b5c45568f--dl5wm-eth0", GenerateName:"calico-kube-controllers-6b5c45568f-", Namespace:"calico-system", SelfLink:"", UID:"2b197ab3-c2aa-4a25-a200-7c47de4ee662", ResourceVersion:"840", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 6, 4, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6b5c45568f", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-6b5c45568f-dl5wm", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali532044e34e0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 06:05:01.296183 containerd[1602]: 2025-09-12 06:05:01.266 [INFO][4970] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="71ed8ec3b96b1b4a1ecf87050ed9e53ee08762fbfd6507d1974fcfd9690ae307" Namespace="calico-system" Pod="calico-kube-controllers-6b5c45568f-dl5wm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6b5c45568f--dl5wm-eth0" Sep 12 06:05:01.296183 containerd[1602]: 2025-09-12 06:05:01.266 [INFO][4970] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali532044e34e0 ContainerID="71ed8ec3b96b1b4a1ecf87050ed9e53ee08762fbfd6507d1974fcfd9690ae307" Namespace="calico-system" Pod="calico-kube-controllers-6b5c45568f-dl5wm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6b5c45568f--dl5wm-eth0" Sep 12 06:05:01.296183 containerd[1602]: 2025-09-12 06:05:01.268 [INFO][4970] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="71ed8ec3b96b1b4a1ecf87050ed9e53ee08762fbfd6507d1974fcfd9690ae307" Namespace="calico-system" Pod="calico-kube-controllers-6b5c45568f-dl5wm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6b5c45568f--dl5wm-eth0" Sep 12 06:05:01.296183 containerd[1602]: 
2025-09-12 06:05:01.271 [INFO][4970] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="71ed8ec3b96b1b4a1ecf87050ed9e53ee08762fbfd6507d1974fcfd9690ae307" Namespace="calico-system" Pod="calico-kube-controllers-6b5c45568f-dl5wm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6b5c45568f--dl5wm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6b5c45568f--dl5wm-eth0", GenerateName:"calico-kube-controllers-6b5c45568f-", Namespace:"calico-system", SelfLink:"", UID:"2b197ab3-c2aa-4a25-a200-7c47de4ee662", ResourceVersion:"840", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 6, 4, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6b5c45568f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"71ed8ec3b96b1b4a1ecf87050ed9e53ee08762fbfd6507d1974fcfd9690ae307", Pod:"calico-kube-controllers-6b5c45568f-dl5wm", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali532044e34e0", MAC:"52:33:93:3c:c8:a0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 06:05:01.296183 containerd[1602]: 
2025-09-12 06:05:01.285 [INFO][4970] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="71ed8ec3b96b1b4a1ecf87050ed9e53ee08762fbfd6507d1974fcfd9690ae307" Namespace="calico-system" Pod="calico-kube-controllers-6b5c45568f-dl5wm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6b5c45568f--dl5wm-eth0" Sep 12 06:05:01.321545 containerd[1602]: time="2025-09-12T06:05:01.321505005Z" level=info msg="connecting to shim 71ed8ec3b96b1b4a1ecf87050ed9e53ee08762fbfd6507d1974fcfd9690ae307" address="unix:///run/containerd/s/248e384915e132e33696580fde634e8d2306bb0f7b925ef38d5f3eff6f104bc7" namespace=k8s.io protocol=ttrpc version=3 Sep 12 06:05:01.327496 containerd[1602]: time="2025-09-12T06:05:01.327465139Z" level=info msg="StartContainer for \"f201c54c113eeca507ff69500834f3a0ebaae32e222ee298a017db4617e6a890\" returns successfully" Sep 12 06:05:01.354779 systemd[1]: Started cri-containerd-71ed8ec3b96b1b4a1ecf87050ed9e53ee08762fbfd6507d1974fcfd9690ae307.scope - libcontainer container 71ed8ec3b96b1b4a1ecf87050ed9e53ee08762fbfd6507d1974fcfd9690ae307. 
Sep 12 06:05:01.369064 systemd-resolved[1412]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 06:05:01.411627 containerd[1602]: time="2025-09-12T06:05:01.411577650Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6b5c45568f-dl5wm,Uid:2b197ab3-c2aa-4a25-a200-7c47de4ee662,Namespace:calico-system,Attempt:0,} returns sandbox id \"71ed8ec3b96b1b4a1ecf87050ed9e53ee08762fbfd6507d1974fcfd9690ae307\"" Sep 12 06:05:01.474695 containerd[1602]: time="2025-09-12T06:05:01.474596494Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jggbx,Uid:f79f1c00-3e42-40ed-bb81-e1034be446c4,Namespace:calico-system,Attempt:0,}" Sep 12 06:05:01.584838 systemd-networkd[1495]: cali06d7263862d: Link UP Sep 12 06:05:01.587106 systemd-networkd[1495]: cali06d7263862d: Gained carrier Sep 12 06:05:01.604472 containerd[1602]: 2025-09-12 06:05:01.518 [INFO][5094] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--jggbx-eth0 csi-node-driver- calico-system f79f1c00-3e42-40ed-bb81-e1034be446c4 737 0 2025-09-12 06:04:35 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6c96d95cc7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-jggbx eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali06d7263862d [] [] }} ContainerID="0cd25ce3988efc5fffc72a600b717293f8bc5e0bed44470c78ce1ccb165b0873" Namespace="calico-system" Pod="csi-node-driver-jggbx" WorkloadEndpoint="localhost-k8s-csi--node--driver--jggbx-" Sep 12 06:05:01.604472 containerd[1602]: 2025-09-12 06:05:01.519 [INFO][5094] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="0cd25ce3988efc5fffc72a600b717293f8bc5e0bed44470c78ce1ccb165b0873" Namespace="calico-system" Pod="csi-node-driver-jggbx" WorkloadEndpoint="localhost-k8s-csi--node--driver--jggbx-eth0" Sep 12 06:05:01.604472 containerd[1602]: 2025-09-12 06:05:01.545 [INFO][5103] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0cd25ce3988efc5fffc72a600b717293f8bc5e0bed44470c78ce1ccb165b0873" HandleID="k8s-pod-network.0cd25ce3988efc5fffc72a600b717293f8bc5e0bed44470c78ce1ccb165b0873" Workload="localhost-k8s-csi--node--driver--jggbx-eth0" Sep 12 06:05:01.604472 containerd[1602]: 2025-09-12 06:05:01.546 [INFO][5103] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0cd25ce3988efc5fffc72a600b717293f8bc5e0bed44470c78ce1ccb165b0873" HandleID="k8s-pod-network.0cd25ce3988efc5fffc72a600b717293f8bc5e0bed44470c78ce1ccb165b0873" Workload="localhost-k8s-csi--node--driver--jggbx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003258d0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-jggbx", "timestamp":"2025-09-12 06:05:01.545877665 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 06:05:01.604472 containerd[1602]: 2025-09-12 06:05:01.546 [INFO][5103] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 06:05:01.604472 containerd[1602]: 2025-09-12 06:05:01.546 [INFO][5103] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 06:05:01.604472 containerd[1602]: 2025-09-12 06:05:01.546 [INFO][5103] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 12 06:05:01.604472 containerd[1602]: 2025-09-12 06:05:01.552 [INFO][5103] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0cd25ce3988efc5fffc72a600b717293f8bc5e0bed44470c78ce1ccb165b0873" host="localhost" Sep 12 06:05:01.604472 containerd[1602]: 2025-09-12 06:05:01.560 [INFO][5103] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 12 06:05:01.604472 containerd[1602]: 2025-09-12 06:05:01.564 [INFO][5103] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 12 06:05:01.604472 containerd[1602]: 2025-09-12 06:05:01.566 [INFO][5103] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 12 06:05:01.604472 containerd[1602]: 2025-09-12 06:05:01.568 [INFO][5103] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 12 06:05:01.604472 containerd[1602]: 2025-09-12 06:05:01.568 [INFO][5103] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.0cd25ce3988efc5fffc72a600b717293f8bc5e0bed44470c78ce1ccb165b0873" host="localhost" Sep 12 06:05:01.604472 containerd[1602]: 2025-09-12 06:05:01.569 [INFO][5103] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.0cd25ce3988efc5fffc72a600b717293f8bc5e0bed44470c78ce1ccb165b0873 Sep 12 06:05:01.604472 containerd[1602]: 2025-09-12 06:05:01.573 [INFO][5103] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.0cd25ce3988efc5fffc72a600b717293f8bc5e0bed44470c78ce1ccb165b0873" host="localhost" Sep 12 06:05:01.604472 containerd[1602]: 2025-09-12 06:05:01.578 [INFO][5103] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 
handle="k8s-pod-network.0cd25ce3988efc5fffc72a600b717293f8bc5e0bed44470c78ce1ccb165b0873" host="localhost" Sep 12 06:05:01.604472 containerd[1602]: 2025-09-12 06:05:01.578 [INFO][5103] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.0cd25ce3988efc5fffc72a600b717293f8bc5e0bed44470c78ce1ccb165b0873" host="localhost" Sep 12 06:05:01.604472 containerd[1602]: 2025-09-12 06:05:01.578 [INFO][5103] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 06:05:01.604472 containerd[1602]: 2025-09-12 06:05:01.578 [INFO][5103] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="0cd25ce3988efc5fffc72a600b717293f8bc5e0bed44470c78ce1ccb165b0873" HandleID="k8s-pod-network.0cd25ce3988efc5fffc72a600b717293f8bc5e0bed44470c78ce1ccb165b0873" Workload="localhost-k8s-csi--node--driver--jggbx-eth0" Sep 12 06:05:01.605051 containerd[1602]: 2025-09-12 06:05:01.582 [INFO][5094] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0cd25ce3988efc5fffc72a600b717293f8bc5e0bed44470c78ce1ccb165b0873" Namespace="calico-system" Pod="csi-node-driver-jggbx" WorkloadEndpoint="localhost-k8s-csi--node--driver--jggbx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--jggbx-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f79f1c00-3e42-40ed-bb81-e1034be446c4", ResourceVersion:"737", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 6, 4, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-jggbx", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali06d7263862d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 06:05:01.605051 containerd[1602]: 2025-09-12 06:05:01.582 [INFO][5094] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="0cd25ce3988efc5fffc72a600b717293f8bc5e0bed44470c78ce1ccb165b0873" Namespace="calico-system" Pod="csi-node-driver-jggbx" WorkloadEndpoint="localhost-k8s-csi--node--driver--jggbx-eth0" Sep 12 06:05:01.605051 containerd[1602]: 2025-09-12 06:05:01.583 [INFO][5094] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali06d7263862d ContainerID="0cd25ce3988efc5fffc72a600b717293f8bc5e0bed44470c78ce1ccb165b0873" Namespace="calico-system" Pod="csi-node-driver-jggbx" WorkloadEndpoint="localhost-k8s-csi--node--driver--jggbx-eth0" Sep 12 06:05:01.605051 containerd[1602]: 2025-09-12 06:05:01.587 [INFO][5094] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0cd25ce3988efc5fffc72a600b717293f8bc5e0bed44470c78ce1ccb165b0873" Namespace="calico-system" Pod="csi-node-driver-jggbx" WorkloadEndpoint="localhost-k8s-csi--node--driver--jggbx-eth0" Sep 12 06:05:01.605051 containerd[1602]: 2025-09-12 06:05:01.588 [INFO][5094] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0cd25ce3988efc5fffc72a600b717293f8bc5e0bed44470c78ce1ccb165b0873" 
Namespace="calico-system" Pod="csi-node-driver-jggbx" WorkloadEndpoint="localhost-k8s-csi--node--driver--jggbx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--jggbx-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f79f1c00-3e42-40ed-bb81-e1034be446c4", ResourceVersion:"737", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 6, 4, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0cd25ce3988efc5fffc72a600b717293f8bc5e0bed44470c78ce1ccb165b0873", Pod:"csi-node-driver-jggbx", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali06d7263862d", MAC:"4a:03:f5:1d:02:34", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 06:05:01.605051 containerd[1602]: 2025-09-12 06:05:01.600 [INFO][5094] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0cd25ce3988efc5fffc72a600b717293f8bc5e0bed44470c78ce1ccb165b0873" Namespace="calico-system" Pod="csi-node-driver-jggbx" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--jggbx-eth0" Sep 12 06:05:01.668219 containerd[1602]: time="2025-09-12T06:05:01.667217158Z" level=info msg="connecting to shim 0cd25ce3988efc5fffc72a600b717293f8bc5e0bed44470c78ce1ccb165b0873" address="unix:///run/containerd/s/e2c31cd936f4c301df59cc91ea952b5bf035b632bd81bbea5e76214875fa2247" namespace=k8s.io protocol=ttrpc version=3 Sep 12 06:05:01.670041 containerd[1602]: time="2025-09-12T06:05:01.670007789Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 06:05:01.671856 containerd[1602]: time="2025-09-12T06:05:01.671837986Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Sep 12 06:05:01.673947 containerd[1602]: time="2025-09-12T06:05:01.673923803Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 488.552337ms" Sep 12 06:05:01.674029 containerd[1602]: time="2025-09-12T06:05:01.674015226Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 12 06:05:01.676713 containerd[1602]: time="2025-09-12T06:05:01.676688797Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 12 06:05:01.679665 containerd[1602]: time="2025-09-12T06:05:01.679399849Z" level=info msg="CreateContainer within sandbox \"406d982fbd050f9502e28f67cf57be315bed98187711dd85c6398abf47306443\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 12 06:05:01.687277 containerd[1602]: time="2025-09-12T06:05:01.687234605Z" 
level=info msg="Container dcb6442a3adc62ffdde427baec7f4a4af12a5ab72b3b466d043ae83cca6c3507: CDI devices from CRI Config.CDIDevices: []" Sep 12 06:05:01.695921 systemd[1]: Started cri-containerd-0cd25ce3988efc5fffc72a600b717293f8bc5e0bed44470c78ce1ccb165b0873.scope - libcontainer container 0cd25ce3988efc5fffc72a600b717293f8bc5e0bed44470c78ce1ccb165b0873. Sep 12 06:05:01.702426 containerd[1602]: time="2025-09-12T06:05:01.702390851Z" level=info msg="CreateContainer within sandbox \"406d982fbd050f9502e28f67cf57be315bed98187711dd85c6398abf47306443\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"dcb6442a3adc62ffdde427baec7f4a4af12a5ab72b3b466d043ae83cca6c3507\"" Sep 12 06:05:01.704338 containerd[1602]: time="2025-09-12T06:05:01.703194410Z" level=info msg="StartContainer for \"dcb6442a3adc62ffdde427baec7f4a4af12a5ab72b3b466d043ae83cca6c3507\"" Sep 12 06:05:01.706264 containerd[1602]: time="2025-09-12T06:05:01.706211937Z" level=info msg="connecting to shim dcb6442a3adc62ffdde427baec7f4a4af12a5ab72b3b466d043ae83cca6c3507" address="unix:///run/containerd/s/3496e98c2ef5e04dc0f8e5ad4b8de61bc208f6ecac8264efde033f75905644d1" protocol=ttrpc version=3 Sep 12 06:05:01.715143 systemd-resolved[1412]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 06:05:01.734390 systemd[1]: Started cri-containerd-dcb6442a3adc62ffdde427baec7f4a4af12a5ab72b3b466d043ae83cca6c3507.scope - libcontainer container dcb6442a3adc62ffdde427baec7f4a4af12a5ab72b3b466d043ae83cca6c3507. 
Sep 12 06:05:01.742560 containerd[1602]: time="2025-09-12T06:05:01.741741860Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jggbx,Uid:f79f1c00-3e42-40ed-bb81-e1034be446c4,Namespace:calico-system,Attempt:0,} returns sandbox id \"0cd25ce3988efc5fffc72a600b717293f8bc5e0bed44470c78ce1ccb165b0873\"" Sep 12 06:05:01.794778 containerd[1602]: time="2025-09-12T06:05:01.794738601Z" level=info msg="StartContainer for \"dcb6442a3adc62ffdde427baec7f4a4af12a5ab72b3b466d043ae83cca6c3507\" returns successfully" Sep 12 06:05:02.016483 kubelet[2758]: I0912 06:05:02.015107 2758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-84959cb5d5-7f7ms" podStartSLOduration=26.507446378 podStartE2EDuration="31.015091108s" podCreationTimestamp="2025-09-12 06:04:31 +0000 UTC" firstStartedPulling="2025-09-12 06:04:57.16721373 +0000 UTC m=+39.808381447" lastFinishedPulling="2025-09-12 06:05:01.674858469 +0000 UTC m=+44.316026177" observedRunningTime="2025-09-12 06:05:02.01338845 +0000 UTC m=+44.654556167" watchObservedRunningTime="2025-09-12 06:05:02.015091108 +0000 UTC m=+44.656258825" Sep 12 06:05:02.682334 systemd-networkd[1495]: cali532044e34e0: Gained IPv6LL Sep 12 06:05:02.858895 systemd[1]: Started sshd@8-10.0.0.150:22-10.0.0.1:36094.service - OpenSSH per-connection server daemon (10.0.0.1:36094). Sep 12 06:05:02.873931 systemd-networkd[1495]: cali06d7263862d: Gained IPv6LL Sep 12 06:05:02.942742 sshd[5207]: Accepted publickey for core from 10.0.0.1 port 36094 ssh2: RSA SHA256:1ltQeNwsGZzWPatjm39NOSOhM7BVT7DhGn6/LONO9qE Sep 12 06:05:02.944553 sshd-session[5207]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 06:05:02.949428 systemd-logind[1580]: New session 9 of user core. Sep 12 06:05:02.957795 systemd[1]: Started session-9.scope - Session 9 of User core. 
Sep 12 06:05:03.008724 kubelet[2758]: I0912 06:05:03.008342 2758 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 12 06:05:03.027708 kubelet[2758]: I0912 06:05:03.025120 2758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-84959cb5d5-xxzxr" podStartSLOduration=27.874719265 podStartE2EDuration="32.025101746s" podCreationTimestamp="2025-09-12 06:04:31 +0000 UTC" firstStartedPulling="2025-09-12 06:04:57.034603321 +0000 UTC m=+39.675771038" lastFinishedPulling="2025-09-12 06:05:01.184985802 +0000 UTC m=+43.826153519" observedRunningTime="2025-09-12 06:05:02.031613168 +0000 UTC m=+44.672780885" watchObservedRunningTime="2025-09-12 06:05:03.025101746 +0000 UTC m=+45.666269463"
Sep 12 06:05:03.124073 sshd[5210]: Connection closed by 10.0.0.1 port 36094
Sep 12 06:05:03.124920 sshd-session[5207]: pam_unix(sshd:session): session closed for user core
Sep 12 06:05:03.129283 systemd[1]: sshd@8-10.0.0.150:22-10.0.0.1:36094.service: Deactivated successfully.
Sep 12 06:05:03.132352 systemd[1]: session-9.scope: Deactivated successfully.
Sep 12 06:05:03.134245 systemd-logind[1580]: Session 9 logged out. Waiting for processes to exit.
Sep 12 06:05:03.135754 systemd-logind[1580]: Removed session 9.
Sep 12 06:05:05.705340 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2801333761.mount: Deactivated successfully.
Sep 12 06:05:06.244948 containerd[1602]: time="2025-09-12T06:05:06.244870649Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 06:05:06.246854 containerd[1602]: time="2025-09-12T06:05:06.246702979Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=66357526"
Sep 12 06:05:06.248052 containerd[1602]: time="2025-09-12T06:05:06.248020232Z" level=info msg="ImageCreate event name:\"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 06:05:06.253399 containerd[1602]: time="2025-09-12T06:05:06.253365798Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 06:05:06.254100 containerd[1602]: time="2025-09-12T06:05:06.254068027Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"66357372\" in 4.577226804s"
Sep 12 06:05:06.254143 containerd[1602]: time="2025-09-12T06:05:06.254101640Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\""
Sep 12 06:05:06.258857 containerd[1602]: time="2025-09-12T06:05:06.258812474Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\""
Sep 12 06:05:06.275832 containerd[1602]: time="2025-09-12T06:05:06.275775283Z" level=info msg="CreateContainer within sandbox \"920ba2a60791bed1f1cda2e8ad540449b4af949947a0b84f7fc2ede37c13c4d9\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}"
Sep 12 06:05:06.304874 containerd[1602]: time="2025-09-12T06:05:06.304839562Z" level=info msg="Container da8d50268aad3e9904db29fde9e704b1d961d80ad52b8247d80bbf7726ca1135: CDI devices from CRI Config.CDIDevices: []"
Sep 12 06:05:06.314167 containerd[1602]: time="2025-09-12T06:05:06.314102343Z" level=info msg="CreateContainer within sandbox \"920ba2a60791bed1f1cda2e8ad540449b4af949947a0b84f7fc2ede37c13c4d9\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"da8d50268aad3e9904db29fde9e704b1d961d80ad52b8247d80bbf7726ca1135\""
Sep 12 06:05:06.314652 containerd[1602]: time="2025-09-12T06:05:06.314604707Z" level=info msg="StartContainer for \"da8d50268aad3e9904db29fde9e704b1d961d80ad52b8247d80bbf7726ca1135\""
Sep 12 06:05:06.315567 containerd[1602]: time="2025-09-12T06:05:06.315539681Z" level=info msg="connecting to shim da8d50268aad3e9904db29fde9e704b1d961d80ad52b8247d80bbf7726ca1135" address="unix:///run/containerd/s/0802f8e4b67204bd1e7c3be92352bd7d812636501ce8ac2a62c7aecd3fbd136f" protocol=ttrpc version=3
Sep 12 06:05:06.341798 systemd[1]: Started cri-containerd-da8d50268aad3e9904db29fde9e704b1d961d80ad52b8247d80bbf7726ca1135.scope - libcontainer container da8d50268aad3e9904db29fde9e704b1d961d80ad52b8247d80bbf7726ca1135.
Sep 12 06:05:06.711526 containerd[1602]: time="2025-09-12T06:05:06.711475962Z" level=info msg="StartContainer for \"da8d50268aad3e9904db29fde9e704b1d961d80ad52b8247d80bbf7726ca1135\" returns successfully"
Sep 12 06:05:07.119243 containerd[1602]: time="2025-09-12T06:05:07.119107729Z" level=info msg="TaskExit event in podsandbox handler container_id:\"da8d50268aad3e9904db29fde9e704b1d961d80ad52b8247d80bbf7726ca1135\" id:\"59004b11ee29350e0ed946a52b194d48b0f90658b288a0bf919ba57049d113dd\" pid:5292 exit_status:1 exited_at:{seconds:1757657107 nanos:118472205}"
Sep 12 06:05:07.240577 kubelet[2758]: I0912 06:05:07.240450 2758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-54d579b49d-5fzv6" podStartSLOduration=25.728237926 podStartE2EDuration="33.240422236s" podCreationTimestamp="2025-09-12 06:04:34 +0000 UTC" firstStartedPulling="2025-09-12 06:04:58.744532991 +0000 UTC m=+41.385700698" lastFinishedPulling="2025-09-12 06:05:06.256717291 +0000 UTC m=+48.897885008" observedRunningTime="2025-09-12 06:05:07.239843449 +0000 UTC m=+49.881011386" watchObservedRunningTime="2025-09-12 06:05:07.240422236 +0000 UTC m=+49.881589953"
Sep 12 06:05:08.104200 containerd[1602]: time="2025-09-12T06:05:08.104152654Z" level=info msg="TaskExit event in podsandbox handler container_id:\"da8d50268aad3e9904db29fde9e704b1d961d80ad52b8247d80bbf7726ca1135\" id:\"5aec4fcab66ca3795a08acafae887d263cd7c9b5b4d897cbb96a3e3324517970\" pid:5323 exit_status:1 exited_at:{seconds:1757657108 nanos:103810893}"
Sep 12 06:05:08.142619 systemd[1]: Started sshd@9-10.0.0.150:22-10.0.0.1:36102.service - OpenSSH per-connection server daemon (10.0.0.1:36102).
Sep 12 06:05:08.197643 sshd[5336]: Accepted publickey for core from 10.0.0.1 port 36102 ssh2: RSA SHA256:1ltQeNwsGZzWPatjm39NOSOhM7BVT7DhGn6/LONO9qE
Sep 12 06:05:08.199352 sshd-session[5336]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 06:05:08.203686 systemd-logind[1580]: New session 10 of user core.
Sep 12 06:05:08.210023 systemd[1]: Started session-10.scope - Session 10 of User core.
Sep 12 06:05:08.391751 sshd[5339]: Connection closed by 10.0.0.1 port 36102
Sep 12 06:05:08.392908 sshd-session[5336]: pam_unix(sshd:session): session closed for user core
Sep 12 06:05:08.406474 systemd[1]: sshd@9-10.0.0.150:22-10.0.0.1:36102.service: Deactivated successfully.
Sep 12 06:05:08.408753 systemd[1]: session-10.scope: Deactivated successfully.
Sep 12 06:05:08.409599 systemd-logind[1580]: Session 10 logged out. Waiting for processes to exit.
Sep 12 06:05:08.412277 systemd[1]: Started sshd@10-10.0.0.150:22-10.0.0.1:36114.service - OpenSSH per-connection server daemon (10.0.0.1:36114).
Sep 12 06:05:08.413487 systemd-logind[1580]: Removed session 10.
Sep 12 06:05:08.472141 sshd[5354]: Accepted publickey for core from 10.0.0.1 port 36114 ssh2: RSA SHA256:1ltQeNwsGZzWPatjm39NOSOhM7BVT7DhGn6/LONO9qE
Sep 12 06:05:08.473515 sshd-session[5354]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 06:05:08.479791 systemd-logind[1580]: New session 11 of user core.
Sep 12 06:05:08.488757 systemd[1]: Started session-11.scope - Session 11 of User core.
Sep 12 06:05:08.641417 sshd[5357]: Connection closed by 10.0.0.1 port 36114
Sep 12 06:05:08.643710 sshd-session[5354]: pam_unix(sshd:session): session closed for user core
Sep 12 06:05:08.658658 systemd[1]: sshd@10-10.0.0.150:22-10.0.0.1:36114.service: Deactivated successfully.
Sep 12 06:05:08.661856 systemd[1]: session-11.scope: Deactivated successfully.
Sep 12 06:05:08.662997 systemd-logind[1580]: Session 11 logged out. Waiting for processes to exit.
Sep 12 06:05:08.666537 systemd-logind[1580]: Removed session 11.
Sep 12 06:05:08.668613 systemd[1]: Started sshd@11-10.0.0.150:22-10.0.0.1:36128.service - OpenSSH per-connection server daemon (10.0.0.1:36128).
Sep 12 06:05:08.726602 sshd[5368]: Accepted publickey for core from 10.0.0.1 port 36128 ssh2: RSA SHA256:1ltQeNwsGZzWPatjm39NOSOhM7BVT7DhGn6/LONO9qE
Sep 12 06:05:08.728549 sshd-session[5368]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 06:05:08.733252 systemd-logind[1580]: New session 12 of user core.
Sep 12 06:05:08.744768 systemd[1]: Started session-12.scope - Session 12 of User core.
Sep 12 06:05:08.866778 sshd[5371]: Connection closed by 10.0.0.1 port 36128
Sep 12 06:05:08.867319 sshd-session[5368]: pam_unix(sshd:session): session closed for user core
Sep 12 06:05:08.872806 systemd[1]: sshd@11-10.0.0.150:22-10.0.0.1:36128.service: Deactivated successfully.
Sep 12 06:05:08.875614 systemd[1]: session-12.scope: Deactivated successfully.
Sep 12 06:05:08.876350 systemd-logind[1580]: Session 12 logged out. Waiting for processes to exit.
Sep 12 06:05:08.877576 systemd-logind[1580]: Removed session 12.
Sep 12 06:05:10.008811 containerd[1602]: time="2025-09-12T06:05:10.008745811Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 06:05:10.009668 containerd[1602]: time="2025-09-12T06:05:10.009586599Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=51277746"
Sep 12 06:05:10.011042 containerd[1602]: time="2025-09-12T06:05:10.011010923Z" level=info msg="ImageCreate event name:\"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 06:05:10.012888 containerd[1602]: time="2025-09-12T06:05:10.012857518Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 06:05:10.013601 containerd[1602]: time="2025-09-12T06:05:10.013568683Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"52770417\" in 3.754709101s"
Sep 12 06:05:10.013601 containerd[1602]: time="2025-09-12T06:05:10.013598239Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\""
Sep 12 06:05:10.014526 containerd[1602]: time="2025-09-12T06:05:10.014485094Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\""
Sep 12 06:05:10.027529 containerd[1602]: time="2025-09-12T06:05:10.027494594Z" level=info msg="CreateContainer within sandbox \"71ed8ec3b96b1b4a1ecf87050ed9e53ee08762fbfd6507d1974fcfd9690ae307\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}"
Sep 12 06:05:10.037972 containerd[1602]: time="2025-09-12T06:05:10.037929582Z" level=info msg="Container 5cde23952d739bbf814ddfd7c359cecb560102485fd4846635b3be405ad29186: CDI devices from CRI Config.CDIDevices: []"
Sep 12 06:05:10.046562 containerd[1602]: time="2025-09-12T06:05:10.046523825Z" level=info msg="CreateContainer within sandbox \"71ed8ec3b96b1b4a1ecf87050ed9e53ee08762fbfd6507d1974fcfd9690ae307\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"5cde23952d739bbf814ddfd7c359cecb560102485fd4846635b3be405ad29186\""
Sep 12 06:05:10.047202 containerd[1602]: time="2025-09-12T06:05:10.046975692Z" level=info msg="StartContainer for \"5cde23952d739bbf814ddfd7c359cecb560102485fd4846635b3be405ad29186\""
Sep 12 06:05:10.048190 containerd[1602]: time="2025-09-12T06:05:10.048166287Z" level=info msg="connecting to shim 5cde23952d739bbf814ddfd7c359cecb560102485fd4846635b3be405ad29186" address="unix:///run/containerd/s/248e384915e132e33696580fde634e8d2306bb0f7b925ef38d5f3eff6f104bc7" protocol=ttrpc version=3
Sep 12 06:05:10.071898 systemd[1]: Started cri-containerd-5cde23952d739bbf814ddfd7c359cecb560102485fd4846635b3be405ad29186.scope - libcontainer container 5cde23952d739bbf814ddfd7c359cecb560102485fd4846635b3be405ad29186.
Sep 12 06:05:10.122665 containerd[1602]: time="2025-09-12T06:05:10.122603144Z" level=info msg="StartContainer for \"5cde23952d739bbf814ddfd7c359cecb560102485fd4846635b3be405ad29186\" returns successfully"
Sep 12 06:05:11.044962 kubelet[2758]: I0912 06:05:11.044898 2758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6b5c45568f-dl5wm" podStartSLOduration=27.443879556 podStartE2EDuration="36.044881672s" podCreationTimestamp="2025-09-12 06:04:35 +0000 UTC" firstStartedPulling="2025-09-12 06:05:01.41337704 +0000 UTC m=+44.054544747" lastFinishedPulling="2025-09-12 06:05:10.014379156 +0000 UTC m=+52.655546863" observedRunningTime="2025-09-12 06:05:11.043968638 +0000 UTC m=+53.685136355" watchObservedRunningTime="2025-09-12 06:05:11.044881672 +0000 UTC m=+53.686049389"
Sep 12 06:05:11.074203 containerd[1602]: time="2025-09-12T06:05:11.074167863Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5cde23952d739bbf814ddfd7c359cecb560102485fd4846635b3be405ad29186\" id:\"6e6ffd6ef6e0981c2bfacf372570fc9d2d8b18a7b66f962691b3470a064eced8\" pid:5453 exited_at:{seconds:1757657111 nanos:73611369}"
Sep 12 06:05:11.905034 containerd[1602]: time="2025-09-12T06:05:11.904937962Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 06:05:11.905654 containerd[1602]: time="2025-09-12T06:05:11.905610564Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8760527"
Sep 12 06:05:11.906764 containerd[1602]: time="2025-09-12T06:05:11.906710378Z" level=info msg="ImageCreate event name:\"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 06:05:11.908534 containerd[1602]: time="2025-09-12T06:05:11.908495959Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 06:05:11.909017 containerd[1602]: time="2025-09-12T06:05:11.908983305Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"10253230\" in 1.89446609s"
Sep 12 06:05:11.909017 containerd[1602]: time="2025-09-12T06:05:11.909011668Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\""
Sep 12 06:05:11.914601 containerd[1602]: time="2025-09-12T06:05:11.914172084Z" level=info msg="CreateContainer within sandbox \"0cd25ce3988efc5fffc72a600b717293f8bc5e0bed44470c78ce1ccb165b0873\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}"
Sep 12 06:05:11.923507 containerd[1602]: time="2025-09-12T06:05:11.923342247Z" level=info msg="Container 7489ada42155cbcdc217f65afb9bf57c9b1e7bf8333309772c257aab30fde80c: CDI devices from CRI Config.CDIDevices: []"
Sep 12 06:05:11.935810 containerd[1602]: time="2025-09-12T06:05:11.935759073Z" level=info msg="CreateContainer within sandbox \"0cd25ce3988efc5fffc72a600b717293f8bc5e0bed44470c78ce1ccb165b0873\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"7489ada42155cbcdc217f65afb9bf57c9b1e7bf8333309772c257aab30fde80c\""
Sep 12 06:05:11.936284 containerd[1602]: time="2025-09-12T06:05:11.936259693Z" level=info msg="StartContainer for \"7489ada42155cbcdc217f65afb9bf57c9b1e7bf8333309772c257aab30fde80c\""
Sep 12 06:05:11.937543 containerd[1602]: time="2025-09-12T06:05:11.937515690Z" level=info msg="connecting to shim 7489ada42155cbcdc217f65afb9bf57c9b1e7bf8333309772c257aab30fde80c" address="unix:///run/containerd/s/e2c31cd936f4c301df59cc91ea952b5bf035b632bd81bbea5e76214875fa2247" protocol=ttrpc version=3
Sep 12 06:05:11.961897 systemd[1]: Started cri-containerd-7489ada42155cbcdc217f65afb9bf57c9b1e7bf8333309772c257aab30fde80c.scope - libcontainer container 7489ada42155cbcdc217f65afb9bf57c9b1e7bf8333309772c257aab30fde80c.
Sep 12 06:05:12.004897 containerd[1602]: time="2025-09-12T06:05:12.004845971Z" level=info msg="StartContainer for \"7489ada42155cbcdc217f65afb9bf57c9b1e7bf8333309772c257aab30fde80c\" returns successfully"
Sep 12 06:05:12.005902 containerd[1602]: time="2025-09-12T06:05:12.005876876Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\""
Sep 12 06:05:13.388384 kubelet[2758]: I0912 06:05:13.388338 2758 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 12 06:05:13.847399 containerd[1602]: time="2025-09-12T06:05:13.847335507Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=14698542"
Sep 12 06:05:13.850731 containerd[1602]: time="2025-09-12T06:05:13.850688781Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"16191197\" in 1.844785655s"
Sep 12 06:05:13.850731 containerd[1602]: time="2025-09-12T06:05:13.850728695Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\""
Sep 12 06:05:13.853314 containerd[1602]: time="2025-09-12T06:05:13.853265166Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 06:05:13.854286 containerd[1602]: time="2025-09-12T06:05:13.854223485Z" level=info msg="ImageCreate event name:\"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 06:05:13.854983 containerd[1602]: time="2025-09-12T06:05:13.854940100Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 06:05:13.856968 containerd[1602]: time="2025-09-12T06:05:13.856909696Z" level=info msg="CreateContainer within sandbox \"0cd25ce3988efc5fffc72a600b717293f8bc5e0bed44470c78ce1ccb165b0873\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Sep 12 06:05:13.866520 containerd[1602]: time="2025-09-12T06:05:13.866472055Z" level=info msg="Container 45432ab8226fc310dc1826df02d70a44168294e1ee9740db9d71ae41bb3a3244: CDI devices from CRI Config.CDIDevices: []"
Sep 12 06:05:13.877457 containerd[1602]: time="2025-09-12T06:05:13.877420012Z" level=info msg="CreateContainer within sandbox \"0cd25ce3988efc5fffc72a600b717293f8bc5e0bed44470c78ce1ccb165b0873\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"45432ab8226fc310dc1826df02d70a44168294e1ee9740db9d71ae41bb3a3244\""
Sep 12 06:05:13.877843 containerd[1602]: time="2025-09-12T06:05:13.877814653Z" level=info msg="StartContainer for \"45432ab8226fc310dc1826df02d70a44168294e1ee9740db9d71ae41bb3a3244\""
Sep 12 06:05:13.879079 containerd[1602]: time="2025-09-12T06:05:13.879054399Z" level=info msg="connecting to shim 45432ab8226fc310dc1826df02d70a44168294e1ee9740db9d71ae41bb3a3244" address="unix:///run/containerd/s/e2c31cd936f4c301df59cc91ea952b5bf035b632bd81bbea5e76214875fa2247" protocol=ttrpc version=3
Sep 12 06:05:13.887570 systemd[1]: Started sshd@12-10.0.0.150:22-10.0.0.1:37734.service - OpenSSH per-connection server daemon (10.0.0.1:37734).
Sep 12 06:05:13.902986 systemd[1]: Started cri-containerd-45432ab8226fc310dc1826df02d70a44168294e1ee9740db9d71ae41bb3a3244.scope - libcontainer container 45432ab8226fc310dc1826df02d70a44168294e1ee9740db9d71ae41bb3a3244.
Sep 12 06:05:14.185854 sshd[5515]: Accepted publickey for core from 10.0.0.1 port 37734 ssh2: RSA SHA256:1ltQeNwsGZzWPatjm39NOSOhM7BVT7DhGn6/LONO9qE
Sep 12 06:05:14.188517 sshd-session[5515]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 06:05:14.189773 containerd[1602]: time="2025-09-12T06:05:14.189713382Z" level=info msg="StartContainer for \"45432ab8226fc310dc1826df02d70a44168294e1ee9740db9d71ae41bb3a3244\" returns successfully"
Sep 12 06:05:14.194148 systemd-logind[1580]: New session 13 of user core.
Sep 12 06:05:14.198779 systemd[1]: Started session-13.scope - Session 13 of User core.
Sep 12 06:05:14.357538 sshd[5545]: Connection closed by 10.0.0.1 port 37734
Sep 12 06:05:14.358899 sshd-session[5515]: pam_unix(sshd:session): session closed for user core
Sep 12 06:05:14.364077 systemd[1]: sshd@12-10.0.0.150:22-10.0.0.1:37734.service: Deactivated successfully.
Sep 12 06:05:14.366866 systemd[1]: session-13.scope: Deactivated successfully.
Sep 12 06:05:14.367803 systemd-logind[1580]: Session 13 logged out. Waiting for processes to exit.
Sep 12 06:05:14.369258 systemd-logind[1580]: Removed session 13.
Sep 12 06:05:14.543630 kubelet[2758]: I0912 06:05:14.543526 2758 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Sep 12 06:05:14.544576 kubelet[2758]: I0912 06:05:14.544544 2758 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Sep 12 06:05:15.202677 kubelet[2758]: I0912 06:05:15.202435 2758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-jggbx" podStartSLOduration=28.107643508 podStartE2EDuration="40.202262466s" podCreationTimestamp="2025-09-12 06:04:35 +0000 UTC" firstStartedPulling="2025-09-12 06:05:01.756792339 +0000 UTC m=+44.397960056" lastFinishedPulling="2025-09-12 06:05:13.851411297 +0000 UTC m=+56.492579014" observedRunningTime="2025-09-12 06:05:15.201523619 +0000 UTC m=+57.842691336" watchObservedRunningTime="2025-09-12 06:05:15.202262466 +0000 UTC m=+57.843430183"
Sep 12 06:05:19.378564 systemd[1]: Started sshd@13-10.0.0.150:22-10.0.0.1:37740.service - OpenSSH per-connection server daemon (10.0.0.1:37740).
Sep 12 06:05:19.441884 sshd[5566]: Accepted publickey for core from 10.0.0.1 port 37740 ssh2: RSA SHA256:1ltQeNwsGZzWPatjm39NOSOhM7BVT7DhGn6/LONO9qE
Sep 12 06:05:19.443620 sshd-session[5566]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 06:05:19.448051 systemd-logind[1580]: New session 14 of user core.
Sep 12 06:05:19.454764 systemd[1]: Started session-14.scope - Session 14 of User core.
Sep 12 06:05:19.581130 sshd[5569]: Connection closed by 10.0.0.1 port 37740
Sep 12 06:05:19.581499 sshd-session[5566]: pam_unix(sshd:session): session closed for user core
Sep 12 06:05:19.586333 systemd[1]: sshd@13-10.0.0.150:22-10.0.0.1:37740.service: Deactivated successfully.
Sep 12 06:05:19.588613 systemd[1]: session-14.scope: Deactivated successfully.
Sep 12 06:05:19.589469 systemd-logind[1580]: Session 14 logged out. Waiting for processes to exit.
Sep 12 06:05:19.590668 systemd-logind[1580]: Removed session 14.
Sep 12 06:05:24.052132 containerd[1602]: time="2025-09-12T06:05:24.052078894Z" level=info msg="TaskExit event in podsandbox handler container_id:\"161783db8b0b90af42a794808305f9b6a3b7e53f77a4ab77bec0067be40c506f\" id:\"65abfb4cba4ffb781e2fe98c30628e86e91a8f1c2df41e8f47e0e1d13db913d8\" pid:5603 exited_at:{seconds:1757657124 nanos:51760970}"
Sep 12 06:05:24.594491 systemd[1]: Started sshd@14-10.0.0.150:22-10.0.0.1:49034.service - OpenSSH per-connection server daemon (10.0.0.1:49034).
Sep 12 06:05:24.673620 sshd[5618]: Accepted publickey for core from 10.0.0.1 port 49034 ssh2: RSA SHA256:1ltQeNwsGZzWPatjm39NOSOhM7BVT7DhGn6/LONO9qE
Sep 12 06:05:24.676409 sshd-session[5618]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 06:05:24.681324 systemd-logind[1580]: New session 15 of user core.
Sep 12 06:05:24.686813 systemd[1]: Started session-15.scope - Session 15 of User core.
Sep 12 06:05:24.724856 containerd[1602]: time="2025-09-12T06:05:24.724817230Z" level=info msg="TaskExit event in podsandbox handler container_id:\"da8d50268aad3e9904db29fde9e704b1d961d80ad52b8247d80bbf7726ca1135\" id:\"520e3d8add06a88f31fc4e02b56ba877f5c31e84cb67f4a7e94b886e69d961e7\" pid:5634 exited_at:{seconds:1757657124 nanos:724530756}"
Sep 12 06:05:24.797964 sshd[5644]: Connection closed by 10.0.0.1 port 49034
Sep 12 06:05:24.798362 sshd-session[5618]: pam_unix(sshd:session): session closed for user core
Sep 12 06:05:24.803475 systemd[1]: sshd@14-10.0.0.150:22-10.0.0.1:49034.service: Deactivated successfully.
Sep 12 06:05:24.805603 systemd[1]: session-15.scope: Deactivated successfully.
Sep 12 06:05:24.806550 systemd-logind[1580]: Session 15 logged out. Waiting for processes to exit.
Sep 12 06:05:24.808065 systemd-logind[1580]: Removed session 15.
Sep 12 06:05:27.471989 kubelet[2758]: E0912 06:05:27.471933 2758 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 06:05:29.818770 systemd[1]: Started sshd@15-10.0.0.150:22-10.0.0.1:49050.service - OpenSSH per-connection server daemon (10.0.0.1:49050).
Sep 12 06:05:29.880336 sshd[5660]: Accepted publickey for core from 10.0.0.1 port 49050 ssh2: RSA SHA256:1ltQeNwsGZzWPatjm39NOSOhM7BVT7DhGn6/LONO9qE
Sep 12 06:05:29.881593 sshd-session[5660]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 06:05:29.886313 systemd-logind[1580]: New session 16 of user core.
Sep 12 06:05:29.893756 systemd[1]: Started session-16.scope - Session 16 of User core.
Sep 12 06:05:30.006539 sshd[5663]: Connection closed by 10.0.0.1 port 49050
Sep 12 06:05:30.006940 sshd-session[5660]: pam_unix(sshd:session): session closed for user core
Sep 12 06:05:30.010925 systemd[1]: sshd@15-10.0.0.150:22-10.0.0.1:49050.service: Deactivated successfully.
Sep 12 06:05:30.013195 systemd[1]: session-16.scope: Deactivated successfully.
Sep 12 06:05:30.015418 systemd-logind[1580]: Session 16 logged out. Waiting for processes to exit.
Sep 12 06:05:30.016374 systemd-logind[1580]: Removed session 16.
Sep 12 06:05:30.471683 kubelet[2758]: E0912 06:05:30.471617 2758 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 06:05:34.471773 kubelet[2758]: E0912 06:05:34.471731 2758 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 06:05:35.023697 systemd[1]: Started sshd@16-10.0.0.150:22-10.0.0.1:56642.service - OpenSSH per-connection server daemon (10.0.0.1:56642).
Sep 12 06:05:35.077177 sshd[5677]: Accepted publickey for core from 10.0.0.1 port 56642 ssh2: RSA SHA256:1ltQeNwsGZzWPatjm39NOSOhM7BVT7DhGn6/LONO9qE
Sep 12 06:05:35.078487 sshd-session[5677]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 06:05:35.082839 systemd-logind[1580]: New session 17 of user core.
Sep 12 06:05:35.090795 systemd[1]: Started session-17.scope - Session 17 of User core.
Sep 12 06:05:35.232933 sshd[5680]: Connection closed by 10.0.0.1 port 56642
Sep 12 06:05:35.235861 sshd-session[5677]: pam_unix(sshd:session): session closed for user core
Sep 12 06:05:35.247935 systemd[1]: Started sshd@17-10.0.0.150:22-10.0.0.1:56654.service - OpenSSH per-connection server daemon (10.0.0.1:56654).
Sep 12 06:05:35.249284 systemd[1]: sshd@16-10.0.0.150:22-10.0.0.1:56642.service: Deactivated successfully.
Sep 12 06:05:35.253191 systemd[1]: session-17.scope: Deactivated successfully.
Sep 12 06:05:35.256319 systemd-logind[1580]: Session 17 logged out. Waiting for processes to exit.
Sep 12 06:05:35.257766 systemd-logind[1580]: Removed session 17.
Sep 12 06:05:35.301218 sshd[5690]: Accepted publickey for core from 10.0.0.1 port 56654 ssh2: RSA SHA256:1ltQeNwsGZzWPatjm39NOSOhM7BVT7DhGn6/LONO9qE
Sep 12 06:05:35.303012 sshd-session[5690]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 06:05:35.308691 systemd-logind[1580]: New session 18 of user core.
Sep 12 06:05:35.318785 systemd[1]: Started session-18.scope - Session 18 of User core.
Sep 12 06:05:35.548499 sshd[5696]: Connection closed by 10.0.0.1 port 56654
Sep 12 06:05:35.548926 sshd-session[5690]: pam_unix(sshd:session): session closed for user core
Sep 12 06:05:35.560061 systemd[1]: sshd@17-10.0.0.150:22-10.0.0.1:56654.service: Deactivated successfully.
Sep 12 06:05:35.562261 systemd[1]: session-18.scope: Deactivated successfully.
Sep 12 06:05:35.564795 systemd-logind[1580]: Session 18 logged out. Waiting for processes to exit.
Sep 12 06:05:35.566417 systemd[1]: Started sshd@18-10.0.0.150:22-10.0.0.1:56658.service - OpenSSH per-connection server daemon (10.0.0.1:56658).
Sep 12 06:05:35.567621 systemd-logind[1580]: Removed session 18.
Sep 12 06:05:35.646285 sshd[5708]: Accepted publickey for core from 10.0.0.1 port 56658 ssh2: RSA SHA256:1ltQeNwsGZzWPatjm39NOSOhM7BVT7DhGn6/LONO9qE
Sep 12 06:05:35.651319 sshd-session[5708]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 06:05:35.661807 systemd-logind[1580]: New session 19 of user core.
Sep 12 06:05:35.671034 systemd[1]: Started session-19.scope - Session 19 of User core.
Sep 12 06:05:36.182384 sshd[5711]: Connection closed by 10.0.0.1 port 56658
Sep 12 06:05:36.182891 sshd-session[5708]: pam_unix(sshd:session): session closed for user core
Sep 12 06:05:36.195405 systemd[1]: sshd@18-10.0.0.150:22-10.0.0.1:56658.service: Deactivated successfully.
Sep 12 06:05:36.198480 systemd[1]: session-19.scope: Deactivated successfully.
Sep 12 06:05:36.200737 systemd-logind[1580]: Session 19 logged out. Waiting for processes to exit.
Sep 12 06:05:36.203389 systemd[1]: Started sshd@19-10.0.0.150:22-10.0.0.1:56660.service - OpenSSH per-connection server daemon (10.0.0.1:56660).
Sep 12 06:05:36.204819 systemd-logind[1580]: Removed session 19.
Sep 12 06:05:36.257346 sshd[5734]: Accepted publickey for core from 10.0.0.1 port 56660 ssh2: RSA SHA256:1ltQeNwsGZzWPatjm39NOSOhM7BVT7DhGn6/LONO9qE
Sep 12 06:05:36.258621 sshd-session[5734]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 06:05:36.263110 systemd-logind[1580]: New session 20 of user core.
Sep 12 06:05:36.269763 systemd[1]: Started session-20.scope - Session 20 of User core.
Sep 12 06:05:36.471344 kubelet[2758]: E0912 06:05:36.471216 2758 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 06:05:36.591289 sshd[5738]: Connection closed by 10.0.0.1 port 56660
Sep 12 06:05:36.591746 sshd-session[5734]: pam_unix(sshd:session): session closed for user core
Sep 12 06:05:36.602836 systemd[1]: sshd@19-10.0.0.150:22-10.0.0.1:56660.service: Deactivated successfully.
Sep 12 06:05:36.605104 systemd[1]: session-20.scope: Deactivated successfully.
Sep 12 06:05:36.606870 systemd-logind[1580]: Session 20 logged out. Waiting for processes to exit.
Sep 12 06:05:36.612877 systemd[1]: Started sshd@20-10.0.0.150:22-10.0.0.1:56668.service - OpenSSH per-connection server daemon (10.0.0.1:56668).
Sep 12 06:05:36.613819 systemd-logind[1580]: Removed session 20.
Sep 12 06:05:36.659008 sshd[5749]: Accepted publickey for core from 10.0.0.1 port 56668 ssh2: RSA SHA256:1ltQeNwsGZzWPatjm39NOSOhM7BVT7DhGn6/LONO9qE
Sep 12 06:05:36.660825 sshd-session[5749]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 06:05:36.665825 systemd-logind[1580]: New session 21 of user core.
Sep 12 06:05:36.680822 systemd[1]: Started session-21.scope - Session 21 of User core.
Sep 12 06:05:36.794033 sshd[5752]: Connection closed by 10.0.0.1 port 56668
Sep 12 06:05:36.794289 sshd-session[5749]: pam_unix(sshd:session): session closed for user core
Sep 12 06:05:36.799412 systemd[1]: sshd@20-10.0.0.150:22-10.0.0.1:56668.service: Deactivated successfully.
Sep 12 06:05:36.801620 systemd[1]: session-21.scope: Deactivated successfully.
Sep 12 06:05:36.802670 systemd-logind[1580]: Session 21 logged out. Waiting for processes to exit.
Sep 12 06:05:36.803821 systemd-logind[1580]: Removed session 21.
Sep 12 06:05:38.102677 containerd[1602]: time="2025-09-12T06:05:38.102611337Z" level=info msg="TaskExit event in podsandbox handler container_id:\"da8d50268aad3e9904db29fde9e704b1d961d80ad52b8247d80bbf7726ca1135\" id:\"3936010adee37c54e9828e54650aa5ad45e7c3fca06277d363efbc8b05c643ca\" pid:5777 exited_at:{seconds:1757657138 nanos:102314168}"
Sep 12 06:05:41.075194 containerd[1602]: time="2025-09-12T06:05:41.075055189Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5cde23952d739bbf814ddfd7c359cecb560102485fd4846635b3be405ad29186\" id:\"3c1b3df6ff95033663659acac84223e115d55fe189645391b6bb44f8a63518f1\" pid:5808 exited_at:{seconds:1757657141 nanos:74868722}"
Sep 12 06:05:41.806654 systemd[1]: Started sshd@21-10.0.0.150:22-10.0.0.1:37752.service - OpenSSH per-connection server daemon (10.0.0.1:37752).
Sep 12 06:05:41.857135 sshd[5823]: Accepted publickey for core from 10.0.0.1 port 37752 ssh2: RSA SHA256:1ltQeNwsGZzWPatjm39NOSOhM7BVT7DhGn6/LONO9qE
Sep 12 06:05:41.858377 sshd-session[5823]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 06:05:41.862715 systemd-logind[1580]: New session 22 of user core.
Sep 12 06:05:41.872754 systemd[1]: Started session-22.scope - Session 22 of User core.
Sep 12 06:05:41.986809 sshd[5826]: Connection closed by 10.0.0.1 port 37752
Sep 12 06:05:41.987145 sshd-session[5823]: pam_unix(sshd:session): session closed for user core
Sep 12 06:05:41.991992 systemd[1]: sshd@21-10.0.0.150:22-10.0.0.1:37752.service: Deactivated successfully.
Sep 12 06:05:41.994260 systemd[1]: session-22.scope: Deactivated successfully.
Sep 12 06:05:41.995162 systemd-logind[1580]: Session 22 logged out. Waiting for processes to exit.
Sep 12 06:05:41.996345 systemd-logind[1580]: Removed session 22.
Sep 12 06:05:47.004339 systemd[1]: Started sshd@22-10.0.0.150:22-10.0.0.1:37758.service - OpenSSH per-connection server daemon (10.0.0.1:37758).
Sep 12 06:05:47.054250 sshd[5839]: Accepted publickey for core from 10.0.0.1 port 37758 ssh2: RSA SHA256:1ltQeNwsGZzWPatjm39NOSOhM7BVT7DhGn6/LONO9qE
Sep 12 06:05:47.055458 sshd-session[5839]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 06:05:47.059514 systemd-logind[1580]: New session 23 of user core.
Sep 12 06:05:47.066773 systemd[1]: Started session-23.scope - Session 23 of User core.
Sep 12 06:05:47.175273 sshd[5842]: Connection closed by 10.0.0.1 port 37758
Sep 12 06:05:47.175660 sshd-session[5839]: pam_unix(sshd:session): session closed for user core
Sep 12 06:05:47.180335 systemd[1]: sshd@22-10.0.0.150:22-10.0.0.1:37758.service: Deactivated successfully.
Sep 12 06:05:47.182719 systemd[1]: session-23.scope: Deactivated successfully.
Sep 12 06:05:47.183504 systemd-logind[1580]: Session 23 logged out. Waiting for processes to exit.
Sep 12 06:05:47.184669 systemd-logind[1580]: Removed session 23.
Sep 12 06:05:50.415892 containerd[1602]: time="2025-09-12T06:05:50.415846854Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5cde23952d739bbf814ddfd7c359cecb560102485fd4846635b3be405ad29186\" id:\"fb3e98ab07295d0bdaa1ee6a1f09e184128de4389fbc41d54e2fe5e7c32fce13\" pid:5865 exited_at:{seconds:1757657150 nanos:415580748}"
Sep 12 06:05:52.187816 systemd[1]: Started sshd@23-10.0.0.150:22-10.0.0.1:44618.service - OpenSSH per-connection server daemon (10.0.0.1:44618).
Sep 12 06:05:52.267666 sshd[5877]: Accepted publickey for core from 10.0.0.1 port 44618 ssh2: RSA SHA256:1ltQeNwsGZzWPatjm39NOSOhM7BVT7DhGn6/LONO9qE
Sep 12 06:05:52.274659 sshd-session[5877]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 06:05:52.283772 systemd-logind[1580]: New session 24 of user core.
Sep 12 06:05:52.290817 systemd[1]: Started session-24.scope - Session 24 of User core.
Sep 12 06:05:52.461718 sshd[5880]: Connection closed by 10.0.0.1 port 44618
Sep 12 06:05:52.463835 sshd-session[5877]: pam_unix(sshd:session): session closed for user core
Sep 12 06:05:52.467964 systemd[1]: sshd@23-10.0.0.150:22-10.0.0.1:44618.service: Deactivated successfully.
Sep 12 06:05:52.470252 systemd[1]: session-24.scope: Deactivated successfully.
Sep 12 06:05:52.471247 systemd-logind[1580]: Session 24 logged out. Waiting for processes to exit.
Sep 12 06:05:52.472749 systemd-logind[1580]: Removed session 24.
Sep 12 06:05:54.052695 containerd[1602]: time="2025-09-12T06:05:54.052523803Z" level=info msg="TaskExit event in podsandbox handler container_id:\"161783db8b0b90af42a794808305f9b6a3b7e53f77a4ab77bec0067be40c506f\" id:\"dcd713c99bbc96e8f4da5e5b2f356001d21115bde1bd79a49289093a433b9ff0\" pid:5903 exited_at:{seconds:1757657154 nanos:52220848}"