Sep 5 00:36:32.847505 kernel: Linux version 6.12.44-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Thu Sep 4 22:12:48 -00 2025
Sep 5 00:36:32.847533 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=5ddbf8d117777441d6c5be3659126fb3de7a68afc9e620e02a4b6c5a60c1c503
Sep 5 00:36:32.847542 kernel: BIOS-provided physical RAM map:
Sep 5 00:36:32.847549 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Sep 5 00:36:32.847555 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Sep 5 00:36:32.847562 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Sep 5 00:36:32.847569 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Sep 5 00:36:32.847576 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Sep 5 00:36:32.847587 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Sep 5 00:36:32.847594 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Sep 5 00:36:32.847601 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Sep 5 00:36:32.847607 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Sep 5 00:36:32.847614 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Sep 5 00:36:32.847620 kernel: NX (Execute Disable) protection: active
Sep 5 00:36:32.847631 kernel: APIC: Static calls initialized
Sep 5 00:36:32.847638 kernel: SMBIOS 2.8 present.
Sep 5 00:36:32.847648 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Sep 5 00:36:32.847655 kernel: DMI: Memory slots populated: 1/1
Sep 5 00:36:32.847662 kernel: Hypervisor detected: KVM
Sep 5 00:36:32.847669 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Sep 5 00:36:32.847676 kernel: kvm-clock: using sched offset of 4674971471 cycles
Sep 5 00:36:32.847684 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Sep 5 00:36:32.847691 kernel: tsc: Detected 2794.748 MHz processor
Sep 5 00:36:32.847701 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 5 00:36:32.847708 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 5 00:36:32.847716 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Sep 5 00:36:32.847723 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Sep 5 00:36:32.847730 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 5 00:36:32.847738 kernel: Using GB pages for direct mapping
Sep 5 00:36:32.847745 kernel: ACPI: Early table checksum verification disabled
Sep 5 00:36:32.847752 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Sep 5 00:36:32.847759 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 5 00:36:32.847769 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 5 00:36:32.847776 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 5 00:36:32.847783 kernel: ACPI: FACS 0x000000009CFE0000 000040
Sep 5 00:36:32.847790 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 5 00:36:32.847798 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 5 00:36:32.847805 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 5 00:36:32.847812 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 5 00:36:32.847820 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Sep 5 00:36:32.847833 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Sep 5 00:36:32.847840 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Sep 5 00:36:32.847848 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Sep 5 00:36:32.847855 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Sep 5 00:36:32.847863 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Sep 5 00:36:32.847870 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Sep 5 00:36:32.847880 kernel: No NUMA configuration found
Sep 5 00:36:32.847902 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Sep 5 00:36:32.847909 kernel: NODE_DATA(0) allocated [mem 0x9cfd4dc0-0x9cfdbfff]
Sep 5 00:36:32.847917 kernel: Zone ranges:
Sep 5 00:36:32.847927 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 5 00:36:32.847943 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Sep 5 00:36:32.847953 kernel: Normal empty
Sep 5 00:36:32.847965 kernel: Device empty
Sep 5 00:36:32.847977 kernel: Movable zone start for each node
Sep 5 00:36:32.847986 kernel: Early memory node ranges
Sep 5 00:36:32.847999 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Sep 5 00:36:32.848008 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Sep 5 00:36:32.848016 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Sep 5 00:36:32.848026 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 5 00:36:32.848035 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Sep 5 00:36:32.848044 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Sep 5 00:36:32.848053 kernel: ACPI: PM-Timer IO Port: 0x608
Sep 5 00:36:32.848066 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Sep 5 00:36:32.848074 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Sep 5 00:36:32.848086 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Sep 5 00:36:32.848095 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Sep 5 00:36:32.848108 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 5 00:36:32.848116 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Sep 5 00:36:32.848124 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Sep 5 00:36:32.848131 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 5 00:36:32.848139 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Sep 5 00:36:32.848146 kernel: TSC deadline timer available
Sep 5 00:36:32.848153 kernel: CPU topo: Max. logical packages: 1
Sep 5 00:36:32.848163 kernel: CPU topo: Max. logical dies: 1
Sep 5 00:36:32.848170 kernel: CPU topo: Max. dies per package: 1
Sep 5 00:36:32.848178 kernel: CPU topo: Max. threads per core: 1
Sep 5 00:36:32.848185 kernel: CPU topo: Num. cores per package: 4
Sep 5 00:36:32.848192 kernel: CPU topo: Num. threads per package: 4
Sep 5 00:36:32.848200 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Sep 5 00:36:32.848207 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Sep 5 00:36:32.848215 kernel: kvm-guest: KVM setup pv remote TLB flush
Sep 5 00:36:32.848224 kernel: kvm-guest: setup PV sched yield
Sep 5 00:36:32.848242 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Sep 5 00:36:32.848250 kernel: Booting paravirtualized kernel on KVM
Sep 5 00:36:32.848258 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 5 00:36:32.848265 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Sep 5 00:36:32.848273 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Sep 5 00:36:32.848280 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Sep 5 00:36:32.848287 kernel: pcpu-alloc: [0] 0 1 2 3
Sep 5 00:36:32.848294 kernel: kvm-guest: PV spinlocks enabled
Sep 5 00:36:32.848302 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Sep 5 00:36:32.848313 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=5ddbf8d117777441d6c5be3659126fb3de7a68afc9e620e02a4b6c5a60c1c503
Sep 5 00:36:32.848321 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 5 00:36:32.848328 kernel: random: crng init done
Sep 5 00:36:32.848336 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 5 00:36:32.848344 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 5 00:36:32.848351 kernel: Fallback order for Node 0: 0
Sep 5 00:36:32.848358 kernel: Built 1 zonelists, mobility grouping on. Total pages: 642938
Sep 5 00:36:32.848366 kernel: Policy zone: DMA32
Sep 5 00:36:32.848380 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 5 00:36:32.848387 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Sep 5 00:36:32.848404 kernel: ftrace: allocating 40102 entries in 157 pages
Sep 5 00:36:32.848412 kernel: ftrace: allocated 157 pages with 5 groups
Sep 5 00:36:32.848419 kernel: Dynamic Preempt: voluntary
Sep 5 00:36:32.848427 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 5 00:36:32.848435 kernel: rcu: RCU event tracing is enabled.
Sep 5 00:36:32.848442 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Sep 5 00:36:32.848450 kernel: Trampoline variant of Tasks RCU enabled.
Sep 5 00:36:32.848462 kernel: Rude variant of Tasks RCU enabled.
Sep 5 00:36:32.848470 kernel: Tracing variant of Tasks RCU enabled.
Sep 5 00:36:32.848478 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 5 00:36:32.848485 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Sep 5 00:36:32.848493 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 5 00:36:32.848500 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 5 00:36:32.848508 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 5 00:36:32.848515 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Sep 5 00:36:32.848523 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 5 00:36:32.848540 kernel: Console: colour VGA+ 80x25
Sep 5 00:36:32.848548 kernel: printk: legacy console [ttyS0] enabled
Sep 5 00:36:32.848556 kernel: ACPI: Core revision 20240827
Sep 5 00:36:32.848566 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Sep 5 00:36:32.848574 kernel: APIC: Switch to symmetric I/O mode setup
Sep 5 00:36:32.848582 kernel: x2apic enabled
Sep 5 00:36:32.848590 kernel: APIC: Switched APIC routing to: physical x2apic
Sep 5 00:36:32.848600 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Sep 5 00:36:32.848611 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Sep 5 00:36:32.848621 kernel: kvm-guest: setup PV IPIs
Sep 5 00:36:32.848629 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Sep 5 00:36:32.848638 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
Sep 5 00:36:32.848649 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Sep 5 00:36:32.848665 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Sep 5 00:36:32.848674 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Sep 5 00:36:32.848682 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Sep 5 00:36:32.848690 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 5 00:36:32.848701 kernel: Spectre V2 : Mitigation: Retpolines
Sep 5 00:36:32.848709 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Sep 5 00:36:32.848716 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Sep 5 00:36:32.848724 kernel: active return thunk: retbleed_return_thunk
Sep 5 00:36:32.848732 kernel: RETBleed: Mitigation: untrained return thunk
Sep 5 00:36:32.848739 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Sep 5 00:36:32.848747 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Sep 5 00:36:32.848755 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Sep 5 00:36:32.848770 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Sep 5 00:36:32.848778 kernel: active return thunk: srso_return_thunk
Sep 5 00:36:32.848785 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Sep 5 00:36:32.848793 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 5 00:36:32.848801 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 5 00:36:32.848809 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 5 00:36:32.848816 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 5 00:36:32.848824 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Sep 5 00:36:32.848832 kernel: Freeing SMP alternatives memory: 32K
Sep 5 00:36:32.848841 kernel: pid_max: default: 32768 minimum: 301
Sep 5 00:36:32.848849 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Sep 5 00:36:32.848857 kernel: landlock: Up and running.
Sep 5 00:36:32.848864 kernel: SELinux: Initializing.
Sep 5 00:36:32.848874 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 5 00:36:32.848882 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 5 00:36:32.848911 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Sep 5 00:36:32.848919 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Sep 5 00:36:32.848926 kernel: ... version: 0
Sep 5 00:36:32.848937 kernel: ... bit width: 48
Sep 5 00:36:32.848944 kernel: ... generic registers: 6
Sep 5 00:36:32.848952 kernel: ... value mask: 0000ffffffffffff
Sep 5 00:36:32.848960 kernel: ... max period: 00007fffffffffff
Sep 5 00:36:32.848968 kernel: ... fixed-purpose events: 0
Sep 5 00:36:32.848975 kernel: ... event mask: 000000000000003f
Sep 5 00:36:32.848983 kernel: signal: max sigframe size: 1776
Sep 5 00:36:32.848991 kernel: rcu: Hierarchical SRCU implementation.
Sep 5 00:36:32.848999 kernel: rcu: Max phase no-delay instances is 400.
Sep 5 00:36:32.849009 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Sep 5 00:36:32.849020 kernel: smp: Bringing up secondary CPUs ...
Sep 5 00:36:32.849028 kernel: smpboot: x86: Booting SMP configuration:
Sep 5 00:36:32.849036 kernel: .... node #0, CPUs: #1 #2 #3
Sep 5 00:36:32.849043 kernel: smp: Brought up 1 node, 4 CPUs
Sep 5 00:36:32.849052 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Sep 5 00:36:32.849061 kernel: Memory: 2428916K/2571752K available (14336K kernel code, 2428K rwdata, 9956K rodata, 54044K init, 2924K bss, 136904K reserved, 0K cma-reserved)
Sep 5 00:36:32.849069 kernel: devtmpfs: initialized
Sep 5 00:36:32.849077 kernel: x86/mm: Memory block size: 128MB
Sep 5 00:36:32.849088 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 5 00:36:32.849096 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Sep 5 00:36:32.849104 kernel: pinctrl core: initialized pinctrl subsystem
Sep 5 00:36:32.849112 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 5 00:36:32.849120 kernel: audit: initializing netlink subsys (disabled)
Sep 5 00:36:32.849128 kernel: audit: type=2000 audit(1757032589.312:1): state=initialized audit_enabled=0 res=1
Sep 5 00:36:32.849135 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 5 00:36:32.849143 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 5 00:36:32.849151 kernel: cpuidle: using governor menu
Sep 5 00:36:32.849161 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 5 00:36:32.849169 kernel: dca service started, version 1.12.1
Sep 5 00:36:32.849177 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Sep 5 00:36:32.849185 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Sep 5 00:36:32.849192 kernel: PCI: Using configuration type 1 for base access
Sep 5 00:36:32.849200 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 5 00:36:32.849208 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 5 00:36:32.849216 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Sep 5 00:36:32.849224 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 5 00:36:32.849234 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Sep 5 00:36:32.849242 kernel: ACPI: Added _OSI(Module Device)
Sep 5 00:36:32.849250 kernel: ACPI: Added _OSI(Processor Device)
Sep 5 00:36:32.849257 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 5 00:36:32.849265 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 5 00:36:32.849275 kernel: ACPI: Interpreter enabled
Sep 5 00:36:32.849285 kernel: ACPI: PM: (supports S0 S3 S5)
Sep 5 00:36:32.849294 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 5 00:36:32.849304 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 5 00:36:32.849317 kernel: PCI: Using E820 reservations for host bridge windows
Sep 5 00:36:32.849327 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Sep 5 00:36:32.849337 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 5 00:36:32.849656 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 5 00:36:32.849788 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Sep 5 00:36:32.849933 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Sep 5 00:36:32.849946 kernel: PCI host bridge to bus 0000:00
Sep 5 00:36:32.850094 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Sep 5 00:36:32.850210 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Sep 5 00:36:32.850347 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 5 00:36:32.850479 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Sep 5 00:36:32.850600 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Sep 5 00:36:32.850710 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Sep 5 00:36:32.850819 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 5 00:36:32.850999 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Sep 5 00:36:32.851140 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Sep 5 00:36:32.851262 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
Sep 5 00:36:32.851380 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
Sep 5 00:36:32.851525 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
Sep 5 00:36:32.851646 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Sep 5 00:36:32.851787 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Sep 5 00:36:32.851933 kernel: pci 0000:00:02.0: BAR 0 [io 0xc0c0-0xc0df]
Sep 5 00:36:32.852057 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
Sep 5 00:36:32.852177 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
Sep 5 00:36:32.852318 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Sep 5 00:36:32.852473 kernel: pci 0000:00:03.0: BAR 0 [io 0xc000-0xc07f]
Sep 5 00:36:32.852597 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
Sep 5 00:36:32.852723 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
Sep 5 00:36:32.852864 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Sep 5 00:36:32.853015 kernel: pci 0000:00:04.0: BAR 0 [io 0xc0e0-0xc0ff]
Sep 5 00:36:32.853143 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebd3000-0xfebd3fff]
Sep 5 00:36:32.853266 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe008000-0xfe00bfff 64bit pref]
Sep 5 00:36:32.853444 kernel: pci 0000:00:04.0: ROM [mem 0xfeb80000-0xfebbffff pref]
Sep 5 00:36:32.853704 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Sep 5 00:36:32.853834 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Sep 5 00:36:32.853994 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Sep 5 00:36:32.854142 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc100-0xc11f]
Sep 5 00:36:32.854269 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd4000-0xfebd4fff]
Sep 5 00:36:32.854436 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Sep 5 00:36:32.854575 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Sep 5 00:36:32.854592 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Sep 5 00:36:32.854600 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Sep 5 00:36:32.854609 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Sep 5 00:36:32.854617 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Sep 5 00:36:32.854625 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Sep 5 00:36:32.854633 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Sep 5 00:36:32.854642 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Sep 5 00:36:32.854650 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Sep 5 00:36:32.854658 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Sep 5 00:36:32.854668 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Sep 5 00:36:32.854677 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Sep 5 00:36:32.854685 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Sep 5 00:36:32.854693 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Sep 5 00:36:32.854701 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Sep 5 00:36:32.854710 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Sep 5 00:36:32.854718 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Sep 5 00:36:32.854726 kernel: iommu: Default domain type: Translated
Sep 5 00:36:32.854735 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 5 00:36:32.854745 kernel: PCI: Using ACPI for IRQ routing
Sep 5 00:36:32.854753 kernel: PCI: pci_cache_line_size set to 64 bytes
Sep 5 00:36:32.854761 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Sep 5 00:36:32.854769 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Sep 5 00:36:32.854944 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Sep 5 00:36:32.855070 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Sep 5 00:36:32.855326 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Sep 5 00:36:32.855339 kernel: vgaarb: loaded
Sep 5 00:36:32.855347 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Sep 5 00:36:32.855360 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Sep 5 00:36:32.855368 kernel: clocksource: Switched to clocksource kvm-clock
Sep 5 00:36:32.855376 kernel: VFS: Disk quotas dquot_6.6.0
Sep 5 00:36:32.855384 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 5 00:36:32.855392 kernel: pnp: PnP ACPI init
Sep 5 00:36:32.855569 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Sep 5 00:36:32.855582 kernel: pnp: PnP ACPI: found 6 devices
Sep 5 00:36:32.855590 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 5 00:36:32.855602 kernel: NET: Registered PF_INET protocol family
Sep 5 00:36:32.855610 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 5 00:36:32.855618 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 5 00:36:32.855626 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 5 00:36:32.855634 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 5 00:36:32.855642 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 5 00:36:32.855650 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 5 00:36:32.855658 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 5 00:36:32.855666 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 5 00:36:32.855676 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 5 00:36:32.855684 kernel: NET: Registered PF_XDP protocol family
Sep 5 00:36:32.855810 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Sep 5 00:36:32.855973 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Sep 5 00:36:32.856089 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Sep 5 00:36:32.856210 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Sep 5 00:36:32.856323 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Sep 5 00:36:32.856464 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Sep 5 00:36:32.856482 kernel: PCI: CLS 0 bytes, default 64
Sep 5 00:36:32.856490 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
Sep 5 00:36:32.856499 kernel: Initialise system trusted keyrings
Sep 5 00:36:32.856507 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 5 00:36:32.856515 kernel: Key type asymmetric registered
Sep 5 00:36:32.856523 kernel: Asymmetric key parser 'x509' registered
Sep 5 00:36:32.856531 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Sep 5 00:36:32.856539 kernel: io scheduler mq-deadline registered
Sep 5 00:36:32.856548 kernel: io scheduler kyber registered
Sep 5 00:36:32.856558 kernel: io scheduler bfq registered
Sep 5 00:36:32.856566 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 5 00:36:32.856575 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Sep 5 00:36:32.856583 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Sep 5 00:36:32.856591 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Sep 5 00:36:32.856599 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 5 00:36:32.856607 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 5 00:36:32.856615 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Sep 5 00:36:32.856623 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Sep 5 00:36:32.856634 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Sep 5 00:36:32.856773 kernel: rtc_cmos 00:04: RTC can wake from S4
Sep 5 00:36:32.856785 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Sep 5 00:36:32.856917 kernel: rtc_cmos 00:04: registered as rtc0
Sep 5 00:36:32.857037 kernel: rtc_cmos 00:04: setting system clock to 2025-09-05T00:36:32 UTC (1757032592)
Sep 5 00:36:32.857156 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Sep 5 00:36:32.857166 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Sep 5 00:36:32.857174 kernel: NET: Registered PF_INET6 protocol family
Sep 5 00:36:32.857185 kernel: Segment Routing with IPv6
Sep 5 00:36:32.857194 kernel: In-situ OAM (IOAM) with IPv6
Sep 5 00:36:32.857202 kernel: NET: Registered PF_PACKET protocol family
Sep 5 00:36:32.857210 kernel: Key type dns_resolver registered
Sep 5 00:36:32.857218 kernel: IPI shorthand broadcast: enabled
Sep 5 00:36:32.857226 kernel: sched_clock: Marking stable (3409002535, 132078426)->(3565823732, -24742771)
Sep 5 00:36:32.857234 kernel: registered taskstats version 1
Sep 5 00:36:32.857242 kernel: Loading compiled-in X.509 certificates
Sep 5 00:36:32.857250 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.44-flatcar: 55c9ce6358d6eed45ca94030a2308729ee6a249f'
Sep 5 00:36:32.857261 kernel: Demotion targets for Node 0: null
Sep 5 00:36:32.857269 kernel: Key type .fscrypt registered
Sep 5 00:36:32.857277 kernel: Key type fscrypt-provisioning registered
Sep 5 00:36:32.857287 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 5 00:36:32.857310 kernel: ima: Allocated hash algorithm: sha1
Sep 5 00:36:32.857330 kernel: ima: No architecture policies found
Sep 5 00:36:32.857354 kernel: clk: Disabling unused clocks
Sep 5 00:36:32.857365 kernel: Warning: unable to open an initial console.
Sep 5 00:36:32.857377 kernel: Freeing unused kernel image (initmem) memory: 54044K
Sep 5 00:36:32.857385 kernel: Write protecting the kernel read-only data: 24576k
Sep 5 00:36:32.857403 kernel: Freeing unused kernel image (rodata/data gap) memory: 284K
Sep 5 00:36:32.857413 kernel: Run /init as init process
Sep 5 00:36:32.857423 kernel: with arguments:
Sep 5 00:36:32.857433 kernel: /init
Sep 5 00:36:32.857450 kernel: with environment:
Sep 5 00:36:32.857459 kernel: HOME=/
Sep 5 00:36:32.857466 kernel: TERM=linux
Sep 5 00:36:32.857474 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 5 00:36:32.857491 systemd[1]: Successfully made /usr/ read-only.
Sep 5 00:36:32.857514 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 5 00:36:32.857525 systemd[1]: Detected virtualization kvm.
Sep 5 00:36:32.857534 systemd[1]: Detected architecture x86-64.
Sep 5 00:36:32.857542 systemd[1]: Running in initrd.
Sep 5 00:36:32.857553 systemd[1]: No hostname configured, using default hostname.
Sep 5 00:36:32.857562 systemd[1]: Hostname set to .
Sep 5 00:36:32.857570 systemd[1]: Initializing machine ID from VM UUID.
Sep 5 00:36:32.857578 systemd[1]: Queued start job for default target initrd.target.
Sep 5 00:36:32.857587 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 5 00:36:32.857596 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 5 00:36:32.857605 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 5 00:36:32.857614 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 5 00:36:32.857625 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 5 00:36:32.857635 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 5 00:36:32.857645 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 5 00:36:32.857653 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 5 00:36:32.857662 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 5 00:36:32.857671 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 5 00:36:32.857679 systemd[1]: Reached target paths.target - Path Units.
Sep 5 00:36:32.857690 systemd[1]: Reached target slices.target - Slice Units.
Sep 5 00:36:32.857699 systemd[1]: Reached target swap.target - Swaps.
Sep 5 00:36:32.857707 systemd[1]: Reached target timers.target - Timer Units.
Sep 5 00:36:32.857716 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 5 00:36:32.857725 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 5 00:36:32.857733 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 5 00:36:32.857746 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Sep 5 00:36:32.857755 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 5 00:36:32.857766 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 5 00:36:32.857775 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 5 00:36:32.857784 systemd[1]: Reached target sockets.target - Socket Units.
Sep 5 00:36:32.857792 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 5 00:36:32.857801 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 5 00:36:32.857812 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 5 00:36:32.857823 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Sep 5 00:36:32.857832 systemd[1]: Starting systemd-fsck-usr.service...
Sep 5 00:36:32.857841 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 5 00:36:32.857849 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 5 00:36:32.857858 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 5 00:36:32.857867 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 5 00:36:32.857879 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 5 00:36:32.857908 systemd[1]: Finished systemd-fsck-usr.service.
Sep 5 00:36:32.857917 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 5 00:36:32.857962 systemd-journald[220]: Collecting audit messages is disabled.
Sep 5 00:36:32.857992 systemd-journald[220]: Journal started
Sep 5 00:36:32.858011 systemd-journald[220]: Runtime Journal (/run/log/journal/3301c0fe63fc4e2fadfeaa6f148ec3f9) is 6M, max 48.6M, 42.5M free.
Sep 5 00:36:32.855883 systemd-modules-load[221]: Inserted module 'overlay'
Sep 5 00:36:32.929011 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 5 00:36:32.929040 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 5 00:36:32.929057 kernel: Bridge firewalling registered
Sep 5 00:36:32.888410 systemd-modules-load[221]: Inserted module 'br_netfilter'
Sep 5 00:36:32.930218 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 5 00:36:32.932712 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 5 00:36:32.935146 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 5 00:36:32.941536 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 5 00:36:32.943907 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 5 00:36:32.947095 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 5 00:36:32.955274 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 5 00:36:32.963975 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 5 00:36:32.967838 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 5 00:36:32.968348 systemd-tmpfiles[244]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Sep 5 00:36:32.973627 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 5 00:36:32.976121 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 5 00:36:32.979066 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 5 00:36:32.981105 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 5 00:36:33.013127 dracut-cmdline[260]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=5ddbf8d117777441d6c5be3659126fb3de7a68afc9e620e02a4b6c5a60c1c503
Sep 5 00:36:33.029445 systemd-resolved[261]: Positive Trust Anchors:
Sep 5 00:36:33.029453 systemd-resolved[261]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 5 00:36:33.029482 systemd-resolved[261]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 5 00:36:33.032210 systemd-resolved[261]: Defaulting to hostname 'linux'.
Sep 5 00:36:33.033458 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 5 00:36:33.041032 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 5 00:36:33.131927 kernel: SCSI subsystem initialized
Sep 5 00:36:33.140938 kernel: Loading iSCSI transport class v2.0-870.
Sep 5 00:36:33.151924 kernel: iscsi: registered transport (tcp)
Sep 5 00:36:33.173920 kernel: iscsi: registered transport (qla4xxx)
Sep 5 00:36:33.173952 kernel: QLogic iSCSI HBA Driver
Sep 5 00:36:33.197101 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 5 00:36:33.217528 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 5 00:36:33.219984 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 5 00:36:33.280748 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 5 00:36:33.282620 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 5 00:36:33.341927 kernel: raid6: avx2x4 gen() 29134 MB/s
Sep 5 00:36:33.358926 kernel: raid6: avx2x2 gen() 26812 MB/s
Sep 5 00:36:33.376005 kernel: raid6: avx2x1 gen() 25038 MB/s
Sep 5 00:36:33.376030 kernel: raid6: using algorithm avx2x4 gen() 29134 MB/s
Sep 5 00:36:33.399166 kernel: raid6: .... xor() 7498 MB/s, rmw enabled
Sep 5 00:36:33.399253 kernel: raid6: using avx2x2 recovery algorithm
Sep 5 00:36:33.444952 kernel: xor: automatically using best checksumming function avx
Sep 5 00:36:33.613957 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 5 00:36:33.623540 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 5 00:36:33.625913 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 5 00:36:33.664696 systemd-udevd[472]: Using default interface naming scheme 'v255'.
Sep 5 00:36:33.671863 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 5 00:36:33.692797 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 5 00:36:33.727362 dracut-pre-trigger[476]: rd.md=0: removing MD RAID activation
Sep 5 00:36:33.760765 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 5 00:36:33.762743 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 5 00:36:33.886820 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 5 00:36:33.890295 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 5 00:36:33.933929 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Sep 5 00:36:33.935925 kernel: cryptd: max_cpu_qlen set to 1000
Sep 5 00:36:33.941921 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Sep 5 00:36:33.951398 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 5 00:36:33.951442 kernel: GPT:9289727 != 19775487
Sep 5 00:36:33.951457 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 5 00:36:33.951471 kernel: GPT:9289727 != 19775487
Sep 5 00:36:33.951484 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 5 00:36:33.951498 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
Sep 5 00:36:33.951512 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 5 00:36:33.951526 kernel: AES CTR mode by8 optimization enabled
Sep 5 00:36:33.982077 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 5 00:36:33.987279 kernel: libata version 3.00 loaded.
Sep 5 00:36:33.982302 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 5 00:36:33.987282 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 5 00:36:33.992173 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 5 00:36:33.996262 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Sep 5 00:36:34.017320 kernel: ahci 0000:00:1f.2: version 3.0
Sep 5 00:36:34.017608 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Sep 5 00:36:34.020923 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Sep 5 00:36:34.021105 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Sep 5 00:36:34.022176 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Sep 5 00:36:34.024139 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
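The "GPT:9289727 != 19775487" complaints above are the usual sign of a grown disk image: the backup GPT header was written when the image was smaller and no longer sits in the last sector. A minimal sketch of the arithmetic (the original image size is inferred from the backup header's current position; `backup_header_lba` is a hypothetical helper, not kernel code):

```python
def backup_header_lba(total_sectors: int) -> int:
    """LBA where the secondary (backup) GPT header belongs: the last sector."""
    return total_sectors - 1

# Inferred size the GPT was originally written for (backup header at LBA 9289727).
old_disk_sectors = 9_289_728
# Current size: virtio_blk reports 19775488 512-byte logical blocks.
actual_disk_sectors = 19_775_488

print(backup_header_lba(old_disk_sectors))     # where the backup header is now
print(backup_header_lba(actual_disk_sectors))  # where the kernel expects it
```

Later in this log, disk-uuid.service rewrites the secondary header and entries, which is why the warning does not recur after `disk-uuid[632]` runs.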
Sep 5 00:36:34.025910 kernel: scsi host0: ahci
Sep 5 00:36:34.028391 kernel: scsi host1: ahci
Sep 5 00:36:34.028588 kernel: scsi host2: ahci
Sep 5 00:36:34.029903 kernel: scsi host3: ahci
Sep 5 00:36:34.031230 kernel: scsi host4: ahci
Sep 5 00:36:34.031412 kernel: scsi host5: ahci
Sep 5 00:36:34.033158 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 lpm-pol 1
Sep 5 00:36:34.033186 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 lpm-pol 1
Sep 5 00:36:34.034174 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 lpm-pol 1
Sep 5 00:36:34.034198 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 lpm-pol 1
Sep 5 00:36:34.037077 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 lpm-pol 1
Sep 5 00:36:34.037100 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 lpm-pol 1
Sep 5 00:36:34.047327 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Sep 5 00:36:34.078513 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 5 00:36:34.078827 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 5 00:36:34.106443 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Sep 5 00:36:34.108428 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Sep 5 00:36:34.111646 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 5 00:36:34.342660 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Sep 5 00:36:34.342799 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Sep 5 00:36:34.342818 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Sep 5 00:36:34.344501 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Sep 5 00:36:34.344639 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Sep 5 00:36:34.345945 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Sep 5 00:36:34.346944 kernel: ata3.00: LPM support broken, forcing max_power
Sep 5 00:36:34.346978 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Sep 5 00:36:34.347323 kernel: ata3.00: applying bridge limits
Sep 5 00:36:34.347927 kernel: ata3.00: LPM support broken, forcing max_power
Sep 5 00:36:34.348978 kernel: ata3.00: configured for UDMA/100
Sep 5 00:36:34.349931 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Sep 5 00:36:34.420253 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Sep 5 00:36:34.420674 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Sep 5 00:36:34.440924 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Sep 5 00:36:34.550563 disk-uuid[632]: Primary Header is updated.
Sep 5 00:36:34.550563 disk-uuid[632]: Secondary Entries is updated.
Sep 5 00:36:34.550563 disk-uuid[632]: Secondary Header is updated.
Sep 5 00:36:34.559956 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 5 00:36:34.564926 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 5 00:36:34.875048 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 5 00:36:34.876787 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 5 00:36:34.878400 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 5 00:36:34.879550 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 5 00:36:34.880604 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 5 00:36:34.921142 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 5 00:36:35.566926 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 5 00:36:35.567186 disk-uuid[634]: The operation has completed successfully.
Sep 5 00:36:35.603239 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 5 00:36:35.603444 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 5 00:36:35.644985 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 5 00:36:35.674864 sh[663]: Success
Sep 5 00:36:35.694066 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 5 00:36:35.694145 kernel: device-mapper: uevent: version 1.0.3
Sep 5 00:36:35.695503 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Sep 5 00:36:35.704906 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Sep 5 00:36:35.738332 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 5 00:36:35.742061 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 5 00:36:35.759018 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 5 00:36:35.766735 kernel: BTRFS: device fsid bbfaff22-5589-4cab-94aa-ce3e6be0b7e8 devid 1 transid 36 /dev/mapper/usr (253:0) scanned by mount (675)
Sep 5 00:36:35.766778 kernel: BTRFS info (device dm-0): first mount of filesystem bbfaff22-5589-4cab-94aa-ce3e6be0b7e8
Sep 5 00:36:35.766794 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Sep 5 00:36:35.772925 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 5 00:36:35.772962 kernel: BTRFS info (device dm-0): enabling free space tree
Sep 5 00:36:35.774407 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 5 00:36:35.775134 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Sep 5 00:36:35.777329 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 5 00:36:35.778288 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 5 00:36:35.779933 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 5 00:36:35.804921 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (708)
Sep 5 00:36:35.807063 kernel: BTRFS info (device vda6): first mount of filesystem f4b20ae7-6320-4f9d-b17c-1a32a98200fb
Sep 5 00:36:35.807116 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 5 00:36:35.810005 kernel: BTRFS info (device vda6): turning on async discard
Sep 5 00:36:35.810065 kernel: BTRFS info (device vda6): enabling free space tree
Sep 5 00:36:35.816958 kernel: BTRFS info (device vda6): last unmount of filesystem f4b20ae7-6320-4f9d-b17c-1a32a98200fb
Sep 5 00:36:35.817489 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 5 00:36:35.819689 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 5 00:36:35.910698 ignition[749]: Ignition 2.21.0
Sep 5 00:36:35.910713 ignition[749]: Stage: fetch-offline
Sep 5 00:36:35.910744 ignition[749]: no configs at "/usr/lib/ignition/base.d"
Sep 5 00:36:35.910754 ignition[749]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 5 00:36:35.910838 ignition[749]: parsed url from cmdline: ""
Sep 5 00:36:35.910842 ignition[749]: no config URL provided
Sep 5 00:36:35.910848 ignition[749]: reading system config file "/usr/lib/ignition/user.ign"
Sep 5 00:36:35.910858 ignition[749]: no config at "/usr/lib/ignition/user.ign"
Sep 5 00:36:35.910883 ignition[749]: op(1): [started] loading QEMU firmware config module
Sep 5 00:36:35.910904 ignition[749]: op(1): executing: "modprobe" "qemu_fw_cfg"
Sep 5 00:36:35.920052 ignition[749]: op(1): [finished] loading QEMU firmware config module
Sep 5 00:36:35.931219 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 5 00:36:35.936521 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 5 00:36:35.963729 ignition[749]: parsing config with SHA512: 0d8d919bcb0f436d758964ffe87c9c4a10fc6410405c3f6d4ed1ddb50b4d8c7e8281c6302dbc925c23a69124b38d49e86a998ff9bf5f109b3688d293bef398be
Sep 5 00:36:35.967997 unknown[749]: fetched base config from "system"
Sep 5 00:36:35.968008 unknown[749]: fetched user config from "qemu"
Sep 5 00:36:35.968415 ignition[749]: fetch-offline: fetch-offline passed
Sep 5 00:36:35.968495 ignition[749]: Ignition finished successfully
Sep 5 00:36:35.970999 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 5 00:36:35.993152 systemd-networkd[853]: lo: Link UP
Sep 5 00:36:35.993164 systemd-networkd[853]: lo: Gained carrier
Sep 5 00:36:35.994868 systemd-networkd[853]: Enumeration completed
Sep 5 00:36:35.995077 systemd[1]: Started systemd-networkd.service - Network Configuration.
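The fetch-offline entries above show Ignition trying config sources in order: a URL from the kernel cmdline (empty here), the on-disk `/usr/lib/ignition/user.ign` (absent), and finally the QEMU firmware-config device, which is what actually supplies the user config on this VM. A rough sketch of that fallback order (`pick_config` is a hypothetical illustration, not Ignition's actual code):

```python
def pick_config(cmdline_url, user_ign, fw_cfg):
    """Return (source_name, config) for the first source that has a config."""
    for source, cfg in (("cmdline", cmdline_url),
                        ("user.ign", user_ign),
                        ("fw_cfg", fw_cfg)):
        if cfg:  # empty string / None means "this source provided nothing"
            return source, cfg
    return None, None

# Mirrors this boot: no cmdline URL, no user.ign, config delivered via fw_cfg.
source, cfg = pick_config("", None, b'{"ignition": {"version": "3.4.0"}}')
print(source)  # fw_cfg
```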
Sep 5 00:36:35.995291 systemd-networkd[853]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 5 00:36:35.995296 systemd-networkd[853]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 5 00:36:35.996387 systemd-networkd[853]: eth0: Link UP
Sep 5 00:36:35.996582 systemd-networkd[853]: eth0: Gained carrier
Sep 5 00:36:35.996590 systemd-networkd[853]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 5 00:36:35.997564 systemd[1]: Reached target network.target - Network.
Sep 5 00:36:35.999542 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Sep 5 00:36:36.000561 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 5 00:36:36.022046 systemd-networkd[853]: eth0: DHCPv4 address 10.0.0.115/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 5 00:36:36.047336 ignition[857]: Ignition 2.21.0
Sep 5 00:36:36.047350 ignition[857]: Stage: kargs
Sep 5 00:36:36.047501 ignition[857]: no configs at "/usr/lib/ignition/base.d"
Sep 5 00:36:36.047512 ignition[857]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 5 00:36:36.048308 ignition[857]: kargs: kargs passed
Sep 5 00:36:36.048376 ignition[857]: Ignition finished successfully
Sep 5 00:36:36.054172 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 5 00:36:36.057375 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
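The DHCPv4 lease logged above is 10.0.0.115/16 with gateway 10.0.0.1 (the QEMU user-mode default). A quick stdlib check that the lease and gateway share the same /16 (just illustrating the addressing in the log, using Python's `ipaddress` module):

```python
import ipaddress

# Lease and gateway exactly as reported by systemd-networkd above.
iface = ipaddress.ip_interface("10.0.0.115/16")
gateway = ipaddress.ip_address("10.0.0.1")

print(iface.network)             # the /16 the lease belongs to
print(gateway in iface.network)  # gateway must be on-link for the default route
```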
Sep 5 00:36:36.093136 ignition[866]: Ignition 2.21.0
Sep 5 00:36:36.094664 ignition[866]: Stage: disks
Sep 5 00:36:36.095573 ignition[866]: no configs at "/usr/lib/ignition/base.d"
Sep 5 00:36:36.095592 ignition[866]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 5 00:36:36.099016 ignition[866]: disks: disks passed
Sep 5 00:36:36.099093 ignition[866]: Ignition finished successfully
Sep 5 00:36:36.103466 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 5 00:36:36.103780 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 5 00:36:36.105493 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 5 00:36:36.107513 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 5 00:36:36.110803 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 5 00:36:36.111125 systemd[1]: Reached target basic.target - Basic System.
Sep 5 00:36:36.116149 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 5 00:36:36.155628 systemd-fsck[876]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Sep 5 00:36:36.164128 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 5 00:36:36.165501 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 5 00:36:36.277912 kernel: EXT4-fs (vda9): mounted filesystem a99dab41-6cdd-4037-a941-eeee48403b9e r/w with ordered data mode. Quota mode: none.
Sep 5 00:36:36.278659 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 5 00:36:36.279436 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 5 00:36:36.282667 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 5 00:36:36.284451 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 5 00:36:36.286373 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep 5 00:36:36.286444 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 5 00:36:36.286477 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 5 00:36:36.317555 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 5 00:36:36.319478 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 5 00:36:36.325362 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (884)
Sep 5 00:36:36.325387 kernel: BTRFS info (device vda6): first mount of filesystem f4b20ae7-6320-4f9d-b17c-1a32a98200fb
Sep 5 00:36:36.325398 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 5 00:36:36.328641 kernel: BTRFS info (device vda6): turning on async discard
Sep 5 00:36:36.328667 kernel: BTRFS info (device vda6): enabling free space tree
Sep 5 00:36:36.331181 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 5 00:36:36.358527 initrd-setup-root[908]: cut: /sysroot/etc/passwd: No such file or directory
Sep 5 00:36:36.364389 initrd-setup-root[915]: cut: /sysroot/etc/group: No such file or directory
Sep 5 00:36:36.368994 initrd-setup-root[922]: cut: /sysroot/etc/shadow: No such file or directory
Sep 5 00:36:36.374337 initrd-setup-root[929]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 5 00:36:36.469468 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 5 00:36:36.470746 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 5 00:36:36.475087 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 5 00:36:36.497944 kernel: BTRFS info (device vda6): last unmount of filesystem f4b20ae7-6320-4f9d-b17c-1a32a98200fb
Sep 5 00:36:36.514140 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 5 00:36:36.530248 ignition[998]: INFO : Ignition 2.21.0
Sep 5 00:36:36.530248 ignition[998]: INFO : Stage: mount
Sep 5 00:36:36.532106 ignition[998]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 5 00:36:36.532106 ignition[998]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 5 00:36:36.536222 ignition[998]: INFO : mount: mount passed
Sep 5 00:36:36.537026 ignition[998]: INFO : Ignition finished successfully
Sep 5 00:36:36.540658 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 5 00:36:36.543225 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 5 00:36:36.765824 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 5 00:36:36.767651 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 5 00:36:36.801917 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1010)
Sep 5 00:36:36.803915 kernel: BTRFS info (device vda6): first mount of filesystem f4b20ae7-6320-4f9d-b17c-1a32a98200fb
Sep 5 00:36:36.803936 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 5 00:36:36.807033 kernel: BTRFS info (device vda6): turning on async discard
Sep 5 00:36:36.807055 kernel: BTRFS info (device vda6): enabling free space tree
Sep 5 00:36:36.808809 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 5 00:36:36.844670 ignition[1027]: INFO : Ignition 2.21.0
Sep 5 00:36:36.846019 ignition[1027]: INFO : Stage: files
Sep 5 00:36:36.846832 ignition[1027]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 5 00:36:36.846832 ignition[1027]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 5 00:36:36.849431 ignition[1027]: DEBUG : files: compiled without relabeling support, skipping
Sep 5 00:36:36.849431 ignition[1027]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 5 00:36:36.849431 ignition[1027]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 5 00:36:36.858939 ignition[1027]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 5 00:36:36.860802 ignition[1027]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 5 00:36:36.860802 ignition[1027]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 5 00:36:36.859941 unknown[1027]: wrote ssh authorized keys file for user: core
Sep 5 00:36:36.865552 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Sep 5 00:36:36.865552 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Sep 5 00:36:36.899188 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 5 00:36:37.902826 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Sep 5 00:36:37.902826 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Sep 5 00:36:37.908594 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Sep 5 00:36:37.908594 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 5 00:36:37.908594 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 5 00:36:37.908594 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 5 00:36:37.908594 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 5 00:36:37.908594 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 5 00:36:37.908594 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 5 00:36:38.055515 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 5 00:36:38.059871 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 5 00:36:38.059871 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 5 00:36:38.071156 systemd-networkd[853]: eth0: Gained IPv6LL
Sep 5 00:36:38.147838 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 5 00:36:38.147838 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 5 00:36:38.152790 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1
Sep 5 00:36:38.604483 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Sep 5 00:36:39.131304 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 5 00:36:39.131304 ignition[1027]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Sep 5 00:36:39.140814 ignition[1027]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 5 00:36:39.303198 ignition[1027]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 5 00:36:39.303198 ignition[1027]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Sep 5 00:36:39.303198 ignition[1027]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Sep 5 00:36:39.309834 ignition[1027]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 5 00:36:39.309834 ignition[1027]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 5 00:36:39.309834 ignition[1027]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Sep 5 00:36:39.309834 ignition[1027]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Sep 5 00:36:39.347656 ignition[1027]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Sep 5 00:36:39.352682 ignition[1027]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Sep 5 00:36:39.354383 ignition[1027]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Sep 5 00:36:39.354383 ignition[1027]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Sep 5 00:36:39.354383 ignition[1027]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Sep 5 00:36:39.354383 ignition[1027]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 5 00:36:39.354383 ignition[1027]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 5 00:36:39.354383 ignition[1027]: INFO : files: files passed
Sep 5 00:36:39.354383 ignition[1027]: INFO : Ignition finished successfully
Sep 5 00:36:39.360692 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 5 00:36:39.364158 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 5 00:36:39.372860 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 5 00:36:39.389913 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 5 00:36:39.390059 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 5 00:36:39.394352 initrd-setup-root-after-ignition[1056]: grep: /sysroot/oem/oem-release: No such file or directory
Sep 5 00:36:39.396928 initrd-setup-root-after-ignition[1058]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 5 00:36:39.398618 initrd-setup-root-after-ignition[1058]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 5 00:36:39.401364 initrd-setup-root-after-ignition[1062]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 5 00:36:39.404419 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 5 00:36:39.405123 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 5 00:36:39.443359 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 5 00:36:39.494936 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 5 00:36:39.495086 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Sep 5 00:36:39.497912 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 5 00:36:39.500374 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 5 00:36:39.500694 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 5 00:36:39.505653 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 5 00:36:39.542878 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 5 00:36:39.547467 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 5 00:36:39.575232 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 5 00:36:39.575461 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 5 00:36:39.579464 systemd[1]: Stopped target timers.target - Timer Units. Sep 5 00:36:39.581809 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 5 00:36:39.582019 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 5 00:36:39.585162 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 5 00:36:39.586570 systemd[1]: Stopped target basic.target - Basic System. Sep 5 00:36:39.586963 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 5 00:36:39.587533 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 5 00:36:39.587928 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 5 00:36:39.588503 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. 
Sep 5 00:36:39.588897 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 5 00:36:39.589521 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 5 00:36:39.589905 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 5 00:36:39.590463 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 5 00:36:39.590837 systemd[1]: Stopped target swap.target - Swaps. Sep 5 00:36:39.591435 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 5 00:36:39.591569 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 5 00:36:39.610676 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 5 00:36:39.625816 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 5 00:36:39.626813 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 5 00:36:39.629818 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 5 00:36:39.632404 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 5 00:36:39.632602 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 5 00:36:39.635394 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 5 00:36:39.635584 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 5 00:36:39.638826 systemd[1]: Stopped target paths.target - Path Units. Sep 5 00:36:39.639786 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 5 00:36:39.644044 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 5 00:36:39.645654 systemd[1]: Stopped target slices.target - Slice Units. Sep 5 00:36:39.648366 systemd[1]: Stopped target sockets.target - Socket Units. Sep 5 00:36:39.650534 systemd[1]: iscsid.socket: Deactivated successfully. 
Sep 5 00:36:39.650652 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 5 00:36:39.651643 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 5 00:36:39.651733 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 5 00:36:39.653594 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 5 00:36:39.653758 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 5 00:36:39.655737 systemd[1]: ignition-files.service: Deactivated successfully. Sep 5 00:36:39.655849 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 5 00:36:39.662907 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 5 00:36:39.664183 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 5 00:36:39.664364 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 5 00:36:39.695045 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 5 00:36:39.709129 ignition[1082]: INFO : Ignition 2.21.0 Sep 5 00:36:39.709129 ignition[1082]: INFO : Stage: umount Sep 5 00:36:39.719754 ignition[1082]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 5 00:36:39.719754 ignition[1082]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 5 00:36:39.719754 ignition[1082]: INFO : umount: umount passed Sep 5 00:36:39.719754 ignition[1082]: INFO : Ignition finished successfully Sep 5 00:36:39.718972 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 5 00:36:39.719291 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 5 00:36:39.721570 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 5 00:36:39.721696 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 5 00:36:39.758647 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 5 00:36:39.758785 systemd[1]: Stopped ignition-mount.service - Ignition (mount). 
Sep 5 00:36:39.764971 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 5 00:36:39.765097 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 5 00:36:39.770374 systemd[1]: Stopped target network.target - Network. Sep 5 00:36:39.773739 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 5 00:36:39.773829 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 5 00:36:39.776260 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 5 00:36:39.776357 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 5 00:36:39.778180 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 5 00:36:39.778262 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 5 00:36:39.780078 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 5 00:36:39.780135 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 5 00:36:39.782146 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 5 00:36:39.784023 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 5 00:36:39.794271 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 5 00:36:39.794434 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 5 00:36:39.799103 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Sep 5 00:36:39.799356 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 5 00:36:39.799509 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 5 00:36:39.803543 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Sep 5 00:36:39.804251 systemd[1]: Stopped target network-pre.target - Preparation for Network. Sep 5 00:36:39.804878 systemd[1]: systemd-networkd.socket: Deactivated successfully. 
Sep 5 00:36:39.804946 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 5 00:36:39.806299 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 5 00:36:39.808799 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 5 00:36:39.808853 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 5 00:36:39.810713 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 5 00:36:39.810769 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 5 00:36:39.814625 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 5 00:36:39.814684 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 5 00:36:39.826468 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 5 00:36:39.826542 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 5 00:36:39.830407 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 5 00:36:39.833025 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Sep 5 00:36:39.833092 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Sep 5 00:36:39.851918 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 5 00:36:39.853072 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 5 00:36:39.854837 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 5 00:36:39.854918 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 5 00:36:39.858593 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 5 00:36:39.858644 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 5 00:36:39.859232 systemd[1]: dracut-pre-udev.service: Deactivated successfully. 
Sep 5 00:36:39.859298 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 5 00:36:39.865855 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 5 00:36:39.865932 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 5 00:36:39.869540 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 5 00:36:39.869622 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 5 00:36:39.876849 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 5 00:36:39.879072 systemd[1]: systemd-network-generator.service: Deactivated successfully. Sep 5 00:36:39.879149 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Sep 5 00:36:39.882979 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 5 00:36:39.883086 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 5 00:36:39.903750 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 5 00:36:39.903886 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 5 00:36:39.908703 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 5 00:36:39.908813 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Sep 5 00:36:39.908866 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Sep 5 00:36:39.908937 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Sep 5 00:36:39.909607 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 5 00:36:39.909753 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 5 00:36:39.912265 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 5 00:36:39.912377 systemd[1]: Stopped network-cleanup.service - Network Cleanup. 
Sep 5 00:36:39.914494 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 5 00:36:39.914607 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 5 00:36:39.918554 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 5 00:36:39.919652 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 5 00:36:39.919756 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 5 00:36:39.924662 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 5 00:36:39.964611 systemd[1]: Switching root. Sep 5 00:36:40.008400 systemd-journald[220]: Journal stopped Sep 5 00:36:41.666321 systemd-journald[220]: Received SIGTERM from PID 1 (systemd). Sep 5 00:36:41.666418 kernel: SELinux: policy capability network_peer_controls=1 Sep 5 00:36:41.666444 kernel: SELinux: policy capability open_perms=1 Sep 5 00:36:41.666473 kernel: SELinux: policy capability extended_socket_class=1 Sep 5 00:36:41.666490 kernel: SELinux: policy capability always_check_network=0 Sep 5 00:36:41.666504 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 5 00:36:41.666523 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 5 00:36:41.666540 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 5 00:36:41.666561 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 5 00:36:41.666574 kernel: SELinux: policy capability userspace_initial_context=0 Sep 5 00:36:41.666590 kernel: audit: type=1403 audit(1757032600.511:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 5 00:36:41.666614 systemd[1]: Successfully loaded SELinux policy in 64.227ms. Sep 5 00:36:41.666654 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.516ms. 
Sep 5 00:36:41.666685 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 5 00:36:41.666703 systemd[1]: Detected virtualization kvm. Sep 5 00:36:41.666720 systemd[1]: Detected architecture x86-64. Sep 5 00:36:41.666738 systemd[1]: Detected first boot. Sep 5 00:36:41.666762 systemd[1]: Initializing machine ID from VM UUID. Sep 5 00:36:41.666782 zram_generator::config[1127]: No configuration found. Sep 5 00:36:41.666806 kernel: Guest personality initialized and is inactive Sep 5 00:36:41.666823 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Sep 5 00:36:41.666857 kernel: Initialized host personality Sep 5 00:36:41.666873 kernel: NET: Registered PF_VSOCK protocol family Sep 5 00:36:41.666906 systemd[1]: Populated /etc with preset unit settings. Sep 5 00:36:41.666929 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Sep 5 00:36:41.666947 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 5 00:36:41.666965 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Sep 5 00:36:41.666985 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 5 00:36:41.667001 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 5 00:36:41.667030 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 5 00:36:41.667046 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 5 00:36:41.667063 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 5 00:36:41.667080 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. 
Sep 5 00:36:41.667096 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 5 00:36:41.667117 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 5 00:36:41.667133 systemd[1]: Created slice user.slice - User and Session Slice. Sep 5 00:36:41.667159 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 5 00:36:41.667181 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 5 00:36:41.667210 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 5 00:36:41.667227 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 5 00:36:41.667251 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 5 00:36:41.667269 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 5 00:36:41.667287 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Sep 5 00:36:41.667304 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 5 00:36:41.667319 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 5 00:36:41.667336 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Sep 5 00:36:41.667392 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Sep 5 00:36:41.667410 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Sep 5 00:36:41.667428 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 5 00:36:41.667443 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 5 00:36:41.667459 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 5 00:36:41.667475 systemd[1]: Reached target slices.target - Slice Units. 
Sep 5 00:36:41.667505 systemd[1]: Reached target swap.target - Swaps. Sep 5 00:36:41.667522 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 5 00:36:41.667537 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 5 00:36:41.667681 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Sep 5 00:36:41.667701 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 5 00:36:41.667718 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 5 00:36:41.667735 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 5 00:36:41.667752 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 5 00:36:41.667767 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 5 00:36:41.667784 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 5 00:36:41.667800 systemd[1]: Mounting media.mount - External Media Directory... Sep 5 00:36:41.667816 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 5 00:36:41.667845 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 5 00:36:41.667862 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 5 00:36:41.667877 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 5 00:36:41.667910 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 5 00:36:41.667933 systemd[1]: Reached target machines.target - Containers. Sep 5 00:36:41.667954 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... 
Sep 5 00:36:41.667971 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 5 00:36:41.667987 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 5 00:36:41.668019 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 5 00:36:41.668038 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 5 00:36:41.668060 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 5 00:36:41.668077 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 5 00:36:41.668092 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 5 00:36:41.668107 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 5 00:36:41.668123 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 5 00:36:41.668149 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 5 00:36:41.668183 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Sep 5 00:36:41.668201 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 5 00:36:41.668217 systemd[1]: Stopped systemd-fsck-usr.service. Sep 5 00:36:41.668234 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 5 00:36:41.668251 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 5 00:36:41.668268 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 5 00:36:41.668284 kernel: loop: module loaded Sep 5 00:36:41.668301 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... 
Sep 5 00:36:41.668318 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 5 00:36:41.668346 kernel: fuse: init (API version 7.41) Sep 5 00:36:41.668366 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Sep 5 00:36:41.668383 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 5 00:36:41.668399 systemd[1]: verity-setup.service: Deactivated successfully. Sep 5 00:36:41.668417 systemd[1]: Stopped verity-setup.service. Sep 5 00:36:41.668450 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 5 00:36:41.668513 systemd-journald[1198]: Collecting audit messages is disabled. Sep 5 00:36:41.668545 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 5 00:36:41.668561 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 5 00:36:41.668577 systemd-journald[1198]: Journal started Sep 5 00:36:41.668620 systemd-journald[1198]: Runtime Journal (/run/log/journal/3301c0fe63fc4e2fadfeaa6f148ec3f9) is 6M, max 48.6M, 42.5M free. Sep 5 00:36:41.682626 systemd[1]: Mounted media.mount - External Media Directory. Sep 5 00:36:41.682695 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 5 00:36:41.682712 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 5 00:36:41.682825 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 5 00:36:41.682847 kernel: ACPI: bus type drm_connector registered Sep 5 00:36:41.682862 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 5 00:36:41.377592 systemd[1]: Queued start job for default target multi-user.target. Sep 5 00:36:41.404286 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Sep 5 00:36:41.404968 systemd[1]: systemd-journald.service: Deactivated successfully. 
Sep 5 00:36:41.686940 systemd[1]: Started systemd-journald.service - Journal Service. Sep 5 00:36:41.689418 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 5 00:36:41.691201 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 5 00:36:41.691549 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 5 00:36:41.693219 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 5 00:36:41.693479 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 5 00:36:41.695178 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 5 00:36:41.695504 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 5 00:36:41.697228 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 5 00:36:41.697718 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 5 00:36:41.699381 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 5 00:36:41.699661 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 5 00:36:41.701433 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 5 00:36:41.701696 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 5 00:36:41.703509 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 5 00:36:41.705434 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 5 00:36:41.707374 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 5 00:36:41.709429 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Sep 5 00:36:41.728848 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 5 00:36:41.731868 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... 
Sep 5 00:36:41.734264 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 5 00:36:41.735515 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 5 00:36:41.735550 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 5 00:36:41.737729 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Sep 5 00:36:41.748947 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 5 00:36:41.751154 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 5 00:36:41.753555 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 5 00:36:41.755994 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 5 00:36:41.757428 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 5 00:36:41.758525 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 5 00:36:41.759738 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 5 00:36:41.761196 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 5 00:36:41.765994 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 5 00:36:41.769178 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 5 00:36:41.773078 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 5 00:36:41.775098 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. 
Sep 5 00:36:41.777367 systemd-journald[1198]: Time spent on flushing to /var/log/journal/3301c0fe63fc4e2fadfeaa6f148ec3f9 is 21.413ms for 985 entries. Sep 5 00:36:41.777367 systemd-journald[1198]: System Journal (/var/log/journal/3301c0fe63fc4e2fadfeaa6f148ec3f9) is 8M, max 195.6M, 187.6M free. Sep 5 00:36:41.840385 systemd-journald[1198]: Received client request to flush runtime journal. Sep 5 00:36:41.840478 kernel: loop0: detected capacity change from 0 to 128016 Sep 5 00:36:41.802690 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 5 00:36:41.805389 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 5 00:36:41.807870 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 5 00:36:41.812871 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Sep 5 00:36:41.847926 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 5 00:36:41.829529 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 5 00:36:41.842742 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 5 00:36:41.848148 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 5 00:36:41.852423 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 5 00:36:41.872922 kernel: loop1: detected capacity change from 0 to 221472 Sep 5 00:36:41.870398 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Sep 5 00:36:41.885946 systemd-tmpfiles[1263]: ACLs are not supported, ignoring. Sep 5 00:36:41.886416 systemd-tmpfiles[1263]: ACLs are not supported, ignoring. Sep 5 00:36:41.891836 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Sep 5 00:36:41.904946 kernel: loop2: detected capacity change from 0 to 111000 Sep 5 00:36:41.973960 kernel: loop3: detected capacity change from 0 to 128016 Sep 5 00:36:41.989922 kernel: loop4: detected capacity change from 0 to 221472 Sep 5 00:36:41.999927 kernel: loop5: detected capacity change from 0 to 111000 Sep 5 00:36:42.013785 (sd-merge)[1269]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Sep 5 00:36:42.015726 (sd-merge)[1269]: Merged extensions into '/usr'. Sep 5 00:36:42.020930 systemd[1]: Reload requested from client PID 1246 ('systemd-sysext') (unit systemd-sysext.service)... Sep 5 00:36:42.021095 systemd[1]: Reloading... Sep 5 00:36:42.180314 zram_generator::config[1296]: No configuration found. Sep 5 00:36:42.414274 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 5 00:36:42.414762 systemd[1]: Reloading finished in 393 ms. Sep 5 00:36:42.425212 ldconfig[1241]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 5 00:36:42.441208 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 5 00:36:42.443062 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 5 00:36:42.512834 systemd[1]: Starting ensure-sysext.service... Sep 5 00:36:42.514958 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 5 00:36:42.529367 systemd[1]: Reload requested from client PID 1332 ('systemctl') (unit ensure-sysext.service)... Sep 5 00:36:42.529387 systemd[1]: Reloading... Sep 5 00:36:42.534846 systemd-tmpfiles[1333]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Sep 5 00:36:42.535511 systemd-tmpfiles[1333]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Sep 5 00:36:42.535846 systemd-tmpfiles[1333]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. 
Sep 5 00:36:42.536381 systemd-tmpfiles[1333]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 5 00:36:42.537469 systemd-tmpfiles[1333]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 5 00:36:42.537827 systemd-tmpfiles[1333]: ACLs are not supported, ignoring. Sep 5 00:36:42.537991 systemd-tmpfiles[1333]: ACLs are not supported, ignoring. Sep 5 00:36:42.542813 systemd-tmpfiles[1333]: Detected autofs mount point /boot during canonicalization of boot. Sep 5 00:36:42.542999 systemd-tmpfiles[1333]: Skipping /boot Sep 5 00:36:42.554950 systemd-tmpfiles[1333]: Detected autofs mount point /boot during canonicalization of boot. Sep 5 00:36:42.555123 systemd-tmpfiles[1333]: Skipping /boot Sep 5 00:36:42.609925 zram_generator::config[1364]: No configuration found. Sep 5 00:36:42.835416 systemd[1]: Reloading finished in 305 ms. Sep 5 00:36:42.856204 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 5 00:36:42.879691 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 5 00:36:42.889299 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 5 00:36:42.892256 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 5 00:36:42.917621 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 5 00:36:42.921050 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 5 00:36:42.926266 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 5 00:36:42.929974 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 5 00:36:42.934785 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Sep 5 00:36:42.935045 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 5 00:36:42.944256 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 5 00:36:42.948201 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 5 00:36:42.950707 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 5 00:36:42.951838 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 5 00:36:42.951949 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 5 00:36:42.955222 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep 5 00:36:42.956506 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 5 00:36:42.959957 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep 5 00:36:42.962685 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 5 00:36:42.963134 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 5 00:36:42.966275 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 5 00:36:42.966928 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 5 00:36:42.974459 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 5 00:36:42.974955 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 5 00:36:42.986593 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 5 00:36:42.987397 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 5 00:36:42.989958 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 5 00:36:42.993493 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 5 00:36:42.997525 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 5 00:36:42.999385 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 5 00:36:42.999570 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 5 00:36:43.005262 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Sep 5 00:36:43.006433 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 5 00:36:43.009155 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Sep 5 00:36:43.011227 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 5 00:36:43.011498 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 5 00:36:43.013560 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 5 00:36:43.014018 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 5 00:36:43.017499 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 5 00:36:43.017921 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 5 00:36:43.030918 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 5 00:36:43.031811 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 5 00:36:43.036009 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 5 00:36:43.036374 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 5 00:36:43.038156 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 5 00:36:43.040645 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 5 00:36:43.044015 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 5 00:36:43.053757 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 5 00:36:43.055026 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 5 00:36:43.055165 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 5 00:36:43.055310 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 5 00:36:43.056882 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Sep 5 00:36:43.062154 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Sep 5 00:36:43.063661 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep 5 00:36:43.066365 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 5 00:36:43.066628 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 5 00:36:43.068545 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 5 00:36:43.069336 augenrules[1448]: No rules
Sep 5 00:36:43.076941 systemd-udevd[1404]: Using default interface naming scheme 'v255'.
Sep 5 00:36:43.077322 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 5 00:36:43.079402 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 5 00:36:43.079736 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 5 00:36:43.081572 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 5 00:36:43.081795 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 5 00:36:43.083593 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 5 00:36:43.083824 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 5 00:36:43.089612 systemd[1]: Finished ensure-sysext.service.
Sep 5 00:36:43.099944 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 5 00:36:43.100077 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 5 00:36:43.103008 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Sep 5 00:36:43.104382 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 5 00:36:43.107189 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 5 00:36:43.112018 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 5 00:36:43.199555 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Sep 5 00:36:43.217424 systemd-resolved[1402]: Positive Trust Anchors:
Sep 5 00:36:43.217442 systemd-resolved[1402]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 5 00:36:43.217474 systemd-resolved[1402]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 5 00:36:43.222752 systemd-resolved[1402]: Defaulting to hostname 'linux'.
Sep 5 00:36:43.224686 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 5 00:36:43.225997 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 5 00:36:43.276673 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 5 00:36:43.279191 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Sep 5 00:36:43.306959 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Sep 5 00:36:43.307930 kernel: mousedev: PS/2 mouse device common for all mice
Sep 5 00:36:43.314916 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Sep 5 00:36:43.323926 kernel: ACPI: button: Power Button [PWRF]
Sep 5 00:36:43.329015 systemd-networkd[1469]: lo: Link UP
Sep 5 00:36:43.329029 systemd-networkd[1469]: lo: Gained carrier
Sep 5 00:36:43.329108 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Sep 5 00:36:43.330501 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 5 00:36:43.331728 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Sep 5 00:36:43.333103 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Sep 5 00:36:43.334365 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Sep 5 00:36:43.336961 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Sep 5 00:36:43.338249 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 5 00:36:43.338300 systemd[1]: Reached target paths.target - Path Units.
Sep 5 00:36:43.340808 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Sep 5 00:36:43.341147 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Sep 5 00:36:43.340711 systemd[1]: Reached target time-set.target - System Time Set.
Sep 5 00:36:43.341974 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Sep 5 00:36:43.343167 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Sep 5 00:36:43.344420 systemd[1]: Reached target timers.target - Timer Units.
Sep 5 00:36:43.346073 systemd-networkd[1469]: Enumeration completed
Sep 5 00:36:43.346329 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Sep 5 00:36:43.349477 systemd[1]: Starting docker.socket - Docker Socket for the API...
Sep 5 00:36:43.350118 systemd-networkd[1469]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 5 00:36:43.350129 systemd-networkd[1469]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 5 00:36:43.351644 systemd-networkd[1469]: eth0: Link UP
Sep 5 00:36:43.351831 systemd-networkd[1469]: eth0: Gained carrier
Sep 5 00:36:43.351846 systemd-networkd[1469]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 5 00:36:43.360912 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Sep 5 00:36:43.362451 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Sep 5 00:36:43.363694 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Sep 5 00:36:43.372981 systemd-networkd[1469]: eth0: DHCPv4 address 10.0.0.115/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 5 00:36:43.373846 systemd-timesyncd[1463]: Network configuration changed, trying to establish connection.
Sep 5 00:36:43.376539 systemd-timesyncd[1463]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Sep 5 00:36:43.376597 systemd-timesyncd[1463]: Initial clock synchronization to Fri 2025-09-05 00:36:43.620055 UTC.
Sep 5 00:36:43.377735 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Sep 5 00:36:43.379317 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Sep 5 00:36:43.381254 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 5 00:36:43.382668 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Sep 5 00:36:43.388964 systemd[1]: Reached target network.target - Network.
Sep 5 00:36:43.389915 systemd[1]: Reached target sockets.target - Socket Units.
Sep 5 00:36:43.390868 systemd[1]: Reached target basic.target - Basic System.
Sep 5 00:36:43.391867 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Sep 5 00:36:43.391908 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Sep 5 00:36:43.393048 systemd[1]: Starting containerd.service - containerd container runtime...
Sep 5 00:36:43.397123 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Sep 5 00:36:43.504209 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Sep 5 00:36:43.507708 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Sep 5 00:36:43.515484 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Sep 5 00:36:43.516703 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Sep 5 00:36:43.519665 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Sep 5 00:36:43.520823 jq[1527]: false
Sep 5 00:36:43.522417 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Sep 5 00:36:43.534701 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Sep 5 00:36:43.540917 kernel: kvm_amd: TSC scaling supported
Sep 5 00:36:43.540981 kernel: kvm_amd: Nested Virtualization enabled
Sep 5 00:36:43.540999 kernel: kvm_amd: Nested Paging enabled
Sep 5 00:36:43.541012 kernel: kvm_amd: LBR virtualization supported
Sep 5 00:36:43.540477 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Sep 5 00:36:43.544129 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Sep 5 00:36:43.544173 kernel: kvm_amd: Virtual GIF supported
Sep 5 00:36:43.544213 google_oslogin_nss_cache[1529]: oslogin_cache_refresh[1529]: Refreshing passwd entry cache
Sep 5 00:36:43.544158 oslogin_cache_refresh[1529]: Refreshing passwd entry cache
Sep 5 00:36:43.546196 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Sep 5 00:36:43.555597 oslogin_cache_refresh[1529]: Failure getting users, quitting
Sep 5 00:36:43.557842 google_oslogin_nss_cache[1529]: oslogin_cache_refresh[1529]: Failure getting users, quitting
Sep 5 00:36:43.557842 google_oslogin_nss_cache[1529]: oslogin_cache_refresh[1529]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Sep 5 00:36:43.557842 google_oslogin_nss_cache[1529]: oslogin_cache_refresh[1529]: Refreshing group entry cache
Sep 5 00:36:43.556490 systemd[1]: Starting systemd-logind.service - User Login Management...
Sep 5 00:36:43.555639 oslogin_cache_refresh[1529]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Sep 5 00:36:43.555770 oslogin_cache_refresh[1529]: Refreshing group entry cache
Sep 5 00:36:43.558759 extend-filesystems[1528]: Found /dev/vda6
Sep 5 00:36:43.561987 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Sep 5 00:36:43.562173 oslogin_cache_refresh[1529]: Failure getting groups, quitting
Sep 5 00:36:43.564378 google_oslogin_nss_cache[1529]: oslogin_cache_refresh[1529]: Failure getting groups, quitting
Sep 5 00:36:43.564378 google_oslogin_nss_cache[1529]: oslogin_cache_refresh[1529]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Sep 5 00:36:43.562193 oslogin_cache_refresh[1529]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Sep 5 00:36:43.567958 extend-filesystems[1528]: Found /dev/vda9
Sep 5 00:36:43.570221 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Sep 5 00:36:43.574034 extend-filesystems[1528]: Checking size of /dev/vda9
Sep 5 00:36:43.572851 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Sep 5 00:36:43.573772 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Sep 5 00:36:43.576917 systemd[1]: Starting update-engine.service - Update Engine...
Sep 5 00:36:43.580301 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Sep 5 00:36:43.586921 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Sep 5 00:36:43.596291 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Sep 5 00:36:43.599353 extend-filesystems[1528]: Resized partition /dev/vda9
Sep 5 00:36:43.603599 extend-filesystems[1557]: resize2fs 1.47.2 (1-Jan-2025)
Sep 5 00:36:43.605549 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Sep 5 00:36:43.606209 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Sep 5 00:36:43.606547 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Sep 5 00:36:43.608308 systemd[1]: motdgen.service: Deactivated successfully.
Sep 5 00:36:43.608602 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Sep 5 00:36:43.614202 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Sep 5 00:36:43.614244 jq[1550]: true
Sep 5 00:36:43.613747 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Sep 5 00:36:43.614831 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Sep 5 00:36:43.635031 update_engine[1549]: I20250905 00:36:43.634842 1549 main.cc:92] Flatcar Update Engine starting
Sep 5 00:36:43.644750 (ntainerd)[1560]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Sep 5 00:36:43.651949 jq[1559]: true
Sep 5 00:36:43.668285 tar[1558]: linux-amd64/helm
Sep 5 00:36:43.679256 systemd-logind[1537]: Watching system buttons on /dev/input/event2 (Power Button)
Sep 5 00:36:43.679297 systemd-logind[1537]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Sep 5 00:36:43.682849 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 5 00:36:43.687660 systemd-logind[1537]: New seat seat0.
Sep 5 00:36:43.693918 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Sep 5 00:36:43.701684 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Sep 5 00:36:43.704460 systemd[1]: Started systemd-logind.service - User Login Management.
Sep 5 00:36:43.719795 dbus-daemon[1517]: [system] SELinux support is enabled
Sep 5 00:36:43.723919 kernel: EDAC MC: Ver: 3.0.0
Sep 5 00:36:43.720054 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Sep 5 00:36:43.724445 extend-filesystems[1557]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Sep 5 00:36:43.724445 extend-filesystems[1557]: old_desc_blocks = 1, new_desc_blocks = 1
Sep 5 00:36:43.724445 extend-filesystems[1557]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Sep 5 00:36:43.729489 extend-filesystems[1528]: Resized filesystem in /dev/vda9
Sep 5 00:36:43.731327 systemd[1]: extend-filesystems.service: Deactivated successfully.
Sep 5 00:36:43.731665 update_engine[1549]: I20250905 00:36:43.730842 1549 update_check_scheduler.cc:74] Next update check in 6m11s
Sep 5 00:36:43.731701 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Sep 5 00:36:43.783860 bash[1594]: Updated "/home/core/.ssh/authorized_keys"
Sep 5 00:36:43.838459 sshd_keygen[1554]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Sep 5 00:36:43.892722 containerd[1560]: time="2025-09-05T00:36:43Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Sep 5 00:36:43.893814 containerd[1560]: time="2025-09-05T00:36:43.893762091Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5
Sep 5 00:36:43.905576 containerd[1560]: time="2025-09-05T00:36:43.905320618Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="10.069µs"
Sep 5 00:36:43.905576 containerd[1560]: time="2025-09-05T00:36:43.905356475Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Sep 5 00:36:43.905576 containerd[1560]: time="2025-09-05T00:36:43.905376913Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Sep 5 00:36:43.908184 containerd[1560]: time="2025-09-05T00:36:43.908149593Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Sep 5 00:36:43.908262 containerd[1560]: time="2025-09-05T00:36:43.908189849Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Sep 5 00:36:43.908262 containerd[1560]: time="2025-09-05T00:36:43.908225515Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Sep 5 00:36:43.908352 containerd[1560]: time="2025-09-05T00:36:43.908317819Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Sep 5 00:36:43.908352 containerd[1560]: time="2025-09-05T00:36:43.908348586Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Sep 5 00:36:43.908965 containerd[1560]: time="2025-09-05T00:36:43.908930387Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Sep 5 00:36:43.908965 containerd[1560]: time="2025-09-05T00:36:43.908960053Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Sep 5 00:36:43.909092 containerd[1560]: time="2025-09-05T00:36:43.908975692Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Sep 5 00:36:43.909092 containerd[1560]: time="2025-09-05T00:36:43.908986503Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Sep 5 00:36:43.909279 containerd[1560]: time="2025-09-05T00:36:43.909144739Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Sep 5 00:36:43.909493 containerd[1560]: time="2025-09-05T00:36:43.909450603Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Sep 5 00:36:43.909552 containerd[1560]: time="2025-09-05T00:36:43.909493724Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Sep 5 00:36:43.909552 containerd[1560]: time="2025-09-05T00:36:43.909507780Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Sep 5 00:36:43.909552 containerd[1560]: time="2025-09-05T00:36:43.909548016Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Sep 5 00:36:43.909808 containerd[1560]: time="2025-09-05T00:36:43.909787024Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Sep 5 00:36:43.909922 containerd[1560]: time="2025-09-05T00:36:43.909881160Z" level=info msg="metadata content store policy set" policy=shared
Sep 5 00:36:43.915999 containerd[1560]: time="2025-09-05T00:36:43.915952376Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Sep 5 00:36:43.916116 containerd[1560]: time="2025-09-05T00:36:43.916069105Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Sep 5 00:36:43.916915 containerd[1560]: time="2025-09-05T00:36:43.916186836Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Sep 5 00:36:43.916915 containerd[1560]: time="2025-09-05T00:36:43.916206703Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Sep 5 00:36:43.916915 containerd[1560]: time="2025-09-05T00:36:43.916218615Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Sep 5 00:36:43.916915 containerd[1560]: time="2025-09-05T00:36:43.916228394Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Sep 5 00:36:43.916915 containerd[1560]: time="2025-09-05T00:36:43.916239264Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Sep 5 00:36:43.916915 containerd[1560]: time="2025-09-05T00:36:43.916251637Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Sep 5 00:36:43.916915 containerd[1560]: time="2025-09-05T00:36:43.916263550Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Sep 5 00:36:43.916915 containerd[1560]: time="2025-09-05T00:36:43.916273558Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Sep 5 00:36:43.916915 containerd[1560]: time="2025-09-05T00:36:43.916282455Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Sep 5 00:36:43.916915 containerd[1560]: time="2025-09-05T00:36:43.916294488Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Sep 5 00:36:43.916915 containerd[1560]: time="2025-09-05T00:36:43.916408241Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Sep 5 00:36:43.916915 containerd[1560]: time="2025-09-05T00:36:43.916425934Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Sep 5 00:36:43.916915 containerd[1560]: time="2025-09-05T00:36:43.916475457Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Sep 5 00:36:43.916915 containerd[1560]: time="2025-09-05T00:36:43.916487760Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Sep 5 00:36:43.917196 containerd[1560]: time="2025-09-05T00:36:43.916497328Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Sep 5 00:36:43.917196 containerd[1560]: time="2025-09-05T00:36:43.916513488Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Sep 5 00:36:43.917196 containerd[1560]: time="2025-09-05T00:36:43.916532795Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Sep 5 00:36:43.917196 containerd[1560]: time="2025-09-05T00:36:43.916543775Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Sep 5 00:36:43.917196 containerd[1560]: time="2025-09-05T00:36:43.916554776Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Sep 5 00:36:43.917196 containerd[1560]: time="2025-09-05T00:36:43.916565025Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Sep 5 00:36:43.917196 containerd[1560]: time="2025-09-05T00:36:43.916574884Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Sep 5 00:36:43.917196 containerd[1560]: time="2025-09-05T00:36:43.916629436Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Sep 5 00:36:43.917196 containerd[1560]: time="2025-09-05T00:36:43.916640917Z" level=info msg="Start snapshots syncer"
Sep 5 00:36:43.917196 containerd[1560]: time="2025-09-05T00:36:43.916662859Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Sep 5 00:36:43.917409 containerd[1560]: time="2025-09-05T00:36:43.916861020Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Sep 5 00:36:43.917625 containerd[1560]: time="2025-09-05T00:36:43.917605677Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Sep 5 00:36:43.917787 containerd[1560]: time="2025-09-05T00:36:43.917736151Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Sep 5 00:36:43.917989 containerd[1560]: time="2025-09-05T00:36:43.917970401Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Sep 5 00:36:43.918063 containerd[1560]: time="2025-09-05T00:36:43.918041253Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Sep 5 00:36:43.918136 containerd[1560]: time="2025-09-05T00:36:43.918121945Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Sep 5 00:36:43.918185 containerd[1560]: time="2025-09-05T00:36:43.918174323Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Sep 5 00:36:43.918260 containerd[1560]: time="2025-09-05T00:36:43.918245346Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Sep 5 00:36:43.918321 containerd[1560]: time="2025-09-05T00:36:43.918306822Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Sep 5 00:36:43.918379 containerd[1560]: time="2025-09-05T00:36:43.918365441Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Sep 5 00:36:43.918449 containerd[1560]: time="2025-09-05T00:36:43.918434962Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Sep 5 00:36:43.918530 containerd[1560]: time="2025-09-05T00:36:43.918516855Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Sep 5 00:36:43.918582 containerd[1560]: time="2025-09-05T00:36:43.918570676Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Sep 5 00:36:43.918657 containerd[1560]: time="2025-09-05T00:36:43.918644264Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Sep 5 00:36:43.918776 containerd[1560]: time="2025-09-05T00:36:43.918758388Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Sep 5 00:36:43.918826 containerd[1560]: time="2025-09-05T00:36:43.918814443Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Sep 5 00:36:43.918875 containerd[1560]: time="2025-09-05T00:36:43.918863115Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Sep 5 00:36:43.918941 containerd[1560]: time="2025-09-05T00:36:43.918927796Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Sep 5 00:36:43.918991 containerd[1560]: time="2025-09-05T00:36:43.918979373Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Sep 5 00:36:43.919039 containerd[1560]: time="2025-09-05T00:36:43.919028104Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Sep 5 00:36:43.919124 containerd[1560]: time="2025-09-05T00:36:43.919108505Z" level=info msg="runtime interface created"
Sep 5 00:36:43.919906 containerd[1560]: time="2025-09-05T00:36:43.919159310Z" level=info msg="created NRI interface"
Sep 5 00:36:43.919906 containerd[1560]: time="2025-09-05T00:36:43.919177314Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Sep 5 00:36:43.919906 containerd[1560]: time="2025-09-05T00:36:43.919189787Z" level=info msg="Connect containerd service"
Sep 5 00:36:43.919906 containerd[1560]: time="2025-09-05T00:36:43.919211448Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Sep 5 00:36:43.920198 containerd[1560]: time="2025-09-05T00:36:43.920178071Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 5 00:36:43.940605 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Sep 5 00:36:43.942282 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Sep 5 00:36:43.944442 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 5 00:36:43.957930 dbus-daemon[1517]: [system] Successfully activated service 'org.freedesktop.systemd1'
Sep 5 00:36:43.965978 systemd[1]: Started update-engine.service - Update Engine.
Sep 5 00:36:43.971159 systemd[1]: Starting issuegen.service - Generate /run/issue...
Sep 5 00:36:43.972659 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Sep 5 00:36:43.972851 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Sep 5 00:36:43.972985 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Sep 5 00:36:43.974355 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Sep 5 00:36:43.974478 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Sep 5 00:36:43.977416 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Sep 5 00:36:44.006784 systemd[1]: issuegen.service: Deactivated successfully.
Sep 5 00:36:44.007206 systemd[1]: Finished issuegen.service - Generate /run/issue.
Sep 5 00:36:44.011890 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 5 00:36:44.032776 containerd[1560]: time="2025-09-05T00:36:44.032717541Z" level=info msg="Start subscribing containerd event" Sep 5 00:36:44.032952 containerd[1560]: time="2025-09-05T00:36:44.032782034Z" level=info msg="Start recovering state" Sep 5 00:36:44.033960 containerd[1560]: time="2025-09-05T00:36:44.033307746Z" level=info msg="Start event monitor" Sep 5 00:36:44.033960 containerd[1560]: time="2025-09-05T00:36:44.033399257Z" level=info msg="Start cni network conf syncer for default" Sep 5 00:36:44.033960 containerd[1560]: time="2025-09-05T00:36:44.033513243Z" level=info msg="Start streaming server" Sep 5 00:36:44.033960 containerd[1560]: time="2025-09-05T00:36:44.033530019Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Sep 5 00:36:44.033960 containerd[1560]: time="2025-09-05T00:36:44.033539899Z" level=info msg="runtime interface starting up..." Sep 5 00:36:44.033960 containerd[1560]: time="2025-09-05T00:36:44.033547157Z" level=info msg="starting plugins..." Sep 5 00:36:44.033960 containerd[1560]: time="2025-09-05T00:36:44.033913292Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 5 00:36:44.034267 containerd[1560]: time="2025-09-05T00:36:44.033989307Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 5 00:36:44.036261 containerd[1560]: time="2025-09-05T00:36:44.035066808Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Sep 5 00:36:44.037167 systemd[1]: Started containerd.service - containerd container runtime. 
Sep 5 00:36:44.038757 containerd[1560]: time="2025-09-05T00:36:44.037351553Z" level=info msg="containerd successfully booted in 0.145228s" Sep 5 00:36:44.039130 locksmithd[1625]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 5 00:36:44.042538 tar[1558]: linux-amd64/LICENSE Sep 5 00:36:44.042639 tar[1558]: linux-amd64/README.md Sep 5 00:36:44.049253 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 5 00:36:44.067650 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 5 00:36:44.070505 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Sep 5 00:36:44.072232 systemd[1]: Reached target getty.target - Login Prompts. Sep 5 00:36:44.074809 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 5 00:36:44.985035 systemd-networkd[1469]: eth0: Gained IPv6LL Sep 5 00:36:44.989631 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 5 00:36:44.993696 systemd[1]: Reached target network-online.target - Network is Online. Sep 5 00:36:44.997616 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Sep 5 00:36:45.000646 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 00:36:45.003528 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 5 00:36:45.049365 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 5 00:36:45.052063 systemd[1]: coreos-metadata.service: Deactivated successfully. Sep 5 00:36:45.052460 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Sep 5 00:36:45.055609 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 5 00:36:45.812275 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 00:36:45.814366 systemd[1]: Reached target multi-user.target - Multi-User System. 
Sep 5 00:36:45.816860 systemd[1]: Startup finished in 3.491s (kernel) + 7.855s (initrd) + 5.367s (userspace) = 16.714s. Sep 5 00:36:45.845369 (kubelet)[1670]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 5 00:36:46.336747 kubelet[1670]: E0905 00:36:46.336594 1670 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 5 00:36:46.341859 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 5 00:36:46.342143 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 5 00:36:46.342636 systemd[1]: kubelet.service: Consumed 1.068s CPU time, 264.9M memory peak. Sep 5 00:36:46.573742 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 5 00:36:46.575024 systemd[1]: Started sshd@0-10.0.0.115:22-10.0.0.1:48012.service - OpenSSH per-connection server daemon (10.0.0.1:48012). Sep 5 00:36:46.631716 sshd[1683]: Accepted publickey for core from 10.0.0.1 port 48012 ssh2: RSA SHA256:FTPBIqxhuV7uWZ1wDxThX13wKyZS1sKik/rNT688yZo Sep 5 00:36:46.633644 sshd-session[1683]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:36:46.641500 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 5 00:36:46.642720 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 5 00:36:46.650557 systemd-logind[1537]: New session 1 of user core. Sep 5 00:36:46.669710 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 5 00:36:46.674036 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Sep 5 00:36:46.701665 (systemd)[1688]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 5 00:36:46.704746 systemd-logind[1537]: New session c1 of user core. Sep 5 00:36:46.868219 systemd[1688]: Queued start job for default target default.target. Sep 5 00:36:46.889412 systemd[1688]: Created slice app.slice - User Application Slice. Sep 5 00:36:46.889448 systemd[1688]: Reached target paths.target - Paths. Sep 5 00:36:46.889504 systemd[1688]: Reached target timers.target - Timers. Sep 5 00:36:46.891200 systemd[1688]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 5 00:36:46.904643 systemd[1688]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 5 00:36:46.904830 systemd[1688]: Reached target sockets.target - Sockets. Sep 5 00:36:46.904896 systemd[1688]: Reached target basic.target - Basic System. Sep 5 00:36:46.904997 systemd[1688]: Reached target default.target - Main User Target. Sep 5 00:36:46.905047 systemd[1688]: Startup finished in 192ms. Sep 5 00:36:46.905110 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 5 00:36:46.906630 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 5 00:36:46.973348 systemd[1]: Started sshd@1-10.0.0.115:22-10.0.0.1:48020.service - OpenSSH per-connection server daemon (10.0.0.1:48020). Sep 5 00:36:47.028152 sshd[1699]: Accepted publickey for core from 10.0.0.1 port 48020 ssh2: RSA SHA256:FTPBIqxhuV7uWZ1wDxThX13wKyZS1sKik/rNT688yZo Sep 5 00:36:47.030465 sshd-session[1699]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:36:47.037250 systemd-logind[1537]: New session 2 of user core. Sep 5 00:36:47.048409 systemd[1]: Started session-2.scope - Session 2 of User core. 
Sep 5 00:36:47.106564 sshd[1702]: Connection closed by 10.0.0.1 port 48020 Sep 5 00:36:47.107120 sshd-session[1699]: pam_unix(sshd:session): session closed for user core Sep 5 00:36:47.122143 systemd[1]: sshd@1-10.0.0.115:22-10.0.0.1:48020.service: Deactivated successfully. Sep 5 00:36:47.124508 systemd[1]: session-2.scope: Deactivated successfully. Sep 5 00:36:47.125555 systemd-logind[1537]: Session 2 logged out. Waiting for processes to exit. Sep 5 00:36:47.131878 systemd[1]: Started sshd@2-10.0.0.115:22-10.0.0.1:48034.service - OpenSSH per-connection server daemon (10.0.0.1:48034). Sep 5 00:36:47.132849 systemd-logind[1537]: Removed session 2. Sep 5 00:36:48.493860 sshd[1708]: Accepted publickey for core from 10.0.0.1 port 48034 ssh2: RSA SHA256:FTPBIqxhuV7uWZ1wDxThX13wKyZS1sKik/rNT688yZo Sep 5 00:36:48.495943 sshd-session[1708]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:36:48.501589 systemd-logind[1537]: New session 3 of user core. Sep 5 00:36:48.516956 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 5 00:36:48.573645 sshd[1711]: Connection closed by 10.0.0.1 port 48034 Sep 5 00:36:48.574044 sshd-session[1708]: pam_unix(sshd:session): session closed for user core Sep 5 00:36:48.594377 systemd[1]: sshd@2-10.0.0.115:22-10.0.0.1:48034.service: Deactivated successfully. Sep 5 00:36:48.596573 systemd[1]: session-3.scope: Deactivated successfully. Sep 5 00:36:48.597399 systemd-logind[1537]: Session 3 logged out. Waiting for processes to exit. Sep 5 00:36:48.601381 systemd[1]: Started sshd@3-10.0.0.115:22-10.0.0.1:48070.service - OpenSSH per-connection server daemon (10.0.0.1:48070). Sep 5 00:36:48.602885 systemd-logind[1537]: Removed session 3. 
Sep 5 00:36:48.664946 sshd[1717]: Accepted publickey for core from 10.0.0.1 port 48070 ssh2: RSA SHA256:FTPBIqxhuV7uWZ1wDxThX13wKyZS1sKik/rNT688yZo Sep 5 00:36:48.666863 sshd-session[1717]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:36:48.672402 systemd-logind[1537]: New session 4 of user core. Sep 5 00:36:48.684078 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 5 00:36:48.742709 sshd[1720]: Connection closed by 10.0.0.1 port 48070 Sep 5 00:36:48.743378 sshd-session[1717]: pam_unix(sshd:session): session closed for user core Sep 5 00:36:48.759256 systemd[1]: sshd@3-10.0.0.115:22-10.0.0.1:48070.service: Deactivated successfully. Sep 5 00:36:48.761606 systemd[1]: session-4.scope: Deactivated successfully. Sep 5 00:36:48.762503 systemd-logind[1537]: Session 4 logged out. Waiting for processes to exit. Sep 5 00:36:48.767067 systemd[1]: Started sshd@4-10.0.0.115:22-10.0.0.1:48076.service - OpenSSH per-connection server daemon (10.0.0.1:48076). Sep 5 00:36:48.767888 systemd-logind[1537]: Removed session 4. Sep 5 00:36:48.821151 sshd[1726]: Accepted publickey for core from 10.0.0.1 port 48076 ssh2: RSA SHA256:FTPBIqxhuV7uWZ1wDxThX13wKyZS1sKik/rNT688yZo Sep 5 00:36:48.823011 sshd-session[1726]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:36:48.828352 systemd-logind[1537]: New session 5 of user core. Sep 5 00:36:48.843126 systemd[1]: Started session-5.scope - Session 5 of User core. 
Sep 5 00:36:48.904798 sudo[1731]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 5 00:36:48.905197 sudo[1731]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 5 00:36:48.928664 sudo[1731]: pam_unix(sudo:session): session closed for user root Sep 5 00:36:48.930528 sshd[1730]: Connection closed by 10.0.0.1 port 48076 Sep 5 00:36:48.930982 sshd-session[1726]: pam_unix(sshd:session): session closed for user core Sep 5 00:36:48.944567 systemd[1]: sshd@4-10.0.0.115:22-10.0.0.1:48076.service: Deactivated successfully. Sep 5 00:36:48.946733 systemd[1]: session-5.scope: Deactivated successfully. Sep 5 00:36:48.947691 systemd-logind[1537]: Session 5 logged out. Waiting for processes to exit. Sep 5 00:36:48.951110 systemd[1]: Started sshd@5-10.0.0.115:22-10.0.0.1:48086.service - OpenSSH per-connection server daemon (10.0.0.1:48086). Sep 5 00:36:48.951707 systemd-logind[1537]: Removed session 5. Sep 5 00:36:49.009648 sshd[1737]: Accepted publickey for core from 10.0.0.1 port 48086 ssh2: RSA SHA256:FTPBIqxhuV7uWZ1wDxThX13wKyZS1sKik/rNT688yZo Sep 5 00:36:49.011262 sshd-session[1737]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:36:49.016087 systemd-logind[1537]: New session 6 of user core. Sep 5 00:36:49.026362 systemd[1]: Started session-6.scope - Session 6 of User core. 
Sep 5 00:36:49.083617 sudo[1742]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 5 00:36:49.083999 sudo[1742]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 5 00:36:49.195169 sudo[1742]: pam_unix(sudo:session): session closed for user root Sep 5 00:36:49.203992 sudo[1741]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Sep 5 00:36:49.204403 sudo[1741]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 5 00:36:49.216407 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 5 00:36:49.278296 augenrules[1764]: No rules Sep 5 00:36:49.280170 systemd[1]: audit-rules.service: Deactivated successfully. Sep 5 00:36:49.280472 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 5 00:36:49.281794 sudo[1741]: pam_unix(sudo:session): session closed for user root Sep 5 00:36:49.283554 sshd[1740]: Connection closed by 10.0.0.1 port 48086 Sep 5 00:36:49.284011 sshd-session[1737]: pam_unix(sshd:session): session closed for user core Sep 5 00:36:49.292783 systemd[1]: sshd@5-10.0.0.115:22-10.0.0.1:48086.service: Deactivated successfully. Sep 5 00:36:49.294897 systemd[1]: session-6.scope: Deactivated successfully. Sep 5 00:36:49.295896 systemd-logind[1537]: Session 6 logged out. Waiting for processes to exit. Sep 5 00:36:49.298731 systemd[1]: Started sshd@6-10.0.0.115:22-10.0.0.1:48090.service - OpenSSH per-connection server daemon (10.0.0.1:48090). Sep 5 00:36:49.299429 systemd-logind[1537]: Removed session 6. Sep 5 00:36:49.350183 sshd[1773]: Accepted publickey for core from 10.0.0.1 port 48090 ssh2: RSA SHA256:FTPBIqxhuV7uWZ1wDxThX13wKyZS1sKik/rNT688yZo Sep 5 00:36:49.351817 sshd-session[1773]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:36:49.357424 systemd-logind[1537]: New session 7 of user core. 
Sep 5 00:36:49.371269 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 5 00:36:49.427869 sudo[1777]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 5 00:36:49.428203 sudo[1777]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 5 00:36:50.287483 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 5 00:36:50.305505 (dockerd)[1797]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 5 00:36:50.830133 dockerd[1797]: time="2025-09-05T00:36:50.829998841Z" level=info msg="Starting up" Sep 5 00:36:50.831564 dockerd[1797]: time="2025-09-05T00:36:50.831508507Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Sep 5 00:36:50.904732 dockerd[1797]: time="2025-09-05T00:36:50.904638404Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Sep 5 00:36:51.254457 dockerd[1797]: time="2025-09-05T00:36:51.254215219Z" level=info msg="Loading containers: start." Sep 5 00:36:51.268941 kernel: Initializing XFRM netlink socket Sep 5 00:36:51.965052 systemd-networkd[1469]: docker0: Link UP Sep 5 00:36:51.977425 dockerd[1797]: time="2025-09-05T00:36:51.977342846Z" level=info msg="Loading containers: done." 
Sep 5 00:36:52.057739 dockerd[1797]: time="2025-09-05T00:36:52.057654937Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 5 00:36:52.058032 dockerd[1797]: time="2025-09-05T00:36:52.057789855Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Sep 5 00:36:52.058032 dockerd[1797]: time="2025-09-05T00:36:52.057944089Z" level=info msg="Initializing buildkit" Sep 5 00:36:52.102005 dockerd[1797]: time="2025-09-05T00:36:52.101922989Z" level=info msg="Completed buildkit initialization" Sep 5 00:36:52.108428 dockerd[1797]: time="2025-09-05T00:36:52.108370184Z" level=info msg="Daemon has completed initialization" Sep 5 00:36:52.108560 dockerd[1797]: time="2025-09-05T00:36:52.108488467Z" level=info msg="API listen on /run/docker.sock" Sep 5 00:36:52.108759 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 5 00:36:53.150613 containerd[1560]: time="2025-09-05T00:36:53.150550415Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.12\"" Sep 5 00:36:54.578743 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2147809153.mount: Deactivated successfully. 
Sep 5 00:36:55.949932 containerd[1560]: time="2025-09-05T00:36:55.949673168Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:36:55.951082 containerd[1560]: time="2025-09-05T00:36:55.951042893Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.12: active requests=0, bytes read=28079631" Sep 5 00:36:55.953069 containerd[1560]: time="2025-09-05T00:36:55.952967635Z" level=info msg="ImageCreate event name:\"sha256:b1963c5b49c1722b8f408deaf83aafca7f48f47fed0ed14e5c10e93cc55974a7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:36:55.957213 containerd[1560]: time="2025-09-05T00:36:55.957144749Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e9011c3bee8c06ecabd7816e119dca4e448c92f7a78acd891de3d2db1dc6c234\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:36:55.958332 containerd[1560]: time="2025-09-05T00:36:55.958261957Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.12\" with image id \"sha256:b1963c5b49c1722b8f408deaf83aafca7f48f47fed0ed14e5c10e93cc55974a7\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e9011c3bee8c06ecabd7816e119dca4e448c92f7a78acd891de3d2db1dc6c234\", size \"28076431\" in 2.807645577s" Sep 5 00:36:55.958396 containerd[1560]: time="2025-09-05T00:36:55.958333296Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.12\" returns image reference \"sha256:b1963c5b49c1722b8f408deaf83aafca7f48f47fed0ed14e5c10e93cc55974a7\"" Sep 5 00:36:55.959234 containerd[1560]: time="2025-09-05T00:36:55.959189026Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.12\"" Sep 5 00:36:56.592544 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. 
Sep 5 00:36:56.594476 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 00:36:56.820764 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 00:36:56.825074 (kubelet)[2079]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 5 00:36:56.959278 kubelet[2079]: E0905 00:36:56.957852 2079 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 5 00:36:56.965193 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 5 00:36:56.965430 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 5 00:36:56.965855 systemd[1]: kubelet.service: Consumed 239ms CPU time, 114.4M memory peak. 
Sep 5 00:36:57.628043 containerd[1560]: time="2025-09-05T00:36:57.627969839Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:36:57.628876 containerd[1560]: time="2025-09-05T00:36:57.628818321Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.12: active requests=0, bytes read=24714681" Sep 5 00:36:57.630117 containerd[1560]: time="2025-09-05T00:36:57.630078216Z" level=info msg="ImageCreate event name:\"sha256:200c1a99a6f2b9d3b0a6e9b7362663513589341e0e58bc3b953a373efa735dfd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:36:57.633135 containerd[1560]: time="2025-09-05T00:36:57.633088127Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:d2862f94d87320267fddbd55db26556a267aa802e51d6b60f25786b4c428afc8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:36:57.633969 containerd[1560]: time="2025-09-05T00:36:57.633924673Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.12\" with image id \"sha256:200c1a99a6f2b9d3b0a6e9b7362663513589341e0e58bc3b953a373efa735dfd\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:d2862f94d87320267fddbd55db26556a267aa802e51d6b60f25786b4c428afc8\", size \"26317875\" in 1.674670625s" Sep 5 00:36:57.633969 containerd[1560]: time="2025-09-05T00:36:57.633955828Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.12\" returns image reference \"sha256:200c1a99a6f2b9d3b0a6e9b7362663513589341e0e58bc3b953a373efa735dfd\"" Sep 5 00:36:57.634798 containerd[1560]: time="2025-09-05T00:36:57.634511913Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.12\"" Sep 5 00:36:59.178955 containerd[1560]: time="2025-09-05T00:36:59.178867983Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/kube-scheduler:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:36:59.179971 containerd[1560]: time="2025-09-05T00:36:59.179906613Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.12: active requests=0, bytes read=18782427" Sep 5 00:36:59.181103 containerd[1560]: time="2025-09-05T00:36:59.181066013Z" level=info msg="ImageCreate event name:\"sha256:bcdd9599681a9460a5539177a986dbdaf880ac56eeb117ab94adb8f37889efba\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:36:59.183611 containerd[1560]: time="2025-09-05T00:36:59.183576052Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:152943b7e30244f4415fd0a5860a2dccd91660fe983d30a28a10edb0cc8f6756\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:36:59.184782 containerd[1560]: time="2025-09-05T00:36:59.184729857Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.12\" with image id \"sha256:bcdd9599681a9460a5539177a986dbdaf880ac56eeb117ab94adb8f37889efba\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:152943b7e30244f4415fd0a5860a2dccd91660fe983d30a28a10edb0cc8f6756\", size \"20385639\" in 1.550180463s" Sep 5 00:36:59.184782 containerd[1560]: time="2025-09-05T00:36:59.184764624Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.12\" returns image reference \"sha256:bcdd9599681a9460a5539177a986dbdaf880ac56eeb117ab94adb8f37889efba\"" Sep 5 00:36:59.185379 containerd[1560]: time="2025-09-05T00:36:59.185325407Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.12\"" Sep 5 00:37:00.176579 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3322090950.mount: Deactivated successfully. 
Sep 5 00:37:00.862208 containerd[1560]: time="2025-09-05T00:37:00.862115363Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:37:00.862978 containerd[1560]: time="2025-09-05T00:37:00.862944503Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.12: active requests=0, bytes read=30384255" Sep 5 00:37:00.864237 containerd[1560]: time="2025-09-05T00:37:00.864200369Z" level=info msg="ImageCreate event name:\"sha256:507cc52f5f78c0cff25e904c76c18e6bfc90982e9cc2aa4dcb19033f21c3f679\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:37:00.866062 containerd[1560]: time="2025-09-05T00:37:00.866014568Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:90aa6b5f4065937521ff8438bc705317485d0be3f8b00a07145e697d92cc2cc6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:37:00.866588 containerd[1560]: time="2025-09-05T00:37:00.866529746Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.12\" with image id \"sha256:507cc52f5f78c0cff25e904c76c18e6bfc90982e9cc2aa4dcb19033f21c3f679\", repo tag \"registry.k8s.io/kube-proxy:v1.31.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:90aa6b5f4065937521ff8438bc705317485d0be3f8b00a07145e697d92cc2cc6\", size \"30383274\" in 1.68115702s" Sep 5 00:37:00.866588 containerd[1560]: time="2025-09-05T00:37:00.866574239Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.12\" returns image reference \"sha256:507cc52f5f78c0cff25e904c76c18e6bfc90982e9cc2aa4dcb19033f21c3f679\"" Sep 5 00:37:00.867138 containerd[1560]: time="2025-09-05T00:37:00.867112663Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 5 00:37:01.484656 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount647453718.mount: Deactivated successfully. 
Sep 5 00:37:02.570603 containerd[1560]: time="2025-09-05T00:37:02.570507636Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:37:02.571809 containerd[1560]: time="2025-09-05T00:37:02.571766998Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Sep 5 00:37:02.574529 containerd[1560]: time="2025-09-05T00:37:02.574476975Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:37:02.577784 containerd[1560]: time="2025-09-05T00:37:02.577719289Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:37:02.579056 containerd[1560]: time="2025-09-05T00:37:02.579024352Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.711882186s" Sep 5 00:37:02.579056 containerd[1560]: time="2025-09-05T00:37:02.579054672Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Sep 5 00:37:02.579708 containerd[1560]: time="2025-09-05T00:37:02.579684408Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 5 00:37:03.552825 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1216483208.mount: Deactivated successfully. 
Sep 5 00:37:03.559267 containerd[1560]: time="2025-09-05T00:37:03.559217372Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 5 00:37:03.560180 containerd[1560]: time="2025-09-05T00:37:03.560121697Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
Sep 5 00:37:03.561629 containerd[1560]: time="2025-09-05T00:37:03.561589650Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 5 00:37:03.564164 containerd[1560]: time="2025-09-05T00:37:03.564094334Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 5 00:37:03.564879 containerd[1560]: time="2025-09-05T00:37:03.564802450Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 985.085929ms"
Sep 5 00:37:03.564879 containerd[1560]: time="2025-09-05T00:37:03.564854603Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Sep 5 00:37:03.566206 containerd[1560]: time="2025-09-05T00:37:03.566171701Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
Sep 5 00:37:04.201868 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3484299127.mount: Deactivated successfully.
Sep 5 00:37:06.818219 containerd[1560]: time="2025-09-05T00:37:06.818150077Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 00:37:06.819158 containerd[1560]: time="2025-09-05T00:37:06.819133162Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56910709"
Sep 5 00:37:06.820462 containerd[1560]: time="2025-09-05T00:37:06.820424149Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 00:37:06.823474 containerd[1560]: time="2025-09-05T00:37:06.823411324Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 00:37:06.824445 containerd[1560]: time="2025-09-05T00:37:06.824379215Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 3.258177379s"
Sep 5 00:37:06.824445 containerd[1560]: time="2025-09-05T00:37:06.824418552Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\""
Sep 5 00:37:07.216368 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Sep 5 00:37:07.219250 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 5 00:37:07.449301 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 5 00:37:07.467441 (kubelet)[2231]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 5 00:37:07.646211 kubelet[2231]: E0905 00:37:07.646121 2231 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 5 00:37:07.651579 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 5 00:37:07.651777 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 5 00:37:07.652346 systemd[1]: kubelet.service: Consumed 373ms CPU time, 111M memory peak.
Sep 5 00:37:09.994768 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 5 00:37:09.995017 systemd[1]: kubelet.service: Consumed 373ms CPU time, 111M memory peak.
Sep 5 00:37:09.997703 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 5 00:37:10.028822 systemd[1]: Reload requested from client PID 2261 ('systemctl') (unit session-7.scope)...
Sep 5 00:37:10.028851 systemd[1]: Reloading...
Sep 5 00:37:10.128167 zram_generator::config[2309]: No configuration found.
Sep 5 00:37:10.428535 systemd[1]: Reloading finished in 399 ms.
Sep 5 00:37:10.496819 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Sep 5 00:37:10.497001 systemd[1]: kubelet.service: Failed with result 'signal'.
Sep 5 00:37:10.497453 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 5 00:37:10.497523 systemd[1]: kubelet.service: Consumed 172ms CPU time, 98.2M memory peak.
Sep 5 00:37:10.499876 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 5 00:37:10.685942 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 5 00:37:10.700327 (kubelet)[2351]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 5 00:37:10.803368 kubelet[2351]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 5 00:37:10.803368 kubelet[2351]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Sep 5 00:37:10.803368 kubelet[2351]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 5 00:37:10.804009 kubelet[2351]: I0905 00:37:10.803419 2351 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 5 00:37:11.249603 kubelet[2351]: I0905 00:37:11.249532 2351 server.go:491] "Kubelet version" kubeletVersion="v1.31.8"
Sep 5 00:37:11.249603 kubelet[2351]: I0905 00:37:11.249581 2351 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 5 00:37:11.249976 kubelet[2351]: I0905 00:37:11.249952 2351 server.go:934] "Client rotation is on, will bootstrap in background"
Sep 5 00:37:11.269279 kubelet[2351]: I0905 00:37:11.269220 2351 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 5 00:37:11.269482 kubelet[2351]: E0905 00:37:11.269417 2351 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.115:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.115:6443: connect: connection refused" logger="UnhandledError"
Sep 5 00:37:11.277512 kubelet[2351]: I0905 00:37:11.277476 2351 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Sep 5 00:37:11.284799 kubelet[2351]: I0905 00:37:11.284744 2351 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 5 00:37:11.285467 kubelet[2351]: I0905 00:37:11.285430 2351 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Sep 5 00:37:11.285673 kubelet[2351]: I0905 00:37:11.285622 2351 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 5 00:37:11.285908 kubelet[2351]: I0905 00:37:11.285664 2351 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 5 00:37:11.286100 kubelet[2351]: I0905 00:37:11.285940 2351 topology_manager.go:138] "Creating topology manager with none policy"
Sep 5 00:37:11.286100 kubelet[2351]: I0905 00:37:11.285954 2351 container_manager_linux.go:300] "Creating device plugin manager"
Sep 5 00:37:11.286159 kubelet[2351]: I0905 00:37:11.286116 2351 state_mem.go:36] "Initialized new in-memory state store"
Sep 5 00:37:11.288618 kubelet[2351]: I0905 00:37:11.288584 2351 kubelet.go:408] "Attempting to sync node with API server"
Sep 5 00:37:11.288618 kubelet[2351]: I0905 00:37:11.288613 2351 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 5 00:37:11.288691 kubelet[2351]: I0905 00:37:11.288655 2351 kubelet.go:314] "Adding apiserver pod source"
Sep 5 00:37:11.288691 kubelet[2351]: I0905 00:37:11.288678 2351 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 5 00:37:11.291570 kubelet[2351]: I0905 00:37:11.291511 2351 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1"
Sep 5 00:37:11.292005 kubelet[2351]: I0905 00:37:11.291975 2351 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Sep 5 00:37:11.293284 kubelet[2351]: W0905 00:37:11.292445 2351 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.115:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.115:6443: connect: connection refused
Sep 5 00:37:11.293284 kubelet[2351]: E0905 00:37:11.292501 2351 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.115:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.115:6443: connect: connection refused" logger="UnhandledError"
Sep 5 00:37:11.293284 kubelet[2351]: W0905 00:37:11.292748 2351 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Sep 5 00:37:11.293284 kubelet[2351]: W0905 00:37:11.292851 2351 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.115:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.115:6443: connect: connection refused
Sep 5 00:37:11.293284 kubelet[2351]: E0905 00:37:11.292920 2351 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.115:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.115:6443: connect: connection refused" logger="UnhandledError"
Sep 5 00:37:11.295537 kubelet[2351]: I0905 00:37:11.295519 2351 server.go:1274] "Started kubelet"
Sep 5 00:37:11.296655 kubelet[2351]: I0905 00:37:11.296607 2351 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 5 00:37:11.297247 kubelet[2351]: I0905 00:37:11.297230 2351 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 5 00:37:11.297722 kubelet[2351]: I0905 00:37:11.297682 2351 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 5 00:37:11.298722 kubelet[2351]: I0905 00:37:11.297230 2351 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 5 00:37:11.298819 kubelet[2351]: I0905 00:37:11.298795 2351 volume_manager.go:289] "Starting Kubelet Volume Manager"
Sep 5 00:37:11.298986 kubelet[2351]: I0905 00:37:11.298961 2351 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Sep 5 00:37:11.299020 kubelet[2351]: I0905 00:37:11.299016 2351 reconciler.go:26] "Reconciler: start to sync state"
Sep 5 00:37:11.299355 kubelet[2351]: W0905 00:37:11.299316 2351 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.115:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.115:6443: connect: connection refused
Sep 5 00:37:11.299402 kubelet[2351]: E0905 00:37:11.299363 2351 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.115:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.115:6443: connect: connection refused" logger="UnhandledError"
Sep 5 00:37:11.299549 kubelet[2351]: I0905 00:37:11.299524 2351 factory.go:221] Registration of the systemd container factory successfully
Sep 5 00:37:11.299637 kubelet[2351]: I0905 00:37:11.299609 2351 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 5 00:37:11.300503 kubelet[2351]: E0905 00:37:11.300477 2351 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 5 00:37:11.300567 kubelet[2351]: E0905 00:37:11.300537 2351 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.115:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.115:6443: connect: connection refused" interval="200ms"
Sep 5 00:37:11.300708 kubelet[2351]: I0905 00:37:11.300688 2351 factory.go:221] Registration of the containerd container factory successfully
Sep 5 00:37:11.301849 kubelet[2351]: I0905 00:37:11.297248 2351 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Sep 5 00:37:11.302821 kubelet[2351]: I0905 00:37:11.302781 2351 server.go:449] "Adding debug handlers to kubelet server"
Sep 5 00:37:11.305376 kubelet[2351]: E0905 00:37:11.303034 2351 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.115:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.115:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18623bde77230da9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-05 00:37:11.295479209 +0000 UTC m=+0.585228646,LastTimestamp:2025-09-05 00:37:11.295479209 +0000 UTC m=+0.585228646,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Sep 5 00:37:11.307996 kubelet[2351]: E0905 00:37:11.307952 2351 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 5 00:37:11.314648 kubelet[2351]: I0905 00:37:11.314624 2351 cpu_manager.go:214] "Starting CPU manager" policy="none"
Sep 5 00:37:11.314780 kubelet[2351]: I0905 00:37:11.314766 2351 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Sep 5 00:37:11.314849 kubelet[2351]: I0905 00:37:11.314839 2351 state_mem.go:36] "Initialized new in-memory state store"
Sep 5 00:37:11.318844 kubelet[2351]: I0905 00:37:11.318789 2351 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Sep 5 00:37:11.320557 kubelet[2351]: I0905 00:37:11.320521 2351 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Sep 5 00:37:11.320557 kubelet[2351]: I0905 00:37:11.320544 2351 status_manager.go:217] "Starting to sync pod status with apiserver"
Sep 5 00:37:11.320645 kubelet[2351]: I0905 00:37:11.320568 2351 kubelet.go:2321] "Starting kubelet main sync loop"
Sep 5 00:37:11.320645 kubelet[2351]: E0905 00:37:11.320616 2351 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 5 00:37:11.401696 kubelet[2351]: E0905 00:37:11.401601 2351 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 5 00:37:11.420873 kubelet[2351]: E0905 00:37:11.420758 2351 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Sep 5 00:37:11.501794 kubelet[2351]: E0905 00:37:11.501622 2351 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.115:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.115:6443: connect: connection refused" interval="400ms"
Sep 5 00:37:11.501794 kubelet[2351]: E0905 00:37:11.501686 2351 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 5 00:37:11.562126 kubelet[2351]: W0905 00:37:11.562020 2351 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.115:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.115:6443: connect: connection refused
Sep 5 00:37:11.562126 kubelet[2351]: E0905 00:37:11.562109 2351 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.115:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.115:6443: connect: connection refused" logger="UnhandledError"
Sep 5 00:37:11.562868 kubelet[2351]: I0905 00:37:11.562613 2351 policy_none.go:49] "None policy: Start"
Sep 5 00:37:11.565973 kubelet[2351]: I0905 00:37:11.565915 2351 memory_manager.go:170] "Starting memorymanager" policy="None"
Sep 5 00:37:11.565973 kubelet[2351]: I0905 00:37:11.565969 2351 state_mem.go:35] "Initializing new in-memory state store"
Sep 5 00:37:11.575345 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Sep 5 00:37:11.598763 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Sep 5 00:37:11.601793 kubelet[2351]: E0905 00:37:11.601757 2351 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 5 00:37:11.602941 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Sep 5 00:37:11.621407 kubelet[2351]: E0905 00:37:11.621332 2351 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Sep 5 00:37:11.626473 kubelet[2351]: I0905 00:37:11.626397 2351 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Sep 5 00:37:11.626730 kubelet[2351]: I0905 00:37:11.626692 2351 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 5 00:37:11.626990 kubelet[2351]: I0905 00:37:11.626710 2351 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 5 00:37:11.627043 kubelet[2351]: I0905 00:37:11.626997 2351 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 5 00:37:11.629336 kubelet[2351]: E0905 00:37:11.629283 2351 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Sep 5 00:37:11.728643 kubelet[2351]: I0905 00:37:11.728586 2351 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Sep 5 00:37:11.729075 kubelet[2351]: E0905 00:37:11.729032 2351 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.115:6443/api/v1/nodes\": dial tcp 10.0.0.115:6443: connect: connection refused" node="localhost"
Sep 5 00:37:11.903412 kubelet[2351]: E0905 00:37:11.903241 2351 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.115:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.115:6443: connect: connection refused" interval="800ms"
Sep 5 00:37:11.930965 kubelet[2351]: I0905 00:37:11.930873 2351 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Sep 5 00:37:11.931476 kubelet[2351]: E0905 00:37:11.931416 2351 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.115:6443/api/v1/nodes\": dial tcp 10.0.0.115:6443: connect: connection refused" node="localhost"
Sep 5 00:37:12.033688 systemd[1]: Created slice kubepods-burstable-podfec3f691a145cb26ff55e4af388500b7.slice - libcontainer container kubepods-burstable-podfec3f691a145cb26ff55e4af388500b7.slice.
Sep 5 00:37:12.069281 systemd[1]: Created slice kubepods-burstable-poda68134af9481152752d0e47b4616a3fb.slice - libcontainer container kubepods-burstable-poda68134af9481152752d0e47b4616a3fb.slice.
Sep 5 00:37:12.073807 systemd[1]: Created slice kubepods-burstable-pod5dc878868de11c6196259ae42039f4ff.slice - libcontainer container kubepods-burstable-pod5dc878868de11c6196259ae42039f4ff.slice.
Sep 5 00:37:12.103283 kubelet[2351]: I0905 00:37:12.103201 2351 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a68134af9481152752d0e47b4616a3fb-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"a68134af9481152752d0e47b4616a3fb\") " pod="kube-system/kube-apiserver-localhost"
Sep 5 00:37:12.103283 kubelet[2351]: I0905 00:37:12.103267 2351 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a68134af9481152752d0e47b4616a3fb-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"a68134af9481152752d0e47b4616a3fb\") " pod="kube-system/kube-apiserver-localhost"
Sep 5 00:37:12.103519 kubelet[2351]: I0905 00:37:12.103300 2351 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost"
Sep 5 00:37:12.103519 kubelet[2351]: I0905 00:37:12.103340 2351 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost"
Sep 5 00:37:12.103519 kubelet[2351]: I0905 00:37:12.103390 2351 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5dc878868de11c6196259ae42039f4ff-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"5dc878868de11c6196259ae42039f4ff\") " pod="kube-system/kube-scheduler-localhost"
Sep 5 00:37:12.103519 kubelet[2351]: I0905 00:37:12.103421 2351 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a68134af9481152752d0e47b4616a3fb-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"a68134af9481152752d0e47b4616a3fb\") " pod="kube-system/kube-apiserver-localhost"
Sep 5 00:37:12.103519 kubelet[2351]: I0905 00:37:12.103456 2351 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost"
Sep 5 00:37:12.103667 kubelet[2351]: I0905 00:37:12.103482 2351 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost"
Sep 5 00:37:12.103667 kubelet[2351]: I0905 00:37:12.103502 2351 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost"
Sep 5 00:37:12.333493 kubelet[2351]: I0905 00:37:12.333342 2351 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Sep 5 00:37:12.333769 kubelet[2351]: E0905 00:37:12.333728 2351 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.115:6443/api/v1/nodes\": dial tcp 10.0.0.115:6443: connect: connection refused" node="localhost"
Sep 5 00:37:12.365176 kubelet[2351]: E0905 00:37:12.365148 2351 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:37:12.365804 containerd[1560]: time="2025-09-05T00:37:12.365758991Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:fec3f691a145cb26ff55e4af388500b7,Namespace:kube-system,Attempt:0,}"
Sep 5 00:37:12.371875 kubelet[2351]: E0905 00:37:12.371852 2351 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:37:12.372351 containerd[1560]: time="2025-09-05T00:37:12.372313487Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:a68134af9481152752d0e47b4616a3fb,Namespace:kube-system,Attempt:0,}"
Sep 5 00:37:12.376638 kubelet[2351]: E0905 00:37:12.376591 2351 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:37:12.377044 containerd[1560]: time="2025-09-05T00:37:12.376937308Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:5dc878868de11c6196259ae42039f4ff,Namespace:kube-system,Attempt:0,}"
Sep 5 00:37:12.466444 kubelet[2351]: W0905 00:37:12.466330 2351 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.115:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.115:6443: connect: connection refused
Sep 5 00:37:12.466444 kubelet[2351]: E0905 00:37:12.466430 2351 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.115:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.115:6443: connect: connection refused" logger="UnhandledError"
Sep 5 00:37:12.548397 kubelet[2351]: W0905 00:37:12.548281 2351 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.115:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.115:6443: connect: connection refused
Sep 5 00:37:12.548397 kubelet[2351]: E0905 00:37:12.548376 2351 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.115:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.115:6443: connect: connection refused" logger="UnhandledError"
Sep 5 00:37:12.600857 kubelet[2351]: W0905 00:37:12.600589 2351 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.115:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.115:6443: connect: connection refused
Sep 5 00:37:12.600857 kubelet[2351]: E0905 00:37:12.600712 2351 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.115:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.115:6443: connect: connection refused" logger="UnhandledError"
Sep 5 00:37:12.704357 kubelet[2351]: E0905 00:37:12.704284 2351 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.115:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.115:6443: connect: connection refused" interval="1.6s"
Sep 5 00:37:12.783632 kubelet[2351]: W0905 00:37:12.783518 2351 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.115:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.115:6443: connect: connection refused
Sep 5 00:37:12.783632 kubelet[2351]: E0905 00:37:12.783622 2351 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.115:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.115:6443: connect: connection refused" logger="UnhandledError"
Sep 5 00:37:12.844828 containerd[1560]: time="2025-09-05T00:37:12.844776022Z" level=info msg="connecting to shim 574d2a3c3da1a45266c6406156b28496c8af8bd9741130fea50e516783f9066f" address="unix:///run/containerd/s/fed34809a187023bb717cea622d236325e6395cf8b8f003861beecb46f2d0e57" namespace=k8s.io protocol=ttrpc version=3
Sep 5 00:37:12.871003 containerd[1560]: time="2025-09-05T00:37:12.870791900Z" level=info msg="connecting to shim 2af83f618f402e4e782650ca8bde1ee4ae27015b079b95d69f8a103922431172" address="unix:///run/containerd/s/977fc061ffd1323ceeb16ee9fda298a69e9129e006b93ede67aa23c3fe266819" namespace=k8s.io protocol=ttrpc version=3
Sep 5 00:37:12.887747 containerd[1560]: time="2025-09-05T00:37:12.887154013Z" level=info msg="connecting to shim f70c5e0d4f322d9757dcf164da8c61c59aa598e8a5bc9fafbb5ab3ce5dfd182e" address="unix:///run/containerd/s/a223d3c278a2003432f5af4ae4a281b75f34689c23386b2283b0d74cffeabdbe" namespace=k8s.io protocol=ttrpc version=3
Sep 5 00:37:12.908143 systemd[1]: Started cri-containerd-574d2a3c3da1a45266c6406156b28496c8af8bd9741130fea50e516783f9066f.scope - libcontainer container 574d2a3c3da1a45266c6406156b28496c8af8bd9741130fea50e516783f9066f.
Sep 5 00:37:12.919877 systemd[1]: Started cri-containerd-f70c5e0d4f322d9757dcf164da8c61c59aa598e8a5bc9fafbb5ab3ce5dfd182e.scope - libcontainer container f70c5e0d4f322d9757dcf164da8c61c59aa598e8a5bc9fafbb5ab3ce5dfd182e.
Sep 5 00:37:12.928267 systemd[1]: Started cri-containerd-2af83f618f402e4e782650ca8bde1ee4ae27015b079b95d69f8a103922431172.scope - libcontainer container 2af83f618f402e4e782650ca8bde1ee4ae27015b079b95d69f8a103922431172.
Sep 5 00:37:13.051992 containerd[1560]: time="2025-09-05T00:37:13.051785158Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:a68134af9481152752d0e47b4616a3fb,Namespace:kube-system,Attempt:0,} returns sandbox id \"2af83f618f402e4e782650ca8bde1ee4ae27015b079b95d69f8a103922431172\""
Sep 5 00:37:13.053279 kubelet[2351]: E0905 00:37:13.053234 2351 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:37:13.055188 containerd[1560]: time="2025-09-05T00:37:13.055141033Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:5dc878868de11c6196259ae42039f4ff,Namespace:kube-system,Attempt:0,} returns sandbox id \"f70c5e0d4f322d9757dcf164da8c61c59aa598e8a5bc9fafbb5ab3ce5dfd182e\""
Sep 5 00:37:13.056185 kubelet[2351]: E0905 00:37:13.056151 2351 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:37:13.056401 containerd[1560]: time="2025-09-05T00:37:13.056372431Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:fec3f691a145cb26ff55e4af388500b7,Namespace:kube-system,Attempt:0,} returns sandbox id \"574d2a3c3da1a45266c6406156b28496c8af8bd9741130fea50e516783f9066f\""
Sep 5 00:37:13.056697 containerd[1560]: time="2025-09-05T00:37:13.056670337Z" level=info msg="CreateContainer within sandbox \"2af83f618f402e4e782650ca8bde1ee4ae27015b079b95d69f8a103922431172\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Sep 5 00:37:13.058035 kubelet[2351]: E0905 00:37:13.058010 2351 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:37:13.058483 containerd[1560]: time="2025-09-05T00:37:13.058449569Z" level=info msg="CreateContainer within sandbox \"f70c5e0d4f322d9757dcf164da8c61c59aa598e8a5bc9fafbb5ab3ce5dfd182e\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Sep 5 00:37:13.059481 containerd[1560]: time="2025-09-05T00:37:13.059420433Z" level=info msg="CreateContainer within sandbox \"574d2a3c3da1a45266c6406156b28496c8af8bd9741130fea50e516783f9066f\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Sep 5 00:37:13.071858 containerd[1560]: time="2025-09-05T00:37:13.071799001Z" level=info msg="Container 77366de1f954c92e556acefc3cf2d302c0fa68f9cf8a740bf4d2fb459a8146f9: CDI devices from CRI Config.CDIDevices: []"
Sep 5 00:37:13.074737 containerd[1560]: time="2025-09-05T00:37:13.074696376Z" level=info msg="Container 479f904f4134a1e11d0ed7d72284123e261361329407c0d18757bcde75ce6b8f: CDI devices from CRI Config.CDIDevices: []"
Sep 5 00:37:13.081951 containerd[1560]: time="2025-09-05T00:37:13.081908981Z" level=info msg="Container 7fe5b1de688876db4ea34314c466aa89df414ffa19162a3aefb44ce23859b565: CDI devices from CRI Config.CDIDevices: []"
Sep 5 00:37:13.114750 containerd[1560]: time="2025-09-05T00:37:13.114699469Z" level=info msg="CreateContainer within sandbox \"f70c5e0d4f322d9757dcf164da8c61c59aa598e8a5bc9fafbb5ab3ce5dfd182e\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"77366de1f954c92e556acefc3cf2d302c0fa68f9cf8a740bf4d2fb459a8146f9\""
Sep 5 00:37:13.115396 containerd[1560]: time="2025-09-05T00:37:13.115362472Z" level=info msg="StartContainer for \"77366de1f954c92e556acefc3cf2d302c0fa68f9cf8a740bf4d2fb459a8146f9\""
Sep 5 00:37:13.116447 containerd[1560]: time="2025-09-05T00:37:13.116424454Z" level=info msg="connecting to shim 77366de1f954c92e556acefc3cf2d302c0fa68f9cf8a740bf4d2fb459a8146f9" address="unix:///run/containerd/s/a223d3c278a2003432f5af4ae4a281b75f34689c23386b2283b0d74cffeabdbe" protocol=ttrpc version=3
Sep 5 00:37:13.116701 containerd[1560]: time="2025-09-05T00:37:13.116660707Z" level=info msg="CreateContainer within sandbox \"2af83f618f402e4e782650ca8bde1ee4ae27015b079b95d69f8a103922431172\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"479f904f4134a1e11d0ed7d72284123e261361329407c0d18757bcde75ce6b8f\""
Sep 5 00:37:13.117057 containerd[1560]: time="2025-09-05T00:37:13.117021512Z" level=info msg="StartContainer for \"479f904f4134a1e11d0ed7d72284123e261361329407c0d18757bcde75ce6b8f\""
Sep 5 00:37:13.118191 containerd[1560]: time="2025-09-05T00:37:13.118154373Z" level=info msg="connecting to shim 479f904f4134a1e11d0ed7d72284123e261361329407c0d18757bcde75ce6b8f" address="unix:///run/containerd/s/977fc061ffd1323ceeb16ee9fda298a69e9129e006b93ede67aa23c3fe266819" protocol=ttrpc version=3
Sep 5 00:37:13.124653 containerd[1560]: time="2025-09-05T00:37:13.124438038Z" level=info msg="CreateContainer within sandbox \"574d2a3c3da1a45266c6406156b28496c8af8bd9741130fea50e516783f9066f\" for
&ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"7fe5b1de688876db4ea34314c466aa89df414ffa19162a3aefb44ce23859b565\"" Sep 5 00:37:13.125840 containerd[1560]: time="2025-09-05T00:37:13.125800474Z" level=info msg="StartContainer for \"7fe5b1de688876db4ea34314c466aa89df414ffa19162a3aefb44ce23859b565\"" Sep 5 00:37:13.126830 containerd[1560]: time="2025-09-05T00:37:13.126799299Z" level=info msg="connecting to shim 7fe5b1de688876db4ea34314c466aa89df414ffa19162a3aefb44ce23859b565" address="unix:///run/containerd/s/fed34809a187023bb717cea622d236325e6395cf8b8f003861beecb46f2d0e57" protocol=ttrpc version=3 Sep 5 00:37:13.135188 kubelet[2351]: I0905 00:37:13.135149 2351 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 5 00:37:13.135479 kubelet[2351]: E0905 00:37:13.135453 2351 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.115:6443/api/v1/nodes\": dial tcp 10.0.0.115:6443: connect: connection refused" node="localhost" Sep 5 00:37:13.139189 systemd[1]: Started cri-containerd-479f904f4134a1e11d0ed7d72284123e261361329407c0d18757bcde75ce6b8f.scope - libcontainer container 479f904f4134a1e11d0ed7d72284123e261361329407c0d18757bcde75ce6b8f. Sep 5 00:37:13.141289 systemd[1]: Started cri-containerd-77366de1f954c92e556acefc3cf2d302c0fa68f9cf8a740bf4d2fb459a8146f9.scope - libcontainer container 77366de1f954c92e556acefc3cf2d302c0fa68f9cf8a740bf4d2fb459a8146f9. Sep 5 00:37:13.148615 systemd[1]: Started cri-containerd-7fe5b1de688876db4ea34314c466aa89df414ffa19162a3aefb44ce23859b565.scope - libcontainer container 7fe5b1de688876db4ea34314c466aa89df414ffa19162a3aefb44ce23859b565. 
Sep 5 00:37:13.196488 containerd[1560]: time="2025-09-05T00:37:13.196417468Z" level=info msg="StartContainer for \"479f904f4134a1e11d0ed7d72284123e261361329407c0d18757bcde75ce6b8f\" returns successfully"
Sep 5 00:37:13.217163 containerd[1560]: time="2025-09-05T00:37:13.217090703Z" level=info msg="StartContainer for \"7fe5b1de688876db4ea34314c466aa89df414ffa19162a3aefb44ce23859b565\" returns successfully"
Sep 5 00:37:13.218209 containerd[1560]: time="2025-09-05T00:37:13.218165568Z" level=info msg="StartContainer for \"77366de1f954c92e556acefc3cf2d302c0fa68f9cf8a740bf4d2fb459a8146f9\" returns successfully"
Sep 5 00:37:13.331938 kubelet[2351]: E0905 00:37:13.331862 2351 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:37:13.333909 kubelet[2351]: E0905 00:37:13.333309 2351 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:37:13.335824 kubelet[2351]: E0905 00:37:13.335792 2351 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:37:14.338559 kubelet[2351]: E0905 00:37:14.338509 2351 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:37:14.738729 kubelet[2351]: I0905 00:37:14.738139 2351 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Sep 5 00:37:15.109126 kubelet[2351]: E0905 00:37:15.108980 2351 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Sep 5 00:37:15.416309 kubelet[2351]: I0905 00:37:15.416041 2351 kubelet_node_status.go:75] "Successfully registered node" node="localhost"
Sep 5 00:37:15.416309 kubelet[2351]: E0905 00:37:15.416134 2351 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found"
Sep 5 00:37:15.437929 kubelet[2351]: E0905 00:37:15.437738 2351 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 5 00:37:15.538328 kubelet[2351]: E0905 00:37:15.538261 2351 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 5 00:37:15.639418 kubelet[2351]: E0905 00:37:15.639326 2351 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 5 00:37:16.294484 kubelet[2351]: I0905 00:37:16.294430 2351 apiserver.go:52] "Watching apiserver"
Sep 5 00:37:16.300079 kubelet[2351]: I0905 00:37:16.300023 2351 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world"
Sep 5 00:37:17.642286 systemd[1]: Reload requested from client PID 2629 ('systemctl') (unit session-7.scope)...
Sep 5 00:37:17.642304 systemd[1]: Reloading...
Sep 5 00:37:17.764931 zram_generator::config[2672]: No configuration found.
Sep 5 00:37:18.039947 systemd[1]: Reloading finished in 397 ms.
Sep 5 00:37:18.074718 kubelet[2351]: I0905 00:37:18.074618 2351 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 5 00:37:18.074867 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 5 00:37:18.088004 systemd[1]: kubelet.service: Deactivated successfully.
Sep 5 00:37:18.088439 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 5 00:37:18.088518 systemd[1]: kubelet.service: Consumed 1.144s CPU time, 131.3M memory peak.
Sep 5 00:37:18.092285 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 5 00:37:18.415161 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 5 00:37:18.420784 (kubelet)[2717]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 5 00:37:18.474759 kubelet[2717]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 5 00:37:18.474759 kubelet[2717]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Sep 5 00:37:18.474759 kubelet[2717]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 5 00:37:18.475324 kubelet[2717]: I0905 00:37:18.474829 2717 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 5 00:37:18.482289 kubelet[2717]: I0905 00:37:18.482250 2717 server.go:491] "Kubelet version" kubeletVersion="v1.31.8"
Sep 5 00:37:18.482289 kubelet[2717]: I0905 00:37:18.482281 2717 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 5 00:37:18.482574 kubelet[2717]: I0905 00:37:18.482543 2717 server.go:934] "Client rotation is on, will bootstrap in background"
Sep 5 00:37:18.483916 kubelet[2717]: I0905 00:37:18.483860 2717 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Sep 5 00:37:18.485714 kubelet[2717]: I0905 00:37:18.485665 2717 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 5 00:37:18.490708 kubelet[2717]: I0905 00:37:18.490667 2717 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Sep 5 00:37:18.496608 kubelet[2717]: I0905 00:37:18.496575 2717 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 5 00:37:18.496796 kubelet[2717]: I0905 00:37:18.496778 2717 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Sep 5 00:37:18.497037 kubelet[2717]: I0905 00:37:18.496988 2717 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 5 00:37:18.497204 kubelet[2717]: I0905 00:37:18.497026 2717 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 5 00:37:18.497284 kubelet[2717]: I0905 00:37:18.497215 2717 topology_manager.go:138] "Creating topology manager with none policy"
Sep 5 00:37:18.497284 kubelet[2717]: I0905 00:37:18.497225 2717 container_manager_linux.go:300] "Creating device plugin manager"
Sep 5 00:37:18.497284 kubelet[2717]: I0905 00:37:18.497260 2717 state_mem.go:36] "Initialized new in-memory state store"
Sep 5 00:37:18.497404 kubelet[2717]: I0905 00:37:18.497389 2717 kubelet.go:408] "Attempting to sync node with API server"
Sep 5 00:37:18.497428 kubelet[2717]: I0905 00:37:18.497405 2717 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 5 00:37:18.497453 kubelet[2717]: I0905 00:37:18.497440 2717 kubelet.go:314] "Adding apiserver pod source"
Sep 5 00:37:18.497453 kubelet[2717]: I0905 00:37:18.497451 2717 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 5 00:37:18.498317 kubelet[2717]: I0905 00:37:18.498231 2717 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1"
Sep 5 00:37:18.498850 kubelet[2717]: I0905 00:37:18.498836 2717 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Sep 5 00:37:18.499809 kubelet[2717]: I0905 00:37:18.499793 2717 server.go:1274] "Started kubelet"
Sep 5 00:37:18.500215 kubelet[2717]: I0905 00:37:18.500148 2717 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Sep 5 00:37:18.501215 kubelet[2717]: I0905 00:37:18.501168 2717 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 5 00:37:18.501658 kubelet[2717]: I0905 00:37:18.501629 2717 server.go:449] "Adding debug handlers to kubelet server"
Sep 5 00:37:18.502202 kubelet[2717]: I0905 00:37:18.502187 2717 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 5 00:37:18.504226 kubelet[2717]: I0905 00:37:18.504181 2717 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 5 00:37:18.507392 kubelet[2717]: I0905 00:37:18.507345 2717 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 5 00:37:18.510515 kubelet[2717]: E0905 00:37:18.510401 2717 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 5 00:37:18.510613 kubelet[2717]: I0905 00:37:18.510526 2717 volume_manager.go:289] "Starting Kubelet Volume Manager"
Sep 5 00:37:18.511873 kubelet[2717]: E0905 00:37:18.511838 2717 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 5 00:37:18.514064 kubelet[2717]: I0905 00:37:18.514033 2717 reconciler.go:26] "Reconciler: start to sync state"
Sep 5 00:37:18.514137 kubelet[2717]: I0905 00:37:18.514077 2717 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Sep 5 00:37:18.514352 kubelet[2717]: I0905 00:37:18.514330 2717 factory.go:221] Registration of the containerd container factory successfully
Sep 5 00:37:18.514352 kubelet[2717]: I0905 00:37:18.514350 2717 factory.go:221] Registration of the systemd container factory successfully
Sep 5 00:37:18.514454 kubelet[2717]: I0905 00:37:18.514432 2717 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 5 00:37:18.534503 kubelet[2717]: I0905 00:37:18.534457 2717 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Sep 5 00:37:18.536235 kubelet[2717]: I0905 00:37:18.536147 2717 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Sep 5 00:37:18.536235 kubelet[2717]: I0905 00:37:18.536178 2717 status_manager.go:217] "Starting to sync pod status with apiserver"
Sep 5 00:37:18.536235 kubelet[2717]: I0905 00:37:18.536203 2717 kubelet.go:2321] "Starting kubelet main sync loop"
Sep 5 00:37:18.536431 kubelet[2717]: E0905 00:37:18.536251 2717 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 5 00:37:18.568632 kubelet[2717]: I0905 00:37:18.568585 2717 cpu_manager.go:214] "Starting CPU manager" policy="none"
Sep 5 00:37:18.568632 kubelet[2717]: I0905 00:37:18.568606 2717 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Sep 5 00:37:18.568632 kubelet[2717]: I0905 00:37:18.568628 2717 state_mem.go:36] "Initialized new in-memory state store"
Sep 5 00:37:18.568851 kubelet[2717]: I0905 00:37:18.568830 2717 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Sep 5 00:37:18.568883 kubelet[2717]: I0905 00:37:18.568845 2717 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Sep 5 00:37:18.568883 kubelet[2717]: I0905 00:37:18.568867 2717 policy_none.go:49] "None policy: Start"
Sep 5 00:37:18.569586 kubelet[2717]: I0905 00:37:18.569561 2717 memory_manager.go:170] "Starting memorymanager" policy="None"
Sep 5 00:37:18.569586 kubelet[2717]: I0905 00:37:18.569586 2717 state_mem.go:35] "Initializing new in-memory state store"
Sep 5 00:37:18.569754 kubelet[2717]: I0905 00:37:18.569736 2717 state_mem.go:75] "Updated machine memory state"
Sep 5 00:37:18.612303 kubelet[2717]: I0905 00:37:18.612260 2717 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Sep 5 00:37:18.612544 kubelet[2717]: I0905 00:37:18.612500 2717 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 5 00:37:18.612544 kubelet[2717]: I0905 00:37:18.612512 2717 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 5 00:37:18.612812 kubelet[2717]: I0905 00:37:18.612794 2717 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 5 00:37:18.715112 kubelet[2717]: I0905 00:37:18.714818 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost"
Sep 5 00:37:18.715112 kubelet[2717]: I0905 00:37:18.714879 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5dc878868de11c6196259ae42039f4ff-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"5dc878868de11c6196259ae42039f4ff\") " pod="kube-system/kube-scheduler-localhost"
Sep 5 00:37:18.715112 kubelet[2717]: I0905 00:37:18.714955 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a68134af9481152752d0e47b4616a3fb-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"a68134af9481152752d0e47b4616a3fb\") " pod="kube-system/kube-apiserver-localhost"
Sep 5 00:37:18.716008 kubelet[2717]: I0905 00:37:18.715593 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a68134af9481152752d0e47b4616a3fb-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"a68134af9481152752d0e47b4616a3fb\") " pod="kube-system/kube-apiserver-localhost"
Sep 5 00:37:18.716008 kubelet[2717]: I0905 00:37:18.715625 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost"
Sep 5 00:37:18.716008 kubelet[2717]: I0905 00:37:18.715655 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost"
Sep 5 00:37:18.716008 kubelet[2717]: I0905 00:37:18.715679 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a68134af9481152752d0e47b4616a3fb-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"a68134af9481152752d0e47b4616a3fb\") " pod="kube-system/kube-apiserver-localhost"
Sep 5 00:37:18.716008 kubelet[2717]: I0905 00:37:18.715697 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost"
Sep 5 00:37:18.716248 kubelet[2717]: I0905 00:37:18.715716 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost"
Sep 5 00:37:18.726661 kubelet[2717]: I0905 00:37:18.726612 2717 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Sep 5 00:37:18.736340 kubelet[2717]: I0905 00:37:18.736275 2717 kubelet_node_status.go:111] "Node was previously registered" node="localhost"
Sep 5 00:37:18.736526 kubelet[2717]: I0905 00:37:18.736383 2717 kubelet_node_status.go:75] "Successfully registered node" node="localhost"
Sep 5 00:37:18.952875 kubelet[2717]: E0905 00:37:18.952773 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:37:18.953618 kubelet[2717]: E0905 00:37:18.953488 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:37:18.956238 kubelet[2717]: E0905 00:37:18.956007 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:37:19.499010 kubelet[2717]: I0905 00:37:19.498954 2717 apiserver.go:52] "Watching apiserver"
Sep 5 00:37:19.515264 kubelet[2717]: I0905 00:37:19.515210 2717 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world"
Sep 5 00:37:19.552188 kubelet[2717]: E0905 00:37:19.552104 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:37:19.552188 kubelet[2717]: E0905 00:37:19.552106 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:37:19.562919 kubelet[2717]: E0905 00:37:19.562832 2717 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Sep 5 00:37:19.563909 kubelet[2717]: E0905 00:37:19.563155 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:37:19.589825 kubelet[2717]: I0905 00:37:19.589734 2717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.5897161 podStartE2EDuration="1.5897161s" podCreationTimestamp="2025-09-05 00:37:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 00:37:19.580607309 +0000 UTC m=+1.152930223" watchObservedRunningTime="2025-09-05 00:37:19.5897161 +0000 UTC m=+1.162039013"
Sep 5 00:37:19.599210 kubelet[2717]: I0905 00:37:19.599131 2717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.59911195 podStartE2EDuration="1.59911195s" podCreationTimestamp="2025-09-05 00:37:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 00:37:19.590231298 +0000 UTC m=+1.162554211" watchObservedRunningTime="2025-09-05 00:37:19.59911195 +0000 UTC m=+1.171434863"
Sep 5 00:37:19.727182 kubelet[2717]: I0905 00:37:19.727086 2717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.727060819 podStartE2EDuration="1.727060819s" podCreationTimestamp="2025-09-05 00:37:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 00:37:19.599747231 +0000 UTC m=+1.172070154" watchObservedRunningTime="2025-09-05 00:37:19.727060819 +0000 UTC m=+1.299383742"
Sep 5 00:37:20.553627 kubelet[2717]: E0905 00:37:20.553564 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:37:21.555395 kubelet[2717]: E0905 00:37:21.555346 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:37:24.011813 kubelet[2717]: I0905 00:37:24.011770 2717 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Sep 5 00:37:24.012383 kubelet[2717]: I0905 00:37:24.012358 2717 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Sep 5 00:37:24.012430 containerd[1560]: time="2025-09-05T00:37:24.012103583Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Sep 5 00:37:24.856691 systemd[1]: Created slice kubepods-besteffort-pod3f91c3cd_2255_45c1_bc06_f39a2dea7273.slice - libcontainer container kubepods-besteffort-pod3f91c3cd_2255_45c1_bc06_f39a2dea7273.slice.
Sep 5 00:37:24.954393 kubelet[2717]: I0905 00:37:24.954335 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3f91c3cd-2255-45c1-bc06-f39a2dea7273-kube-proxy\") pod \"kube-proxy-5wtsj\" (UID: \"3f91c3cd-2255-45c1-bc06-f39a2dea7273\") " pod="kube-system/kube-proxy-5wtsj"
Sep 5 00:37:24.954393 kubelet[2717]: I0905 00:37:24.954388 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3f91c3cd-2255-45c1-bc06-f39a2dea7273-lib-modules\") pod \"kube-proxy-5wtsj\" (UID: \"3f91c3cd-2255-45c1-bc06-f39a2dea7273\") " pod="kube-system/kube-proxy-5wtsj"
Sep 5 00:37:24.954393 kubelet[2717]: I0905 00:37:24.954414 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3f91c3cd-2255-45c1-bc06-f39a2dea7273-xtables-lock\") pod \"kube-proxy-5wtsj\" (UID: \"3f91c3cd-2255-45c1-bc06-f39a2dea7273\") " pod="kube-system/kube-proxy-5wtsj"
Sep 5 00:37:24.954662 kubelet[2717]: I0905 00:37:24.954434 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mhl9q\" (UniqueName: \"kubernetes.io/projected/3f91c3cd-2255-45c1-bc06-f39a2dea7273-kube-api-access-mhl9q\") pod \"kube-proxy-5wtsj\" (UID: \"3f91c3cd-2255-45c1-bc06-f39a2dea7273\") " pod="kube-system/kube-proxy-5wtsj"
Sep 5 00:37:24.987359 systemd[1]: Created slice kubepods-besteffort-pod2515f6a2_0be8_4455_97e9_744b174468a3.slice - libcontainer container kubepods-besteffort-pod2515f6a2_0be8_4455_97e9_744b174468a3.slice.
Sep 5 00:37:25.051160 kubelet[2717]: E0905 00:37:25.051088 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:37:25.054832 kubelet[2717]: I0905 00:37:25.054778 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbsxh\" (UniqueName: \"kubernetes.io/projected/2515f6a2-0be8-4455-97e9-744b174468a3-kube-api-access-hbsxh\") pod \"tigera-operator-58fc44c59b-trccv\" (UID: \"2515f6a2-0be8-4455-97e9-744b174468a3\") " pod="tigera-operator/tigera-operator-58fc44c59b-trccv"
Sep 5 00:37:25.054832 kubelet[2717]: I0905 00:37:25.054842 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/2515f6a2-0be8-4455-97e9-744b174468a3-var-lib-calico\") pod \"tigera-operator-58fc44c59b-trccv\" (UID: \"2515f6a2-0be8-4455-97e9-744b174468a3\") " pod="tigera-operator/tigera-operator-58fc44c59b-trccv"
Sep 5 00:37:25.169243 kubelet[2717]: E0905 00:37:25.169096 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:37:25.170770 containerd[1560]: time="2025-09-05T00:37:25.169760173Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5wtsj,Uid:3f91c3cd-2255-45c1-bc06-f39a2dea7273,Namespace:kube-system,Attempt:0,}"
Sep 5 00:37:25.197942 containerd[1560]: time="2025-09-05T00:37:25.196547194Z" level=info msg="connecting to shim 368bd2e2d990b5d6219f3a94aa26461bad1abc4906a1ba6bcd240881fc7a9ef0" address="unix:///run/containerd/s/3efd63334186aa0e1c5205ff898b246d3f4dea0af576a3a8b48b7c81293490aa" namespace=k8s.io protocol=ttrpc version=3
Sep 5 00:37:25.244207 systemd[1]: Started cri-containerd-368bd2e2d990b5d6219f3a94aa26461bad1abc4906a1ba6bcd240881fc7a9ef0.scope - libcontainer container 368bd2e2d990b5d6219f3a94aa26461bad1abc4906a1ba6bcd240881fc7a9ef0.
Sep 5 00:37:25.275062 containerd[1560]: time="2025-09-05T00:37:25.274986030Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5wtsj,Uid:3f91c3cd-2255-45c1-bc06-f39a2dea7273,Namespace:kube-system,Attempt:0,} returns sandbox id \"368bd2e2d990b5d6219f3a94aa26461bad1abc4906a1ba6bcd240881fc7a9ef0\""
Sep 5 00:37:25.276290 kubelet[2717]: E0905 00:37:25.276217 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:37:25.278801 containerd[1560]: time="2025-09-05T00:37:25.278727713Z" level=info msg="CreateContainer within sandbox \"368bd2e2d990b5d6219f3a94aa26461bad1abc4906a1ba6bcd240881fc7a9ef0\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Sep 5 00:37:25.291747 containerd[1560]: time="2025-09-05T00:37:25.291679846Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-trccv,Uid:2515f6a2-0be8-4455-97e9-744b174468a3,Namespace:tigera-operator,Attempt:0,}"
Sep 5 00:37:25.293182 containerd[1560]: time="2025-09-05T00:37:25.293124313Z" level=info msg="Container ab8f1be73ff3db8c9ab56aca5cd8e9f6800a761db28fe6e23570356f7e1528d7: CDI devices from CRI Config.CDIDevices: []"
Sep 5 00:37:25.304782 containerd[1560]: time="2025-09-05T00:37:25.304693507Z" level=info msg="CreateContainer within sandbox \"368bd2e2d990b5d6219f3a94aa26461bad1abc4906a1ba6bcd240881fc7a9ef0\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"ab8f1be73ff3db8c9ab56aca5cd8e9f6800a761db28fe6e23570356f7e1528d7\""
Sep 5 00:37:25.305634 containerd[1560]: time="2025-09-05T00:37:25.305484858Z" level=info msg="StartContainer for \"ab8f1be73ff3db8c9ab56aca5cd8e9f6800a761db28fe6e23570356f7e1528d7\""
Sep 5 00:37:25.307465 containerd[1560]: time="2025-09-05T00:37:25.307432305Z" level=info msg="connecting to shim ab8f1be73ff3db8c9ab56aca5cd8e9f6800a761db28fe6e23570356f7e1528d7" address="unix:///run/containerd/s/3efd63334186aa0e1c5205ff898b246d3f4dea0af576a3a8b48b7c81293490aa" protocol=ttrpc version=3
Sep 5 00:37:25.330565 containerd[1560]: time="2025-09-05T00:37:25.330494854Z" level=info msg="connecting to shim a2b0f148762136636904fd19eceaded14cd658dcd775cfeb39af69cee1645226" address="unix:///run/containerd/s/8b7cb8cf17b8b650ccc6e8d4606ba17b4485b4216b49ab67c35ada8006942bc7" namespace=k8s.io protocol=ttrpc version=3
Sep 5 00:37:25.333233 systemd[1]: Started cri-containerd-ab8f1be73ff3db8c9ab56aca5cd8e9f6800a761db28fe6e23570356f7e1528d7.scope - libcontainer container ab8f1be73ff3db8c9ab56aca5cd8e9f6800a761db28fe6e23570356f7e1528d7.
Sep 5 00:37:25.368304 systemd[1]: Started cri-containerd-a2b0f148762136636904fd19eceaded14cd658dcd775cfeb39af69cee1645226.scope - libcontainer container a2b0f148762136636904fd19eceaded14cd658dcd775cfeb39af69cee1645226.
Sep 5 00:37:25.404433 containerd[1560]: time="2025-09-05T00:37:25.404361678Z" level=info msg="StartContainer for \"ab8f1be73ff3db8c9ab56aca5cd8e9f6800a761db28fe6e23570356f7e1528d7\" returns successfully" Sep 5 00:37:25.425172 containerd[1560]: time="2025-09-05T00:37:25.425016695Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-trccv,Uid:2515f6a2-0be8-4455-97e9-744b174468a3,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"a2b0f148762136636904fd19eceaded14cd658dcd775cfeb39af69cee1645226\"" Sep 5 00:37:25.429231 containerd[1560]: time="2025-09-05T00:37:25.429183273Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\"" Sep 5 00:37:25.571937 kubelet[2717]: E0905 00:37:25.571854 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:37:25.572427 kubelet[2717]: E0905 00:37:25.572381 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:37:25.597313 kubelet[2717]: I0905 00:37:25.597222 2717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-5wtsj" podStartSLOduration=1.597197757 podStartE2EDuration="1.597197757s" podCreationTimestamp="2025-09-05 00:37:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 00:37:25.597049036 +0000 UTC m=+7.169371949" watchObservedRunningTime="2025-09-05 00:37:25.597197757 +0000 UTC m=+7.169520700" Sep 5 00:37:25.615817 kubelet[2717]: E0905 00:37:25.615769 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:37:26.573996 kubelet[2717]: E0905 00:37:26.573943 
2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:37:26.767128 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1457314379.mount: Deactivated successfully. Sep 5 00:37:26.844813 kubelet[2717]: E0905 00:37:26.844392 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:37:27.575477 kubelet[2717]: E0905 00:37:27.575429 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:37:27.576514 kubelet[2717]: E0905 00:37:27.576093 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:37:27.783344 containerd[1560]: time="2025-09-05T00:37:27.783270938Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:37:27.784735 containerd[1560]: time="2025-09-05T00:37:27.784676911Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=25062609" Sep 5 00:37:27.786770 containerd[1560]: time="2025-09-05T00:37:27.786682747Z" level=info msg="ImageCreate event name:\"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:37:27.789538 containerd[1560]: time="2025-09-05T00:37:27.789481442Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:37:27.790062 containerd[1560]: 
time="2025-09-05T00:37:27.790023801Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"25058604\" in 2.360788522s" Sep 5 00:37:27.790062 containerd[1560]: time="2025-09-05T00:37:27.790058638Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\"" Sep 5 00:37:27.792756 containerd[1560]: time="2025-09-05T00:37:27.792711393Z" level=info msg="CreateContainer within sandbox \"a2b0f148762136636904fd19eceaded14cd658dcd775cfeb39af69cee1645226\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Sep 5 00:37:27.808560 containerd[1560]: time="2025-09-05T00:37:27.808490643Z" level=info msg="Container 5dc90af8a66c4c9c256b3a581e3bbbdabeb557a38d9fb14088c53a3761346ff4: CDI devices from CRI Config.CDIDevices: []" Sep 5 00:37:27.818988 containerd[1560]: time="2025-09-05T00:37:27.818920546Z" level=info msg="CreateContainer within sandbox \"a2b0f148762136636904fd19eceaded14cd658dcd775cfeb39af69cee1645226\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"5dc90af8a66c4c9c256b3a581e3bbbdabeb557a38d9fb14088c53a3761346ff4\"" Sep 5 00:37:27.819525 containerd[1560]: time="2025-09-05T00:37:27.819488181Z" level=info msg="StartContainer for \"5dc90af8a66c4c9c256b3a581e3bbbdabeb557a38d9fb14088c53a3761346ff4\"" Sep 5 00:37:27.820623 containerd[1560]: time="2025-09-05T00:37:27.820598325Z" level=info msg="connecting to shim 5dc90af8a66c4c9c256b3a581e3bbbdabeb557a38d9fb14088c53a3761346ff4" address="unix:///run/containerd/s/8b7cb8cf17b8b650ccc6e8d4606ba17b4485b4216b49ab67c35ada8006942bc7" protocol=ttrpc version=3 Sep 5 00:37:27.880239 systemd[1]: Started 
cri-containerd-5dc90af8a66c4c9c256b3a581e3bbbdabeb557a38d9fb14088c53a3761346ff4.scope - libcontainer container 5dc90af8a66c4c9c256b3a581e3bbbdabeb557a38d9fb14088c53a3761346ff4. Sep 5 00:37:28.012696 containerd[1560]: time="2025-09-05T00:37:28.012651042Z" level=info msg="StartContainer for \"5dc90af8a66c4c9c256b3a581e3bbbdabeb557a38d9fb14088c53a3761346ff4\" returns successfully" Sep 5 00:37:29.103343 update_engine[1549]: I20250905 00:37:29.103184 1549 update_attempter.cc:509] Updating boot flags... Sep 5 00:37:30.291356 systemd[1]: cri-containerd-5dc90af8a66c4c9c256b3a581e3bbbdabeb557a38d9fb14088c53a3761346ff4.scope: Deactivated successfully. Sep 5 00:37:30.294383 containerd[1560]: time="2025-09-05T00:37:30.294328624Z" level=info msg="received exit event container_id:\"5dc90af8a66c4c9c256b3a581e3bbbdabeb557a38d9fb14088c53a3761346ff4\" id:\"5dc90af8a66c4c9c256b3a581e3bbbdabeb557a38d9fb14088c53a3761346ff4\" pid:3046 exit_status:1 exited_at:{seconds:1757032650 nanos:292859820}" Sep 5 00:37:30.294803 containerd[1560]: time="2025-09-05T00:37:30.294526640Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5dc90af8a66c4c9c256b3a581e3bbbdabeb557a38d9fb14088c53a3761346ff4\" id:\"5dc90af8a66c4c9c256b3a581e3bbbdabeb557a38d9fb14088c53a3761346ff4\" pid:3046 exit_status:1 exited_at:{seconds:1757032650 nanos:292859820}" Sep 5 00:37:30.338223 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5dc90af8a66c4c9c256b3a581e3bbbdabeb557a38d9fb14088c53a3761346ff4-rootfs.mount: Deactivated successfully. 
Sep 5 00:37:31.585620 kubelet[2717]: I0905 00:37:31.585331 2717 scope.go:117] "RemoveContainer" containerID="5dc90af8a66c4c9c256b3a581e3bbbdabeb557a38d9fb14088c53a3761346ff4" Sep 5 00:37:31.587326 containerd[1560]: time="2025-09-05T00:37:31.586978609Z" level=info msg="CreateContainer within sandbox \"a2b0f148762136636904fd19eceaded14cd658dcd775cfeb39af69cee1645226\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Sep 5 00:37:31.602832 containerd[1560]: time="2025-09-05T00:37:31.602061466Z" level=info msg="Container 66f7188de7cff6fa1da4f3f61901046b655871672e3fa90858816abe5ed91f56: CDI devices from CRI Config.CDIDevices: []" Sep 5 00:37:31.609669 containerd[1560]: time="2025-09-05T00:37:31.609612946Z" level=info msg="CreateContainer within sandbox \"a2b0f148762136636904fd19eceaded14cd658dcd775cfeb39af69cee1645226\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"66f7188de7cff6fa1da4f3f61901046b655871672e3fa90858816abe5ed91f56\"" Sep 5 00:37:31.611433 containerd[1560]: time="2025-09-05T00:37:31.610293158Z" level=info msg="StartContainer for \"66f7188de7cff6fa1da4f3f61901046b655871672e3fa90858816abe5ed91f56\"" Sep 5 00:37:31.611433 containerd[1560]: time="2025-09-05T00:37:31.611316864Z" level=info msg="connecting to shim 66f7188de7cff6fa1da4f3f61901046b655871672e3fa90858816abe5ed91f56" address="unix:///run/containerd/s/8b7cb8cf17b8b650ccc6e8d4606ba17b4485b4216b49ab67c35ada8006942bc7" protocol=ttrpc version=3 Sep 5 00:37:31.637123 systemd[1]: Started cri-containerd-66f7188de7cff6fa1da4f3f61901046b655871672e3fa90858816abe5ed91f56.scope - libcontainer container 66f7188de7cff6fa1da4f3f61901046b655871672e3fa90858816abe5ed91f56. 
Sep 5 00:37:31.678269 containerd[1560]: time="2025-09-05T00:37:31.678229029Z" level=info msg="StartContainer for \"66f7188de7cff6fa1da4f3f61901046b655871672e3fa90858816abe5ed91f56\" returns successfully" Sep 5 00:37:32.633392 kubelet[2717]: I0905 00:37:32.633259 2717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-58fc44c59b-trccv" podStartSLOduration=6.268570058 podStartE2EDuration="8.633240022s" podCreationTimestamp="2025-09-05 00:37:24 +0000 UTC" firstStartedPulling="2025-09-05 00:37:25.426545089 +0000 UTC m=+6.998868002" lastFinishedPulling="2025-09-05 00:37:27.791215043 +0000 UTC m=+9.363537966" observedRunningTime="2025-09-05 00:37:28.602234848 +0000 UTC m=+10.174557761" watchObservedRunningTime="2025-09-05 00:37:32.633240022 +0000 UTC m=+14.205562935" Sep 5 00:37:33.828095 sudo[1777]: pam_unix(sudo:session): session closed for user root Sep 5 00:37:33.830248 sshd[1776]: Connection closed by 10.0.0.1 port 48090 Sep 5 00:37:33.840259 sshd-session[1773]: pam_unix(sshd:session): session closed for user core Sep 5 00:37:33.845327 systemd[1]: sshd@6-10.0.0.115:22-10.0.0.1:48090.service: Deactivated successfully. Sep 5 00:37:33.848257 systemd[1]: session-7.scope: Deactivated successfully. Sep 5 00:37:33.848551 systemd[1]: session-7.scope: Consumed 6.928s CPU time, 227.4M memory peak. Sep 5 00:37:33.850403 systemd-logind[1537]: Session 7 logged out. Waiting for processes to exit. Sep 5 00:37:33.851863 systemd-logind[1537]: Removed session 7. Sep 5 00:37:38.172580 systemd[1]: Created slice kubepods-besteffort-pod4fc47226_68c5_41d3_9d32_24e12c87de63.slice - libcontainer container kubepods-besteffort-pod4fc47226_68c5_41d3_9d32_24e12c87de63.slice. 
Sep 5 00:37:38.244119 kubelet[2717]: I0905 00:37:38.243931 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4fc47226-68c5-41d3-9d32-24e12c87de63-tigera-ca-bundle\") pod \"calico-typha-64c78498f9-w4k8w\" (UID: \"4fc47226-68c5-41d3-9d32-24e12c87de63\") " pod="calico-system/calico-typha-64c78498f9-w4k8w" Sep 5 00:37:38.244119 kubelet[2717]: I0905 00:37:38.244025 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/4fc47226-68c5-41d3-9d32-24e12c87de63-typha-certs\") pod \"calico-typha-64c78498f9-w4k8w\" (UID: \"4fc47226-68c5-41d3-9d32-24e12c87de63\") " pod="calico-system/calico-typha-64c78498f9-w4k8w" Sep 5 00:37:38.244119 kubelet[2717]: I0905 00:37:38.244049 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5c9n\" (UniqueName: \"kubernetes.io/projected/4fc47226-68c5-41d3-9d32-24e12c87de63-kube-api-access-j5c9n\") pod \"calico-typha-64c78498f9-w4k8w\" (UID: \"4fc47226-68c5-41d3-9d32-24e12c87de63\") " pod="calico-system/calico-typha-64c78498f9-w4k8w" Sep 5 00:37:38.461494 systemd[1]: Created slice kubepods-besteffort-poda007fcce_2ad7_451c_808a_48154dcd5f41.slice - libcontainer container kubepods-besteffort-poda007fcce_2ad7_451c_808a_48154dcd5f41.slice. 
Sep 5 00:37:38.478320 kubelet[2717]: E0905 00:37:38.478269 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:37:38.479021 containerd[1560]: time="2025-09-05T00:37:38.478942618Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-64c78498f9-w4k8w,Uid:4fc47226-68c5-41d3-9d32-24e12c87de63,Namespace:calico-system,Attempt:0,}" Sep 5 00:37:38.530761 containerd[1560]: time="2025-09-05T00:37:38.530705554Z" level=info msg="connecting to shim 1720e82366072376f340f7c865e5f9ae91f541883de21732cf598c69eae03166" address="unix:///run/containerd/s/4f5255a131debbc813a542d67f5d465989f8bc28750c4ce0fea248aef05eace6" namespace=k8s.io protocol=ttrpc version=3 Sep 5 00:37:38.546844 kubelet[2717]: I0905 00:37:38.546795 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/a007fcce-2ad7-451c-808a-48154dcd5f41-var-lib-calico\") pod \"calico-node-fz799\" (UID: \"a007fcce-2ad7-451c-808a-48154dcd5f41\") " pod="calico-system/calico-node-fz799" Sep 5 00:37:38.546844 kubelet[2717]: I0905 00:37:38.546835 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/a007fcce-2ad7-451c-808a-48154dcd5f41-var-run-calico\") pod \"calico-node-fz799\" (UID: \"a007fcce-2ad7-451c-808a-48154dcd5f41\") " pod="calico-system/calico-node-fz799" Sep 5 00:37:38.546844 kubelet[2717]: I0905 00:37:38.546854 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a007fcce-2ad7-451c-808a-48154dcd5f41-xtables-lock\") pod \"calico-node-fz799\" (UID: \"a007fcce-2ad7-451c-808a-48154dcd5f41\") " pod="calico-system/calico-node-fz799" Sep 5 00:37:38.547105 kubelet[2717]: I0905 
00:37:38.546869 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a007fcce-2ad7-451c-808a-48154dcd5f41-tigera-ca-bundle\") pod \"calico-node-fz799\" (UID: \"a007fcce-2ad7-451c-808a-48154dcd5f41\") " pod="calico-system/calico-node-fz799" Sep 5 00:37:38.547105 kubelet[2717]: I0905 00:37:38.546883 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/a007fcce-2ad7-451c-808a-48154dcd5f41-cni-bin-dir\") pod \"calico-node-fz799\" (UID: \"a007fcce-2ad7-451c-808a-48154dcd5f41\") " pod="calico-system/calico-node-fz799" Sep 5 00:37:38.547105 kubelet[2717]: I0905 00:37:38.546914 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/a007fcce-2ad7-451c-808a-48154dcd5f41-cni-net-dir\") pod \"calico-node-fz799\" (UID: \"a007fcce-2ad7-451c-808a-48154dcd5f41\") " pod="calico-system/calico-node-fz799" Sep 5 00:37:38.547105 kubelet[2717]: I0905 00:37:38.546929 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/a007fcce-2ad7-451c-808a-48154dcd5f41-node-certs\") pod \"calico-node-fz799\" (UID: \"a007fcce-2ad7-451c-808a-48154dcd5f41\") " pod="calico-system/calico-node-fz799" Sep 5 00:37:38.547105 kubelet[2717]: I0905 00:37:38.546964 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/a007fcce-2ad7-451c-808a-48154dcd5f41-flexvol-driver-host\") pod \"calico-node-fz799\" (UID: \"a007fcce-2ad7-451c-808a-48154dcd5f41\") " pod="calico-system/calico-node-fz799" Sep 5 00:37:38.547278 kubelet[2717]: I0905 00:37:38.547032 2717 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a007fcce-2ad7-451c-808a-48154dcd5f41-lib-modules\") pod \"calico-node-fz799\" (UID: \"a007fcce-2ad7-451c-808a-48154dcd5f41\") " pod="calico-system/calico-node-fz799" Sep 5 00:37:38.547278 kubelet[2717]: I0905 00:37:38.547087 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4p4x\" (UniqueName: \"kubernetes.io/projected/a007fcce-2ad7-451c-808a-48154dcd5f41-kube-api-access-t4p4x\") pod \"calico-node-fz799\" (UID: \"a007fcce-2ad7-451c-808a-48154dcd5f41\") " pod="calico-system/calico-node-fz799" Sep 5 00:37:38.547278 kubelet[2717]: I0905 00:37:38.547107 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/a007fcce-2ad7-451c-808a-48154dcd5f41-cni-log-dir\") pod \"calico-node-fz799\" (UID: \"a007fcce-2ad7-451c-808a-48154dcd5f41\") " pod="calico-system/calico-node-fz799" Sep 5 00:37:38.547278 kubelet[2717]: I0905 00:37:38.547166 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/a007fcce-2ad7-451c-808a-48154dcd5f41-policysync\") pod \"calico-node-fz799\" (UID: \"a007fcce-2ad7-451c-808a-48154dcd5f41\") " pod="calico-system/calico-node-fz799" Sep 5 00:37:38.561185 systemd[1]: Started cri-containerd-1720e82366072376f340f7c865e5f9ae91f541883de21732cf598c69eae03166.scope - libcontainer container 1720e82366072376f340f7c865e5f9ae91f541883de21732cf598c69eae03166. 
Sep 5 00:37:38.610425 containerd[1560]: time="2025-09-05T00:37:38.610264395Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-64c78498f9-w4k8w,Uid:4fc47226-68c5-41d3-9d32-24e12c87de63,Namespace:calico-system,Attempt:0,} returns sandbox id \"1720e82366072376f340f7c865e5f9ae91f541883de21732cf598c69eae03166\"" Sep 5 00:37:38.611240 kubelet[2717]: E0905 00:37:38.611199 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:37:38.612873 containerd[1560]: time="2025-09-05T00:37:38.612828708Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\"" Sep 5 00:37:38.647950 kubelet[2717]: E0905 00:37:38.647856 2717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sr49w" podUID="4f591032-6561-4a75-a88e-cbe4c8ece143" Sep 5 00:37:38.653023 kubelet[2717]: E0905 00:37:38.652966 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:37:38.653360 kubelet[2717]: W0905 00:37:38.653120 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:37:38.653360 kubelet[2717]: E0905 00:37:38.653149 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:37:38.653609 kubelet[2717]: E0905 00:37:38.653577 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:37:38.653609 kubelet[2717]: W0905 00:37:38.653595 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:37:38.653609 kubelet[2717]: E0905 00:37:38.653606 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:37:38.664309 kubelet[2717]: E0905 00:37:38.664270 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:37:38.664309 kubelet[2717]: W0905 00:37:38.664297 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:37:38.664462 kubelet[2717]: E0905 00:37:38.664322 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:37:38.738650 kubelet[2717]: E0905 00:37:38.738505 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:37:38.738650 kubelet[2717]: W0905 00:37:38.738575 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:37:38.738650 kubelet[2717]: E0905 00:37:38.738603 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:37:38.739040 kubelet[2717]: E0905 00:37:38.739012 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:37:38.739089 kubelet[2717]: W0905 00:37:38.739042 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:37:38.739089 kubelet[2717]: E0905 00:37:38.739056 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:37:38.739370 kubelet[2717]: E0905 00:37:38.739333 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:37:38.739370 kubelet[2717]: W0905 00:37:38.739352 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:37:38.739370 kubelet[2717]: E0905 00:37:38.739365 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:37:38.739686 kubelet[2717]: E0905 00:37:38.739648 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:37:38.739748 kubelet[2717]: W0905 00:37:38.739734 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:37:38.739784 kubelet[2717]: E0905 00:37:38.739750 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:37:38.740024 kubelet[2717]: E0905 00:37:38.740002 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:37:38.740024 kubelet[2717]: W0905 00:37:38.740021 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:37:38.740153 kubelet[2717]: E0905 00:37:38.740033 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:37:38.740435 kubelet[2717]: E0905 00:37:38.740217 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:37:38.740435 kubelet[2717]: W0905 00:37:38.740231 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:37:38.740533 kubelet[2717]: E0905 00:37:38.740467 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:37:38.740695 kubelet[2717]: E0905 00:37:38.740671 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:37:38.740695 kubelet[2717]: W0905 00:37:38.740689 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:37:38.740783 kubelet[2717]: E0905 00:37:38.740700 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:37:38.741022 kubelet[2717]: E0905 00:37:38.740992 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:37:38.741022 kubelet[2717]: W0905 00:37:38.741007 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:37:38.741022 kubelet[2717]: E0905 00:37:38.741019 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:37:38.742021 kubelet[2717]: E0905 00:37:38.741912 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:37:38.742021 kubelet[2717]: W0905 00:37:38.741929 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:37:38.742021 kubelet[2717]: E0905 00:37:38.741942 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:37:38.742193 kubelet[2717]: E0905 00:37:38.742153 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:37:38.742193 kubelet[2717]: W0905 00:37:38.742189 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:37:38.742247 kubelet[2717]: E0905 00:37:38.742203 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:37:38.742551 kubelet[2717]: E0905 00:37:38.742502 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:37:38.742551 kubelet[2717]: W0905 00:37:38.742542 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:37:38.742551 kubelet[2717]: E0905 00:37:38.742554 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:37:38.742900 kubelet[2717]: E0905 00:37:38.742839 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:37:38.742951 kubelet[2717]: W0905 00:37:38.742879 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:37:38.744160 kubelet[2717]: E0905 00:37:38.742979 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:37:38.744160 kubelet[2717]: E0905 00:37:38.743373 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:37:38.744160 kubelet[2717]: W0905 00:37:38.743386 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:37:38.744160 kubelet[2717]: E0905 00:37:38.743398 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:37:38.744160 kubelet[2717]: E0905 00:37:38.743824 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:37:38.744160 kubelet[2717]: W0905 00:37:38.743837 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:37:38.744160 kubelet[2717]: E0905 00:37:38.743848 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:37:38.744350 kubelet[2717]: E0905 00:37:38.744268 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:37:38.744350 kubelet[2717]: W0905 00:37:38.744281 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:37:38.744350 kubelet[2717]: E0905 00:37:38.744293 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:37:38.744571 kubelet[2717]: E0905 00:37:38.744542 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:37:38.744571 kubelet[2717]: W0905 00:37:38.744558 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:37:38.744943 kubelet[2717]: E0905 00:37:38.744569 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:37:38.745186 kubelet[2717]: E0905 00:37:38.745164 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:37:38.745186 kubelet[2717]: W0905 00:37:38.745182 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:37:38.745271 kubelet[2717]: E0905 00:37:38.745196 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:37:38.745672 kubelet[2717]: E0905 00:37:38.745451 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:37:38.745672 kubelet[2717]: W0905 00:37:38.745482 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:37:38.745672 kubelet[2717]: E0905 00:37:38.745509 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:37:38.745828 kubelet[2717]: E0905 00:37:38.745816 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:37:38.745860 kubelet[2717]: W0905 00:37:38.745830 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:37:38.745860 kubelet[2717]: E0905 00:37:38.745845 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:37:38.746102 kubelet[2717]: E0905 00:37:38.746080 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:37:38.746102 kubelet[2717]: W0905 00:37:38.746097 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:37:38.746173 kubelet[2717]: E0905 00:37:38.746108 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:37:38.748611 kubelet[2717]: E0905 00:37:38.748431 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:37:38.748611 kubelet[2717]: W0905 00:37:38.748452 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:37:38.748611 kubelet[2717]: E0905 00:37:38.748465 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:37:38.748611 kubelet[2717]: I0905 00:37:38.748498 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/4f591032-6561-4a75-a88e-cbe4c8ece143-registration-dir\") pod \"csi-node-driver-sr49w\" (UID: \"4f591032-6561-4a75-a88e-cbe4c8ece143\") " pod="calico-system/csi-node-driver-sr49w" Sep 5 00:37:38.749001 kubelet[2717]: E0905 00:37:38.748982 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:37:38.749121 kubelet[2717]: W0905 00:37:38.749071 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:37:38.749121 kubelet[2717]: E0905 00:37:38.749098 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:37:38.749121 kubelet[2717]: I0905 00:37:38.749118 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/4f591032-6561-4a75-a88e-cbe4c8ece143-varrun\") pod \"csi-node-driver-sr49w\" (UID: \"4f591032-6561-4a75-a88e-cbe4c8ece143\") " pod="calico-system/csi-node-driver-sr49w" Sep 5 00:37:38.749739 kubelet[2717]: E0905 00:37:38.749644 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:37:38.749739 kubelet[2717]: W0905 00:37:38.749663 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:37:38.749739 kubelet[2717]: E0905 00:37:38.749677 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:37:38.750022 kubelet[2717]: E0905 00:37:38.749921 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:37:38.750022 kubelet[2717]: W0905 00:37:38.749933 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:37:38.750022 kubelet[2717]: E0905 00:37:38.749945 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:37:38.750177 kubelet[2717]: E0905 00:37:38.750155 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:37:38.750177 kubelet[2717]: W0905 00:37:38.750170 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:37:38.750263 kubelet[2717]: E0905 00:37:38.750184 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:37:38.750263 kubelet[2717]: I0905 00:37:38.750205 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cplfs\" (UniqueName: \"kubernetes.io/projected/4f591032-6561-4a75-a88e-cbe4c8ece143-kube-api-access-cplfs\") pod \"csi-node-driver-sr49w\" (UID: \"4f591032-6561-4a75-a88e-cbe4c8ece143\") " pod="calico-system/csi-node-driver-sr49w" Sep 5 00:37:38.750456 kubelet[2717]: E0905 00:37:38.750432 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:37:38.750456 kubelet[2717]: W0905 00:37:38.750450 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:37:38.751528 kubelet[2717]: E0905 00:37:38.750720 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:37:38.751528 kubelet[2717]: I0905 00:37:38.750761 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/4f591032-6561-4a75-a88e-cbe4c8ece143-socket-dir\") pod \"csi-node-driver-sr49w\" (UID: \"4f591032-6561-4a75-a88e-cbe4c8ece143\") " pod="calico-system/csi-node-driver-sr49w" Sep 5 00:37:38.751528 kubelet[2717]: E0905 00:37:38.750864 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:37:38.751528 kubelet[2717]: W0905 00:37:38.750874 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:37:38.751528 kubelet[2717]: E0905 00:37:38.751123 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:37:38.751528 kubelet[2717]: E0905 00:37:38.751254 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:37:38.751528 kubelet[2717]: W0905 00:37:38.751266 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:37:38.751528 kubelet[2717]: E0905 00:37:38.751286 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:37:38.752365 kubelet[2717]: E0905 00:37:38.752345 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:37:38.752365 kubelet[2717]: W0905 00:37:38.752361 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:37:38.752444 kubelet[2717]: E0905 00:37:38.752380 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:37:38.752444 kubelet[2717]: I0905 00:37:38.752402 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4f591032-6561-4a75-a88e-cbe4c8ece143-kubelet-dir\") pod \"csi-node-driver-sr49w\" (UID: \"4f591032-6561-4a75-a88e-cbe4c8ece143\") " pod="calico-system/csi-node-driver-sr49w" Sep 5 00:37:38.754379 kubelet[2717]: E0905 00:37:38.754349 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:37:38.754379 kubelet[2717]: W0905 00:37:38.754365 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:37:38.754502 kubelet[2717]: E0905 00:37:38.754383 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:37:38.754679 kubelet[2717]: E0905 00:37:38.754635 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:37:38.754679 kubelet[2717]: W0905 00:37:38.754682 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:37:38.754991 kubelet[2717]: E0905 00:37:38.754703 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:37:38.755036 kubelet[2717]: E0905 00:37:38.755003 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:37:38.755036 kubelet[2717]: W0905 00:37:38.755012 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:37:38.755036 kubelet[2717]: E0905 00:37:38.755029 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:37:38.755252 kubelet[2717]: E0905 00:37:38.755227 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:37:38.755252 kubelet[2717]: W0905 00:37:38.755239 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:37:38.755252 kubelet[2717]: E0905 00:37:38.755251 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:37:38.755592 kubelet[2717]: E0905 00:37:38.755572 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:37:38.755592 kubelet[2717]: W0905 00:37:38.755588 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:37:38.755699 kubelet[2717]: E0905 00:37:38.755601 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:37:38.755844 kubelet[2717]: E0905 00:37:38.755829 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:37:38.755844 kubelet[2717]: W0905 00:37:38.755839 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:37:38.755955 kubelet[2717]: E0905 00:37:38.755848 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:37:38.766789 containerd[1560]: time="2025-09-05T00:37:38.766730725Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-fz799,Uid:a007fcce-2ad7-451c-808a-48154dcd5f41,Namespace:calico-system,Attempt:0,}" Sep 5 00:37:38.834766 containerd[1560]: time="2025-09-05T00:37:38.834697603Z" level=info msg="connecting to shim e71be5d4cba112fa77a1cd55a0f4979e1dca372d9ff1b05567ac4e1f4ba7957c" address="unix:///run/containerd/s/295de2a3d6480469c78070dd3888d4fe625d63bf8e34a47181f8fc044a189642" namespace=k8s.io protocol=ttrpc version=3 Sep 5 00:37:38.855567 kubelet[2717]: E0905 00:37:38.855527 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:37:38.855567 kubelet[2717]: W0905 00:37:38.855548 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:37:38.855567 kubelet[2717]: E0905 00:37:38.855567 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:37:38.855798 kubelet[2717]: E0905 00:37:38.855750 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:37:38.855798 kubelet[2717]: W0905 00:37:38.855758 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:37:38.855798 kubelet[2717]: E0905 00:37:38.855767 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:37:38.856804 kubelet[2717]: E0905 00:37:38.856236 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:37:38.856804 kubelet[2717]: W0905 00:37:38.856261 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:37:38.856804 kubelet[2717]: E0905 00:37:38.856288 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:37:38.857170 kubelet[2717]: E0905 00:37:38.857142 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:37:38.857170 kubelet[2717]: W0905 00:37:38.857158 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:37:38.857170 kubelet[2717]: E0905 00:37:38.857172 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:37:38.857418 kubelet[2717]: E0905 00:37:38.857398 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:37:38.857418 kubelet[2717]: W0905 00:37:38.857415 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:37:38.857529 kubelet[2717]: E0905 00:37:38.857471 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:37:38.857934 kubelet[2717]: E0905 00:37:38.857910 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:37:38.857934 kubelet[2717]: W0905 00:37:38.857932 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:37:38.858050 kubelet[2717]: E0905 00:37:38.858020 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:37:38.858348 kubelet[2717]: E0905 00:37:38.858326 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:37:38.858348 kubelet[2717]: W0905 00:37:38.858342 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:37:38.858476 kubelet[2717]: E0905 00:37:38.858433 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:37:38.858776 kubelet[2717]: E0905 00:37:38.858752 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:37:38.858776 kubelet[2717]: W0905 00:37:38.858769 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:37:38.858875 kubelet[2717]: E0905 00:37:38.858786 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:37:38.859082 kubelet[2717]: E0905 00:37:38.859050 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:37:38.859082 kubelet[2717]: W0905 00:37:38.859068 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:37:38.859168 kubelet[2717]: E0905 00:37:38.859099 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:37:38.859407 kubelet[2717]: E0905 00:37:38.859386 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:37:38.859407 kubelet[2717]: W0905 00:37:38.859401 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:37:38.859486 kubelet[2717]: E0905 00:37:38.859464 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:37:38.859725 kubelet[2717]: E0905 00:37:38.859703 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:37:38.859725 kubelet[2717]: W0905 00:37:38.859718 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:37:38.859830 kubelet[2717]: E0905 00:37:38.859811 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:37:38.860186 kubelet[2717]: E0905 00:37:38.860081 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:37:38.860186 kubelet[2717]: W0905 00:37:38.860096 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:37:38.860186 kubelet[2717]: E0905 00:37:38.860161 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:37:38.860091 systemd[1]: Started cri-containerd-e71be5d4cba112fa77a1cd55a0f4979e1dca372d9ff1b05567ac4e1f4ba7957c.scope - libcontainer container e71be5d4cba112fa77a1cd55a0f4979e1dca372d9ff1b05567ac4e1f4ba7957c. Sep 5 00:37:38.860632 kubelet[2717]: E0905 00:37:38.860459 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:37:38.860632 kubelet[2717]: W0905 00:37:38.860473 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:37:38.860632 kubelet[2717]: E0905 00:37:38.860489 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:37:38.861071 kubelet[2717]: E0905 00:37:38.861016 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:37:38.861071 kubelet[2717]: W0905 00:37:38.861068 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:37:38.861211 kubelet[2717]: E0905 00:37:38.861188 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:37:38.861599 kubelet[2717]: E0905 00:37:38.861354 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:37:38.861599 kubelet[2717]: W0905 00:37:38.861365 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:37:38.861599 kubelet[2717]: E0905 00:37:38.861424 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:37:38.861692 kubelet[2717]: E0905 00:37:38.861622 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:37:38.861692 kubelet[2717]: W0905 00:37:38.861634 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:37:38.861778 kubelet[2717]: E0905 00:37:38.861689 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:37:38.862669 kubelet[2717]: E0905 00:37:38.861933 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:37:38.862669 kubelet[2717]: W0905 00:37:38.861952 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:37:38.862669 kubelet[2717]: E0905 00:37:38.862059 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:37:38.862669 kubelet[2717]: E0905 00:37:38.862259 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:37:38.862669 kubelet[2717]: W0905 00:37:38.862269 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:37:38.862669 kubelet[2717]: E0905 00:37:38.862297 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:37:38.862669 kubelet[2717]: E0905 00:37:38.862593 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:37:38.862669 kubelet[2717]: W0905 00:37:38.862603 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:37:38.862669 kubelet[2717]: E0905 00:37:38.862620 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:37:38.863934 kubelet[2717]: E0905 00:37:38.863159 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:37:38.863934 kubelet[2717]: W0905 00:37:38.863178 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:37:38.863934 kubelet[2717]: E0905 00:37:38.863196 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:37:38.863934 kubelet[2717]: E0905 00:37:38.863469 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:37:38.863934 kubelet[2717]: W0905 00:37:38.863480 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:37:38.863934 kubelet[2717]: E0905 00:37:38.863547 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:37:38.863934 kubelet[2717]: E0905 00:37:38.863776 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:37:38.863934 kubelet[2717]: W0905 00:37:38.863787 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:37:38.863934 kubelet[2717]: E0905 00:37:38.863879 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:37:38.864378 kubelet[2717]: E0905 00:37:38.864296 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:37:38.864378 kubelet[2717]: W0905 00:37:38.864311 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:37:38.864574 kubelet[2717]: E0905 00:37:38.864535 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:37:38.865400 kubelet[2717]: E0905 00:37:38.865203 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:37:38.865400 kubelet[2717]: W0905 00:37:38.865221 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:37:38.866439 kubelet[2717]: E0905 00:37:38.866243 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:37:38.866657 kubelet[2717]: E0905 00:37:38.866633 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:37:38.866657 kubelet[2717]: W0905 00:37:38.866650 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:37:38.866657 kubelet[2717]: E0905 00:37:38.866663 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:37:38.876796 kubelet[2717]: E0905 00:37:38.876756 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:37:38.876796 kubelet[2717]: W0905 00:37:38.876778 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:37:38.876796 kubelet[2717]: E0905 00:37:38.876797 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:37:39.008104 containerd[1560]: time="2025-09-05T00:37:39.007966175Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-fz799,Uid:a007fcce-2ad7-451c-808a-48154dcd5f41,Namespace:calico-system,Attempt:0,} returns sandbox id \"e71be5d4cba112fa77a1cd55a0f4979e1dca372d9ff1b05567ac4e1f4ba7957c\"" Sep 5 00:37:40.343659 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount443769806.mount: Deactivated successfully. 
Sep 5 00:37:40.537880 kubelet[2717]: E0905 00:37:40.537764 2717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sr49w" podUID="4f591032-6561-4a75-a88e-cbe4c8ece143" Sep 5 00:37:40.822464 containerd[1560]: time="2025-09-05T00:37:40.822366832Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:37:40.831425 containerd[1560]: time="2025-09-05T00:37:40.831342070Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=35237389" Sep 5 00:37:40.834438 containerd[1560]: time="2025-09-05T00:37:40.834373560Z" level=info msg="ImageCreate event name:\"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:37:40.837138 containerd[1560]: time="2025-09-05T00:37:40.837097561Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:37:40.838587 containerd[1560]: time="2025-09-05T00:37:40.838527931Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"35237243\" in 2.225652828s" Sep 5 00:37:40.838587 containerd[1560]: time="2025-09-05T00:37:40.838580048Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference 
\"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\"" Sep 5 00:37:40.839660 containerd[1560]: time="2025-09-05T00:37:40.839616682Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\"" Sep 5 00:37:40.853068 containerd[1560]: time="2025-09-05T00:37:40.852998628Z" level=info msg="CreateContainer within sandbox \"1720e82366072376f340f7c865e5f9ae91f541883de21732cf598c69eae03166\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 5 00:37:40.867377 containerd[1560]: time="2025-09-05T00:37:40.867306902Z" level=info msg="Container 1a98c0d98accb20aca8379d99db65120ace41319b979560900d1c46d761f19ec: CDI devices from CRI Config.CDIDevices: []" Sep 5 00:37:40.879070 containerd[1560]: time="2025-09-05T00:37:40.879001961Z" level=info msg="CreateContainer within sandbox \"1720e82366072376f340f7c865e5f9ae91f541883de21732cf598c69eae03166\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"1a98c0d98accb20aca8379d99db65120ace41319b979560900d1c46d761f19ec\"" Sep 5 00:37:40.879655 containerd[1560]: time="2025-09-05T00:37:40.879599466Z" level=info msg="StartContainer for \"1a98c0d98accb20aca8379d99db65120ace41319b979560900d1c46d761f19ec\"" Sep 5 00:37:40.881152 containerd[1560]: time="2025-09-05T00:37:40.881096032Z" level=info msg="connecting to shim 1a98c0d98accb20aca8379d99db65120ace41319b979560900d1c46d761f19ec" address="unix:///run/containerd/s/4f5255a131debbc813a542d67f5d465989f8bc28750c4ce0fea248aef05eace6" protocol=ttrpc version=3 Sep 5 00:37:40.919388 systemd[1]: Started cri-containerd-1a98c0d98accb20aca8379d99db65120ace41319b979560900d1c46d761f19ec.scope - libcontainer container 1a98c0d98accb20aca8379d99db65120ace41319b979560900d1c46d761f19ec. 
Sep 5 00:37:40.983667 containerd[1560]: time="2025-09-05T00:37:40.983495812Z" level=info msg="StartContainer for \"1a98c0d98accb20aca8379d99db65120ace41319b979560900d1c46d761f19ec\" returns successfully" Sep 5 00:37:41.612414 kubelet[2717]: E0905 00:37:41.612372 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:37:41.666690 kubelet[2717]: E0905 00:37:41.666629 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:37:41.666690 kubelet[2717]: W0905 00:37:41.666649 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:37:41.666690 kubelet[2717]: E0905 00:37:41.666667 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:37:41.667021 kubelet[2717]: E0905 00:37:41.666932 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:37:41.667021 kubelet[2717]: W0905 00:37:41.666943 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:37:41.667021 kubelet[2717]: E0905 00:37:41.666953 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:37:41.667200 kubelet[2717]: E0905 00:37:41.667175 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:37:41.667200 kubelet[2717]: W0905 00:37:41.667188 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:37:41.667200 kubelet[2717]: E0905 00:37:41.667196 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:37:41.667388 kubelet[2717]: E0905 00:37:41.667365 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:37:41.667388 kubelet[2717]: W0905 00:37:41.667380 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:37:41.667457 kubelet[2717]: E0905 00:37:41.667390 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:37:41.667649 kubelet[2717]: E0905 00:37:41.667593 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:37:41.667649 kubelet[2717]: W0905 00:37:41.667610 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:37:41.667649 kubelet[2717]: E0905 00:37:41.667620 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:37:41.667834 kubelet[2717]: E0905 00:37:41.667811 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:37:41.667834 kubelet[2717]: W0905 00:37:41.667826 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:37:41.667834 kubelet[2717]: E0905 00:37:41.667835 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:37:41.668090 kubelet[2717]: E0905 00:37:41.668065 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:37:41.668090 kubelet[2717]: W0905 00:37:41.668079 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:37:41.668090 kubelet[2717]: E0905 00:37:41.668091 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:37:41.668311 kubelet[2717]: E0905 00:37:41.668286 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:37:41.668311 kubelet[2717]: W0905 00:37:41.668300 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:37:41.668311 kubelet[2717]: E0905 00:37:41.668309 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:37:41.668516 kubelet[2717]: E0905 00:37:41.668493 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:37:41.668516 kubelet[2717]: W0905 00:37:41.668507 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:37:41.668516 kubelet[2717]: E0905 00:37:41.668516 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:37:41.668710 kubelet[2717]: E0905 00:37:41.668687 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:37:41.668710 kubelet[2717]: W0905 00:37:41.668701 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:37:41.668710 kubelet[2717]: E0905 00:37:41.668710 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:37:41.668918 kubelet[2717]: E0905 00:37:41.668881 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:37:41.668918 kubelet[2717]: W0905 00:37:41.668909 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:37:41.669007 kubelet[2717]: E0905 00:37:41.668920 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:37:41.669104 kubelet[2717]: E0905 00:37:41.669082 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:37:41.669104 kubelet[2717]: W0905 00:37:41.669095 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:37:41.669104 kubelet[2717]: E0905 00:37:41.669104 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:37:41.669336 kubelet[2717]: E0905 00:37:41.669311 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:37:41.669336 kubelet[2717]: W0905 00:37:41.669326 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:37:41.669336 kubelet[2717]: E0905 00:37:41.669336 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:37:41.669515 kubelet[2717]: E0905 00:37:41.669494 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:37:41.669515 kubelet[2717]: W0905 00:37:41.669507 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:37:41.669515 kubelet[2717]: E0905 00:37:41.669516 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:37:41.669736 kubelet[2717]: E0905 00:37:41.669705 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:37:41.669736 kubelet[2717]: W0905 00:37:41.669719 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:37:41.669736 kubelet[2717]: E0905 00:37:41.669730 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:37:41.679325 kubelet[2717]: E0905 00:37:41.679298 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:37:41.679325 kubelet[2717]: W0905 00:37:41.679313 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:37:41.679325 kubelet[2717]: E0905 00:37:41.679326 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:37:41.679574 kubelet[2717]: E0905 00:37:41.679545 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:37:41.679574 kubelet[2717]: W0905 00:37:41.679558 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:37:41.679574 kubelet[2717]: E0905 00:37:41.679574 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:37:41.679870 kubelet[2717]: E0905 00:37:41.679841 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:37:41.679870 kubelet[2717]: W0905 00:37:41.679854 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:37:41.679988 kubelet[2717]: E0905 00:37:41.679871 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:37:41.680187 kubelet[2717]: E0905 00:37:41.680138 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:37:41.680187 kubelet[2717]: W0905 00:37:41.680179 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:37:41.680254 kubelet[2717]: E0905 00:37:41.680207 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:37:41.680458 kubelet[2717]: E0905 00:37:41.680438 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:37:41.680458 kubelet[2717]: W0905 00:37:41.680452 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:37:41.680537 kubelet[2717]: E0905 00:37:41.680470 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:37:41.680708 kubelet[2717]: E0905 00:37:41.680687 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:37:41.680708 kubelet[2717]: W0905 00:37:41.680702 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:37:41.680798 kubelet[2717]: E0905 00:37:41.680721 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:37:41.680968 kubelet[2717]: E0905 00:37:41.680945 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:37:41.680968 kubelet[2717]: W0905 00:37:41.680958 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:37:41.681056 kubelet[2717]: E0905 00:37:41.680993 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:37:41.681176 kubelet[2717]: E0905 00:37:41.681151 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:37:41.681176 kubelet[2717]: W0905 00:37:41.681164 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:37:41.681267 kubelet[2717]: E0905 00:37:41.681193 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:37:41.681369 kubelet[2717]: E0905 00:37:41.681346 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:37:41.681369 kubelet[2717]: W0905 00:37:41.681357 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:37:41.681441 kubelet[2717]: E0905 00:37:41.681374 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:37:41.681639 kubelet[2717]: E0905 00:37:41.681618 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:37:41.681639 kubelet[2717]: W0905 00:37:41.681634 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:37:41.681856 kubelet[2717]: E0905 00:37:41.681654 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:37:41.681911 kubelet[2717]: E0905 00:37:41.681857 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:37:41.681911 kubelet[2717]: W0905 00:37:41.681867 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:37:41.681911 kubelet[2717]: E0905 00:37:41.681883 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:37:41.682167 kubelet[2717]: E0905 00:37:41.682133 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:37:41.682167 kubelet[2717]: W0905 00:37:41.682160 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:37:41.682250 kubelet[2717]: E0905 00:37:41.682176 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:37:41.682413 kubelet[2717]: E0905 00:37:41.682392 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:37:41.682413 kubelet[2717]: W0905 00:37:41.682409 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:37:41.682500 kubelet[2717]: E0905 00:37:41.682426 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:37:41.682672 kubelet[2717]: E0905 00:37:41.682642 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:37:41.682672 kubelet[2717]: W0905 00:37:41.682657 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:37:41.682672 kubelet[2717]: E0905 00:37:41.682674 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:37:41.682973 kubelet[2717]: E0905 00:37:41.682954 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:37:41.682973 kubelet[2717]: W0905 00:37:41.682968 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:37:41.683049 kubelet[2717]: E0905 00:37:41.682982 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:37:41.683208 kubelet[2717]: E0905 00:37:41.683189 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:37:41.683208 kubelet[2717]: W0905 00:37:41.683201 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:37:41.683208 kubelet[2717]: E0905 00:37:41.683210 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:37:41.683407 kubelet[2717]: E0905 00:37:41.683391 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:37:41.683407 kubelet[2717]: W0905 00:37:41.683401 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:37:41.683476 kubelet[2717]: E0905 00:37:41.683409 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:37:41.683764 kubelet[2717]: E0905 00:37:41.683748 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:37:41.683764 kubelet[2717]: W0905 00:37:41.683758 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:37:41.683840 kubelet[2717]: E0905 00:37:41.683766 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:37:42.537445 kubelet[2717]: E0905 00:37:42.537381 2717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sr49w" podUID="4f591032-6561-4a75-a88e-cbe4c8ece143" Sep 5 00:37:42.588209 containerd[1560]: time="2025-09-05T00:37:42.588153483Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:37:42.589780 containerd[1560]: time="2025-09-05T00:37:42.589737437Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4446660" Sep 5 00:37:42.591070 containerd[1560]: time="2025-09-05T00:37:42.591018095Z" level=info msg="ImageCreate event name:\"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:37:42.592930 containerd[1560]: time="2025-09-05T00:37:42.592878501Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:37:42.593448 containerd[1560]: time="2025-09-05T00:37:42.593398269Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5939323\" in 1.753749281s" Sep 5 00:37:42.593448 containerd[1560]: time="2025-09-05T00:37:42.593439754Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\"" Sep 5 00:37:42.596056 containerd[1560]: time="2025-09-05T00:37:42.595652649Z" level=info msg="CreateContainer within sandbox \"e71be5d4cba112fa77a1cd55a0f4979e1dca372d9ff1b05567ac4e1f4ba7957c\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 5 00:37:42.605999 containerd[1560]: time="2025-09-05T00:37:42.605941562Z" level=info msg="Container d8bd030e80eb705784582858cbe53373c73a8ee12263527489087822268f0604: CDI devices from CRI Config.CDIDevices: []" Sep 5 00:37:42.613199 kubelet[2717]: I0905 00:37:42.613142 2717 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 5 00:37:42.613593 kubelet[2717]: E0905 00:37:42.613514 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:37:42.620487 containerd[1560]: time="2025-09-05T00:37:42.620423219Z" level=info msg="CreateContainer within sandbox \"e71be5d4cba112fa77a1cd55a0f4979e1dca372d9ff1b05567ac4e1f4ba7957c\" for 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"d8bd030e80eb705784582858cbe53373c73a8ee12263527489087822268f0604\"" Sep 5 00:37:42.621046 containerd[1560]: time="2025-09-05T00:37:42.620994963Z" level=info msg="StartContainer for \"d8bd030e80eb705784582858cbe53373c73a8ee12263527489087822268f0604\"" Sep 5 00:37:42.622420 containerd[1560]: time="2025-09-05T00:37:42.622392056Z" level=info msg="connecting to shim d8bd030e80eb705784582858cbe53373c73a8ee12263527489087822268f0604" address="unix:///run/containerd/s/295de2a3d6480469c78070dd3888d4fe625d63bf8e34a47181f8fc044a189642" protocol=ttrpc version=3 Sep 5 00:37:42.663140 systemd[1]: Started cri-containerd-d8bd030e80eb705784582858cbe53373c73a8ee12263527489087822268f0604.scope - libcontainer container d8bd030e80eb705784582858cbe53373c73a8ee12263527489087822268f0604. Sep 5 00:37:42.675478 kubelet[2717]: E0905 00:37:42.675429 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:37:42.675478 kubelet[2717]: W0905 00:37:42.675485 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:37:42.675722 kubelet[2717]: E0905 00:37:42.675513 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:37:42.723789 systemd[1]: cri-containerd-d8bd030e80eb705784582858cbe53373c73a8ee12263527489087822268f0604.scope: Deactivated successfully. Sep 5 00:37:42.724500 systemd[1]: cri-containerd-d8bd030e80eb705784582858cbe53373c73a8ee12263527489087822268f0604.scope: Consumed 43ms CPU time, 6.4M memory peak, 4.5M written to disk. 
Sep 5 00:37:42.726353 containerd[1560]: time="2025-09-05T00:37:42.726315283Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d8bd030e80eb705784582858cbe53373c73a8ee12263527489087822268f0604\" id:\"d8bd030e80eb705784582858cbe53373c73a8ee12263527489087822268f0604\" pid:3458 exited_at:{seconds:1757032662 nanos:725788872}" Sep 5 00:37:42.914171 containerd[1560]: time="2025-09-05T00:37:42.913974637Z" level=info msg="received exit event container_id:\"d8bd030e80eb705784582858cbe53373c73a8ee12263527489087822268f0604\" id:\"d8bd030e80eb705784582858cbe53373c73a8ee12263527489087822268f0604\" pid:3458 exited_at:{seconds:1757032662 nanos:725788872}" Sep 5 00:37:42.926533 containerd[1560]: time="2025-09-05T00:37:42.926379226Z" level=info msg="StartContainer for \"d8bd030e80eb705784582858cbe53373c73a8ee12263527489087822268f0604\" returns successfully" Sep 5 00:37:42.940091 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d8bd030e80eb705784582858cbe53373c73a8ee12263527489087822268f0604-rootfs.mount: Deactivated successfully. 
Sep 5 00:37:43.619028 containerd[1560]: time="2025-09-05T00:37:43.618843649Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Sep 5 00:37:43.635105 kubelet[2717]: I0905 00:37:43.634947 2717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-64c78498f9-w4k8w" podStartSLOduration=3.407851129 podStartE2EDuration="5.634926129s" podCreationTimestamp="2025-09-05 00:37:38 +0000 UTC" firstStartedPulling="2025-09-05 00:37:38.612401415 +0000 UTC m=+20.184724328" lastFinishedPulling="2025-09-05 00:37:40.839476395 +0000 UTC m=+22.411799328" observedRunningTime="2025-09-05 00:37:41.627210848 +0000 UTC m=+23.199533762" watchObservedRunningTime="2025-09-05 00:37:43.634926129 +0000 UTC m=+25.207249062" Sep 5 00:37:44.537380 kubelet[2717]: E0905 00:37:44.537306 2717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sr49w" podUID="4f591032-6561-4a75-a88e-cbe4c8ece143" Sep 5 00:37:46.537755 kubelet[2717]: E0905 00:37:46.537619 2717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sr49w" podUID="4f591032-6561-4a75-a88e-cbe4c8ece143" Sep 5 00:37:48.560421 kubelet[2717]: E0905 00:37:48.560345 2717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sr49w" podUID="4f591032-6561-4a75-a88e-cbe4c8ece143" Sep 5 00:37:48.697291 containerd[1560]: time="2025-09-05T00:37:48.697238422Z" level=info 
msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:37:48.698164 containerd[1560]: time="2025-09-05T00:37:48.698129078Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=70440613" Sep 5 00:37:48.699379 containerd[1560]: time="2025-09-05T00:37:48.699333993Z" level=info msg="ImageCreate event name:\"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:37:48.701370 containerd[1560]: time="2025-09-05T00:37:48.701329955Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:37:48.702086 containerd[1560]: time="2025-09-05T00:37:48.702056902Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"71933316\" in 5.08317685s" Sep 5 00:37:48.702086 containerd[1560]: time="2025-09-05T00:37:48.702084436Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\"" Sep 5 00:37:48.703992 containerd[1560]: time="2025-09-05T00:37:48.703928974Z" level=info msg="CreateContainer within sandbox \"e71be5d4cba112fa77a1cd55a0f4979e1dca372d9ff1b05567ac4e1f4ba7957c\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 5 00:37:48.714105 containerd[1560]: time="2025-09-05T00:37:48.714058812Z" level=info msg="Container 00b3922d7464396b10742dcaa5d92c21620ebb2dcf8513ed8bd2a784748b1fb1: CDI devices from CRI Config.CDIDevices: []" 
Sep 5 00:37:48.724583 containerd[1560]: time="2025-09-05T00:37:48.724531778Z" level=info msg="CreateContainer within sandbox \"e71be5d4cba112fa77a1cd55a0f4979e1dca372d9ff1b05567ac4e1f4ba7957c\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"00b3922d7464396b10742dcaa5d92c21620ebb2dcf8513ed8bd2a784748b1fb1\"" Sep 5 00:37:48.725945 containerd[1560]: time="2025-09-05T00:37:48.725065808Z" level=info msg="StartContainer for \"00b3922d7464396b10742dcaa5d92c21620ebb2dcf8513ed8bd2a784748b1fb1\"" Sep 5 00:37:48.726769 containerd[1560]: time="2025-09-05T00:37:48.726736206Z" level=info msg="connecting to shim 00b3922d7464396b10742dcaa5d92c21620ebb2dcf8513ed8bd2a784748b1fb1" address="unix:///run/containerd/s/295de2a3d6480469c78070dd3888d4fe625d63bf8e34a47181f8fc044a189642" protocol=ttrpc version=3 Sep 5 00:37:48.763140 systemd[1]: Started cri-containerd-00b3922d7464396b10742dcaa5d92c21620ebb2dcf8513ed8bd2a784748b1fb1.scope - libcontainer container 00b3922d7464396b10742dcaa5d92c21620ebb2dcf8513ed8bd2a784748b1fb1. Sep 5 00:37:48.814626 containerd[1560]: time="2025-09-05T00:37:48.814512713Z" level=info msg="StartContainer for \"00b3922d7464396b10742dcaa5d92c21620ebb2dcf8513ed8bd2a784748b1fb1\" returns successfully" Sep 5 00:37:49.991863 containerd[1560]: time="2025-09-05T00:37:49.991798096Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 5 00:37:49.995141 systemd[1]: cri-containerd-00b3922d7464396b10742dcaa5d92c21620ebb2dcf8513ed8bd2a784748b1fb1.scope: Deactivated successfully. Sep 5 00:37:49.995552 systemd[1]: cri-containerd-00b3922d7464396b10742dcaa5d92c21620ebb2dcf8513ed8bd2a784748b1fb1.scope: Consumed 632ms CPU time, 180.1M memory peak, 2.5M read from disk, 171.3M written to disk. 
Sep 5 00:37:49.996017 containerd[1560]: time="2025-09-05T00:37:49.995964121Z" level=info msg="received exit event container_id:\"00b3922d7464396b10742dcaa5d92c21620ebb2dcf8513ed8bd2a784748b1fb1\" id:\"00b3922d7464396b10742dcaa5d92c21620ebb2dcf8513ed8bd2a784748b1fb1\" pid:3553 exited_at:{seconds:1757032669 nanos:995660363}" Sep 5 00:37:49.996222 containerd[1560]: time="2025-09-05T00:37:49.996167238Z" level=info msg="TaskExit event in podsandbox handler container_id:\"00b3922d7464396b10742dcaa5d92c21620ebb2dcf8513ed8bd2a784748b1fb1\" id:\"00b3922d7464396b10742dcaa5d92c21620ebb2dcf8513ed8bd2a784748b1fb1\" pid:3553 exited_at:{seconds:1757032669 nanos:995660363}" Sep 5 00:37:50.019162 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-00b3922d7464396b10742dcaa5d92c21620ebb2dcf8513ed8bd2a784748b1fb1-rootfs.mount: Deactivated successfully. Sep 5 00:37:50.044918 kubelet[2717]: I0905 00:37:50.044078 2717 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Sep 5 00:37:50.185531 systemd[1]: Created slice kubepods-burstable-pod6e23c2dc_ebba_4001_ae59_3bafc36e6807.slice - libcontainer container kubepods-burstable-pod6e23c2dc_ebba_4001_ae59_3bafc36e6807.slice. Sep 5 00:37:50.220883 systemd[1]: Created slice kubepods-besteffort-pod88cce520_c0f7_4ac8_8198_0a3910b138e3.slice - libcontainer container kubepods-besteffort-pod88cce520_c0f7_4ac8_8198_0a3910b138e3.slice. Sep 5 00:37:50.228342 systemd[1]: Created slice kubepods-burstable-podfc7981ab_0414_4dff_966f_9c3d937d39c7.slice - libcontainer container kubepods-burstable-podfc7981ab_0414_4dff_966f_9c3d937d39c7.slice. Sep 5 00:37:50.234580 systemd[1]: Created slice kubepods-besteffort-podbf0a13b6_b355_41a8_a285_c37ee53d62ed.slice - libcontainer container kubepods-besteffort-podbf0a13b6_b355_41a8_a285_c37ee53d62ed.slice. 
Sep 5 00:37:50.240258 systemd[1]: Created slice kubepods-besteffort-pod9bc6b01a_afd8_4e16_a565_1883e0fbea24.slice - libcontainer container kubepods-besteffort-pod9bc6b01a_afd8_4e16_a565_1883e0fbea24.slice. Sep 5 00:37:50.245777 systemd[1]: Created slice kubepods-besteffort-pod8ead0dd3_08b4_47ab_8942_ea53d5aa41ef.slice - libcontainer container kubepods-besteffort-pod8ead0dd3_08b4_47ab_8942_ea53d5aa41ef.slice. Sep 5 00:37:50.249523 systemd[1]: Created slice kubepods-besteffort-pode3f5473f_8e14_474e_97d3_d029001dc5fc.slice - libcontainer container kubepods-besteffort-pode3f5473f_8e14_474e_97d3_d029001dc5fc.slice. Sep 5 00:37:50.331474 kubelet[2717]: I0905 00:37:50.331396 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/8ead0dd3-08b4-47ab-8942-ea53d5aa41ef-whisker-backend-key-pair\") pod \"whisker-6f9fcc475b-2cw2h\" (UID: \"8ead0dd3-08b4-47ab-8942-ea53d5aa41ef\") " pod="calico-system/whisker-6f9fcc475b-2cw2h" Sep 5 00:37:50.331655 kubelet[2717]: I0905 00:37:50.331554 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fc7981ab-0414-4dff-966f-9c3d937d39c7-config-volume\") pod \"coredns-7c65d6cfc9-g7xpz\" (UID: \"fc7981ab-0414-4dff-966f-9c3d937d39c7\") " pod="kube-system/coredns-7c65d6cfc9-g7xpz" Sep 5 00:37:50.331655 kubelet[2717]: I0905 00:37:50.331616 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8ead0dd3-08b4-47ab-8942-ea53d5aa41ef-whisker-ca-bundle\") pod \"whisker-6f9fcc475b-2cw2h\" (UID: \"8ead0dd3-08b4-47ab-8942-ea53d5aa41ef\") " pod="calico-system/whisker-6f9fcc475b-2cw2h" Sep 5 00:37:50.331655 kubelet[2717]: I0905 00:37:50.331646 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-9z8gf\" (UniqueName: \"kubernetes.io/projected/fc7981ab-0414-4dff-966f-9c3d937d39c7-kube-api-access-9z8gf\") pod \"coredns-7c65d6cfc9-g7xpz\" (UID: \"fc7981ab-0414-4dff-966f-9c3d937d39c7\") " pod="kube-system/coredns-7c65d6cfc9-g7xpz" Sep 5 00:37:50.331741 kubelet[2717]: I0905 00:37:50.331664 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6e23c2dc-ebba-4001-ae59-3bafc36e6807-config-volume\") pod \"coredns-7c65d6cfc9-mkvsp\" (UID: \"6e23c2dc-ebba-4001-ae59-3bafc36e6807\") " pod="kube-system/coredns-7c65d6cfc9-mkvsp" Sep 5 00:37:50.331741 kubelet[2717]: I0905 00:37:50.331686 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e3f5473f-8e14-474e-97d3-d029001dc5fc-goldmane-ca-bundle\") pod \"goldmane-7988f88666-vmwjg\" (UID: \"e3f5473f-8e14-474e-97d3-d029001dc5fc\") " pod="calico-system/goldmane-7988f88666-vmwjg" Sep 5 00:37:50.331741 kubelet[2717]: I0905 00:37:50.331704 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/e3f5473f-8e14-474e-97d3-d029001dc5fc-goldmane-key-pair\") pod \"goldmane-7988f88666-vmwjg\" (UID: \"e3f5473f-8e14-474e-97d3-d029001dc5fc\") " pod="calico-system/goldmane-7988f88666-vmwjg" Sep 5 00:37:50.331810 kubelet[2717]: I0905 00:37:50.331751 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4dmj\" (UniqueName: \"kubernetes.io/projected/88cce520-c0f7-4ac8-8198-0a3910b138e3-kube-api-access-n4dmj\") pod \"calico-kube-controllers-54c46b8858-djlns\" (UID: \"88cce520-c0f7-4ac8-8198-0a3910b138e3\") " pod="calico-system/calico-kube-controllers-54c46b8858-djlns" Sep 5 00:37:50.331810 kubelet[2717]: I0905 00:37:50.331784 2717 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/bf0a13b6-b355-41a8-a285-c37ee53d62ed-calico-apiserver-certs\") pod \"calico-apiserver-55cb865578-lg8kr\" (UID: \"bf0a13b6-b355-41a8-a285-c37ee53d62ed\") " pod="calico-apiserver/calico-apiserver-55cb865578-lg8kr" Sep 5 00:37:50.331810 kubelet[2717]: I0905 00:37:50.331807 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bj48g\" (UniqueName: \"kubernetes.io/projected/e3f5473f-8e14-474e-97d3-d029001dc5fc-kube-api-access-bj48g\") pod \"goldmane-7988f88666-vmwjg\" (UID: \"e3f5473f-8e14-474e-97d3-d029001dc5fc\") " pod="calico-system/goldmane-7988f88666-vmwjg" Sep 5 00:37:50.331879 kubelet[2717]: I0905 00:37:50.331823 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7wns4\" (UniqueName: \"kubernetes.io/projected/8ead0dd3-08b4-47ab-8942-ea53d5aa41ef-kube-api-access-7wns4\") pod \"whisker-6f9fcc475b-2cw2h\" (UID: \"8ead0dd3-08b4-47ab-8942-ea53d5aa41ef\") " pod="calico-system/whisker-6f9fcc475b-2cw2h" Sep 5 00:37:50.331879 kubelet[2717]: I0905 00:37:50.331849 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/88cce520-c0f7-4ac8-8198-0a3910b138e3-tigera-ca-bundle\") pod \"calico-kube-controllers-54c46b8858-djlns\" (UID: \"88cce520-c0f7-4ac8-8198-0a3910b138e3\") " pod="calico-system/calico-kube-controllers-54c46b8858-djlns" Sep 5 00:37:50.331969 kubelet[2717]: I0905 00:37:50.331866 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5pdv7\" (UniqueName: \"kubernetes.io/projected/bf0a13b6-b355-41a8-a285-c37ee53d62ed-kube-api-access-5pdv7\") pod \"calico-apiserver-55cb865578-lg8kr\" (UID: \"bf0a13b6-b355-41a8-a285-c37ee53d62ed\") " 
pod="calico-apiserver/calico-apiserver-55cb865578-lg8kr" Sep 5 00:37:50.331969 kubelet[2717]: I0905 00:37:50.331956 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/9bc6b01a-afd8-4e16-a565-1883e0fbea24-calico-apiserver-certs\") pod \"calico-apiserver-55cb865578-qrpwz\" (UID: \"9bc6b01a-afd8-4e16-a565-1883e0fbea24\") " pod="calico-apiserver/calico-apiserver-55cb865578-qrpwz" Sep 5 00:37:50.332029 kubelet[2717]: I0905 00:37:50.331976 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e3f5473f-8e14-474e-97d3-d029001dc5fc-config\") pod \"goldmane-7988f88666-vmwjg\" (UID: \"e3f5473f-8e14-474e-97d3-d029001dc5fc\") " pod="calico-system/goldmane-7988f88666-vmwjg" Sep 5 00:37:50.332029 kubelet[2717]: I0905 00:37:50.332000 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mjbl\" (UniqueName: \"kubernetes.io/projected/6e23c2dc-ebba-4001-ae59-3bafc36e6807-kube-api-access-4mjbl\") pod \"coredns-7c65d6cfc9-mkvsp\" (UID: \"6e23c2dc-ebba-4001-ae59-3bafc36e6807\") " pod="kube-system/coredns-7c65d6cfc9-mkvsp" Sep 5 00:37:50.332029 kubelet[2717]: I0905 00:37:50.332024 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rtr47\" (UniqueName: \"kubernetes.io/projected/9bc6b01a-afd8-4e16-a565-1883e0fbea24-kube-api-access-rtr47\") pod \"calico-apiserver-55cb865578-qrpwz\" (UID: \"9bc6b01a-afd8-4e16-a565-1883e0fbea24\") " pod="calico-apiserver/calico-apiserver-55cb865578-qrpwz" Sep 5 00:37:50.489343 kubelet[2717]: E0905 00:37:50.489296 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:37:50.490029 containerd[1560]: 
time="2025-09-05T00:37:50.489966061Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-mkvsp,Uid:6e23c2dc-ebba-4001-ae59-3bafc36e6807,Namespace:kube-system,Attempt:0,}" Sep 5 00:37:50.525443 containerd[1560]: time="2025-09-05T00:37:50.525305859Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-54c46b8858-djlns,Uid:88cce520-c0f7-4ac8-8198-0a3910b138e3,Namespace:calico-system,Attempt:0,}" Sep 5 00:37:50.532581 kubelet[2717]: E0905 00:37:50.532492 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:37:50.533181 containerd[1560]: time="2025-09-05T00:37:50.533125472Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-g7xpz,Uid:fc7981ab-0414-4dff-966f-9c3d937d39c7,Namespace:kube-system,Attempt:0,}" Sep 5 00:37:50.539312 containerd[1560]: time="2025-09-05T00:37:50.539226235Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55cb865578-lg8kr,Uid:bf0a13b6-b355-41a8-a285-c37ee53d62ed,Namespace:calico-apiserver,Attempt:0,}" Sep 5 00:37:50.544098 containerd[1560]: time="2025-09-05T00:37:50.544058023Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55cb865578-qrpwz,Uid:9bc6b01a-afd8-4e16-a565-1883e0fbea24,Namespace:calico-apiserver,Attempt:0,}" Sep 5 00:37:50.548740 systemd[1]: Created slice kubepods-besteffort-pod4f591032_6561_4a75_a88e_cbe4c8ece143.slice - libcontainer container kubepods-besteffort-pod4f591032_6561_4a75_a88e_cbe4c8ece143.slice. 
Sep 5 00:37:50.551878 containerd[1560]: time="2025-09-05T00:37:50.551844251Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6f9fcc475b-2cw2h,Uid:8ead0dd3-08b4-47ab-8942-ea53d5aa41ef,Namespace:calico-system,Attempt:0,}" Sep 5 00:37:50.555517 containerd[1560]: time="2025-09-05T00:37:50.555358749Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-vmwjg,Uid:e3f5473f-8e14-474e-97d3-d029001dc5fc,Namespace:calico-system,Attempt:0,}" Sep 5 00:37:50.557693 containerd[1560]: time="2025-09-05T00:37:50.557644232Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-sr49w,Uid:4f591032-6561-4a75-a88e-cbe4c8ece143,Namespace:calico-system,Attempt:0,}" Sep 5 00:37:50.641081 containerd[1560]: time="2025-09-05T00:37:50.640341786Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 5 00:37:50.712744 containerd[1560]: time="2025-09-05T00:37:50.712663740Z" level=error msg="Failed to destroy network for sandbox \"9dae2a94b3bbd5dbc25a9985cb402134a76a6309cbb3e1cce82aeb77fd01b769\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:37:50.720587 containerd[1560]: time="2025-09-05T00:37:50.720521981Z" level=error msg="Failed to destroy network for sandbox \"ef7a47c4b99549324b6cf342277f46a4c6b25c424222a79c65645f8bd902c62f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:37:50.733833 containerd[1560]: time="2025-09-05T00:37:50.733749063Z" level=error msg="Failed to destroy network for sandbox \"199be4929145032ee2bbbfc46990319344487e8765c8a779d994c0527268eda6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running 
and has mounted /var/lib/calico/" Sep 5 00:37:50.741963 containerd[1560]: time="2025-09-05T00:37:50.741880531Z" level=error msg="Failed to destroy network for sandbox \"6fd82addadce11f886be1d621689a34437168a69564462978118c0f87967296b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:37:50.743587 containerd[1560]: time="2025-09-05T00:37:50.743543081Z" level=error msg="Failed to destroy network for sandbox \"6216c82c67816c30d07447aa090344e96196dc760da5233fdd9477f30e21efd8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:37:50.744596 containerd[1560]: time="2025-09-05T00:37:50.744550903Z" level=error msg="Failed to destroy network for sandbox \"f998215ce809cf52342d83f075b50685953c72c7ea39aac5cf13f5b452acdf0e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:37:50.744749 containerd[1560]: time="2025-09-05T00:37:50.744719379Z" level=error msg="Failed to destroy network for sandbox \"8217536535aaa5dc0118b4624b51974ba7b7d6703808dfee882688e7ad810f0a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:37:50.751314 containerd[1560]: time="2025-09-05T00:37:50.751259629Z" level=error msg="Failed to destroy network for sandbox \"2b1abb86a2742caaadc6882a26acdfba06d2c1bc453090702e4c88548b983007\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 
00:37:50.858662 containerd[1560]: time="2025-09-05T00:37:50.858473437Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6f9fcc475b-2cw2h,Uid:8ead0dd3-08b4-47ab-8942-ea53d5aa41ef,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9dae2a94b3bbd5dbc25a9985cb402134a76a6309cbb3e1cce82aeb77fd01b769\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:37:50.868574 kubelet[2717]: E0905 00:37:50.868517 2717 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9dae2a94b3bbd5dbc25a9985cb402134a76a6309cbb3e1cce82aeb77fd01b769\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:37:50.868692 kubelet[2717]: E0905 00:37:50.868590 2717 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9dae2a94b3bbd5dbc25a9985cb402134a76a6309cbb3e1cce82aeb77fd01b769\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6f9fcc475b-2cw2h" Sep 5 00:37:50.868692 kubelet[2717]: E0905 00:37:50.868612 2717 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9dae2a94b3bbd5dbc25a9985cb402134a76a6309cbb3e1cce82aeb77fd01b769\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6f9fcc475b-2cw2h" 
Sep 5 00:37:50.868692 kubelet[2717]: E0905 00:37:50.868657 2717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6f9fcc475b-2cw2h_calico-system(8ead0dd3-08b4-47ab-8942-ea53d5aa41ef)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6f9fcc475b-2cw2h_calico-system(8ead0dd3-08b4-47ab-8942-ea53d5aa41ef)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9dae2a94b3bbd5dbc25a9985cb402134a76a6309cbb3e1cce82aeb77fd01b769\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6f9fcc475b-2cw2h" podUID="8ead0dd3-08b4-47ab-8942-ea53d5aa41ef" Sep 5 00:37:50.921559 containerd[1560]: time="2025-09-05T00:37:50.921475832Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55cb865578-lg8kr,Uid:bf0a13b6-b355-41a8-a285-c37ee53d62ed,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ef7a47c4b99549324b6cf342277f46a4c6b25c424222a79c65645f8bd902c62f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:37:50.921775 kubelet[2717]: E0905 00:37:50.921718 2717 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ef7a47c4b99549324b6cf342277f46a4c6b25c424222a79c65645f8bd902c62f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:37:50.921826 kubelet[2717]: E0905 00:37:50.921798 2717 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = 
failed to setup network for sandbox \"ef7a47c4b99549324b6cf342277f46a4c6b25c424222a79c65645f8bd902c62f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-55cb865578-lg8kr" Sep 5 00:37:50.921867 kubelet[2717]: E0905 00:37:50.921823 2717 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ef7a47c4b99549324b6cf342277f46a4c6b25c424222a79c65645f8bd902c62f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-55cb865578-lg8kr" Sep 5 00:37:50.921929 kubelet[2717]: E0905 00:37:50.921880 2717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-55cb865578-lg8kr_calico-apiserver(bf0a13b6-b355-41a8-a285-c37ee53d62ed)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-55cb865578-lg8kr_calico-apiserver(bf0a13b6-b355-41a8-a285-c37ee53d62ed)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ef7a47c4b99549324b6cf342277f46a4c6b25c424222a79c65645f8bd902c62f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-55cb865578-lg8kr" podUID="bf0a13b6-b355-41a8-a285-c37ee53d62ed" Sep 5 00:37:50.923125 containerd[1560]: time="2025-09-05T00:37:50.922988363Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-g7xpz,Uid:fc7981ab-0414-4dff-966f-9c3d937d39c7,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network 
for sandbox \"199be4929145032ee2bbbfc46990319344487e8765c8a779d994c0527268eda6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:37:50.923305 kubelet[2717]: E0905 00:37:50.923275 2717 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"199be4929145032ee2bbbfc46990319344487e8765c8a779d994c0527268eda6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:37:50.923367 kubelet[2717]: E0905 00:37:50.923314 2717 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"199be4929145032ee2bbbfc46990319344487e8765c8a779d994c0527268eda6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-g7xpz" Sep 5 00:37:50.923367 kubelet[2717]: E0905 00:37:50.923336 2717 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"199be4929145032ee2bbbfc46990319344487e8765c8a779d994c0527268eda6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-g7xpz" Sep 5 00:37:50.923428 kubelet[2717]: E0905 00:37:50.923373 2717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-g7xpz_kube-system(fc7981ab-0414-4dff-966f-9c3d937d39c7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"coredns-7c65d6cfc9-g7xpz_kube-system(fc7981ab-0414-4dff-966f-9c3d937d39c7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"199be4929145032ee2bbbfc46990319344487e8765c8a779d994c0527268eda6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-g7xpz" podUID="fc7981ab-0414-4dff-966f-9c3d937d39c7" Sep 5 00:37:50.924991 containerd[1560]: time="2025-09-05T00:37:50.924941802Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-vmwjg,Uid:e3f5473f-8e14-474e-97d3-d029001dc5fc,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6fd82addadce11f886be1d621689a34437168a69564462978118c0f87967296b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:37:50.925188 kubelet[2717]: E0905 00:37:50.925101 2717 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6fd82addadce11f886be1d621689a34437168a69564462978118c0f87967296b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:37:50.925188 kubelet[2717]: E0905 00:37:50.925142 2717 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6fd82addadce11f886be1d621689a34437168a69564462978118c0f87967296b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/goldmane-7988f88666-vmwjg" Sep 5 00:37:50.925188 kubelet[2717]: E0905 00:37:50.925161 2717 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6fd82addadce11f886be1d621689a34437168a69564462978118c0f87967296b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-vmwjg" Sep 5 00:37:50.925437 kubelet[2717]: E0905 00:37:50.925223 2717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7988f88666-vmwjg_calico-system(e3f5473f-8e14-474e-97d3-d029001dc5fc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7988f88666-vmwjg_calico-system(e3f5473f-8e14-474e-97d3-d029001dc5fc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6fd82addadce11f886be1d621689a34437168a69564462978118c0f87967296b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7988f88666-vmwjg" podUID="e3f5473f-8e14-474e-97d3-d029001dc5fc" Sep 5 00:37:50.926332 containerd[1560]: time="2025-09-05T00:37:50.926250126Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-mkvsp,Uid:6e23c2dc-ebba-4001-ae59-3bafc36e6807,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6216c82c67816c30d07447aa090344e96196dc760da5233fdd9477f30e21efd8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:37:50.926513 kubelet[2717]: E0905 00:37:50.926483 2717 log.go:32] 
"RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6216c82c67816c30d07447aa090344e96196dc760da5233fdd9477f30e21efd8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:37:50.926576 kubelet[2717]: E0905 00:37:50.926519 2717 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6216c82c67816c30d07447aa090344e96196dc760da5233fdd9477f30e21efd8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-mkvsp" Sep 5 00:37:50.926576 kubelet[2717]: E0905 00:37:50.926549 2717 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6216c82c67816c30d07447aa090344e96196dc760da5233fdd9477f30e21efd8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-mkvsp" Sep 5 00:37:50.926625 kubelet[2717]: E0905 00:37:50.926581 2717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-mkvsp_kube-system(6e23c2dc-ebba-4001-ae59-3bafc36e6807)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-mkvsp_kube-system(6e23c2dc-ebba-4001-ae59-3bafc36e6807)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6216c82c67816c30d07447aa090344e96196dc760da5233fdd9477f30e21efd8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-mkvsp" podUID="6e23c2dc-ebba-4001-ae59-3bafc36e6807" Sep 5 00:37:50.927586 containerd[1560]: time="2025-09-05T00:37:50.927546223Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55cb865578-qrpwz,Uid:9bc6b01a-afd8-4e16-a565-1883e0fbea24,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f998215ce809cf52342d83f075b50685953c72c7ea39aac5cf13f5b452acdf0e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:37:50.927762 kubelet[2717]: E0905 00:37:50.927724 2717 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f998215ce809cf52342d83f075b50685953c72c7ea39aac5cf13f5b452acdf0e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:37:50.927829 kubelet[2717]: E0905 00:37:50.927762 2717 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f998215ce809cf52342d83f075b50685953c72c7ea39aac5cf13f5b452acdf0e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-55cb865578-qrpwz" Sep 5 00:37:50.927829 kubelet[2717]: E0905 00:37:50.927781 2717 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f998215ce809cf52342d83f075b50685953c72c7ea39aac5cf13f5b452acdf0e\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-55cb865578-qrpwz" Sep 5 00:37:50.927920 kubelet[2717]: E0905 00:37:50.927816 2717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-55cb865578-qrpwz_calico-apiserver(9bc6b01a-afd8-4e16-a565-1883e0fbea24)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-55cb865578-qrpwz_calico-apiserver(9bc6b01a-afd8-4e16-a565-1883e0fbea24)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f998215ce809cf52342d83f075b50685953c72c7ea39aac5cf13f5b452acdf0e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-55cb865578-qrpwz" podUID="9bc6b01a-afd8-4e16-a565-1883e0fbea24" Sep 5 00:37:50.928964 containerd[1560]: time="2025-09-05T00:37:50.928922482Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-sr49w,Uid:4f591032-6561-4a75-a88e-cbe4c8ece143,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8217536535aaa5dc0118b4624b51974ba7b7d6703808dfee882688e7ad810f0a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:37:50.929121 kubelet[2717]: E0905 00:37:50.929073 2717 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8217536535aaa5dc0118b4624b51974ba7b7d6703808dfee882688e7ad810f0a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container 
is running and has mounted /var/lib/calico/" Sep 5 00:37:50.929121 kubelet[2717]: E0905 00:37:50.929107 2717 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8217536535aaa5dc0118b4624b51974ba7b7d6703808dfee882688e7ad810f0a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-sr49w" Sep 5 00:37:50.929240 kubelet[2717]: E0905 00:37:50.929127 2717 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8217536535aaa5dc0118b4624b51974ba7b7d6703808dfee882688e7ad810f0a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-sr49w" Sep 5 00:37:50.929240 kubelet[2717]: E0905 00:37:50.929185 2717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-sr49w_calico-system(4f591032-6561-4a75-a88e-cbe4c8ece143)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-sr49w_calico-system(4f591032-6561-4a75-a88e-cbe4c8ece143)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8217536535aaa5dc0118b4624b51974ba7b7d6703808dfee882688e7ad810f0a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-sr49w" podUID="4f591032-6561-4a75-a88e-cbe4c8ece143" Sep 5 00:37:50.930208 containerd[1560]: time="2025-09-05T00:37:50.930150183Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-54c46b8858-djlns,Uid:88cce520-c0f7-4ac8-8198-0a3910b138e3,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2b1abb86a2742caaadc6882a26acdfba06d2c1bc453090702e4c88548b983007\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:37:50.930339 kubelet[2717]: E0905 00:37:50.930308 2717 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2b1abb86a2742caaadc6882a26acdfba06d2c1bc453090702e4c88548b983007\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:37:50.930400 kubelet[2717]: E0905 00:37:50.930341 2717 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2b1abb86a2742caaadc6882a26acdfba06d2c1bc453090702e4c88548b983007\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-54c46b8858-djlns" Sep 5 00:37:50.930400 kubelet[2717]: E0905 00:37:50.930372 2717 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2b1abb86a2742caaadc6882a26acdfba06d2c1bc453090702e4c88548b983007\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-54c46b8858-djlns" Sep 5 00:37:50.930472 kubelet[2717]: E0905 00:37:50.930403 
2717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-54c46b8858-djlns_calico-system(88cce520-c0f7-4ac8-8198-0a3910b138e3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-54c46b8858-djlns_calico-system(88cce520-c0f7-4ac8-8198-0a3910b138e3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2b1abb86a2742caaadc6882a26acdfba06d2c1bc453090702e4c88548b983007\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-54c46b8858-djlns" podUID="88cce520-c0f7-4ac8-8198-0a3910b138e3" Sep 5 00:37:57.857516 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3371625345.mount: Deactivated successfully. Sep 5 00:37:58.891132 containerd[1560]: time="2025-09-05T00:37:58.891067311Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:37:58.894042 containerd[1560]: time="2025-09-05T00:37:58.894008677Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=157078339" Sep 5 00:37:58.895666 containerd[1560]: time="2025-09-05T00:37:58.895605989Z" level=info msg="ImageCreate event name:\"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:37:58.898249 containerd[1560]: time="2025-09-05T00:37:58.898214387Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:37:58.898905 containerd[1560]: time="2025-09-05T00:37:58.898852827Z" level=info msg="Pulled image 
\"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"157078201\" in 8.256918753s" Sep 5 00:37:58.898952 containerd[1560]: time="2025-09-05T00:37:58.898919950Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\"" Sep 5 00:37:58.914094 containerd[1560]: time="2025-09-05T00:37:58.914023972Z" level=info msg="CreateContainer within sandbox \"e71be5d4cba112fa77a1cd55a0f4979e1dca372d9ff1b05567ac4e1f4ba7957c\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 5 00:37:58.924084 containerd[1560]: time="2025-09-05T00:37:58.924030792Z" level=info msg="Container c364127f88db9f9a27639f3619c8b7a054c608174db95306f5480e5e5d4c876b: CDI devices from CRI Config.CDIDevices: []" Sep 5 00:37:58.949568 containerd[1560]: time="2025-09-05T00:37:58.949499932Z" level=info msg="CreateContainer within sandbox \"e71be5d4cba112fa77a1cd55a0f4979e1dca372d9ff1b05567ac4e1f4ba7957c\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"c364127f88db9f9a27639f3619c8b7a054c608174db95306f5480e5e5d4c876b\"" Sep 5 00:37:58.950160 containerd[1560]: time="2025-09-05T00:37:58.950126399Z" level=info msg="StartContainer for \"c364127f88db9f9a27639f3619c8b7a054c608174db95306f5480e5e5d4c876b\"" Sep 5 00:37:58.951923 containerd[1560]: time="2025-09-05T00:37:58.951871083Z" level=info msg="connecting to shim c364127f88db9f9a27639f3619c8b7a054c608174db95306f5480e5e5d4c876b" address="unix:///run/containerd/s/295de2a3d6480469c78070dd3888d4fe625d63bf8e34a47181f8fc044a189642" protocol=ttrpc version=3 Sep 5 00:37:58.976100 systemd[1]: Started 
cri-containerd-c364127f88db9f9a27639f3619c8b7a054c608174db95306f5480e5e5d4c876b.scope - libcontainer container c364127f88db9f9a27639f3619c8b7a054c608174db95306f5480e5e5d4c876b. Sep 5 00:37:59.029424 containerd[1560]: time="2025-09-05T00:37:59.029376078Z" level=info msg="StartContainer for \"c364127f88db9f9a27639f3619c8b7a054c608174db95306f5480e5e5d4c876b\" returns successfully" Sep 5 00:37:59.171958 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 5 00:37:59.172753 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Sep 5 00:37:59.489071 kubelet[2717]: I0905 00:37:59.489007 2717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8ead0dd3-08b4-47ab-8942-ea53d5aa41ef-whisker-ca-bundle\") pod \"8ead0dd3-08b4-47ab-8942-ea53d5aa41ef\" (UID: \"8ead0dd3-08b4-47ab-8942-ea53d5aa41ef\") " Sep 5 00:37:59.489071 kubelet[2717]: I0905 00:37:59.489060 2717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/8ead0dd3-08b4-47ab-8942-ea53d5aa41ef-whisker-backend-key-pair\") pod \"8ead0dd3-08b4-47ab-8942-ea53d5aa41ef\" (UID: \"8ead0dd3-08b4-47ab-8942-ea53d5aa41ef\") " Sep 5 00:37:59.489585 kubelet[2717]: I0905 00:37:59.489093 2717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7wns4\" (UniqueName: \"kubernetes.io/projected/8ead0dd3-08b4-47ab-8942-ea53d5aa41ef-kube-api-access-7wns4\") pod \"8ead0dd3-08b4-47ab-8942-ea53d5aa41ef\" (UID: \"8ead0dd3-08b4-47ab-8942-ea53d5aa41ef\") " Sep 5 00:37:59.489585 kubelet[2717]: I0905 00:37:59.489559 2717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8ead0dd3-08b4-47ab-8942-ea53d5aa41ef-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "8ead0dd3-08b4-47ab-8942-ea53d5aa41ef" (UID: 
"8ead0dd3-08b4-47ab-8942-ea53d5aa41ef"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 5 00:37:59.492973 kubelet[2717]: I0905 00:37:59.492930 2717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ead0dd3-08b4-47ab-8942-ea53d5aa41ef-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "8ead0dd3-08b4-47ab-8942-ea53d5aa41ef" (UID: "8ead0dd3-08b4-47ab-8942-ea53d5aa41ef"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 5 00:37:59.493533 kubelet[2717]: I0905 00:37:59.493515 2717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ead0dd3-08b4-47ab-8942-ea53d5aa41ef-kube-api-access-7wns4" (OuterVolumeSpecName: "kube-api-access-7wns4") pod "8ead0dd3-08b4-47ab-8942-ea53d5aa41ef" (UID: "8ead0dd3-08b4-47ab-8942-ea53d5aa41ef"). InnerVolumeSpecName "kube-api-access-7wns4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 5 00:37:59.589392 kubelet[2717]: I0905 00:37:59.589327 2717 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8ead0dd3-08b4-47ab-8942-ea53d5aa41ef-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Sep 5 00:37:59.589392 kubelet[2717]: I0905 00:37:59.589362 2717 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/8ead0dd3-08b4-47ab-8942-ea53d5aa41ef-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Sep 5 00:37:59.589392 kubelet[2717]: I0905 00:37:59.589376 2717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7wns4\" (UniqueName: \"kubernetes.io/projected/8ead0dd3-08b4-47ab-8942-ea53d5aa41ef-kube-api-access-7wns4\") on node \"localhost\" DevicePath \"\"" Sep 5 00:37:59.680431 kubelet[2717]: I0905 00:37:59.680381 2717 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 5 00:37:59.680867 kubelet[2717]: E0905 00:37:59.680832 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:37:59.683718 systemd[1]: Removed slice kubepods-besteffort-pod8ead0dd3_08b4_47ab_8942_ea53d5aa41ef.slice - libcontainer container kubepods-besteffort-pod8ead0dd3_08b4_47ab_8942_ea53d5aa41ef.slice. 
Sep 5 00:37:59.690073 kubelet[2717]: I0905 00:37:59.689966 2717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-fz799" podStartSLOduration=1.7964029479999999 podStartE2EDuration="21.689945111s" podCreationTimestamp="2025-09-05 00:37:38 +0000 UTC" firstStartedPulling="2025-09-05 00:37:39.009557647 +0000 UTC m=+20.581880560" lastFinishedPulling="2025-09-05 00:37:58.90309981 +0000 UTC m=+40.475422723" observedRunningTime="2025-09-05 00:37:59.688433731 +0000 UTC m=+41.260756664" watchObservedRunningTime="2025-09-05 00:37:59.689945111 +0000 UTC m=+41.262268024" Sep 5 00:37:59.756662 systemd[1]: Created slice kubepods-besteffort-podf4f22621_7058_4d37_9e68_df87fc039978.slice - libcontainer container kubepods-besteffort-podf4f22621_7058_4d37_9e68_df87fc039978.slice. Sep 5 00:37:59.890796 kubelet[2717]: I0905 00:37:59.890738 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ch44z\" (UniqueName: \"kubernetes.io/projected/f4f22621-7058-4d37-9e68-df87fc039978-kube-api-access-ch44z\") pod \"whisker-6f6c47d9dc-zzbkq\" (UID: \"f4f22621-7058-4d37-9e68-df87fc039978\") " pod="calico-system/whisker-6f6c47d9dc-zzbkq" Sep 5 00:37:59.890796 kubelet[2717]: I0905 00:37:59.890788 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f4f22621-7058-4d37-9e68-df87fc039978-whisker-backend-key-pair\") pod \"whisker-6f6c47d9dc-zzbkq\" (UID: \"f4f22621-7058-4d37-9e68-df87fc039978\") " pod="calico-system/whisker-6f6c47d9dc-zzbkq" Sep 5 00:37:59.890796 kubelet[2717]: I0905 00:37:59.890806 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f4f22621-7058-4d37-9e68-df87fc039978-whisker-ca-bundle\") pod \"whisker-6f6c47d9dc-zzbkq\" (UID: 
\"f4f22621-7058-4d37-9e68-df87fc039978\") " pod="calico-system/whisker-6f6c47d9dc-zzbkq" Sep 5 00:37:59.908479 systemd[1]: var-lib-kubelet-pods-8ead0dd3\x2d08b4\x2d47ab\x2d8942\x2dea53d5aa41ef-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d7wns4.mount: Deactivated successfully. Sep 5 00:37:59.908589 systemd[1]: var-lib-kubelet-pods-8ead0dd3\x2d08b4\x2d47ab\x2d8942\x2dea53d5aa41ef-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Sep 5 00:38:00.061941 containerd[1560]: time="2025-09-05T00:38:00.061763299Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6f6c47d9dc-zzbkq,Uid:f4f22621-7058-4d37-9e68-df87fc039978,Namespace:calico-system,Attempt:0,}" Sep 5 00:38:00.540328 kubelet[2717]: I0905 00:38:00.540258 2717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ead0dd3-08b4-47ab-8942-ea53d5aa41ef" path="/var/lib/kubelet/pods/8ead0dd3-08b4-47ab-8942-ea53d5aa41ef/volumes" Sep 5 00:38:00.671083 kubelet[2717]: E0905 00:38:00.671040 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:38:00.857058 systemd-networkd[1469]: cali04a21a2e32a: Link UP Sep 5 00:38:00.857291 systemd-networkd[1469]: cali04a21a2e32a: Gained carrier Sep 5 00:38:00.889073 containerd[1560]: 2025-09-05 00:38:00.085 [INFO][3934] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 5 00:38:00.889073 containerd[1560]: 2025-09-05 00:38:00.105 [INFO][3934] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--6f6c47d9dc--zzbkq-eth0 whisker-6f6c47d9dc- calico-system f4f22621-7058-4d37-9e68-df87fc039978 950 0 2025-09-05 00:37:59 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6f6c47d9dc projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s 
projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-6f6c47d9dc-zzbkq eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali04a21a2e32a [] [] }} ContainerID="6d12e72b61a889529de0b3062ab4791add6778a69e5da54447fc8272b44ccff5" Namespace="calico-system" Pod="whisker-6f6c47d9dc-zzbkq" WorkloadEndpoint="localhost-k8s-whisker--6f6c47d9dc--zzbkq-" Sep 5 00:38:00.889073 containerd[1560]: 2025-09-05 00:38:00.106 [INFO][3934] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6d12e72b61a889529de0b3062ab4791add6778a69e5da54447fc8272b44ccff5" Namespace="calico-system" Pod="whisker-6f6c47d9dc-zzbkq" WorkloadEndpoint="localhost-k8s-whisker--6f6c47d9dc--zzbkq-eth0" Sep 5 00:38:00.889073 containerd[1560]: 2025-09-05 00:38:00.175 [INFO][3950] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6d12e72b61a889529de0b3062ab4791add6778a69e5da54447fc8272b44ccff5" HandleID="k8s-pod-network.6d12e72b61a889529de0b3062ab4791add6778a69e5da54447fc8272b44ccff5" Workload="localhost-k8s-whisker--6f6c47d9dc--zzbkq-eth0" Sep 5 00:38:00.889620 containerd[1560]: 2025-09-05 00:38:00.176 [INFO][3950] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6d12e72b61a889529de0b3062ab4791add6778a69e5da54447fc8272b44ccff5" HandleID="k8s-pod-network.6d12e72b61a889529de0b3062ab4791add6778a69e5da54447fc8272b44ccff5" Workload="localhost-k8s-whisker--6f6c47d9dc--zzbkq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002b7600), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-6f6c47d9dc-zzbkq", "timestamp":"2025-09-05 00:38:00.175773053 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 00:38:00.889620 containerd[1560]: 2025-09-05 00:38:00.176 [INFO][3950] 
ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:38:00.889620 containerd[1560]: 2025-09-05 00:38:00.176 [INFO][3950] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 00:38:00.889620 containerd[1560]: 2025-09-05 00:38:00.176 [INFO][3950] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 5 00:38:00.889620 containerd[1560]: 2025-09-05 00:38:00.357 [INFO][3950] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6d12e72b61a889529de0b3062ab4791add6778a69e5da54447fc8272b44ccff5" host="localhost" Sep 5 00:38:00.889620 containerd[1560]: 2025-09-05 00:38:00.463 [INFO][3950] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 5 00:38:00.889620 containerd[1560]: 2025-09-05 00:38:00.466 [INFO][3950] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 5 00:38:00.889620 containerd[1560]: 2025-09-05 00:38:00.468 [INFO][3950] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 5 00:38:00.889620 containerd[1560]: 2025-09-05 00:38:00.470 [INFO][3950] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 5 00:38:00.889620 containerd[1560]: 2025-09-05 00:38:00.470 [INFO][3950] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.6d12e72b61a889529de0b3062ab4791add6778a69e5da54447fc8272b44ccff5" host="localhost" Sep 5 00:38:00.891302 containerd[1560]: 2025-09-05 00:38:00.471 [INFO][3950] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.6d12e72b61a889529de0b3062ab4791add6778a69e5da54447fc8272b44ccff5 Sep 5 00:38:00.891302 containerd[1560]: 2025-09-05 00:38:00.491 [INFO][3950] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.6d12e72b61a889529de0b3062ab4791add6778a69e5da54447fc8272b44ccff5" host="localhost" Sep 5 00:38:00.891302 
containerd[1560]: 2025-09-05 00:38:00.831 [INFO][3950] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.6d12e72b61a889529de0b3062ab4791add6778a69e5da54447fc8272b44ccff5" host="localhost" Sep 5 00:38:00.891302 containerd[1560]: 2025-09-05 00:38:00.831 [INFO][3950] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.6d12e72b61a889529de0b3062ab4791add6778a69e5da54447fc8272b44ccff5" host="localhost" Sep 5 00:38:00.891302 containerd[1560]: 2025-09-05 00:38:00.831 [INFO][3950] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 00:38:00.891302 containerd[1560]: 2025-09-05 00:38:00.831 [INFO][3950] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="6d12e72b61a889529de0b3062ab4791add6778a69e5da54447fc8272b44ccff5" HandleID="k8s-pod-network.6d12e72b61a889529de0b3062ab4791add6778a69e5da54447fc8272b44ccff5" Workload="localhost-k8s-whisker--6f6c47d9dc--zzbkq-eth0" Sep 5 00:38:00.891482 containerd[1560]: 2025-09-05 00:38:00.834 [INFO][3934] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6d12e72b61a889529de0b3062ab4791add6778a69e5da54447fc8272b44ccff5" Namespace="calico-system" Pod="whisker-6f6c47d9dc-zzbkq" WorkloadEndpoint="localhost-k8s-whisker--6f6c47d9dc--zzbkq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--6f6c47d9dc--zzbkq-eth0", GenerateName:"whisker-6f6c47d9dc-", Namespace:"calico-system", SelfLink:"", UID:"f4f22621-7058-4d37-9e68-df87fc039978", ResourceVersion:"950", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 37, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6f6c47d9dc", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-6f6c47d9dc-zzbkq", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali04a21a2e32a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:38:00.891482 containerd[1560]: 2025-09-05 00:38:00.834 [INFO][3934] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="6d12e72b61a889529de0b3062ab4791add6778a69e5da54447fc8272b44ccff5" Namespace="calico-system" Pod="whisker-6f6c47d9dc-zzbkq" WorkloadEndpoint="localhost-k8s-whisker--6f6c47d9dc--zzbkq-eth0" Sep 5 00:38:00.891595 containerd[1560]: 2025-09-05 00:38:00.834 [INFO][3934] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali04a21a2e32a ContainerID="6d12e72b61a889529de0b3062ab4791add6778a69e5da54447fc8272b44ccff5" Namespace="calico-system" Pod="whisker-6f6c47d9dc-zzbkq" WorkloadEndpoint="localhost-k8s-whisker--6f6c47d9dc--zzbkq-eth0" Sep 5 00:38:00.891595 containerd[1560]: 2025-09-05 00:38:00.857 [INFO][3934] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6d12e72b61a889529de0b3062ab4791add6778a69e5da54447fc8272b44ccff5" Namespace="calico-system" Pod="whisker-6f6c47d9dc-zzbkq" WorkloadEndpoint="localhost-k8s-whisker--6f6c47d9dc--zzbkq-eth0" Sep 5 00:38:00.891639 containerd[1560]: 2025-09-05 00:38:00.858 [INFO][3934] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="6d12e72b61a889529de0b3062ab4791add6778a69e5da54447fc8272b44ccff5" Namespace="calico-system" Pod="whisker-6f6c47d9dc-zzbkq" WorkloadEndpoint="localhost-k8s-whisker--6f6c47d9dc--zzbkq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--6f6c47d9dc--zzbkq-eth0", GenerateName:"whisker-6f6c47d9dc-", Namespace:"calico-system", SelfLink:"", UID:"f4f22621-7058-4d37-9e68-df87fc039978", ResourceVersion:"950", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 37, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6f6c47d9dc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6d12e72b61a889529de0b3062ab4791add6778a69e5da54447fc8272b44ccff5", Pod:"whisker-6f6c47d9dc-zzbkq", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali04a21a2e32a", MAC:"0a:81:da:10:e7:c8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:38:00.891695 containerd[1560]: 2025-09-05 00:38:00.880 [INFO][3934] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6d12e72b61a889529de0b3062ab4791add6778a69e5da54447fc8272b44ccff5" Namespace="calico-system" Pod="whisker-6f6c47d9dc-zzbkq" 
WorkloadEndpoint="localhost-k8s-whisker--6f6c47d9dc--zzbkq-eth0" Sep 5 00:38:01.397056 containerd[1560]: time="2025-09-05T00:38:01.396970798Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c364127f88db9f9a27639f3619c8b7a054c608174db95306f5480e5e5d4c876b\" id:\"482624de5864600c506a9f56ccad02c8d2d877d7df817e1c1a5b759ead4c0c17\" pid:4065 exit_status:1 exited_at:{seconds:1757032680 nanos:974329445}" Sep 5 00:38:01.438254 systemd[1]: Started sshd@7-10.0.0.115:22-10.0.0.1:56948.service - OpenSSH per-connection server daemon (10.0.0.1:56948). Sep 5 00:38:01.478152 systemd-networkd[1469]: vxlan.calico: Link UP Sep 5 00:38:01.478163 systemd-networkd[1469]: vxlan.calico: Gained carrier Sep 5 00:38:01.518783 sshd[4138]: Accepted publickey for core from 10.0.0.1 port 56948 ssh2: RSA SHA256:FTPBIqxhuV7uWZ1wDxThX13wKyZS1sKik/rNT688yZo Sep 5 00:38:01.520654 sshd-session[4138]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:38:01.529407 systemd-logind[1537]: New session 8 of user core. Sep 5 00:38:01.534828 containerd[1560]: time="2025-09-05T00:38:01.534757565Z" level=info msg="connecting to shim 6d12e72b61a889529de0b3062ab4791add6778a69e5da54447fc8272b44ccff5" address="unix:///run/containerd/s/f6ebe48d03ba851465407de817f426366c9d54c1f5c534dec7e670ffbaee59e0" namespace=k8s.io protocol=ttrpc version=3 Sep 5 00:38:01.537087 systemd[1]: Started session-8.scope - Session 8 of User core. 
Sep 5 00:38:01.538381 containerd[1560]: time="2025-09-05T00:38:01.538219096Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-sr49w,Uid:4f591032-6561-4a75-a88e-cbe4c8ece143,Namespace:calico-system,Attempt:0,}" Sep 5 00:38:01.560277 containerd[1560]: time="2025-09-05T00:38:01.560220390Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55cb865578-qrpwz,Uid:9bc6b01a-afd8-4e16-a565-1883e0fbea24,Namespace:calico-apiserver,Attempt:0,}" Sep 5 00:38:01.572171 systemd[1]: Started cri-containerd-6d12e72b61a889529de0b3062ab4791add6778a69e5da54447fc8272b44ccff5.scope - libcontainer container 6d12e72b61a889529de0b3062ab4791add6778a69e5da54447fc8272b44ccff5. Sep 5 00:38:01.626853 systemd-resolved[1402]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 5 00:38:01.729778 containerd[1560]: time="2025-09-05T00:38:01.729691582Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6f6c47d9dc-zzbkq,Uid:f4f22621-7058-4d37-9e68-df87fc039978,Namespace:calico-system,Attempt:0,} returns sandbox id \"6d12e72b61a889529de0b3062ab4791add6778a69e5da54447fc8272b44ccff5\"" Sep 5 00:38:01.740226 containerd[1560]: time="2025-09-05T00:38:01.739415136Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 5 00:38:01.759440 sshd[4181]: Connection closed by 10.0.0.1 port 56948 Sep 5 00:38:01.761238 sshd-session[4138]: pam_unix(sshd:session): session closed for user core Sep 5 00:38:01.763215 systemd-networkd[1469]: cali572ce05e14e: Link UP Sep 5 00:38:01.763815 systemd-networkd[1469]: cali572ce05e14e: Gained carrier Sep 5 00:38:01.773262 systemd[1]: sshd@7-10.0.0.115:22-10.0.0.1:56948.service: Deactivated successfully. Sep 5 00:38:01.776443 systemd[1]: session-8.scope: Deactivated successfully. Sep 5 00:38:01.780115 systemd-logind[1537]: Session 8 logged out. Waiting for processes to exit. Sep 5 00:38:01.781668 systemd-logind[1537]: Removed session 8. 
Sep 5 00:38:01.834204 containerd[1560]: time="2025-09-05T00:38:01.834128156Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c364127f88db9f9a27639f3619c8b7a054c608174db95306f5480e5e5d4c876b\" id:\"370bcc45e314e89014e7f936e67e17ee10562b2354f97e4afe9c57325eef66a8\" pid:4277 exit_status:1 exited_at:{seconds:1757032681 nanos:833766073}" Sep 5 00:38:01.862925 containerd[1560]: 2025-09-05 00:38:01.625 [INFO][4196] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--sr49w-eth0 csi-node-driver- calico-system 4f591032-6561-4a75-a88e-cbe4c8ece143 749 0 2025-09-05 00:37:38 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:856c6b598f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-sr49w eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali572ce05e14e [] [] }} ContainerID="c3aa228dda805e61ba305403e7d959d23896bad36774a3df47ac8e4d45373696" Namespace="calico-system" Pod="csi-node-driver-sr49w" WorkloadEndpoint="localhost-k8s-csi--node--driver--sr49w-" Sep 5 00:38:01.862925 containerd[1560]: 2025-09-05 00:38:01.625 [INFO][4196] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c3aa228dda805e61ba305403e7d959d23896bad36774a3df47ac8e4d45373696" Namespace="calico-system" Pod="csi-node-driver-sr49w" WorkloadEndpoint="localhost-k8s-csi--node--driver--sr49w-eth0" Sep 5 00:38:01.862925 containerd[1560]: 2025-09-05 00:38:01.683 [INFO][4240] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c3aa228dda805e61ba305403e7d959d23896bad36774a3df47ac8e4d45373696" HandleID="k8s-pod-network.c3aa228dda805e61ba305403e7d959d23896bad36774a3df47ac8e4d45373696" 
Workload="localhost-k8s-csi--node--driver--sr49w-eth0" Sep 5 00:38:01.863211 containerd[1560]: 2025-09-05 00:38:01.684 [INFO][4240] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c3aa228dda805e61ba305403e7d959d23896bad36774a3df47ac8e4d45373696" HandleID="k8s-pod-network.c3aa228dda805e61ba305403e7d959d23896bad36774a3df47ac8e4d45373696" Workload="localhost-k8s-csi--node--driver--sr49w-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004feb0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-sr49w", "timestamp":"2025-09-05 00:38:01.683412719 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 00:38:01.863211 containerd[1560]: 2025-09-05 00:38:01.684 [INFO][4240] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:38:01.863211 containerd[1560]: 2025-09-05 00:38:01.684 [INFO][4240] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 5 00:38:01.863211 containerd[1560]: 2025-09-05 00:38:01.684 [INFO][4240] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 5 00:38:01.863211 containerd[1560]: 2025-09-05 00:38:01.696 [INFO][4240] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c3aa228dda805e61ba305403e7d959d23896bad36774a3df47ac8e4d45373696" host="localhost" Sep 5 00:38:01.863211 containerd[1560]: 2025-09-05 00:38:01.702 [INFO][4240] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 5 00:38:01.863211 containerd[1560]: 2025-09-05 00:38:01.713 [INFO][4240] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 5 00:38:01.863211 containerd[1560]: 2025-09-05 00:38:01.717 [INFO][4240] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 5 00:38:01.863211 containerd[1560]: 2025-09-05 00:38:01.720 [INFO][4240] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 5 00:38:01.863211 containerd[1560]: 2025-09-05 00:38:01.720 [INFO][4240] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c3aa228dda805e61ba305403e7d959d23896bad36774a3df47ac8e4d45373696" host="localhost" Sep 5 00:38:01.863513 containerd[1560]: 2025-09-05 00:38:01.731 [INFO][4240] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.c3aa228dda805e61ba305403e7d959d23896bad36774a3df47ac8e4d45373696 Sep 5 00:38:01.863513 containerd[1560]: 2025-09-05 00:38:01.738 [INFO][4240] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c3aa228dda805e61ba305403e7d959d23896bad36774a3df47ac8e4d45373696" host="localhost" Sep 5 00:38:01.863513 containerd[1560]: 2025-09-05 00:38:01.753 [INFO][4240] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 
handle="k8s-pod-network.c3aa228dda805e61ba305403e7d959d23896bad36774a3df47ac8e4d45373696" host="localhost" Sep 5 00:38:01.863513 containerd[1560]: 2025-09-05 00:38:01.753 [INFO][4240] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.c3aa228dda805e61ba305403e7d959d23896bad36774a3df47ac8e4d45373696" host="localhost" Sep 5 00:38:01.863513 containerd[1560]: 2025-09-05 00:38:01.753 [INFO][4240] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 00:38:01.863513 containerd[1560]: 2025-09-05 00:38:01.753 [INFO][4240] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="c3aa228dda805e61ba305403e7d959d23896bad36774a3df47ac8e4d45373696" HandleID="k8s-pod-network.c3aa228dda805e61ba305403e7d959d23896bad36774a3df47ac8e4d45373696" Workload="localhost-k8s-csi--node--driver--sr49w-eth0" Sep 5 00:38:01.863705 containerd[1560]: 2025-09-05 00:38:01.758 [INFO][4196] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c3aa228dda805e61ba305403e7d959d23896bad36774a3df47ac8e4d45373696" Namespace="calico-system" Pod="csi-node-driver-sr49w" WorkloadEndpoint="localhost-k8s-csi--node--driver--sr49w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--sr49w-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"4f591032-6561-4a75-a88e-cbe4c8ece143", ResourceVersion:"749", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 37, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-sr49w", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali572ce05e14e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:38:01.863795 containerd[1560]: 2025-09-05 00:38:01.758 [INFO][4196] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="c3aa228dda805e61ba305403e7d959d23896bad36774a3df47ac8e4d45373696" Namespace="calico-system" Pod="csi-node-driver-sr49w" WorkloadEndpoint="localhost-k8s-csi--node--driver--sr49w-eth0" Sep 5 00:38:01.863795 containerd[1560]: 2025-09-05 00:38:01.758 [INFO][4196] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali572ce05e14e ContainerID="c3aa228dda805e61ba305403e7d959d23896bad36774a3df47ac8e4d45373696" Namespace="calico-system" Pod="csi-node-driver-sr49w" WorkloadEndpoint="localhost-k8s-csi--node--driver--sr49w-eth0" Sep 5 00:38:01.863795 containerd[1560]: 2025-09-05 00:38:01.763 [INFO][4196] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c3aa228dda805e61ba305403e7d959d23896bad36774a3df47ac8e4d45373696" Namespace="calico-system" Pod="csi-node-driver-sr49w" WorkloadEndpoint="localhost-k8s-csi--node--driver--sr49w-eth0" Sep 5 00:38:01.863910 containerd[1560]: 2025-09-05 00:38:01.764 [INFO][4196] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c3aa228dda805e61ba305403e7d959d23896bad36774a3df47ac8e4d45373696" 
Namespace="calico-system" Pod="csi-node-driver-sr49w" WorkloadEndpoint="localhost-k8s-csi--node--driver--sr49w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--sr49w-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"4f591032-6561-4a75-a88e-cbe4c8ece143", ResourceVersion:"749", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 37, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c3aa228dda805e61ba305403e7d959d23896bad36774a3df47ac8e4d45373696", Pod:"csi-node-driver-sr49w", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali572ce05e14e", MAC:"4a:0d:1f:56:c3:8b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:38:01.863988 containerd[1560]: 2025-09-05 00:38:01.857 [INFO][4196] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c3aa228dda805e61ba305403e7d959d23896bad36774a3df47ac8e4d45373696" Namespace="calico-system" Pod="csi-node-driver-sr49w" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--sr49w-eth0" Sep 5 00:38:01.897956 containerd[1560]: time="2025-09-05T00:38:01.897732241Z" level=info msg="connecting to shim c3aa228dda805e61ba305403e7d959d23896bad36774a3df47ac8e4d45373696" address="unix:///run/containerd/s/13ab2f0110fbdf668bdacf243137c3b88a07f72fe458db7b80d2d021af889634" namespace=k8s.io protocol=ttrpc version=3 Sep 5 00:38:01.899394 systemd-networkd[1469]: calie89605ef07e: Link UP Sep 5 00:38:01.899585 systemd-networkd[1469]: calie89605ef07e: Gained carrier Sep 5 00:38:01.926105 containerd[1560]: 2025-09-05 00:38:01.682 [INFO][4220] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--55cb865578--qrpwz-eth0 calico-apiserver-55cb865578- calico-apiserver 9bc6b01a-afd8-4e16-a565-1883e0fbea24 876 0 2025-09-05 00:37:35 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:55cb865578 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-55cb865578-qrpwz eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calie89605ef07e [] [] }} ContainerID="8e0458ee0afa541c1d0482715f1a09dc0cf61d6d463239da5b3bf860893ade79" Namespace="calico-apiserver" Pod="calico-apiserver-55cb865578-qrpwz" WorkloadEndpoint="localhost-k8s-calico--apiserver--55cb865578--qrpwz-" Sep 5 00:38:01.926105 containerd[1560]: 2025-09-05 00:38:01.684 [INFO][4220] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8e0458ee0afa541c1d0482715f1a09dc0cf61d6d463239da5b3bf860893ade79" Namespace="calico-apiserver" Pod="calico-apiserver-55cb865578-qrpwz" WorkloadEndpoint="localhost-k8s-calico--apiserver--55cb865578--qrpwz-eth0" Sep 5 00:38:01.926105 containerd[1560]: 2025-09-05 00:38:01.738 [INFO][4264] ipam/ipam_plugin.go 225: Calico 
CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8e0458ee0afa541c1d0482715f1a09dc0cf61d6d463239da5b3bf860893ade79" HandleID="k8s-pod-network.8e0458ee0afa541c1d0482715f1a09dc0cf61d6d463239da5b3bf860893ade79" Workload="localhost-k8s-calico--apiserver--55cb865578--qrpwz-eth0" Sep 5 00:38:01.926395 containerd[1560]: 2025-09-05 00:38:01.738 [INFO][4264] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8e0458ee0afa541c1d0482715f1a09dc0cf61d6d463239da5b3bf860893ade79" HandleID="k8s-pod-network.8e0458ee0afa541c1d0482715f1a09dc0cf61d6d463239da5b3bf860893ade79" Workload="localhost-k8s-calico--apiserver--55cb865578--qrpwz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00051aaf0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-55cb865578-qrpwz", "timestamp":"2025-09-05 00:38:01.73816702 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 00:38:01.926395 containerd[1560]: 2025-09-05 00:38:01.738 [INFO][4264] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:38:01.926395 containerd[1560]: 2025-09-05 00:38:01.753 [INFO][4264] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 5 00:38:01.926395 containerd[1560]: 2025-09-05 00:38:01.753 [INFO][4264] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 5 00:38:01.926395 containerd[1560]: 2025-09-05 00:38:01.795 [INFO][4264] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8e0458ee0afa541c1d0482715f1a09dc0cf61d6d463239da5b3bf860893ade79" host="localhost" Sep 5 00:38:01.926395 containerd[1560]: 2025-09-05 00:38:01.857 [INFO][4264] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 5 00:38:01.926395 containerd[1560]: 2025-09-05 00:38:01.866 [INFO][4264] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 5 00:38:01.926395 containerd[1560]: 2025-09-05 00:38:01.869 [INFO][4264] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 5 00:38:01.926395 containerd[1560]: 2025-09-05 00:38:01.873 [INFO][4264] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 5 00:38:01.926395 containerd[1560]: 2025-09-05 00:38:01.873 [INFO][4264] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8e0458ee0afa541c1d0482715f1a09dc0cf61d6d463239da5b3bf860893ade79" host="localhost" Sep 5 00:38:01.926741 containerd[1560]: 2025-09-05 00:38:01.876 [INFO][4264] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.8e0458ee0afa541c1d0482715f1a09dc0cf61d6d463239da5b3bf860893ade79 Sep 5 00:38:01.926741 containerd[1560]: 2025-09-05 00:38:01.881 [INFO][4264] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8e0458ee0afa541c1d0482715f1a09dc0cf61d6d463239da5b3bf860893ade79" host="localhost" Sep 5 00:38:01.926741 containerd[1560]: 2025-09-05 00:38:01.890 [INFO][4264] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.8e0458ee0afa541c1d0482715f1a09dc0cf61d6d463239da5b3bf860893ade79" host="localhost" Sep 5 00:38:01.926741 containerd[1560]: 2025-09-05 00:38:01.892 [INFO][4264] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.8e0458ee0afa541c1d0482715f1a09dc0cf61d6d463239da5b3bf860893ade79" host="localhost" Sep 5 00:38:01.926741 containerd[1560]: 2025-09-05 00:38:01.893 [INFO][4264] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 00:38:01.926741 containerd[1560]: 2025-09-05 00:38:01.893 [INFO][4264] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="8e0458ee0afa541c1d0482715f1a09dc0cf61d6d463239da5b3bf860893ade79" HandleID="k8s-pod-network.8e0458ee0afa541c1d0482715f1a09dc0cf61d6d463239da5b3bf860893ade79" Workload="localhost-k8s-calico--apiserver--55cb865578--qrpwz-eth0" Sep 5 00:38:01.926867 containerd[1560]: 2025-09-05 00:38:01.896 [INFO][4220] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8e0458ee0afa541c1d0482715f1a09dc0cf61d6d463239da5b3bf860893ade79" Namespace="calico-apiserver" Pod="calico-apiserver-55cb865578-qrpwz" WorkloadEndpoint="localhost-k8s-calico--apiserver--55cb865578--qrpwz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--55cb865578--qrpwz-eth0", GenerateName:"calico-apiserver-55cb865578-", Namespace:"calico-apiserver", SelfLink:"", UID:"9bc6b01a-afd8-4e16-a565-1883e0fbea24", ResourceVersion:"876", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 37, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"55cb865578", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-55cb865578-qrpwz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie89605ef07e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:38:01.926971 containerd[1560]: 2025-09-05 00:38:01.896 [INFO][4220] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="8e0458ee0afa541c1d0482715f1a09dc0cf61d6d463239da5b3bf860893ade79" Namespace="calico-apiserver" Pod="calico-apiserver-55cb865578-qrpwz" WorkloadEndpoint="localhost-k8s-calico--apiserver--55cb865578--qrpwz-eth0" Sep 5 00:38:01.926971 containerd[1560]: 2025-09-05 00:38:01.897 [INFO][4220] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie89605ef07e ContainerID="8e0458ee0afa541c1d0482715f1a09dc0cf61d6d463239da5b3bf860893ade79" Namespace="calico-apiserver" Pod="calico-apiserver-55cb865578-qrpwz" WorkloadEndpoint="localhost-k8s-calico--apiserver--55cb865578--qrpwz-eth0" Sep 5 00:38:01.926971 containerd[1560]: 2025-09-05 00:38:01.899 [INFO][4220] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8e0458ee0afa541c1d0482715f1a09dc0cf61d6d463239da5b3bf860893ade79" Namespace="calico-apiserver" Pod="calico-apiserver-55cb865578-qrpwz" WorkloadEndpoint="localhost-k8s-calico--apiserver--55cb865578--qrpwz-eth0" Sep 5 00:38:01.927048 containerd[1560]: 2025-09-05 00:38:01.907 [INFO][4220] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID 
to endpoint ContainerID="8e0458ee0afa541c1d0482715f1a09dc0cf61d6d463239da5b3bf860893ade79" Namespace="calico-apiserver" Pod="calico-apiserver-55cb865578-qrpwz" WorkloadEndpoint="localhost-k8s-calico--apiserver--55cb865578--qrpwz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--55cb865578--qrpwz-eth0", GenerateName:"calico-apiserver-55cb865578-", Namespace:"calico-apiserver", SelfLink:"", UID:"9bc6b01a-afd8-4e16-a565-1883e0fbea24", ResourceVersion:"876", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 37, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"55cb865578", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8e0458ee0afa541c1d0482715f1a09dc0cf61d6d463239da5b3bf860893ade79", Pod:"calico-apiserver-55cb865578-qrpwz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie89605ef07e", MAC:"1a:30:16:47:7c:16", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:38:01.927108 containerd[1560]: 2025-09-05 00:38:01.919 [INFO][4220] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="8e0458ee0afa541c1d0482715f1a09dc0cf61d6d463239da5b3bf860893ade79" Namespace="calico-apiserver" Pod="calico-apiserver-55cb865578-qrpwz" WorkloadEndpoint="localhost-k8s-calico--apiserver--55cb865578--qrpwz-eth0" Sep 5 00:38:01.940116 systemd[1]: Started cri-containerd-c3aa228dda805e61ba305403e7d959d23896bad36774a3df47ac8e4d45373696.scope - libcontainer container c3aa228dda805e61ba305403e7d959d23896bad36774a3df47ac8e4d45373696. Sep 5 00:38:01.956416 containerd[1560]: time="2025-09-05T00:38:01.956301269Z" level=info msg="connecting to shim 8e0458ee0afa541c1d0482715f1a09dc0cf61d6d463239da5b3bf860893ade79" address="unix:///run/containerd/s/d67ce1165bfc7f3866b342374bc8812b91ffada7728ea9ebb69fd5ee970ad253" namespace=k8s.io protocol=ttrpc version=3 Sep 5 00:38:01.961050 systemd-resolved[1402]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 5 00:38:01.989466 systemd[1]: Started cri-containerd-8e0458ee0afa541c1d0482715f1a09dc0cf61d6d463239da5b3bf860893ade79.scope - libcontainer container 8e0458ee0afa541c1d0482715f1a09dc0cf61d6d463239da5b3bf860893ade79. 
Sep 5 00:38:01.993214 containerd[1560]: time="2025-09-05T00:38:01.993168283Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-sr49w,Uid:4f591032-6561-4a75-a88e-cbe4c8ece143,Namespace:calico-system,Attempt:0,} returns sandbox id \"c3aa228dda805e61ba305403e7d959d23896bad36774a3df47ac8e4d45373696\"" Sep 5 00:38:02.009363 systemd-resolved[1402]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 5 00:38:02.044495 containerd[1560]: time="2025-09-05T00:38:02.044449444Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55cb865578-qrpwz,Uid:9bc6b01a-afd8-4e16-a565-1883e0fbea24,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"8e0458ee0afa541c1d0482715f1a09dc0cf61d6d463239da5b3bf860893ade79\"" Sep 5 00:38:02.295103 systemd-networkd[1469]: cali04a21a2e32a: Gained IPv6LL Sep 5 00:38:02.808472 systemd-networkd[1469]: vxlan.calico: Gained IPv6LL Sep 5 00:38:03.063142 systemd-networkd[1469]: calie89605ef07e: Gained IPv6LL Sep 5 00:38:03.383166 systemd-networkd[1469]: cali572ce05e14e: Gained IPv6LL Sep 5 00:38:03.536958 kubelet[2717]: E0905 00:38:03.536909 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:38:03.537744 containerd[1560]: time="2025-09-05T00:38:03.537695125Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-54c46b8858-djlns,Uid:88cce520-c0f7-4ac8-8198-0a3910b138e3,Namespace:calico-system,Attempt:0,}" Sep 5 00:38:03.539055 containerd[1560]: time="2025-09-05T00:38:03.537747859Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-vmwjg,Uid:e3f5473f-8e14-474e-97d3-d029001dc5fc,Namespace:calico-system,Attempt:0,}" Sep 5 00:38:03.539135 containerd[1560]: time="2025-09-05T00:38:03.538883129Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-7c65d6cfc9-mkvsp,Uid:6e23c2dc-ebba-4001-ae59-3bafc36e6807,Namespace:kube-system,Attempt:0,}" Sep 5 00:38:03.641424 containerd[1560]: time="2025-09-05T00:38:03.641265571Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:38:03.643125 containerd[1560]: time="2025-09-05T00:38:03.643085677Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4661291" Sep 5 00:38:03.644323 containerd[1560]: time="2025-09-05T00:38:03.644288569Z" level=info msg="ImageCreate event name:\"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:38:03.652409 containerd[1560]: time="2025-09-05T00:38:03.652333111Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:38:03.653229 containerd[1560]: time="2025-09-05T00:38:03.653175897Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"6153986\" in 1.913718728s" Sep 5 00:38:03.653229 containerd[1560]: time="2025-09-05T00:38:03.653211467Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\"" Sep 5 00:38:03.656627 containerd[1560]: time="2025-09-05T00:38:03.656581006Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 5 00:38:03.658622 containerd[1560]: time="2025-09-05T00:38:03.658499294Z" 
level=info msg="CreateContainer within sandbox \"6d12e72b61a889529de0b3062ab4791add6778a69e5da54447fc8272b44ccff5\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 5 00:38:03.680419 systemd-networkd[1469]: calia95c1c243d0: Link UP Sep 5 00:38:03.681298 systemd-networkd[1469]: calia95c1c243d0: Gained carrier Sep 5 00:38:03.682295 containerd[1560]: time="2025-09-05T00:38:03.682202305Z" level=info msg="Container 2c394418f26bcbc78eb46084c95899ac3c9b3dc5285e676d536f3fc9628f5308: CDI devices from CRI Config.CDIDevices: []" Sep 5 00:38:03.689518 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3623209759.mount: Deactivated successfully. Sep 5 00:38:03.703044 containerd[1560]: time="2025-09-05T00:38:03.702991350Z" level=info msg="CreateContainer within sandbox \"6d12e72b61a889529de0b3062ab4791add6778a69e5da54447fc8272b44ccff5\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"2c394418f26bcbc78eb46084c95899ac3c9b3dc5285e676d536f3fc9628f5308\"" Sep 5 00:38:03.703785 containerd[1560]: time="2025-09-05T00:38:03.703752606Z" level=info msg="StartContainer for \"2c394418f26bcbc78eb46084c95899ac3c9b3dc5285e676d536f3fc9628f5308\"" Sep 5 00:38:03.705806 containerd[1560]: time="2025-09-05T00:38:03.705711814Z" level=info msg="connecting to shim 2c394418f26bcbc78eb46084c95899ac3c9b3dc5285e676d536f3fc9628f5308" address="unix:///run/containerd/s/f6ebe48d03ba851465407de817f426366c9d54c1f5c534dec7e670ffbaee59e0" protocol=ttrpc version=3 Sep 5 00:38:03.708772 containerd[1560]: 2025-09-05 00:38:03.586 [INFO][4439] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--54c46b8858--djlns-eth0 calico-kube-controllers-54c46b8858- calico-system 88cce520-c0f7-4ac8-8198-0a3910b138e3 873 0 2025-09-05 00:37:38 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:54c46b8858 
projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-54c46b8858-djlns eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calia95c1c243d0 [] [] }} ContainerID="4f20b7f55784d4ecc91dc02b6c9e58fe6442ab73d4caad33271f7e309a3c3d8c" Namespace="calico-system" Pod="calico-kube-controllers-54c46b8858-djlns" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--54c46b8858--djlns-" Sep 5 00:38:03.708772 containerd[1560]: 2025-09-05 00:38:03.587 [INFO][4439] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4f20b7f55784d4ecc91dc02b6c9e58fe6442ab73d4caad33271f7e309a3c3d8c" Namespace="calico-system" Pod="calico-kube-controllers-54c46b8858-djlns" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--54c46b8858--djlns-eth0" Sep 5 00:38:03.708772 containerd[1560]: 2025-09-05 00:38:03.634 [INFO][4482] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4f20b7f55784d4ecc91dc02b6c9e58fe6442ab73d4caad33271f7e309a3c3d8c" HandleID="k8s-pod-network.4f20b7f55784d4ecc91dc02b6c9e58fe6442ab73d4caad33271f7e309a3c3d8c" Workload="localhost-k8s-calico--kube--controllers--54c46b8858--djlns-eth0" Sep 5 00:38:03.709131 containerd[1560]: 2025-09-05 00:38:03.634 [INFO][4482] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4f20b7f55784d4ecc91dc02b6c9e58fe6442ab73d4caad33271f7e309a3c3d8c" HandleID="k8s-pod-network.4f20b7f55784d4ecc91dc02b6c9e58fe6442ab73d4caad33271f7e309a3c3d8c" Workload="localhost-k8s-calico--kube--controllers--54c46b8858--djlns-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000138850), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-54c46b8858-djlns", "timestamp":"2025-09-05 00:38:03.634251864 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, 
IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 00:38:03.709131 containerd[1560]: 2025-09-05 00:38:03.634 [INFO][4482] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:38:03.709131 containerd[1560]: 2025-09-05 00:38:03.634 [INFO][4482] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 00:38:03.709131 containerd[1560]: 2025-09-05 00:38:03.635 [INFO][4482] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 5 00:38:03.709131 containerd[1560]: 2025-09-05 00:38:03.643 [INFO][4482] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4f20b7f55784d4ecc91dc02b6c9e58fe6442ab73d4caad33271f7e309a3c3d8c" host="localhost" Sep 5 00:38:03.709131 containerd[1560]: 2025-09-05 00:38:03.648 [INFO][4482] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 5 00:38:03.709131 containerd[1560]: 2025-09-05 00:38:03.652 [INFO][4482] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 5 00:38:03.709131 containerd[1560]: 2025-09-05 00:38:03.654 [INFO][4482] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 5 00:38:03.709131 containerd[1560]: 2025-09-05 00:38:03.657 [INFO][4482] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 5 00:38:03.709131 containerd[1560]: 2025-09-05 00:38:03.657 [INFO][4482] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4f20b7f55784d4ecc91dc02b6c9e58fe6442ab73d4caad33271f7e309a3c3d8c" host="localhost" Sep 5 00:38:03.709480 containerd[1560]: 2025-09-05 00:38:03.660 [INFO][4482] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.4f20b7f55784d4ecc91dc02b6c9e58fe6442ab73d4caad33271f7e309a3c3d8c Sep 5 00:38:03.709480 containerd[1560]: 
2025-09-05 00:38:03.665 [INFO][4482] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4f20b7f55784d4ecc91dc02b6c9e58fe6442ab73d4caad33271f7e309a3c3d8c" host="localhost" Sep 5 00:38:03.709480 containerd[1560]: 2025-09-05 00:38:03.672 [INFO][4482] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.4f20b7f55784d4ecc91dc02b6c9e58fe6442ab73d4caad33271f7e309a3c3d8c" host="localhost" Sep 5 00:38:03.709480 containerd[1560]: 2025-09-05 00:38:03.672 [INFO][4482] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.4f20b7f55784d4ecc91dc02b6c9e58fe6442ab73d4caad33271f7e309a3c3d8c" host="localhost" Sep 5 00:38:03.709480 containerd[1560]: 2025-09-05 00:38:03.672 [INFO][4482] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 00:38:03.709480 containerd[1560]: 2025-09-05 00:38:03.672 [INFO][4482] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="4f20b7f55784d4ecc91dc02b6c9e58fe6442ab73d4caad33271f7e309a3c3d8c" HandleID="k8s-pod-network.4f20b7f55784d4ecc91dc02b6c9e58fe6442ab73d4caad33271f7e309a3c3d8c" Workload="localhost-k8s-calico--kube--controllers--54c46b8858--djlns-eth0" Sep 5 00:38:03.709610 containerd[1560]: 2025-09-05 00:38:03.676 [INFO][4439] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4f20b7f55784d4ecc91dc02b6c9e58fe6442ab73d4caad33271f7e309a3c3d8c" Namespace="calico-system" Pod="calico-kube-controllers-54c46b8858-djlns" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--54c46b8858--djlns-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--54c46b8858--djlns-eth0", GenerateName:"calico-kube-controllers-54c46b8858-", Namespace:"calico-system", SelfLink:"", UID:"88cce520-c0f7-4ac8-8198-0a3910b138e3", 
ResourceVersion:"873", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 37, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"54c46b8858", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-54c46b8858-djlns", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia95c1c243d0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:38:03.709677 containerd[1560]: 2025-09-05 00:38:03.676 [INFO][4439] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="4f20b7f55784d4ecc91dc02b6c9e58fe6442ab73d4caad33271f7e309a3c3d8c" Namespace="calico-system" Pod="calico-kube-controllers-54c46b8858-djlns" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--54c46b8858--djlns-eth0" Sep 5 00:38:03.709677 containerd[1560]: 2025-09-05 00:38:03.676 [INFO][4439] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia95c1c243d0 ContainerID="4f20b7f55784d4ecc91dc02b6c9e58fe6442ab73d4caad33271f7e309a3c3d8c" Namespace="calico-system" Pod="calico-kube-controllers-54c46b8858-djlns" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--54c46b8858--djlns-eth0" Sep 5 00:38:03.709677 containerd[1560]: 2025-09-05 
00:38:03.679 [INFO][4439] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4f20b7f55784d4ecc91dc02b6c9e58fe6442ab73d4caad33271f7e309a3c3d8c" Namespace="calico-system" Pod="calico-kube-controllers-54c46b8858-djlns" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--54c46b8858--djlns-eth0" Sep 5 00:38:03.709768 containerd[1560]: 2025-09-05 00:38:03.680 [INFO][4439] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4f20b7f55784d4ecc91dc02b6c9e58fe6442ab73d4caad33271f7e309a3c3d8c" Namespace="calico-system" Pod="calico-kube-controllers-54c46b8858-djlns" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--54c46b8858--djlns-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--54c46b8858--djlns-eth0", GenerateName:"calico-kube-controllers-54c46b8858-", Namespace:"calico-system", SelfLink:"", UID:"88cce520-c0f7-4ac8-8198-0a3910b138e3", ResourceVersion:"873", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 37, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"54c46b8858", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4f20b7f55784d4ecc91dc02b6c9e58fe6442ab73d4caad33271f7e309a3c3d8c", Pod:"calico-kube-controllers-54c46b8858-djlns", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", 
IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia95c1c243d0", MAC:"e2:4b:58:7c:dd:57", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:38:03.709826 containerd[1560]: 2025-09-05 00:38:03.694 [INFO][4439] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4f20b7f55784d4ecc91dc02b6c9e58fe6442ab73d4caad33271f7e309a3c3d8c" Namespace="calico-system" Pod="calico-kube-controllers-54c46b8858-djlns" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--54c46b8858--djlns-eth0" Sep 5 00:38:03.742226 systemd[1]: Started cri-containerd-2c394418f26bcbc78eb46084c95899ac3c9b3dc5285e676d536f3fc9628f5308.scope - libcontainer container 2c394418f26bcbc78eb46084c95899ac3c9b3dc5285e676d536f3fc9628f5308. Sep 5 00:38:03.743782 containerd[1560]: time="2025-09-05T00:38:03.743721027Z" level=info msg="connecting to shim 4f20b7f55784d4ecc91dc02b6c9e58fe6442ab73d4caad33271f7e309a3c3d8c" address="unix:///run/containerd/s/237e8b650eeee7ea1cb9dc47e0316e3afb354c9acacc803e9ca48157235766e2" namespace=k8s.io protocol=ttrpc version=3 Sep 5 00:38:03.780162 systemd[1]: Started cri-containerd-4f20b7f55784d4ecc91dc02b6c9e58fe6442ab73d4caad33271f7e309a3c3d8c.scope - libcontainer container 4f20b7f55784d4ecc91dc02b6c9e58fe6442ab73d4caad33271f7e309a3c3d8c. 
Sep 5 00:38:03.792305 systemd-networkd[1469]: cali3f9fe6ce0f4: Link UP Sep 5 00:38:03.794492 systemd-networkd[1469]: cali3f9fe6ce0f4: Gained carrier Sep 5 00:38:03.817574 containerd[1560]: 2025-09-05 00:38:03.605 [INFO][4448] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--mkvsp-eth0 coredns-7c65d6cfc9- kube-system 6e23c2dc-ebba-4001-ae59-3bafc36e6807 864 0 2025-09-05 00:37:24 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-mkvsp eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali3f9fe6ce0f4 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="bc8dc9b1c50e3bc668ad5f621b63961098b85ebc4487c866e41c6386f1416b04" Namespace="kube-system" Pod="coredns-7c65d6cfc9-mkvsp" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--mkvsp-" Sep 5 00:38:03.817574 containerd[1560]: 2025-09-05 00:38:03.605 [INFO][4448] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="bc8dc9b1c50e3bc668ad5f621b63961098b85ebc4487c866e41c6386f1416b04" Namespace="kube-system" Pod="coredns-7c65d6cfc9-mkvsp" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--mkvsp-eth0" Sep 5 00:38:03.817574 containerd[1560]: 2025-09-05 00:38:03.642 [INFO][4490] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bc8dc9b1c50e3bc668ad5f621b63961098b85ebc4487c866e41c6386f1416b04" HandleID="k8s-pod-network.bc8dc9b1c50e3bc668ad5f621b63961098b85ebc4487c866e41c6386f1416b04" Workload="localhost-k8s-coredns--7c65d6cfc9--mkvsp-eth0" Sep 5 00:38:03.817863 containerd[1560]: 2025-09-05 00:38:03.643 [INFO][4490] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="bc8dc9b1c50e3bc668ad5f621b63961098b85ebc4487c866e41c6386f1416b04" 
HandleID="k8s-pod-network.bc8dc9b1c50e3bc668ad5f621b63961098b85ebc4487c866e41c6386f1416b04" Workload="localhost-k8s-coredns--7c65d6cfc9--mkvsp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00034d630), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-mkvsp", "timestamp":"2025-09-05 00:38:03.642720148 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 00:38:03.817863 containerd[1560]: 2025-09-05 00:38:03.643 [INFO][4490] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:38:03.817863 containerd[1560]: 2025-09-05 00:38:03.672 [INFO][4490] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 00:38:03.817863 containerd[1560]: 2025-09-05 00:38:03.672 [INFO][4490] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 5 00:38:03.817863 containerd[1560]: 2025-09-05 00:38:03.744 [INFO][4490] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.bc8dc9b1c50e3bc668ad5f621b63961098b85ebc4487c866e41c6386f1416b04" host="localhost" Sep 5 00:38:03.817863 containerd[1560]: 2025-09-05 00:38:03.756 [INFO][4490] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 5 00:38:03.817863 containerd[1560]: 2025-09-05 00:38:03.763 [INFO][4490] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 5 00:38:03.817863 containerd[1560]: 2025-09-05 00:38:03.765 [INFO][4490] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 5 00:38:03.817863 containerd[1560]: 2025-09-05 00:38:03.768 [INFO][4490] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 5 00:38:03.817863 containerd[1560]: 2025-09-05 00:38:03.768 [INFO][4490] 
ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.bc8dc9b1c50e3bc668ad5f621b63961098b85ebc4487c866e41c6386f1416b04" host="localhost" Sep 5 00:38:03.818222 containerd[1560]: 2025-09-05 00:38:03.770 [INFO][4490] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.bc8dc9b1c50e3bc668ad5f621b63961098b85ebc4487c866e41c6386f1416b04 Sep 5 00:38:03.818222 containerd[1560]: 2025-09-05 00:38:03.774 [INFO][4490] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.bc8dc9b1c50e3bc668ad5f621b63961098b85ebc4487c866e41c6386f1416b04" host="localhost" Sep 5 00:38:03.818222 containerd[1560]: 2025-09-05 00:38:03.781 [INFO][4490] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.bc8dc9b1c50e3bc668ad5f621b63961098b85ebc4487c866e41c6386f1416b04" host="localhost" Sep 5 00:38:03.818222 containerd[1560]: 2025-09-05 00:38:03.781 [INFO][4490] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.bc8dc9b1c50e3bc668ad5f621b63961098b85ebc4487c866e41c6386f1416b04" host="localhost" Sep 5 00:38:03.818222 containerd[1560]: 2025-09-05 00:38:03.781 [INFO][4490] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 5 00:38:03.818222 containerd[1560]: 2025-09-05 00:38:03.782 [INFO][4490] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="bc8dc9b1c50e3bc668ad5f621b63961098b85ebc4487c866e41c6386f1416b04" HandleID="k8s-pod-network.bc8dc9b1c50e3bc668ad5f621b63961098b85ebc4487c866e41c6386f1416b04" Workload="localhost-k8s-coredns--7c65d6cfc9--mkvsp-eth0" Sep 5 00:38:03.818396 containerd[1560]: 2025-09-05 00:38:03.789 [INFO][4448] cni-plugin/k8s.go 418: Populated endpoint ContainerID="bc8dc9b1c50e3bc668ad5f621b63961098b85ebc4487c866e41c6386f1416b04" Namespace="kube-system" Pod="coredns-7c65d6cfc9-mkvsp" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--mkvsp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--mkvsp-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"6e23c2dc-ebba-4001-ae59-3bafc36e6807", ResourceVersion:"864", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 37, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-mkvsp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3f9fe6ce0f4", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:38:03.818494 containerd[1560]: 2025-09-05 00:38:03.789 [INFO][4448] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="bc8dc9b1c50e3bc668ad5f621b63961098b85ebc4487c866e41c6386f1416b04" Namespace="kube-system" Pod="coredns-7c65d6cfc9-mkvsp" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--mkvsp-eth0" Sep 5 00:38:03.818494 containerd[1560]: 2025-09-05 00:38:03.789 [INFO][4448] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3f9fe6ce0f4 ContainerID="bc8dc9b1c50e3bc668ad5f621b63961098b85ebc4487c866e41c6386f1416b04" Namespace="kube-system" Pod="coredns-7c65d6cfc9-mkvsp" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--mkvsp-eth0" Sep 5 00:38:03.818494 containerd[1560]: 2025-09-05 00:38:03.793 [INFO][4448] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bc8dc9b1c50e3bc668ad5f621b63961098b85ebc4487c866e41c6386f1416b04" Namespace="kube-system" Pod="coredns-7c65d6cfc9-mkvsp" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--mkvsp-eth0" Sep 5 00:38:03.818591 containerd[1560]: 2025-09-05 00:38:03.793 [INFO][4448] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="bc8dc9b1c50e3bc668ad5f621b63961098b85ebc4487c866e41c6386f1416b04" Namespace="kube-system" Pod="coredns-7c65d6cfc9-mkvsp" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--mkvsp-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--mkvsp-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"6e23c2dc-ebba-4001-ae59-3bafc36e6807", ResourceVersion:"864", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 37, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"bc8dc9b1c50e3bc668ad5f621b63961098b85ebc4487c866e41c6386f1416b04", Pod:"coredns-7c65d6cfc9-mkvsp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3f9fe6ce0f4", MAC:"46:03:a2:c4:d5:54", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:38:03.818591 containerd[1560]: 2025-09-05 00:38:03.807 [INFO][4448] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="bc8dc9b1c50e3bc668ad5f621b63961098b85ebc4487c866e41c6386f1416b04" Namespace="kube-system" Pod="coredns-7c65d6cfc9-mkvsp" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--mkvsp-eth0" Sep 5 00:38:03.825311 systemd-resolved[1402]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 5 00:38:03.835607 containerd[1560]: time="2025-09-05T00:38:03.835560198Z" level=info msg="StartContainer for \"2c394418f26bcbc78eb46084c95899ac3c9b3dc5285e676d536f3fc9628f5308\" returns successfully" Sep 5 00:38:03.859399 containerd[1560]: time="2025-09-05T00:38:03.859348316Z" level=info msg="connecting to shim bc8dc9b1c50e3bc668ad5f621b63961098b85ebc4487c866e41c6386f1416b04" address="unix:///run/containerd/s/5275b364f56a1ad07a1427cb7ac7157f7eddcd2ed5e5e8870037d7ca3440a3b9" namespace=k8s.io protocol=ttrpc version=3 Sep 5 00:38:03.876474 containerd[1560]: time="2025-09-05T00:38:03.876420110Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-54c46b8858-djlns,Uid:88cce520-c0f7-4ac8-8198-0a3910b138e3,Namespace:calico-system,Attempt:0,} returns sandbox id \"4f20b7f55784d4ecc91dc02b6c9e58fe6442ab73d4caad33271f7e309a3c3d8c\"" Sep 5 00:38:03.891097 systemd[1]: Started cri-containerd-bc8dc9b1c50e3bc668ad5f621b63961098b85ebc4487c866e41c6386f1416b04.scope - libcontainer container bc8dc9b1c50e3bc668ad5f621b63961098b85ebc4487c866e41c6386f1416b04. 
Sep 5 00:38:03.896097 systemd-networkd[1469]: calid7bc4611bce: Link UP Sep 5 00:38:03.901561 systemd-networkd[1469]: calid7bc4611bce: Gained carrier Sep 5 00:38:03.911070 systemd-resolved[1402]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 5 00:38:03.925919 containerd[1560]: 2025-09-05 00:38:03.608 [INFO][4457] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--7988f88666--vmwjg-eth0 goldmane-7988f88666- calico-system e3f5473f-8e14-474e-97d3-d029001dc5fc 868 0 2025-09-05 00:37:37 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7988f88666 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-7988f88666-vmwjg eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calid7bc4611bce [] [] }} ContainerID="e1698e30c1d597284087293a98b138162e499ceabebfb33d1ba0ba9a95eedb36" Namespace="calico-system" Pod="goldmane-7988f88666-vmwjg" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--vmwjg-" Sep 5 00:38:03.925919 containerd[1560]: 2025-09-05 00:38:03.609 [INFO][4457] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e1698e30c1d597284087293a98b138162e499ceabebfb33d1ba0ba9a95eedb36" Namespace="calico-system" Pod="goldmane-7988f88666-vmwjg" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--vmwjg-eth0" Sep 5 00:38:03.925919 containerd[1560]: 2025-09-05 00:38:03.664 [INFO][4496] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e1698e30c1d597284087293a98b138162e499ceabebfb33d1ba0ba9a95eedb36" HandleID="k8s-pod-network.e1698e30c1d597284087293a98b138162e499ceabebfb33d1ba0ba9a95eedb36" Workload="localhost-k8s-goldmane--7988f88666--vmwjg-eth0" Sep 5 00:38:03.925919 containerd[1560]: 2025-09-05 00:38:03.665 [INFO][4496] ipam/ipam_plugin.go 265: Auto 
assigning IP ContainerID="e1698e30c1d597284087293a98b138162e499ceabebfb33d1ba0ba9a95eedb36" HandleID="k8s-pod-network.e1698e30c1d597284087293a98b138162e499ceabebfb33d1ba0ba9a95eedb36" Workload="localhost-k8s-goldmane--7988f88666--vmwjg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d7630), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-7988f88666-vmwjg", "timestamp":"2025-09-05 00:38:03.663764526 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 00:38:03.925919 containerd[1560]: 2025-09-05 00:38:03.665 [INFO][4496] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:38:03.925919 containerd[1560]: 2025-09-05 00:38:03.781 [INFO][4496] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 00:38:03.925919 containerd[1560]: 2025-09-05 00:38:03.782 [INFO][4496] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 5 00:38:03.925919 containerd[1560]: 2025-09-05 00:38:03.846 [INFO][4496] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e1698e30c1d597284087293a98b138162e499ceabebfb33d1ba0ba9a95eedb36" host="localhost" Sep 5 00:38:03.925919 containerd[1560]: 2025-09-05 00:38:03.857 [INFO][4496] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 5 00:38:03.925919 containerd[1560]: 2025-09-05 00:38:03.865 [INFO][4496] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 5 00:38:03.925919 containerd[1560]: 2025-09-05 00:38:03.869 [INFO][4496] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 5 00:38:03.925919 containerd[1560]: 2025-09-05 00:38:03.872 [INFO][4496] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 
host="localhost" Sep 5 00:38:03.925919 containerd[1560]: 2025-09-05 00:38:03.872 [INFO][4496] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e1698e30c1d597284087293a98b138162e499ceabebfb33d1ba0ba9a95eedb36" host="localhost" Sep 5 00:38:03.925919 containerd[1560]: 2025-09-05 00:38:03.874 [INFO][4496] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.e1698e30c1d597284087293a98b138162e499ceabebfb33d1ba0ba9a95eedb36 Sep 5 00:38:03.925919 containerd[1560]: 2025-09-05 00:38:03.878 [INFO][4496] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e1698e30c1d597284087293a98b138162e499ceabebfb33d1ba0ba9a95eedb36" host="localhost" Sep 5 00:38:03.925919 containerd[1560]: 2025-09-05 00:38:03.884 [INFO][4496] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.e1698e30c1d597284087293a98b138162e499ceabebfb33d1ba0ba9a95eedb36" host="localhost" Sep 5 00:38:03.925919 containerd[1560]: 2025-09-05 00:38:03.885 [INFO][4496] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.e1698e30c1d597284087293a98b138162e499ceabebfb33d1ba0ba9a95eedb36" host="localhost" Sep 5 00:38:03.925919 containerd[1560]: 2025-09-05 00:38:03.885 [INFO][4496] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 5 00:38:03.925919 containerd[1560]: 2025-09-05 00:38:03.885 [INFO][4496] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="e1698e30c1d597284087293a98b138162e499ceabebfb33d1ba0ba9a95eedb36" HandleID="k8s-pod-network.e1698e30c1d597284087293a98b138162e499ceabebfb33d1ba0ba9a95eedb36" Workload="localhost-k8s-goldmane--7988f88666--vmwjg-eth0" Sep 5 00:38:03.926640 containerd[1560]: 2025-09-05 00:38:03.889 [INFO][4457] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e1698e30c1d597284087293a98b138162e499ceabebfb33d1ba0ba9a95eedb36" Namespace="calico-system" Pod="goldmane-7988f88666-vmwjg" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--vmwjg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7988f88666--vmwjg-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"e3f5473f-8e14-474e-97d3-d029001dc5fc", ResourceVersion:"868", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 37, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-7988f88666-vmwjg", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calid7bc4611bce", 
MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:38:03.926640 containerd[1560]: 2025-09-05 00:38:03.889 [INFO][4457] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="e1698e30c1d597284087293a98b138162e499ceabebfb33d1ba0ba9a95eedb36" Namespace="calico-system" Pod="goldmane-7988f88666-vmwjg" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--vmwjg-eth0" Sep 5 00:38:03.926640 containerd[1560]: 2025-09-05 00:38:03.889 [INFO][4457] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid7bc4611bce ContainerID="e1698e30c1d597284087293a98b138162e499ceabebfb33d1ba0ba9a95eedb36" Namespace="calico-system" Pod="goldmane-7988f88666-vmwjg" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--vmwjg-eth0" Sep 5 00:38:03.926640 containerd[1560]: 2025-09-05 00:38:03.903 [INFO][4457] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e1698e30c1d597284087293a98b138162e499ceabebfb33d1ba0ba9a95eedb36" Namespace="calico-system" Pod="goldmane-7988f88666-vmwjg" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--vmwjg-eth0" Sep 5 00:38:03.926640 containerd[1560]: 2025-09-05 00:38:03.905 [INFO][4457] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e1698e30c1d597284087293a98b138162e499ceabebfb33d1ba0ba9a95eedb36" Namespace="calico-system" Pod="goldmane-7988f88666-vmwjg" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--vmwjg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7988f88666--vmwjg-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"e3f5473f-8e14-474e-97d3-d029001dc5fc", ResourceVersion:"868", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 37, 37, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e1698e30c1d597284087293a98b138162e499ceabebfb33d1ba0ba9a95eedb36", Pod:"goldmane-7988f88666-vmwjg", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calid7bc4611bce", MAC:"4e:34:c7:6e:f2:4f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:38:03.926640 containerd[1560]: 2025-09-05 00:38:03.914 [INFO][4457] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e1698e30c1d597284087293a98b138162e499ceabebfb33d1ba0ba9a95eedb36" Namespace="calico-system" Pod="goldmane-7988f88666-vmwjg" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--vmwjg-eth0" Sep 5 00:38:03.952383 containerd[1560]: time="2025-09-05T00:38:03.952321726Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-mkvsp,Uid:6e23c2dc-ebba-4001-ae59-3bafc36e6807,Namespace:kube-system,Attempt:0,} returns sandbox id \"bc8dc9b1c50e3bc668ad5f621b63961098b85ebc4487c866e41c6386f1416b04\"" Sep 5 00:38:03.953366 kubelet[2717]: E0905 00:38:03.953333 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:38:03.955366 containerd[1560]: 
time="2025-09-05T00:38:03.955328451Z" level=info msg="CreateContainer within sandbox \"bc8dc9b1c50e3bc668ad5f621b63961098b85ebc4487c866e41c6386f1416b04\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 5 00:38:03.970385 containerd[1560]: time="2025-09-05T00:38:03.970321523Z" level=info msg="Container 1589f24c8f09d00eb14da9a92b57409ab72d54132ce677ea3e9c9e74a4b5e945: CDI devices from CRI Config.CDIDevices: []" Sep 5 00:38:03.972842 containerd[1560]: time="2025-09-05T00:38:03.972806514Z" level=info msg="connecting to shim e1698e30c1d597284087293a98b138162e499ceabebfb33d1ba0ba9a95eedb36" address="unix:///run/containerd/s/8caabba9d4a3c385fb1dbe472a3cf41ea576184d200d53ccca001e5908c47a8b" namespace=k8s.io protocol=ttrpc version=3 Sep 5 00:38:03.978426 containerd[1560]: time="2025-09-05T00:38:03.978320105Z" level=info msg="CreateContainer within sandbox \"bc8dc9b1c50e3bc668ad5f621b63961098b85ebc4487c866e41c6386f1416b04\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1589f24c8f09d00eb14da9a92b57409ab72d54132ce677ea3e9c9e74a4b5e945\"" Sep 5 00:38:03.979350 containerd[1560]: time="2025-09-05T00:38:03.979324487Z" level=info msg="StartContainer for \"1589f24c8f09d00eb14da9a92b57409ab72d54132ce677ea3e9c9e74a4b5e945\"" Sep 5 00:38:03.980090 containerd[1560]: time="2025-09-05T00:38:03.980061686Z" level=info msg="connecting to shim 1589f24c8f09d00eb14da9a92b57409ab72d54132ce677ea3e9c9e74a4b5e945" address="unix:///run/containerd/s/5275b364f56a1ad07a1427cb7ac7157f7eddcd2ed5e5e8870037d7ca3440a3b9" protocol=ttrpc version=3 Sep 5 00:38:04.004103 systemd[1]: Started cri-containerd-1589f24c8f09d00eb14da9a92b57409ab72d54132ce677ea3e9c9e74a4b5e945.scope - libcontainer container 1589f24c8f09d00eb14da9a92b57409ab72d54132ce677ea3e9c9e74a4b5e945. 
Sep 5 00:38:04.008212 systemd[1]: Started cri-containerd-e1698e30c1d597284087293a98b138162e499ceabebfb33d1ba0ba9a95eedb36.scope - libcontainer container e1698e30c1d597284087293a98b138162e499ceabebfb33d1ba0ba9a95eedb36.
Sep 5 00:38:04.026850 systemd-resolved[1402]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Sep 5 00:38:04.077431 containerd[1560]: time="2025-09-05T00:38:04.077374841Z" level=info msg="StartContainer for \"1589f24c8f09d00eb14da9a92b57409ab72d54132ce677ea3e9c9e74a4b5e945\" returns successfully"
Sep 5 00:38:04.085306 containerd[1560]: time="2025-09-05T00:38:04.085260560Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-vmwjg,Uid:e3f5473f-8e14-474e-97d3-d029001dc5fc,Namespace:calico-system,Attempt:0,} returns sandbox id \"e1698e30c1d597284087293a98b138162e499ceabebfb33d1ba0ba9a95eedb36\""
Sep 5 00:38:04.688818 kubelet[2717]: E0905 00:38:04.688760 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:38:04.701590 kubelet[2717]: I0905 00:38:04.701496 2717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-mkvsp" podStartSLOduration=40.70140361 podStartE2EDuration="40.70140361s" podCreationTimestamp="2025-09-05 00:37:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 00:38:04.699537336 +0000 UTC m=+46.271860259" watchObservedRunningTime="2025-09-05 00:38:04.70140361 +0000 UTC m=+46.273726523"
Sep 5 00:38:04.920555 systemd-networkd[1469]: calia95c1c243d0: Gained IPv6LL
Sep 5 00:38:05.073917 containerd[1560]: time="2025-09-05T00:38:05.073845693Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 00:38:05.074850 containerd[1560]: time="2025-09-05T00:38:05.074829082Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8760527"
Sep 5 00:38:05.076512 containerd[1560]: time="2025-09-05T00:38:05.076443319Z" level=info msg="ImageCreate event name:\"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 00:38:05.078609 containerd[1560]: time="2025-09-05T00:38:05.078570681Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 00:38:05.079237 containerd[1560]: time="2025-09-05T00:38:05.079185016Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"10253230\" in 1.422561958s"
Sep 5 00:38:05.079237 containerd[1560]: time="2025-09-05T00:38:05.079228341Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\""
Sep 5 00:38:05.080539 containerd[1560]: time="2025-09-05T00:38:05.080028340Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\""
Sep 5 00:38:05.081488 containerd[1560]: time="2025-09-05T00:38:05.081451611Z" level=info msg="CreateContainer within sandbox \"c3aa228dda805e61ba305403e7d959d23896bad36774a3df47ac8e4d45373696\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}"
Sep 5 00:38:05.094786 containerd[1560]: time="2025-09-05T00:38:05.094739193Z" level=info msg="Container 5d253b737f99fc7368b9756dd7c45f5a53b98c33d403d5bf0bbbf0c2af17cdb6: CDI devices from CRI Config.CDIDevices: []"
Sep 5 00:38:05.103312 containerd[1560]: time="2025-09-05T00:38:05.103260656Z" level=info msg="CreateContainer within sandbox \"c3aa228dda805e61ba305403e7d959d23896bad36774a3df47ac8e4d45373696\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"5d253b737f99fc7368b9756dd7c45f5a53b98c33d403d5bf0bbbf0c2af17cdb6\""
Sep 5 00:38:05.103731 containerd[1560]: time="2025-09-05T00:38:05.103708684Z" level=info msg="StartContainer for \"5d253b737f99fc7368b9756dd7c45f5a53b98c33d403d5bf0bbbf0c2af17cdb6\""
Sep 5 00:38:05.106309 containerd[1560]: time="2025-09-05T00:38:05.106255630Z" level=info msg="connecting to shim 5d253b737f99fc7368b9756dd7c45f5a53b98c33d403d5bf0bbbf0c2af17cdb6" address="unix:///run/containerd/s/13ab2f0110fbdf668bdacf243137c3b88a07f72fe458db7b80d2d021af889634" protocol=ttrpc version=3
Sep 5 00:38:05.126049 systemd[1]: Started cri-containerd-5d253b737f99fc7368b9756dd7c45f5a53b98c33d403d5bf0bbbf0c2af17cdb6.scope - libcontainer container 5d253b737f99fc7368b9756dd7c45f5a53b98c33d403d5bf0bbbf0c2af17cdb6.
Sep 5 00:38:05.175317 containerd[1560]: time="2025-09-05T00:38:05.174620646Z" level=info msg="StartContainer for \"5d253b737f99fc7368b9756dd7c45f5a53b98c33d403d5bf0bbbf0c2af17cdb6\" returns successfully"
Sep 5 00:38:05.431202 systemd-networkd[1469]: calid7bc4611bce: Gained IPv6LL
Sep 5 00:38:05.537280 kubelet[2717]: E0905 00:38:05.537200 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:38:05.537495 containerd[1560]: time="2025-09-05T00:38:05.537405194Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55cb865578-lg8kr,Uid:bf0a13b6-b355-41a8-a285-c37ee53d62ed,Namespace:calico-apiserver,Attempt:0,}"
Sep 5 00:38:05.537984 containerd[1560]: time="2025-09-05T00:38:05.537787524Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-g7xpz,Uid:fc7981ab-0414-4dff-966f-9c3d937d39c7,Namespace:kube-system,Attempt:0,}"
Sep 5 00:38:05.687052 systemd-networkd[1469]: cali3f9fe6ce0f4: Gained IPv6LL
Sep 5 00:38:05.697431 kubelet[2717]: E0905 00:38:05.697388 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:38:05.705565 systemd-networkd[1469]: cali925eb762442: Link UP
Sep 5 00:38:05.706912 systemd-networkd[1469]: cali925eb762442: Gained carrier
Sep 5 00:38:05.732919 containerd[1560]: 2025-09-05 00:38:05.590 [INFO][4784] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--55cb865578--lg8kr-eth0 calico-apiserver-55cb865578- calico-apiserver bf0a13b6-b355-41a8-a285-c37ee53d62ed 871 0 2025-09-05 00:37:35 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:55cb865578 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-55cb865578-lg8kr eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali925eb762442 [] [] }} ContainerID="1c1758dfe9df2bf2c835a86f5951ea0b918a8265195d7d1a77771ff512ce6a80" Namespace="calico-apiserver" Pod="calico-apiserver-55cb865578-lg8kr" WorkloadEndpoint="localhost-k8s-calico--apiserver--55cb865578--lg8kr-"
Sep 5 00:38:05.732919 containerd[1560]: 2025-09-05 00:38:05.590 [INFO][4784] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1c1758dfe9df2bf2c835a86f5951ea0b918a8265195d7d1a77771ff512ce6a80" Namespace="calico-apiserver" Pod="calico-apiserver-55cb865578-lg8kr" WorkloadEndpoint="localhost-k8s-calico--apiserver--55cb865578--lg8kr-eth0"
Sep 5 00:38:05.732919 containerd[1560]: 2025-09-05 00:38:05.624 [INFO][4813] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1c1758dfe9df2bf2c835a86f5951ea0b918a8265195d7d1a77771ff512ce6a80" HandleID="k8s-pod-network.1c1758dfe9df2bf2c835a86f5951ea0b918a8265195d7d1a77771ff512ce6a80" Workload="localhost-k8s-calico--apiserver--55cb865578--lg8kr-eth0"
Sep 5 00:38:05.732919 containerd[1560]: 2025-09-05 00:38:05.624 [INFO][4813] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1c1758dfe9df2bf2c835a86f5951ea0b918a8265195d7d1a77771ff512ce6a80" HandleID="k8s-pod-network.1c1758dfe9df2bf2c835a86f5951ea0b918a8265195d7d1a77771ff512ce6a80" Workload="localhost-k8s-calico--apiserver--55cb865578--lg8kr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000325480), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-55cb865578-lg8kr", "timestamp":"2025-09-05 00:38:05.624703745 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Sep 5 00:38:05.732919 containerd[1560]: 2025-09-05 00:38:05.625 [INFO][4813] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 5 00:38:05.732919 containerd[1560]: 2025-09-05 00:38:05.625 [INFO][4813] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 5 00:38:05.732919 containerd[1560]: 2025-09-05 00:38:05.625 [INFO][4813] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Sep 5 00:38:05.732919 containerd[1560]: 2025-09-05 00:38:05.634 [INFO][4813] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1c1758dfe9df2bf2c835a86f5951ea0b918a8265195d7d1a77771ff512ce6a80" host="localhost"
Sep 5 00:38:05.732919 containerd[1560]: 2025-09-05 00:38:05.642 [INFO][4813] ipam/ipam.go 394: Looking up existing affinities for host host="localhost"
Sep 5 00:38:05.732919 containerd[1560]: 2025-09-05 00:38:05.647 [INFO][4813] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost"
Sep 5 00:38:05.732919 containerd[1560]: 2025-09-05 00:38:05.650 [INFO][4813] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Sep 5 00:38:05.732919 containerd[1560]: 2025-09-05 00:38:05.652 [INFO][4813] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Sep 5 00:38:05.732919 containerd[1560]: 2025-09-05 00:38:05.652 [INFO][4813] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.1c1758dfe9df2bf2c835a86f5951ea0b918a8265195d7d1a77771ff512ce6a80" host="localhost"
Sep 5 00:38:05.732919 containerd[1560]: 2025-09-05 00:38:05.654 [INFO][4813] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.1c1758dfe9df2bf2c835a86f5951ea0b918a8265195d7d1a77771ff512ce6a80
Sep 5 00:38:05.732919 containerd[1560]: 2025-09-05 00:38:05.664 [INFO][4813] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.1c1758dfe9df2bf2c835a86f5951ea0b918a8265195d7d1a77771ff512ce6a80" host="localhost"
Sep 5 00:38:05.732919 containerd[1560]: 2025-09-05 00:38:05.698 [INFO][4813] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.1c1758dfe9df2bf2c835a86f5951ea0b918a8265195d7d1a77771ff512ce6a80" host="localhost"
Sep 5 00:38:05.732919 containerd[1560]: 2025-09-05 00:38:05.698 [INFO][4813] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.1c1758dfe9df2bf2c835a86f5951ea0b918a8265195d7d1a77771ff512ce6a80" host="localhost"
Sep 5 00:38:05.732919 containerd[1560]: 2025-09-05 00:38:05.698 [INFO][4813] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 5 00:38:05.732919 containerd[1560]: 2025-09-05 00:38:05.698 [INFO][4813] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="1c1758dfe9df2bf2c835a86f5951ea0b918a8265195d7d1a77771ff512ce6a80" HandleID="k8s-pod-network.1c1758dfe9df2bf2c835a86f5951ea0b918a8265195d7d1a77771ff512ce6a80" Workload="localhost-k8s-calico--apiserver--55cb865578--lg8kr-eth0"
Sep 5 00:38:05.733680 containerd[1560]: 2025-09-05 00:38:05.701 [INFO][4784] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1c1758dfe9df2bf2c835a86f5951ea0b918a8265195d7d1a77771ff512ce6a80" Namespace="calico-apiserver" Pod="calico-apiserver-55cb865578-lg8kr" WorkloadEndpoint="localhost-k8s-calico--apiserver--55cb865578--lg8kr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--55cb865578--lg8kr-eth0", GenerateName:"calico-apiserver-55cb865578-", Namespace:"calico-apiserver", SelfLink:"", UID:"bf0a13b6-b355-41a8-a285-c37ee53d62ed", ResourceVersion:"871", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 37, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"55cb865578", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-55cb865578-lg8kr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali925eb762442", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 5 00:38:05.733680 containerd[1560]: 2025-09-05 00:38:05.702 [INFO][4784] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="1c1758dfe9df2bf2c835a86f5951ea0b918a8265195d7d1a77771ff512ce6a80" Namespace="calico-apiserver" Pod="calico-apiserver-55cb865578-lg8kr" WorkloadEndpoint="localhost-k8s-calico--apiserver--55cb865578--lg8kr-eth0"
Sep 5 00:38:05.733680 containerd[1560]: 2025-09-05 00:38:05.702 [INFO][4784] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali925eb762442 ContainerID="1c1758dfe9df2bf2c835a86f5951ea0b918a8265195d7d1a77771ff512ce6a80" Namespace="calico-apiserver" Pod="calico-apiserver-55cb865578-lg8kr" WorkloadEndpoint="localhost-k8s-calico--apiserver--55cb865578--lg8kr-eth0"
Sep 5 00:38:05.733680 containerd[1560]: 2025-09-05 00:38:05.705 [INFO][4784] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1c1758dfe9df2bf2c835a86f5951ea0b918a8265195d7d1a77771ff512ce6a80" Namespace="calico-apiserver" Pod="calico-apiserver-55cb865578-lg8kr" WorkloadEndpoint="localhost-k8s-calico--apiserver--55cb865578--lg8kr-eth0"
Sep 5 00:38:05.733680 containerd[1560]: 2025-09-05 00:38:05.707 [INFO][4784] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1c1758dfe9df2bf2c835a86f5951ea0b918a8265195d7d1a77771ff512ce6a80" Namespace="calico-apiserver" Pod="calico-apiserver-55cb865578-lg8kr" WorkloadEndpoint="localhost-k8s-calico--apiserver--55cb865578--lg8kr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--55cb865578--lg8kr-eth0", GenerateName:"calico-apiserver-55cb865578-", Namespace:"calico-apiserver", SelfLink:"", UID:"bf0a13b6-b355-41a8-a285-c37ee53d62ed", ResourceVersion:"871", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 37, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"55cb865578", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1c1758dfe9df2bf2c835a86f5951ea0b918a8265195d7d1a77771ff512ce6a80", Pod:"calico-apiserver-55cb865578-lg8kr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali925eb762442", MAC:"92:f4:3e:1e:ca:78", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 5 00:38:05.733680 containerd[1560]: 2025-09-05 00:38:05.724 [INFO][4784] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1c1758dfe9df2bf2c835a86f5951ea0b918a8265195d7d1a77771ff512ce6a80" Namespace="calico-apiserver" Pod="calico-apiserver-55cb865578-lg8kr" WorkloadEndpoint="localhost-k8s-calico--apiserver--55cb865578--lg8kr-eth0"
Sep 5 00:38:05.800167 containerd[1560]: time="2025-09-05T00:38:05.799852720Z" level=info msg="connecting to shim 1c1758dfe9df2bf2c835a86f5951ea0b918a8265195d7d1a77771ff512ce6a80" address="unix:///run/containerd/s/a1903c10001ebcdc6747bcf6ecf32a11afde52b52b21d8bbf6d38570c9c0c9e3" namespace=k8s.io protocol=ttrpc version=3
Sep 5 00:38:05.823074 systemd-networkd[1469]: cali7053ed27508: Link UP
Sep 5 00:38:05.825470 systemd-networkd[1469]: cali7053ed27508: Gained carrier
Sep 5 00:38:05.844376 systemd[1]: Started cri-containerd-1c1758dfe9df2bf2c835a86f5951ea0b918a8265195d7d1a77771ff512ce6a80.scope - libcontainer container 1c1758dfe9df2bf2c835a86f5951ea0b918a8265195d7d1a77771ff512ce6a80.
Sep 5 00:38:05.849007 containerd[1560]: 2025-09-05 00:38:05.594 [INFO][4797] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--g7xpz-eth0 coredns-7c65d6cfc9- kube-system fc7981ab-0414-4dff-966f-9c3d937d39c7 872 0 2025-09-05 00:37:24 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-g7xpz eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali7053ed27508 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="f9c946904fdea97447ad1394121d2c535167502754b32d31fa0d3c99044101f0" Namespace="kube-system" Pod="coredns-7c65d6cfc9-g7xpz" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--g7xpz-"
Sep 5 00:38:05.849007 containerd[1560]: 2025-09-05 00:38:05.595 [INFO][4797] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f9c946904fdea97447ad1394121d2c535167502754b32d31fa0d3c99044101f0" Namespace="kube-system" Pod="coredns-7c65d6cfc9-g7xpz" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--g7xpz-eth0"
Sep 5 00:38:05.849007 containerd[1560]: 2025-09-05 00:38:05.631 [INFO][4819] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f9c946904fdea97447ad1394121d2c535167502754b32d31fa0d3c99044101f0" HandleID="k8s-pod-network.f9c946904fdea97447ad1394121d2c535167502754b32d31fa0d3c99044101f0" Workload="localhost-k8s-coredns--7c65d6cfc9--g7xpz-eth0"
Sep 5 00:38:05.849007 containerd[1560]: 2025-09-05 00:38:05.631 [INFO][4819] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f9c946904fdea97447ad1394121d2c535167502754b32d31fa0d3c99044101f0" HandleID="k8s-pod-network.f9c946904fdea97447ad1394121d2c535167502754b32d31fa0d3c99044101f0" Workload="localhost-k8s-coredns--7c65d6cfc9--g7xpz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e6f0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-g7xpz", "timestamp":"2025-09-05 00:38:05.631257339 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Sep 5 00:38:05.849007 containerd[1560]: 2025-09-05 00:38:05.631 [INFO][4819] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 5 00:38:05.849007 containerd[1560]: 2025-09-05 00:38:05.698 [INFO][4819] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 5 00:38:05.849007 containerd[1560]: 2025-09-05 00:38:05.698 [INFO][4819] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Sep 5 00:38:05.849007 containerd[1560]: 2025-09-05 00:38:05.739 [INFO][4819] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f9c946904fdea97447ad1394121d2c535167502754b32d31fa0d3c99044101f0" host="localhost"
Sep 5 00:38:05.849007 containerd[1560]: 2025-09-05 00:38:05.750 [INFO][4819] ipam/ipam.go 394: Looking up existing affinities for host host="localhost"
Sep 5 00:38:05.849007 containerd[1560]: 2025-09-05 00:38:05.765 [INFO][4819] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost"
Sep 5 00:38:05.849007 containerd[1560]: 2025-09-05 00:38:05.770 [INFO][4819] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Sep 5 00:38:05.849007 containerd[1560]: 2025-09-05 00:38:05.776 [INFO][4819] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Sep 5 00:38:05.849007 containerd[1560]: 2025-09-05 00:38:05.778 [INFO][4819] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f9c946904fdea97447ad1394121d2c535167502754b32d31fa0d3c99044101f0" host="localhost"
Sep 5 00:38:05.849007 containerd[1560]: 2025-09-05 00:38:05.780 [INFO][4819] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.f9c946904fdea97447ad1394121d2c535167502754b32d31fa0d3c99044101f0
Sep 5 00:38:05.849007 containerd[1560]: 2025-09-05 00:38:05.789 [INFO][4819] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f9c946904fdea97447ad1394121d2c535167502754b32d31fa0d3c99044101f0" host="localhost"
Sep 5 00:38:05.849007 containerd[1560]: 2025-09-05 00:38:05.802 [INFO][4819] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.f9c946904fdea97447ad1394121d2c535167502754b32d31fa0d3c99044101f0" host="localhost"
Sep 5 00:38:05.849007 containerd[1560]: 2025-09-05 00:38:05.802 [INFO][4819] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.f9c946904fdea97447ad1394121d2c535167502754b32d31fa0d3c99044101f0" host="localhost"
Sep 5 00:38:05.849007 containerd[1560]: 2025-09-05 00:38:05.802 [INFO][4819] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 5 00:38:05.849007 containerd[1560]: 2025-09-05 00:38:05.802 [INFO][4819] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="f9c946904fdea97447ad1394121d2c535167502754b32d31fa0d3c99044101f0" HandleID="k8s-pod-network.f9c946904fdea97447ad1394121d2c535167502754b32d31fa0d3c99044101f0" Workload="localhost-k8s-coredns--7c65d6cfc9--g7xpz-eth0"
Sep 5 00:38:05.849590 containerd[1560]: 2025-09-05 00:38:05.810 [INFO][4797] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f9c946904fdea97447ad1394121d2c535167502754b32d31fa0d3c99044101f0" Namespace="kube-system" Pod="coredns-7c65d6cfc9-g7xpz" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--g7xpz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--g7xpz-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"fc7981ab-0414-4dff-966f-9c3d937d39c7", ResourceVersion:"872", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 37, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-g7xpz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7053ed27508", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 5 00:38:05.849590 containerd[1560]: 2025-09-05 00:38:05.811 [INFO][4797] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="f9c946904fdea97447ad1394121d2c535167502754b32d31fa0d3c99044101f0" Namespace="kube-system" Pod="coredns-7c65d6cfc9-g7xpz" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--g7xpz-eth0"
Sep 5 00:38:05.849590 containerd[1560]: 2025-09-05 00:38:05.811 [INFO][4797] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7053ed27508 ContainerID="f9c946904fdea97447ad1394121d2c535167502754b32d31fa0d3c99044101f0" Namespace="kube-system" Pod="coredns-7c65d6cfc9-g7xpz" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--g7xpz-eth0"
Sep 5 00:38:05.849590 containerd[1560]: 2025-09-05 00:38:05.826 [INFO][4797] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f9c946904fdea97447ad1394121d2c535167502754b32d31fa0d3c99044101f0" Namespace="kube-system" Pod="coredns-7c65d6cfc9-g7xpz" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--g7xpz-eth0"
Sep 5 00:38:05.849590 containerd[1560]: 2025-09-05 00:38:05.829 [INFO][4797] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f9c946904fdea97447ad1394121d2c535167502754b32d31fa0d3c99044101f0" Namespace="kube-system" Pod="coredns-7c65d6cfc9-g7xpz" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--g7xpz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--g7xpz-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"fc7981ab-0414-4dff-966f-9c3d937d39c7", ResourceVersion:"872", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 37, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f9c946904fdea97447ad1394121d2c535167502754b32d31fa0d3c99044101f0", Pod:"coredns-7c65d6cfc9-g7xpz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7053ed27508", MAC:"1e:36:c4:f8:90:22", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 5 00:38:05.849590 containerd[1560]: 2025-09-05 00:38:05.843 [INFO][4797] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f9c946904fdea97447ad1394121d2c535167502754b32d31fa0d3c99044101f0" Namespace="kube-system" Pod="coredns-7c65d6cfc9-g7xpz" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--g7xpz-eth0"
Sep 5 00:38:05.892948 systemd-resolved[1402]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Sep 5 00:38:05.902284 containerd[1560]: time="2025-09-05T00:38:05.902208703Z" level=info msg="connecting to shim f9c946904fdea97447ad1394121d2c535167502754b32d31fa0d3c99044101f0" address="unix:///run/containerd/s/0dd93eeff8e25b2f3deb1a10dfdda58b9e7905bb0507470a73830398982bede3" namespace=k8s.io protocol=ttrpc version=3
Sep 5 00:38:05.936296 systemd[1]: Started cri-containerd-f9c946904fdea97447ad1394121d2c535167502754b32d31fa0d3c99044101f0.scope - libcontainer container f9c946904fdea97447ad1394121d2c535167502754b32d31fa0d3c99044101f0.
Sep 5 00:38:05.956275 systemd-resolved[1402]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Sep 5 00:38:05.959599 containerd[1560]: time="2025-09-05T00:38:05.959446796Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55cb865578-lg8kr,Uid:bf0a13b6-b355-41a8-a285-c37ee53d62ed,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"1c1758dfe9df2bf2c835a86f5951ea0b918a8265195d7d1a77771ff512ce6a80\""
Sep 5 00:38:06.000203 containerd[1560]: time="2025-09-05T00:38:06.000135479Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-g7xpz,Uid:fc7981ab-0414-4dff-966f-9c3d937d39c7,Namespace:kube-system,Attempt:0,} returns sandbox id \"f9c946904fdea97447ad1394121d2c535167502754b32d31fa0d3c99044101f0\""
Sep 5 00:38:06.001601 kubelet[2717]: E0905 00:38:06.001567 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:38:06.005100 containerd[1560]: time="2025-09-05T00:38:06.005026927Z" level=info msg="CreateContainer within sandbox \"f9c946904fdea97447ad1394121d2c535167502754b32d31fa0d3c99044101f0\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Sep 5 00:38:06.015719 containerd[1560]: time="2025-09-05T00:38:06.015638144Z" level=info msg="Container 7af8b1a782e33f0e40db5fe55e2c7da8b0a66c6aed33187dc9ad2e96c7cfe5b6: CDI devices from CRI Config.CDIDevices: []"
Sep 5 00:38:06.027695 containerd[1560]: time="2025-09-05T00:38:06.027633705Z" level=info msg="CreateContainer within sandbox \"f9c946904fdea97447ad1394121d2c535167502754b32d31fa0d3c99044101f0\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7af8b1a782e33f0e40db5fe55e2c7da8b0a66c6aed33187dc9ad2e96c7cfe5b6\""
Sep 5 00:38:06.028388 containerd[1560]: time="2025-09-05T00:38:06.028324971Z" level=info msg="StartContainer for \"7af8b1a782e33f0e40db5fe55e2c7da8b0a66c6aed33187dc9ad2e96c7cfe5b6\""
Sep 5 00:38:06.029461 containerd[1560]: time="2025-09-05T00:38:06.029429234Z" level=info msg="connecting to shim 7af8b1a782e33f0e40db5fe55e2c7da8b0a66c6aed33187dc9ad2e96c7cfe5b6" address="unix:///run/containerd/s/0dd93eeff8e25b2f3deb1a10dfdda58b9e7905bb0507470a73830398982bede3" protocol=ttrpc version=3
Sep 5 00:38:06.056196 systemd[1]: Started cri-containerd-7af8b1a782e33f0e40db5fe55e2c7da8b0a66c6aed33187dc9ad2e96c7cfe5b6.scope - libcontainer container 7af8b1a782e33f0e40db5fe55e2c7da8b0a66c6aed33187dc9ad2e96c7cfe5b6.
Sep 5 00:38:06.098917 containerd[1560]: time="2025-09-05T00:38:06.098740843Z" level=info msg="StartContainer for \"7af8b1a782e33f0e40db5fe55e2c7da8b0a66c6aed33187dc9ad2e96c7cfe5b6\" returns successfully"
Sep 5 00:38:06.701194 kubelet[2717]: E0905 00:38:06.701140 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:38:06.703124 kubelet[2717]: E0905 00:38:06.703090 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:38:06.713937 kubelet[2717]: I0905 00:38:06.713464 2717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-g7xpz" podStartSLOduration=42.713424863 podStartE2EDuration="42.713424863s" podCreationTimestamp="2025-09-05 00:37:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 00:38:06.713222837 +0000 UTC m=+48.285545760" watchObservedRunningTime="2025-09-05 00:38:06.713424863 +0000 UTC m=+48.285747776"
Sep 5 00:38:06.774844 systemd[1]: Started sshd@8-10.0.0.115:22-10.0.0.1:56960.service - OpenSSH per-connection server daemon (10.0.0.1:56960).
Sep 5 00:38:06.855142 sshd[4980]: Accepted publickey for core from 10.0.0.1 port 56960 ssh2: RSA SHA256:FTPBIqxhuV7uWZ1wDxThX13wKyZS1sKik/rNT688yZo
Sep 5 00:38:06.859326 sshd-session[4980]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 00:38:06.873272 systemd-logind[1537]: New session 9 of user core.
Sep 5 00:38:06.889231 systemd[1]: Started session-9.scope - Session 9 of User core.
Sep 5 00:38:07.221350 sshd[4984]: Connection closed by 10.0.0.1 port 56960
Sep 5 00:38:07.221799 sshd-session[4980]: pam_unix(sshd:session): session closed for user core
Sep 5 00:38:07.226762 systemd[1]: sshd@8-10.0.0.115:22-10.0.0.1:56960.service: Deactivated successfully.
Sep 5 00:38:07.230971 systemd[1]: session-9.scope: Deactivated successfully.
Sep 5 00:38:07.234404 systemd-logind[1537]: Session 9 logged out. Waiting for processes to exit.
Sep 5 00:38:07.235748 systemd-logind[1537]: Removed session 9.
Sep 5 00:38:07.543653 systemd-networkd[1469]: cali7053ed27508: Gained IPv6LL
Sep 5 00:38:07.608099 systemd-networkd[1469]: cali925eb762442: Gained IPv6LL
Sep 5 00:38:07.705411 kubelet[2717]: E0905 00:38:07.705350 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:38:07.903334 containerd[1560]: time="2025-09-05T00:38:07.903173942Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 00:38:07.904385 containerd[1560]: time="2025-09-05T00:38:07.904336789Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=47333864"
Sep 5 00:38:07.906242 containerd[1560]: time="2025-09-05T00:38:07.906192584Z" level=info msg="ImageCreate event name:\"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 00:38:07.909010 containerd[1560]: time="2025-09-05T00:38:07.908855570Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 00:38:07.909640 containerd[1560]: time="2025-09-05T00:38:07.909589307Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 2.829535016s"
Sep 5 00:38:07.909640 containerd[1560]: time="2025-09-05T00:38:07.909621721Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\""
Sep 5 00:38:07.910752 containerd[1560]: time="2025-09-05T00:38:07.910708009Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\""
Sep 5 00:38:07.912468 containerd[1560]: time="2025-09-05T00:38:07.912422426Z" level=info msg="CreateContainer within sandbox \"8e0458ee0afa541c1d0482715f1a09dc0cf61d6d463239da5b3bf860893ade79\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Sep 5 00:38:07.922828 containerd[1560]: time="2025-09-05T00:38:07.922767499Z" level=info msg="Container 60f54fc6d46d4e2d94f50e76e562b3012c40cc84df039c8145d5539d20b6ee2e: CDI devices from CRI Config.CDIDevices: []"
Sep 5 00:38:07.934542 containerd[1560]: time="2025-09-05T00:38:07.934468257Z" level=info msg="CreateContainer within sandbox \"8e0458ee0afa541c1d0482715f1a09dc0cf61d6d463239da5b3bf860893ade79\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"60f54fc6d46d4e2d94f50e76e562b3012c40cc84df039c8145d5539d20b6ee2e\""
Sep 5 00:38:07.935148 containerd[1560]: time="2025-09-05T00:38:07.935115825Z" level=info msg="StartContainer for \"60f54fc6d46d4e2d94f50e76e562b3012c40cc84df039c8145d5539d20b6ee2e\""
Sep 5 00:38:07.936237 containerd[1560]: time="2025-09-05T00:38:07.936203326Z" level=info msg="connecting to shim 60f54fc6d46d4e2d94f50e76e562b3012c40cc84df039c8145d5539d20b6ee2e" address="unix:///run/containerd/s/d67ce1165bfc7f3866b342374bc8812b91ffada7728ea9ebb69fd5ee970ad253" protocol=ttrpc version=3
Sep 5 00:38:07.962073 systemd[1]: Started cri-containerd-60f54fc6d46d4e2d94f50e76e562b3012c40cc84df039c8145d5539d20b6ee2e.scope - libcontainer container 60f54fc6d46d4e2d94f50e76e562b3012c40cc84df039c8145d5539d20b6ee2e.
Sep 5 00:38:08.014604 containerd[1560]: time="2025-09-05T00:38:08.014540668Z" level=info msg="StartContainer for \"60f54fc6d46d4e2d94f50e76e562b3012c40cc84df039c8145d5539d20b6ee2e\" returns successfully"
Sep 5 00:38:08.709636 kubelet[2717]: E0905 00:38:08.709245 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:38:08.772884 kubelet[2717]: I0905 00:38:08.772818 2717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-55cb865578-qrpwz" podStartSLOduration=27.918883199 podStartE2EDuration="33.772801382s" podCreationTimestamp="2025-09-05 00:37:35 +0000 UTC" firstStartedPulling="2025-09-05 00:38:02.05671588 +0000 UTC m=+43.629038793" lastFinishedPulling="2025-09-05 00:38:07.910634063 +0000 UTC m=+49.482956976" observedRunningTime="2025-09-05 00:38:08.771758631 +0000 UTC m=+50.344081544" watchObservedRunningTime="2025-09-05 00:38:08.772801382 +0000 UTC m=+50.345124295"
Sep 5 00:38:09.949257 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1278126972.mount: Deactivated successfully.
Sep 5 00:38:10.088304 containerd[1560]: time="2025-09-05T00:38:10.088206896Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:38:10.089211 containerd[1560]: time="2025-09-05T00:38:10.089155210Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=33085545" Sep 5 00:38:10.090555 containerd[1560]: time="2025-09-05T00:38:10.090522945Z" level=info msg="ImageCreate event name:\"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:38:10.093329 containerd[1560]: time="2025-09-05T00:38:10.093244135Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:38:10.094057 containerd[1560]: time="2025-09-05T00:38:10.094014252Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"33085375\" in 2.183155046s" Sep 5 00:38:10.094057 containerd[1560]: time="2025-09-05T00:38:10.094050401Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\"" Sep 5 00:38:10.095383 containerd[1560]: time="2025-09-05T00:38:10.095080787Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 5 00:38:10.097755 containerd[1560]: time="2025-09-05T00:38:10.097705699Z" level=info msg="CreateContainer within sandbox 
\"6d12e72b61a889529de0b3062ab4791add6778a69e5da54447fc8272b44ccff5\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 5 00:38:10.107292 containerd[1560]: time="2025-09-05T00:38:10.107221506Z" level=info msg="Container 6ec5b9f2373d1f0c09d4c4d78d3be041af164041364deb63918defe0c2606f7c: CDI devices from CRI Config.CDIDevices: []" Sep 5 00:38:10.119532 containerd[1560]: time="2025-09-05T00:38:10.119454666Z" level=info msg="CreateContainer within sandbox \"6d12e72b61a889529de0b3062ab4791add6778a69e5da54447fc8272b44ccff5\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"6ec5b9f2373d1f0c09d4c4d78d3be041af164041364deb63918defe0c2606f7c\"" Sep 5 00:38:10.120078 containerd[1560]: time="2025-09-05T00:38:10.120045541Z" level=info msg="StartContainer for \"6ec5b9f2373d1f0c09d4c4d78d3be041af164041364deb63918defe0c2606f7c\"" Sep 5 00:38:10.121448 containerd[1560]: time="2025-09-05T00:38:10.121404748Z" level=info msg="connecting to shim 6ec5b9f2373d1f0c09d4c4d78d3be041af164041364deb63918defe0c2606f7c" address="unix:///run/containerd/s/f6ebe48d03ba851465407de817f426366c9d54c1f5c534dec7e670ffbaee59e0" protocol=ttrpc version=3 Sep 5 00:38:10.176262 systemd[1]: Started cri-containerd-6ec5b9f2373d1f0c09d4c4d78d3be041af164041364deb63918defe0c2606f7c.scope - libcontainer container 6ec5b9f2373d1f0c09d4c4d78d3be041af164041364deb63918defe0c2606f7c. 
Sep 5 00:38:10.259320 containerd[1560]: time="2025-09-05T00:38:10.259255041Z" level=info msg="StartContainer for \"6ec5b9f2373d1f0c09d4c4d78d3be041af164041364deb63918defe0c2606f7c\" returns successfully" Sep 5 00:38:10.896617 kubelet[2717]: I0905 00:38:10.896199 2717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-6f6c47d9dc-zzbkq" podStartSLOduration=3.540065111 podStartE2EDuration="11.89618077s" podCreationTimestamp="2025-09-05 00:37:59 +0000 UTC" firstStartedPulling="2025-09-05 00:38:01.738801578 +0000 UTC m=+43.311124491" lastFinishedPulling="2025-09-05 00:38:10.094917227 +0000 UTC m=+51.667240150" observedRunningTime="2025-09-05 00:38:10.89543446 +0000 UTC m=+52.467757373" watchObservedRunningTime="2025-09-05 00:38:10.89618077 +0000 UTC m=+52.468503673" Sep 5 00:38:12.234593 systemd[1]: Started sshd@9-10.0.0.115:22-10.0.0.1:41578.service - OpenSSH per-connection server daemon (10.0.0.1:41578). Sep 5 00:38:12.640783 sshd[5108]: Accepted publickey for core from 10.0.0.1 port 41578 ssh2: RSA SHA256:FTPBIqxhuV7uWZ1wDxThX13wKyZS1sKik/rNT688yZo Sep 5 00:38:12.643098 sshd-session[5108]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:38:12.649610 systemd-logind[1537]: New session 10 of user core. Sep 5 00:38:12.655021 systemd[1]: Started session-10.scope - Session 10 of User core. 
Sep 5 00:38:12.678162 containerd[1560]: time="2025-09-05T00:38:12.678101545Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c364127f88db9f9a27639f3619c8b7a054c608174db95306f5480e5e5d4c876b\" id:\"64fdc3f1fa23ab72bca4bc5b6d167e21883138e520ec828a1e9528ec9b79e97e\" pid:5127 exited_at:{seconds:1757032692 nanos:677664891}" Sep 5 00:38:12.872120 sshd[5141]: Connection closed by 10.0.0.1 port 41578 Sep 5 00:38:12.872499 sshd-session[5108]: pam_unix(sshd:session): session closed for user core Sep 5 00:38:12.879487 systemd[1]: sshd@9-10.0.0.115:22-10.0.0.1:41578.service: Deactivated successfully. Sep 5 00:38:12.883165 systemd[1]: session-10.scope: Deactivated successfully. Sep 5 00:38:12.885202 systemd-logind[1537]: Session 10 logged out. Waiting for processes to exit. Sep 5 00:38:12.886731 systemd-logind[1537]: Removed session 10. Sep 5 00:38:13.873391 containerd[1560]: time="2025-09-05T00:38:13.873315736Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:38:13.879199 containerd[1560]: time="2025-09-05T00:38:13.879127193Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=51277746" Sep 5 00:38:13.881753 containerd[1560]: time="2025-09-05T00:38:13.881687402Z" level=info msg="ImageCreate event name:\"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:38:13.901000 containerd[1560]: time="2025-09-05T00:38:13.900950013Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:38:13.901914 containerd[1560]: time="2025-09-05T00:38:13.901865972Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with 
image id \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"52770417\" in 3.806755027s" Sep 5 00:38:13.901976 containerd[1560]: time="2025-09-05T00:38:13.901916551Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\"" Sep 5 00:38:13.907629 containerd[1560]: time="2025-09-05T00:38:13.907561142Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 5 00:38:13.939758 containerd[1560]: time="2025-09-05T00:38:13.939585160Z" level=info msg="CreateContainer within sandbox \"4f20b7f55784d4ecc91dc02b6c9e58fe6442ab73d4caad33271f7e309a3c3d8c\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 5 00:38:14.217061 containerd[1560]: time="2025-09-05T00:38:14.217005304Z" level=info msg="Container b70626487221542699d490b4a60452c37ad638e1f04f88750be07e7288493bca: CDI devices from CRI Config.CDIDevices: []" Sep 5 00:38:14.515357 containerd[1560]: time="2025-09-05T00:38:14.515119106Z" level=info msg="CreateContainer within sandbox \"4f20b7f55784d4ecc91dc02b6c9e58fe6442ab73d4caad33271f7e309a3c3d8c\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"b70626487221542699d490b4a60452c37ad638e1f04f88750be07e7288493bca\"" Sep 5 00:38:14.609052 containerd[1560]: time="2025-09-05T00:38:14.608953336Z" level=info msg="StartContainer for \"b70626487221542699d490b4a60452c37ad638e1f04f88750be07e7288493bca\"" Sep 5 00:38:14.610528 containerd[1560]: time="2025-09-05T00:38:14.610472963Z" level=info msg="connecting to shim b70626487221542699d490b4a60452c37ad638e1f04f88750be07e7288493bca" 
address="unix:///run/containerd/s/237e8b650eeee7ea1cb9dc47e0316e3afb354c9acacc803e9ca48157235766e2" protocol=ttrpc version=3 Sep 5 00:38:14.648226 systemd[1]: Started cri-containerd-b70626487221542699d490b4a60452c37ad638e1f04f88750be07e7288493bca.scope - libcontainer container b70626487221542699d490b4a60452c37ad638e1f04f88750be07e7288493bca. Sep 5 00:38:14.987120 containerd[1560]: time="2025-09-05T00:38:14.987069604Z" level=info msg="StartContainer for \"b70626487221542699d490b4a60452c37ad638e1f04f88750be07e7288493bca\" returns successfully" Sep 5 00:38:15.148939 kubelet[2717]: I0905 00:38:15.148287 2717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-54c46b8858-djlns" podStartSLOduration=27.11953288 podStartE2EDuration="37.148266145s" podCreationTimestamp="2025-09-05 00:37:38 +0000 UTC" firstStartedPulling="2025-09-05 00:38:03.878545656 +0000 UTC m=+45.450868569" lastFinishedPulling="2025-09-05 00:38:13.907278921 +0000 UTC m=+55.479601834" observedRunningTime="2025-09-05 00:38:15.147393643 +0000 UTC m=+56.719716556" watchObservedRunningTime="2025-09-05 00:38:15.148266145 +0000 UTC m=+56.720589068" Sep 5 00:38:16.137397 containerd[1560]: time="2025-09-05T00:38:16.137343012Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b70626487221542699d490b4a60452c37ad638e1f04f88750be07e7288493bca\" id:\"5e2bb8d1e54eb17d2681dd10fabc50d15c6931ad1f7cfc05f2e1a0c1b3ccc965\" pid:5209 exited_at:{seconds:1757032696 nanos:114004956}" Sep 5 00:38:17.888316 systemd[1]: Started sshd@10-10.0.0.115:22-10.0.0.1:41592.service - OpenSSH per-connection server daemon (10.0.0.1:41592). 
Sep 5 00:38:17.988639 sshd[5223]: Accepted publickey for core from 10.0.0.1 port 41592 ssh2: RSA SHA256:FTPBIqxhuV7uWZ1wDxThX13wKyZS1sKik/rNT688yZo Sep 5 00:38:17.991046 sshd-session[5223]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:38:17.997071 systemd-logind[1537]: New session 11 of user core. Sep 5 00:38:18.006159 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 5 00:38:18.177096 sshd[5226]: Connection closed by 10.0.0.1 port 41592 Sep 5 00:38:18.177362 sshd-session[5223]: pam_unix(sshd:session): session closed for user core Sep 5 00:38:18.189331 systemd[1]: sshd@10-10.0.0.115:22-10.0.0.1:41592.service: Deactivated successfully. Sep 5 00:38:18.192022 systemd[1]: session-11.scope: Deactivated successfully. Sep 5 00:38:18.192929 systemd-logind[1537]: Session 11 logged out. Waiting for processes to exit. Sep 5 00:38:18.195604 systemd[1]: Started sshd@11-10.0.0.115:22-10.0.0.1:41594.service - OpenSSH per-connection server daemon (10.0.0.1:41594). Sep 5 00:38:18.196930 systemd-logind[1537]: Removed session 11. Sep 5 00:38:18.564737 sshd[5241]: Accepted publickey for core from 10.0.0.1 port 41594 ssh2: RSA SHA256:FTPBIqxhuV7uWZ1wDxThX13wKyZS1sKik/rNT688yZo Sep 5 00:38:18.566785 sshd-session[5241]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:38:18.571787 systemd-logind[1537]: New session 12 of user core. Sep 5 00:38:18.582045 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 5 00:38:18.960357 sshd[5246]: Connection closed by 10.0.0.1 port 41594 Sep 5 00:38:18.960794 sshd-session[5241]: pam_unix(sshd:session): session closed for user core Sep 5 00:38:18.975374 systemd[1]: sshd@11-10.0.0.115:22-10.0.0.1:41594.service: Deactivated successfully. Sep 5 00:38:18.978558 systemd[1]: session-12.scope: Deactivated successfully. Sep 5 00:38:18.980379 systemd-logind[1537]: Session 12 logged out. Waiting for processes to exit. 
Sep 5 00:38:18.984484 systemd[1]: Started sshd@12-10.0.0.115:22-10.0.0.1:41604.service - OpenSSH per-connection server daemon (10.0.0.1:41604). Sep 5 00:38:18.986209 systemd-logind[1537]: Removed session 12. Sep 5 00:38:19.046247 sshd[5261]: Accepted publickey for core from 10.0.0.1 port 41604 ssh2: RSA SHA256:FTPBIqxhuV7uWZ1wDxThX13wKyZS1sKik/rNT688yZo Sep 5 00:38:19.048516 sshd-session[5261]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:38:19.054746 systemd-logind[1537]: New session 13 of user core. Sep 5 00:38:19.061076 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 5 00:38:19.109768 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1773583050.mount: Deactivated successfully. Sep 5 00:38:20.593073 containerd[1560]: time="2025-09-05T00:38:20.592968096Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b70626487221542699d490b4a60452c37ad638e1f04f88750be07e7288493bca\" id:\"98435cfa5e53f073890ffd74046e9cc159146502133ad49868a6a5d61d6a1b58\" pid:5285 exited_at:{seconds:1757032700 nanos:592769216}" Sep 5 00:38:20.753636 sshd[5264]: Connection closed by 10.0.0.1 port 41604 Sep 5 00:38:20.754202 sshd-session[5261]: pam_unix(sshd:session): session closed for user core Sep 5 00:38:20.759203 systemd[1]: sshd@12-10.0.0.115:22-10.0.0.1:41604.service: Deactivated successfully. Sep 5 00:38:20.761312 systemd[1]: session-13.scope: Deactivated successfully. Sep 5 00:38:20.762452 systemd-logind[1537]: Session 13 logged out. Waiting for processes to exit. Sep 5 00:38:20.763942 systemd-logind[1537]: Removed session 13. 
Sep 5 00:38:22.057254 containerd[1560]: time="2025-09-05T00:38:22.057170984Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:38:22.058337 containerd[1560]: time="2025-09-05T00:38:22.058299597Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=66357526" Sep 5 00:38:22.060250 containerd[1560]: time="2025-09-05T00:38:22.060207533Z" level=info msg="ImageCreate event name:\"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:38:22.063229 containerd[1560]: time="2025-09-05T00:38:22.063147441Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:38:22.064252 containerd[1560]: time="2025-09-05T00:38:22.064181688Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"66357372\" in 8.156581731s" Sep 5 00:38:22.064252 containerd[1560]: time="2025-09-05T00:38:22.064237122Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\"" Sep 5 00:38:22.065948 containerd[1560]: time="2025-09-05T00:38:22.065882267Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\"" Sep 5 00:38:22.067480 containerd[1560]: time="2025-09-05T00:38:22.067444719Z" level=info msg="CreateContainer within sandbox 
\"e1698e30c1d597284087293a98b138162e499ceabebfb33d1ba0ba9a95eedb36\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Sep 5 00:38:22.100280 containerd[1560]: time="2025-09-05T00:38:22.100216109Z" level=info msg="Container b7bde6379c43f5837445d558357d06e4ed039fb90fbab79553a29a4d090ac6b3: CDI devices from CRI Config.CDIDevices: []" Sep 5 00:38:22.116933 containerd[1560]: time="2025-09-05T00:38:22.116827845Z" level=info msg="CreateContainer within sandbox \"e1698e30c1d597284087293a98b138162e499ceabebfb33d1ba0ba9a95eedb36\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"b7bde6379c43f5837445d558357d06e4ed039fb90fbab79553a29a4d090ac6b3\"" Sep 5 00:38:22.117850 containerd[1560]: time="2025-09-05T00:38:22.117791591Z" level=info msg="StartContainer for \"b7bde6379c43f5837445d558357d06e4ed039fb90fbab79553a29a4d090ac6b3\"" Sep 5 00:38:22.119115 containerd[1560]: time="2025-09-05T00:38:22.119060155Z" level=info msg="connecting to shim b7bde6379c43f5837445d558357d06e4ed039fb90fbab79553a29a4d090ac6b3" address="unix:///run/containerd/s/8caabba9d4a3c385fb1dbe472a3cf41ea576184d200d53ccca001e5908c47a8b" protocol=ttrpc version=3 Sep 5 00:38:22.148188 systemd[1]: Started cri-containerd-b7bde6379c43f5837445d558357d06e4ed039fb90fbab79553a29a4d090ac6b3.scope - libcontainer container b7bde6379c43f5837445d558357d06e4ed039fb90fbab79553a29a4d090ac6b3. 
Sep 5 00:38:22.325304 containerd[1560]: time="2025-09-05T00:38:22.324924525Z" level=info msg="StartContainer for \"b7bde6379c43f5837445d558357d06e4ed039fb90fbab79553a29a4d090ac6b3\" returns successfully" Sep 5 00:38:23.181136 containerd[1560]: time="2025-09-05T00:38:23.181080068Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b7bde6379c43f5837445d558357d06e4ed039fb90fbab79553a29a4d090ac6b3\" id:\"61799d8d5ac3e935d64e515032eaf71963c8e2acbab719788142f0a90becb9d7\" pid:5361 exit_status:1 exited_at:{seconds:1757032703 nanos:180715027}" Sep 5 00:38:24.310592 containerd[1560]: time="2025-09-05T00:38:24.310530069Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b7bde6379c43f5837445d558357d06e4ed039fb90fbab79553a29a4d090ac6b3\" id:\"e8c3889bf2b2e0019eb6353cc3488c5b6c606335927b2192cebe94e5e699acaa\" pid:5388 exit_status:1 exited_at:{seconds:1757032704 nanos:310195806}" Sep 5 00:38:25.772108 systemd[1]: Started sshd@13-10.0.0.115:22-10.0.0.1:60446.service - OpenSSH per-connection server daemon (10.0.0.1:60446). Sep 5 00:38:25.855664 sshd[5406]: Accepted publickey for core from 10.0.0.1 port 60446 ssh2: RSA SHA256:FTPBIqxhuV7uWZ1wDxThX13wKyZS1sKik/rNT688yZo Sep 5 00:38:25.857683 sshd-session[5406]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:38:25.862237 systemd-logind[1537]: New session 14 of user core. Sep 5 00:38:25.875050 systemd[1]: Started session-14.scope - Session 14 of User core. 
Sep 5 00:38:26.027860 containerd[1560]: time="2025-09-05T00:38:26.027692377Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:38:26.089247 containerd[1560]: time="2025-09-05T00:38:26.089111082Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=14698542" Sep 5 00:38:26.132733 containerd[1560]: time="2025-09-05T00:38:26.132656759Z" level=info msg="ImageCreate event name:\"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:38:26.150787 sshd[5409]: Connection closed by 10.0.0.1 port 60446 Sep 5 00:38:26.151234 sshd-session[5406]: pam_unix(sshd:session): session closed for user core Sep 5 00:38:26.157784 systemd[1]: sshd@13-10.0.0.115:22-10.0.0.1:60446.service: Deactivated successfully. Sep 5 00:38:26.160290 systemd[1]: session-14.scope: Deactivated successfully. Sep 5 00:38:26.161472 systemd-logind[1537]: Session 14 logged out. Waiting for processes to exit. Sep 5 00:38:26.163047 systemd-logind[1537]: Removed session 14. 
Sep 5 00:38:26.345964 containerd[1560]: time="2025-09-05T00:38:26.345739127Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:38:26.346697 containerd[1560]: time="2025-09-05T00:38:26.346639702Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"16191197\" in 4.280689759s" Sep 5 00:38:26.346697 containerd[1560]: time="2025-09-05T00:38:26.346680418Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\"" Sep 5 00:38:26.348009 containerd[1560]: time="2025-09-05T00:38:26.347963019Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 5 00:38:26.349170 containerd[1560]: time="2025-09-05T00:38:26.349128891Z" level=info msg="CreateContainer within sandbox \"c3aa228dda805e61ba305403e7d959d23896bad36774a3df47ac8e4d45373696\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Sep 5 00:38:27.053668 containerd[1560]: time="2025-09-05T00:38:27.053610286Z" level=info msg="Container 3d33b63f6b36aa7c076b00ca6e9c5491e64aafdba9498fda271b014fd1e79d72: CDI devices from CRI Config.CDIDevices: []" Sep 5 00:38:27.471120 containerd[1560]: time="2025-09-05T00:38:27.471046336Z" level=info msg="CreateContainer within sandbox \"c3aa228dda805e61ba305403e7d959d23896bad36774a3df47ac8e4d45373696\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id 
\"3d33b63f6b36aa7c076b00ca6e9c5491e64aafdba9498fda271b014fd1e79d72\"" Sep 5 00:38:27.471855 containerd[1560]: time="2025-09-05T00:38:27.471799767Z" level=info msg="StartContainer for \"3d33b63f6b36aa7c076b00ca6e9c5491e64aafdba9498fda271b014fd1e79d72\"" Sep 5 00:38:27.473443 containerd[1560]: time="2025-09-05T00:38:27.473406568Z" level=info msg="connecting to shim 3d33b63f6b36aa7c076b00ca6e9c5491e64aafdba9498fda271b014fd1e79d72" address="unix:///run/containerd/s/13ab2f0110fbdf668bdacf243137c3b88a07f72fe458db7b80d2d021af889634" protocol=ttrpc version=3 Sep 5 00:38:27.499110 systemd[1]: Started cri-containerd-3d33b63f6b36aa7c076b00ca6e9c5491e64aafdba9498fda271b014fd1e79d72.scope - libcontainer container 3d33b63f6b36aa7c076b00ca6e9c5491e64aafdba9498fda271b014fd1e79d72. Sep 5 00:38:27.719054 containerd[1560]: time="2025-09-05T00:38:27.718996838Z" level=info msg="StartContainer for \"3d33b63f6b36aa7c076b00ca6e9c5491e64aafdba9498fda271b014fd1e79d72\" returns successfully" Sep 5 00:38:27.815532 containerd[1560]: time="2025-09-05T00:38:27.815405828Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b70626487221542699d490b4a60452c37ad638e1f04f88750be07e7288493bca\" id:\"d44cb4ac1dc28b925d0feb634c29b15c23223906757be2a9057d97ac1d69bc2a\" pid:5466 exited_at:{seconds:1757032707 nanos:815042817}" Sep 5 00:38:28.231162 kubelet[2717]: I0905 00:38:28.230943 2717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-sr49w" podStartSLOduration=25.878762145 podStartE2EDuration="50.230926029s" podCreationTimestamp="2025-09-05 00:37:38 +0000 UTC" firstStartedPulling="2025-09-05 00:38:01.99542826 +0000 UTC m=+43.567751173" lastFinishedPulling="2025-09-05 00:38:26.347592144 +0000 UTC m=+67.919915057" observedRunningTime="2025-09-05 00:38:28.219282095 +0000 UTC m=+69.791605008" watchObservedRunningTime="2025-09-05 00:38:28.230926029 +0000 UTC m=+69.803248942" Sep 5 00:38:28.231831 kubelet[2717]: I0905 00:38:28.231201 2717 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-7988f88666-vmwjg" podStartSLOduration=33.252206345 podStartE2EDuration="51.231196637s" podCreationTimestamp="2025-09-05 00:37:37 +0000 UTC" firstStartedPulling="2025-09-05 00:38:04.086719734 +0000 UTC m=+45.659042647" lastFinishedPulling="2025-09-05 00:38:22.065710026 +0000 UTC m=+63.638032939" observedRunningTime="2025-09-05 00:38:23.282604423 +0000 UTC m=+64.854927336" watchObservedRunningTime="2025-09-05 00:38:28.231196637 +0000 UTC m=+69.803519550" Sep 5 00:38:28.451078 containerd[1560]: time="2025-09-05T00:38:28.451017093Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:38:28.500731 containerd[1560]: time="2025-09-05T00:38:28.500378293Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Sep 5 00:38:28.502911 containerd[1560]: time="2025-09-05T00:38:28.502844027Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 2.154846585s" Sep 5 00:38:28.502911 containerd[1560]: time="2025-09-05T00:38:28.502882700Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 5 00:38:28.508183 containerd[1560]: time="2025-09-05T00:38:28.508125089Z" level=info msg="CreateContainer within sandbox \"1c1758dfe9df2bf2c835a86f5951ea0b918a8265195d7d1a77771ff512ce6a80\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 5 00:38:28.656529 kubelet[2717]: I0905 00:38:28.656485 2717 
csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Sep 5 00:38:28.656529 kubelet[2717]: I0905 00:38:28.656533 2717 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Sep 5 00:38:28.850572 containerd[1560]: time="2025-09-05T00:38:28.850428953Z" level=info msg="Container c9d464e1a580928087feb4059621eb0fabbab96794bb2a7db347382024d6b3e0: CDI devices from CRI Config.CDIDevices: []" Sep 5 00:38:29.299340 containerd[1560]: time="2025-09-05T00:38:29.299256231Z" level=info msg="CreateContainer within sandbox \"1c1758dfe9df2bf2c835a86f5951ea0b918a8265195d7d1a77771ff512ce6a80\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"c9d464e1a580928087feb4059621eb0fabbab96794bb2a7db347382024d6b3e0\"" Sep 5 00:38:29.299879 containerd[1560]: time="2025-09-05T00:38:29.299838674Z" level=info msg="StartContainer for \"c9d464e1a580928087feb4059621eb0fabbab96794bb2a7db347382024d6b3e0\"" Sep 5 00:38:29.301218 containerd[1560]: time="2025-09-05T00:38:29.301193727Z" level=info msg="connecting to shim c9d464e1a580928087feb4059621eb0fabbab96794bb2a7db347382024d6b3e0" address="unix:///run/containerd/s/a1903c10001ebcdc6747bcf6ecf32a11afde52b52b21d8bbf6d38570c9c0c9e3" protocol=ttrpc version=3 Sep 5 00:38:29.324087 systemd[1]: Started cri-containerd-c9d464e1a580928087feb4059621eb0fabbab96794bb2a7db347382024d6b3e0.scope - libcontainer container c9d464e1a580928087feb4059621eb0fabbab96794bb2a7db347382024d6b3e0. Sep 5 00:38:29.744075 containerd[1560]: time="2025-09-05T00:38:29.744033248Z" level=info msg="StartContainer for \"c9d464e1a580928087feb4059621eb0fabbab96794bb2a7db347382024d6b3e0\" returns successfully" Sep 5 00:38:31.169986 systemd[1]: Started sshd@14-10.0.0.115:22-10.0.0.1:42414.service - OpenSSH per-connection server daemon (10.0.0.1:42414). 
Sep 5 00:38:31.234178 sshd[5520]: Accepted publickey for core from 10.0.0.1 port 42414 ssh2: RSA SHA256:FTPBIqxhuV7uWZ1wDxThX13wKyZS1sKik/rNT688yZo Sep 5 00:38:31.236754 sshd-session[5520]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:38:31.241188 systemd-logind[1537]: New session 15 of user core. Sep 5 00:38:31.251062 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 5 00:38:31.470001 sshd[5523]: Connection closed by 10.0.0.1 port 42414 Sep 5 00:38:31.470345 sshd-session[5520]: pam_unix(sshd:session): session closed for user core Sep 5 00:38:31.475344 systemd[1]: sshd@14-10.0.0.115:22-10.0.0.1:42414.service: Deactivated successfully. Sep 5 00:38:31.477516 systemd[1]: session-15.scope: Deactivated successfully. Sep 5 00:38:31.478585 systemd-logind[1537]: Session 15 logged out. Waiting for processes to exit. Sep 5 00:38:31.479820 systemd-logind[1537]: Removed session 15. Sep 5 00:38:35.288566 kubelet[2717]: I0905 00:38:35.288467 2717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-55cb865578-lg8kr" podStartSLOduration=37.747795318 podStartE2EDuration="1m0.288434227s" podCreationTimestamp="2025-09-05 00:37:35 +0000 UTC" firstStartedPulling="2025-09-05 00:38:05.962933474 +0000 UTC m=+47.535256377" lastFinishedPulling="2025-09-05 00:38:28.503572373 +0000 UTC m=+70.075895286" observedRunningTime="2025-09-05 00:38:30.384444152 +0000 UTC m=+71.956767085" watchObservedRunningTime="2025-09-05 00:38:35.288434227 +0000 UTC m=+76.860757140" Sep 5 00:38:36.488183 systemd[1]: Started sshd@15-10.0.0.115:22-10.0.0.1:42418.service - OpenSSH per-connection server daemon (10.0.0.1:42418). 
Sep 5 00:38:36.555763 sshd[5543]: Accepted publickey for core from 10.0.0.1 port 42418 ssh2: RSA SHA256:FTPBIqxhuV7uWZ1wDxThX13wKyZS1sKik/rNT688yZo
Sep 5 00:38:36.558042 sshd-session[5543]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 00:38:36.563252 systemd-logind[1537]: New session 16 of user core.
Sep 5 00:38:36.570043 systemd[1]: Started session-16.scope - Session 16 of User core.
Sep 5 00:38:36.723174 sshd[5546]: Connection closed by 10.0.0.1 port 42418
Sep 5 00:38:36.723677 sshd-session[5543]: pam_unix(sshd:session): session closed for user core
Sep 5 00:38:36.729664 systemd[1]: sshd@15-10.0.0.115:22-10.0.0.1:42418.service: Deactivated successfully.
Sep 5 00:38:36.732289 systemd[1]: session-16.scope: Deactivated successfully.
Sep 5 00:38:36.733768 systemd-logind[1537]: Session 16 logged out. Waiting for processes to exit.
Sep 5 00:38:36.735939 systemd-logind[1537]: Removed session 16.
Sep 5 00:38:41.736135 systemd[1]: Started sshd@16-10.0.0.115:22-10.0.0.1:37418.service - OpenSSH per-connection server daemon (10.0.0.1:37418).
Sep 5 00:38:41.826084 sshd[5560]: Accepted publickey for core from 10.0.0.1 port 37418 ssh2: RSA SHA256:FTPBIqxhuV7uWZ1wDxThX13wKyZS1sKik/rNT688yZo
Sep 5 00:38:41.827947 sshd-session[5560]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 00:38:41.835197 systemd-logind[1537]: New session 17 of user core.
Sep 5 00:38:41.845016 systemd[1]: Started session-17.scope - Session 17 of User core.
Sep 5 00:38:42.161795 sshd[5564]: Connection closed by 10.0.0.1 port 37418
Sep 5 00:38:42.162072 sshd-session[5560]: pam_unix(sshd:session): session closed for user core
Sep 5 00:38:42.167577 systemd[1]: sshd@16-10.0.0.115:22-10.0.0.1:37418.service: Deactivated successfully.
Sep 5 00:38:42.170179 systemd[1]: session-17.scope: Deactivated successfully.
Sep 5 00:38:42.170182 systemd-logind[1537]: Session 17 logged out. Waiting for processes to exit.
Sep 5 00:38:42.172599 systemd-logind[1537]: Removed session 17.
Sep 5 00:38:42.664563 containerd[1560]: time="2025-09-05T00:38:42.664367347Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c364127f88db9f9a27639f3619c8b7a054c608174db95306f5480e5e5d4c876b\" id:\"7b9554a1578ff11dae6dd215f110eb30cad157f28e36bc8acef593d24ccfe033\" pid:5593 exited_at:{seconds:1757032722 nanos:663259157}"
Sep 5 00:38:45.537128 kubelet[2717]: E0905 00:38:45.537067 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:38:47.175405 systemd[1]: Started sshd@17-10.0.0.115:22-10.0.0.1:37426.service - OpenSSH per-connection server daemon (10.0.0.1:37426).
Sep 5 00:38:47.227714 sshd[5609]: Accepted publickey for core from 10.0.0.1 port 37426 ssh2: RSA SHA256:FTPBIqxhuV7uWZ1wDxThX13wKyZS1sKik/rNT688yZo
Sep 5 00:38:47.229332 sshd-session[5609]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 00:38:47.233981 systemd-logind[1537]: New session 18 of user core.
Sep 5 00:38:47.250198 systemd[1]: Started session-18.scope - Session 18 of User core.
Sep 5 00:38:47.436875 sshd[5612]: Connection closed by 10.0.0.1 port 37426
Sep 5 00:38:47.437242 sshd-session[5609]: pam_unix(sshd:session): session closed for user core
Sep 5 00:38:47.443258 systemd[1]: sshd@17-10.0.0.115:22-10.0.0.1:37426.service: Deactivated successfully.
Sep 5 00:38:47.446321 systemd[1]: session-18.scope: Deactivated successfully.
Sep 5 00:38:47.448827 systemd-logind[1537]: Session 18 logged out. Waiting for processes to exit.
Sep 5 00:38:47.450604 systemd-logind[1537]: Removed session 18.
Sep 5 00:38:50.538922 kubelet[2717]: E0905 00:38:50.537766 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:38:50.665486 containerd[1560]: time="2025-09-05T00:38:50.665406317Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b70626487221542699d490b4a60452c37ad638e1f04f88750be07e7288493bca\" id:\"c1b1b53e0ed74fe1ac4f4ec7ff7fcfdfb81b4cf286cff157b4410cf9fc9d3c09\" pid:5637 exited_at:{seconds:1757032730 nanos:664691256}"
Sep 5 00:38:50.710593 containerd[1560]: time="2025-09-05T00:38:50.710530701Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b7bde6379c43f5837445d558357d06e4ed039fb90fbab79553a29a4d090ac6b3\" id:\"9ea510791d010da7e239e3ce6afef5eb9c4981965d34f1c9f76ea5e6ab9787fe\" pid:5655 exited_at:{seconds:1757032730 nanos:709986525}"
Sep 5 00:38:52.459169 systemd[1]: Started sshd@18-10.0.0.115:22-10.0.0.1:52738.service - OpenSSH per-connection server daemon (10.0.0.1:52738).
Sep 5 00:38:52.537408 kubelet[2717]: E0905 00:38:52.537352 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:38:52.542847 sshd[5674]: Accepted publickey for core from 10.0.0.1 port 52738 ssh2: RSA SHA256:FTPBIqxhuV7uWZ1wDxThX13wKyZS1sKik/rNT688yZo
Sep 5 00:38:52.546223 sshd-session[5674]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 00:38:52.552245 systemd-logind[1537]: New session 19 of user core.
Sep 5 00:38:52.565324 systemd[1]: Started session-19.scope - Session 19 of User core.
Sep 5 00:38:52.949088 sshd[5677]: Connection closed by 10.0.0.1 port 52738
Sep 5 00:38:52.950455 sshd-session[5674]: pam_unix(sshd:session): session closed for user core
Sep 5 00:38:52.955807 systemd[1]: sshd@18-10.0.0.115:22-10.0.0.1:52738.service: Deactivated successfully.
Sep 5 00:38:52.958254 systemd[1]: session-19.scope: Deactivated successfully.
Sep 5 00:38:52.960415 systemd-logind[1537]: Session 19 logged out. Waiting for processes to exit.
Sep 5 00:38:52.962519 systemd-logind[1537]: Removed session 19.
Sep 5 00:38:57.536910 kubelet[2717]: E0905 00:38:57.536837 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:38:57.972502 systemd[1]: Started sshd@19-10.0.0.115:22-10.0.0.1:52744.service - OpenSSH per-connection server daemon (10.0.0.1:52744).
Sep 5 00:38:58.059196 sshd[5692]: Accepted publickey for core from 10.0.0.1 port 52744 ssh2: RSA SHA256:FTPBIqxhuV7uWZ1wDxThX13wKyZS1sKik/rNT688yZo
Sep 5 00:38:58.061015 sshd-session[5692]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 00:38:58.069371 systemd-logind[1537]: New session 20 of user core.
Sep 5 00:38:58.077396 systemd[1]: Started session-20.scope - Session 20 of User core.
Sep 5 00:38:58.326951 sshd[5696]: Connection closed by 10.0.0.1 port 52744
Sep 5 00:38:58.327167 sshd-session[5692]: pam_unix(sshd:session): session closed for user core
Sep 5 00:38:58.334540 systemd-logind[1537]: Session 20 logged out. Waiting for processes to exit.
Sep 5 00:38:58.334743 systemd[1]: sshd@19-10.0.0.115:22-10.0.0.1:52744.service: Deactivated successfully.
Sep 5 00:38:58.338206 systemd[1]: session-20.scope: Deactivated successfully.
Sep 5 00:38:58.340424 systemd-logind[1537]: Removed session 20.
Sep 5 00:39:03.339962 systemd[1]: Started sshd@20-10.0.0.115:22-10.0.0.1:49838.service - OpenSSH per-connection server daemon (10.0.0.1:49838).
Sep 5 00:39:03.406630 sshd[5709]: Accepted publickey for core from 10.0.0.1 port 49838 ssh2: RSA SHA256:FTPBIqxhuV7uWZ1wDxThX13wKyZS1sKik/rNT688yZo
Sep 5 00:39:03.408559 sshd-session[5709]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 00:39:03.414468 systemd-logind[1537]: New session 21 of user core.
Sep 5 00:39:03.418034 systemd[1]: Started session-21.scope - Session 21 of User core.
Sep 5 00:39:03.553300 sshd[5712]: Connection closed by 10.0.0.1 port 49838
Sep 5 00:39:03.553669 sshd-session[5709]: pam_unix(sshd:session): session closed for user core
Sep 5 00:39:03.562870 systemd[1]: sshd@20-10.0.0.115:22-10.0.0.1:49838.service: Deactivated successfully.
Sep 5 00:39:03.566048 systemd[1]: session-21.scope: Deactivated successfully.
Sep 5 00:39:03.567018 systemd-logind[1537]: Session 21 logged out. Waiting for processes to exit.
Sep 5 00:39:03.571002 systemd-logind[1537]: Removed session 21.
Sep 5 00:39:03.572824 systemd[1]: Started sshd@21-10.0.0.115:22-10.0.0.1:49844.service - OpenSSH per-connection server daemon (10.0.0.1:49844).
Sep 5 00:39:03.633086 sshd[5725]: Accepted publickey for core from 10.0.0.1 port 49844 ssh2: RSA SHA256:FTPBIqxhuV7uWZ1wDxThX13wKyZS1sKik/rNT688yZo
Sep 5 00:39:03.634735 sshd-session[5725]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 00:39:03.640608 systemd-logind[1537]: New session 22 of user core.
Sep 5 00:39:03.649057 systemd[1]: Started session-22.scope - Session 22 of User core.
Sep 5 00:39:04.844551 sshd[5728]: Connection closed by 10.0.0.1 port 49844
Sep 5 00:39:04.845091 sshd-session[5725]: pam_unix(sshd:session): session closed for user core
Sep 5 00:39:04.857955 systemd[1]: sshd@21-10.0.0.115:22-10.0.0.1:49844.service: Deactivated successfully.
Sep 5 00:39:04.860193 systemd[1]: session-22.scope: Deactivated successfully.
Sep 5 00:39:04.861217 systemd-logind[1537]: Session 22 logged out. Waiting for processes to exit.
Sep 5 00:39:04.864665 systemd[1]: Started sshd@22-10.0.0.115:22-10.0.0.1:49852.service - OpenSSH per-connection server daemon (10.0.0.1:49852).
Sep 5 00:39:04.865992 systemd-logind[1537]: Removed session 22.
Sep 5 00:39:04.961071 sshd[5739]: Accepted publickey for core from 10.0.0.1 port 49852 ssh2: RSA SHA256:FTPBIqxhuV7uWZ1wDxThX13wKyZS1sKik/rNT688yZo
Sep 5 00:39:04.962994 sshd-session[5739]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 00:39:04.967788 systemd-logind[1537]: New session 23 of user core.
Sep 5 00:39:04.978086 systemd[1]: Started session-23.scope - Session 23 of User core.
Sep 5 00:39:07.497131 sshd[5742]: Connection closed by 10.0.0.1 port 49852
Sep 5 00:39:07.497778 sshd-session[5739]: pam_unix(sshd:session): session closed for user core
Sep 5 00:39:07.510521 systemd[1]: sshd@22-10.0.0.115:22-10.0.0.1:49852.service: Deactivated successfully.
Sep 5 00:39:07.513620 systemd[1]: session-23.scope: Deactivated successfully.
Sep 5 00:39:07.514522 systemd[1]: session-23.scope: Consumed 701ms CPU time, 73.9M memory peak.
Sep 5 00:39:07.517476 systemd-logind[1537]: Session 23 logged out. Waiting for processes to exit.
Sep 5 00:39:07.523210 systemd[1]: Started sshd@23-10.0.0.115:22-10.0.0.1:49854.service - OpenSSH per-connection server daemon (10.0.0.1:49854).
Sep 5 00:39:07.525492 systemd-logind[1537]: Removed session 23.
Sep 5 00:39:07.580730 sshd[5782]: Accepted publickey for core from 10.0.0.1 port 49854 ssh2: RSA SHA256:FTPBIqxhuV7uWZ1wDxThX13wKyZS1sKik/rNT688yZo
Sep 5 00:39:07.582721 sshd-session[5782]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 00:39:07.589158 systemd-logind[1537]: New session 24 of user core.
Sep 5 00:39:07.605201 systemd[1]: Started session-24.scope - Session 24 of User core.
Sep 5 00:39:08.363101 sshd[5785]: Connection closed by 10.0.0.1 port 49854
Sep 5 00:39:08.364305 sshd-session[5782]: pam_unix(sshd:session): session closed for user core
Sep 5 00:39:08.376422 systemd[1]: sshd@23-10.0.0.115:22-10.0.0.1:49854.service: Deactivated successfully.
Sep 5 00:39:08.380504 systemd[1]: session-24.scope: Deactivated successfully.
Sep 5 00:39:08.383322 systemd-logind[1537]: Session 24 logged out. Waiting for processes to exit.
Sep 5 00:39:08.385822 systemd[1]: Started sshd@24-10.0.0.115:22-10.0.0.1:49856.service - OpenSSH per-connection server daemon (10.0.0.1:49856).
Sep 5 00:39:08.387782 systemd-logind[1537]: Removed session 24.
Sep 5 00:39:08.473328 sshd[5797]: Accepted publickey for core from 10.0.0.1 port 49856 ssh2: RSA SHA256:FTPBIqxhuV7uWZ1wDxThX13wKyZS1sKik/rNT688yZo
Sep 5 00:39:08.475735 sshd-session[5797]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 00:39:08.482984 systemd-logind[1537]: New session 25 of user core.
Sep 5 00:39:08.487091 systemd[1]: Started session-25.scope - Session 25 of User core.
Sep 5 00:39:08.651510 sshd[5800]: Connection closed by 10.0.0.1 port 49856
Sep 5 00:39:08.652466 sshd-session[5797]: pam_unix(sshd:session): session closed for user core
Sep 5 00:39:08.657387 systemd[1]: sshd@24-10.0.0.115:22-10.0.0.1:49856.service: Deactivated successfully.
Sep 5 00:39:08.660684 systemd[1]: session-25.scope: Deactivated successfully.
Sep 5 00:39:08.664373 systemd-logind[1537]: Session 25 logged out. Waiting for processes to exit.
Sep 5 00:39:08.666256 systemd-logind[1537]: Removed session 25.
Sep 5 00:39:12.690829 containerd[1560]: time="2025-09-05T00:39:12.690774306Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c364127f88db9f9a27639f3619c8b7a054c608174db95306f5480e5e5d4c876b\" id:\"e0627af449d8fbf02b993a4d1225e9da38608743d7121ffbd6e4ced651f19651\" pid:5823 exited_at:{seconds:1757032752 nanos:690222148}"
Sep 5 00:39:13.673753 systemd[1]: Started sshd@25-10.0.0.115:22-10.0.0.1:38260.service - OpenSSH per-connection server daemon (10.0.0.1:38260).
Sep 5 00:39:13.727382 sshd[5838]: Accepted publickey for core from 10.0.0.1 port 38260 ssh2: RSA SHA256:FTPBIqxhuV7uWZ1wDxThX13wKyZS1sKik/rNT688yZo
Sep 5 00:39:13.729263 sshd-session[5838]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 00:39:13.733560 systemd-logind[1537]: New session 26 of user core.
Sep 5 00:39:13.748119 systemd[1]: Started session-26.scope - Session 26 of User core.
Sep 5 00:39:13.887039 sshd[5841]: Connection closed by 10.0.0.1 port 38260
Sep 5 00:39:13.888305 sshd-session[5838]: pam_unix(sshd:session): session closed for user core
Sep 5 00:39:13.893669 systemd[1]: sshd@25-10.0.0.115:22-10.0.0.1:38260.service: Deactivated successfully.
Sep 5 00:39:13.896244 systemd[1]: session-26.scope: Deactivated successfully.
Sep 5 00:39:13.897493 systemd-logind[1537]: Session 26 logged out. Waiting for processes to exit.
Sep 5 00:39:13.899139 systemd-logind[1537]: Removed session 26.
Sep 5 00:39:14.537576 kubelet[2717]: E0905 00:39:14.537201 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:39:18.902398 systemd[1]: Started sshd@26-10.0.0.115:22-10.0.0.1:38270.service - OpenSSH per-connection server daemon (10.0.0.1:38270).
Sep 5 00:39:18.960387 sshd[5859]: Accepted publickey for core from 10.0.0.1 port 38270 ssh2: RSA SHA256:FTPBIqxhuV7uWZ1wDxThX13wKyZS1sKik/rNT688yZo
Sep 5 00:39:18.962284 sshd-session[5859]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 00:39:18.967232 systemd-logind[1537]: New session 27 of user core.
Sep 5 00:39:18.976154 systemd[1]: Started session-27.scope - Session 27 of User core.
Sep 5 00:39:19.124168 sshd[5862]: Connection closed by 10.0.0.1 port 38270
Sep 5 00:39:19.125207 sshd-session[5859]: pam_unix(sshd:session): session closed for user core
Sep 5 00:39:19.130933 systemd[1]: sshd@26-10.0.0.115:22-10.0.0.1:38270.service: Deactivated successfully.
Sep 5 00:39:19.133863 systemd[1]: session-27.scope: Deactivated successfully.
Sep 5 00:39:19.135625 systemd-logind[1537]: Session 27 logged out. Waiting for processes to exit.
Sep 5 00:39:19.138507 systemd-logind[1537]: Removed session 27.
Sep 5 00:39:20.603504 containerd[1560]: time="2025-09-05T00:39:20.603309928Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b70626487221542699d490b4a60452c37ad638e1f04f88750be07e7288493bca\" id:\"f731613b3edace55b1f49cf86a740b1a08f90adab07e4ff66392aed0607048b8\" pid:5886 exited_at:{seconds:1757032760 nanos:602830959}"
Sep 5 00:39:20.768936 containerd[1560]: time="2025-09-05T00:39:20.768860495Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b7bde6379c43f5837445d558357d06e4ed039fb90fbab79553a29a4d090ac6b3\" id:\"25e9e983a627352183c789d58246aac44bebad294e3d30dc7e502486445ad76a\" pid:5904 exited_at:{seconds:1757032760 nanos:768453564}"
Sep 5 00:39:21.537691 kubelet[2717]: E0905 00:39:21.537620 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:39:22.349264 containerd[1560]: time="2025-09-05T00:39:22.349176394Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b7bde6379c43f5837445d558357d06e4ed039fb90fbab79553a29a4d090ac6b3\" id:\"275d8e60d0c3852419dfbc744a60dcc871f362fe8401483955842647731d1957\" pid:5938 exited_at:{seconds:1757032762 nanos:348564339}"
Sep 5 00:39:23.537776 kubelet[2717]: E0905 00:39:23.537716 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:39:24.139155 systemd[1]: Started sshd@27-10.0.0.115:22-10.0.0.1:52984.service - OpenSSH per-connection server daemon (10.0.0.1:52984).
Sep 5 00:39:24.209365 sshd[5951]: Accepted publickey for core from 10.0.0.1 port 52984 ssh2: RSA SHA256:FTPBIqxhuV7uWZ1wDxThX13wKyZS1sKik/rNT688yZo
Sep 5 00:39:24.211191 sshd-session[5951]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 00:39:24.216640 systemd-logind[1537]: New session 28 of user core.
Sep 5 00:39:24.222095 systemd[1]: Started session-28.scope - Session 28 of User core.
Sep 5 00:39:24.347230 sshd[5956]: Connection closed by 10.0.0.1 port 52984
Sep 5 00:39:24.347625 sshd-session[5951]: pam_unix(sshd:session): session closed for user core
Sep 5 00:39:24.352071 systemd[1]: sshd@27-10.0.0.115:22-10.0.0.1:52984.service: Deactivated successfully.
Sep 5 00:39:24.354769 systemd[1]: session-28.scope: Deactivated successfully.
Sep 5 00:39:24.355697 systemd-logind[1537]: Session 28 logged out. Waiting for processes to exit.
Sep 5 00:39:24.358685 systemd-logind[1537]: Removed session 28.
Sep 5 00:39:27.818830 containerd[1560]: time="2025-09-05T00:39:27.818784654Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b70626487221542699d490b4a60452c37ad638e1f04f88750be07e7288493bca\" id:\"0c8922eba34bafef6051db848d3af54f4d881bb82191bc0c93b146c8b4d0876a\" pid:5986 exited_at:{seconds:1757032767 nanos:818515247}"
Sep 5 00:39:29.363962 systemd[1]: Started sshd@28-10.0.0.115:22-10.0.0.1:52986.service - OpenSSH per-connection server daemon (10.0.0.1:52986).
Sep 5 00:39:29.417537 sshd[5997]: Accepted publickey for core from 10.0.0.1 port 52986 ssh2: RSA SHA256:FTPBIqxhuV7uWZ1wDxThX13wKyZS1sKik/rNT688yZo
Sep 5 00:39:29.419050 sshd-session[5997]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 00:39:29.424164 systemd-logind[1537]: New session 29 of user core.
Sep 5 00:39:29.434198 systemd[1]: Started session-29.scope - Session 29 of User core.
Sep 5 00:39:29.551990 sshd[6000]: Connection closed by 10.0.0.1 port 52986
Sep 5 00:39:29.552341 sshd-session[5997]: pam_unix(sshd:session): session closed for user core
Sep 5 00:39:29.556696 systemd[1]: sshd@28-10.0.0.115:22-10.0.0.1:52986.service: Deactivated successfully.
Sep 5 00:39:29.558852 systemd[1]: session-29.scope: Deactivated successfully.
Sep 5 00:39:29.559734 systemd-logind[1537]: Session 29 logged out. Waiting for processes to exit.
Sep 5 00:39:29.561457 systemd-logind[1537]: Removed session 29.