Jul 9 13:00:54.857585 kernel: Linux version 6.12.36-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Wed Jul 9 08:38:39 -00 2025
Jul 9 13:00:54.857608 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f85d3be94c634d7d72fbcd0e670073ce56ae2e0cc763f83b329300b7cea5203d
Jul 9 13:00:54.857617 kernel: BIOS-provided physical RAM map:
Jul 9 13:00:54.857623 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jul 9 13:00:54.857630 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jul 9 13:00:54.857636 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jul 9 13:00:54.857644 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Jul 9 13:00:54.857653 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Jul 9 13:00:54.857663 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jul 9 13:00:54.857670 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Jul 9 13:00:54.857676 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jul 9 13:00:54.857683 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jul 9 13:00:54.857689 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jul 9 13:00:54.857696 kernel: NX (Execute Disable) protection: active
Jul 9 13:00:54.857706 kernel: APIC: Static calls initialized
Jul 9 13:00:54.857713 kernel: SMBIOS 2.8 present.
Jul 9 13:00:54.857724 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Jul 9 13:00:54.857731 kernel: DMI: Memory slots populated: 1/1
Jul 9 13:00:54.857738 kernel: Hypervisor detected: KVM
Jul 9 13:00:54.857745 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jul 9 13:00:54.857752 kernel: kvm-clock: using sched offset of 4490634761 cycles
Jul 9 13:00:54.857760 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jul 9 13:00:54.857767 kernel: tsc: Detected 2794.746 MHz processor
Jul 9 13:00:54.857777 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jul 9 13:00:54.857784 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jul 9 13:00:54.857792 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Jul 9 13:00:54.857799 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jul 9 13:00:54.857806 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jul 9 13:00:54.857813 kernel: Using GB pages for direct mapping
Jul 9 13:00:54.857821 kernel: ACPI: Early table checksum verification disabled
Jul 9 13:00:54.857828 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Jul 9 13:00:54.857835 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 9 13:00:54.857845 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jul 9 13:00:54.857852 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 9 13:00:54.857860 kernel: ACPI: FACS 0x000000009CFE0000 000040
Jul 9 13:00:54.857867 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 9 13:00:54.857874 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 9 13:00:54.857881 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 9 13:00:54.857888 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 9 13:00:54.857896 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Jul 9 13:00:54.857909 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Jul 9 13:00:54.857916 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Jul 9 13:00:54.857923 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Jul 9 13:00:54.857931 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Jul 9 13:00:54.857938 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Jul 9 13:00:54.857946 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Jul 9 13:00:54.857955 kernel: No NUMA configuration found
Jul 9 13:00:54.857962 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Jul 9 13:00:54.857970 kernel: NODE_DATA(0) allocated [mem 0x9cfd4dc0-0x9cfdbfff]
Jul 9 13:00:54.857977 kernel: Zone ranges:
Jul 9 13:00:54.857985 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jul 9 13:00:54.857992 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Jul 9 13:00:54.857999 kernel: Normal empty
Jul 9 13:00:54.858007 kernel: Device empty
Jul 9 13:00:54.858014 kernel: Movable zone start for each node
Jul 9 13:00:54.858024 kernel: Early memory node ranges
Jul 9 13:00:54.858031 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jul 9 13:00:54.858038 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Jul 9 13:00:54.858046 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Jul 9 13:00:54.858053 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jul 9 13:00:54.858060 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jul 9 13:00:54.858070 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Jul 9 13:00:54.858078 kernel: ACPI: PM-Timer IO Port: 0x608
Jul 9 13:00:54.858087 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jul 9 13:00:54.858095 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jul 9 13:00:54.858104 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jul 9 13:00:54.858112 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jul 9 13:00:54.858121 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jul 9 13:00:54.858129 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jul 9 13:00:54.858136 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jul 9 13:00:54.858143 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jul 9 13:00:54.858151 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jul 9 13:00:54.858158 kernel: TSC deadline timer available
Jul 9 13:00:54.858165 kernel: CPU topo: Max. logical packages: 1
Jul 9 13:00:54.858175 kernel: CPU topo: Max. logical dies: 1
Jul 9 13:00:54.858183 kernel: CPU topo: Max. dies per package: 1
Jul 9 13:00:54.858190 kernel: CPU topo: Max. threads per core: 1
Jul 9 13:00:54.858197 kernel: CPU topo: Num. cores per package: 4
Jul 9 13:00:54.858204 kernel: CPU topo: Num. threads per package: 4
Jul 9 13:00:54.858212 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Jul 9 13:00:54.858219 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jul 9 13:00:54.858226 kernel: kvm-guest: KVM setup pv remote TLB flush
Jul 9 13:00:54.858234 kernel: kvm-guest: setup PV sched yield
Jul 9 13:00:54.858243 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Jul 9 13:00:54.858251 kernel: Booting paravirtualized kernel on KVM
Jul 9 13:00:54.858258 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jul 9 13:00:54.858266 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Jul 9 13:00:54.858273 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Jul 9 13:00:54.858281 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Jul 9 13:00:54.858288 kernel: pcpu-alloc: [0] 0 1 2 3
Jul 9 13:00:54.858295 kernel: kvm-guest: PV spinlocks enabled
Jul 9 13:00:54.858303 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jul 9 13:00:54.858313 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f85d3be94c634d7d72fbcd0e670073ce56ae2e0cc763f83b329300b7cea5203d
Jul 9 13:00:54.858321 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 9 13:00:54.858329 kernel: random: crng init done
Jul 9 13:00:54.858336 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 9 13:00:54.858344 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 9 13:00:54.858364 kernel: Fallback order for Node 0: 0
Jul 9 13:00:54.858384 kernel: Built 1 zonelists, mobility grouping on. Total pages: 642938
Jul 9 13:00:54.858391 kernel: Policy zone: DMA32
Jul 9 13:00:54.858398 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 9 13:00:54.858409 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jul 9 13:00:54.858417 kernel: ftrace: allocating 40097 entries in 157 pages
Jul 9 13:00:54.858424 kernel: ftrace: allocated 157 pages with 5 groups
Jul 9 13:00:54.858431 kernel: Dynamic Preempt: voluntary
Jul 9 13:00:54.858438 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 9 13:00:54.858447 kernel: rcu: RCU event tracing is enabled.
Jul 9 13:00:54.858454 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jul 9 13:00:54.858462 kernel: Trampoline variant of Tasks RCU enabled.
Jul 9 13:00:54.858472 kernel: Rude variant of Tasks RCU enabled.
Jul 9 13:00:54.858482 kernel: Tracing variant of Tasks RCU enabled.
Jul 9 13:00:54.858490 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 9 13:00:54.858497 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jul 9 13:00:54.858505 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 9 13:00:54.858512 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 9 13:00:54.858520 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 9 13:00:54.858527 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jul 9 13:00:54.858535 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 9 13:00:54.858552 kernel: Console: colour VGA+ 80x25
Jul 9 13:00:54.858560 kernel: printk: legacy console [ttyS0] enabled
Jul 9 13:00:54.858575 kernel: ACPI: Core revision 20240827
Jul 9 13:00:54.858584 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jul 9 13:00:54.858594 kernel: APIC: Switch to symmetric I/O mode setup
Jul 9 13:00:54.858602 kernel: x2apic enabled
Jul 9 13:00:54.858609 kernel: APIC: Switched APIC routing to: physical x2apic
Jul 9 13:00:54.858619 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jul 9 13:00:54.858627 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jul 9 13:00:54.858637 kernel: kvm-guest: setup PV IPIs
Jul 9 13:00:54.858645 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jul 9 13:00:54.858653 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848ddd4e75, max_idle_ns: 440795346320 ns
Jul 9 13:00:54.858661 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794746)
Jul 9 13:00:54.858668 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jul 9 13:00:54.858676 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jul 9 13:00:54.858684 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jul 9 13:00:54.858691 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jul 9 13:00:54.858701 kernel: Spectre V2 : Mitigation: Retpolines
Jul 9 13:00:54.858709 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jul 9 13:00:54.858717 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Jul 9 13:00:54.858724 kernel: RETBleed: Mitigation: untrained return thunk
Jul 9 13:00:54.858732 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jul 9 13:00:54.858740 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jul 9 13:00:54.858748 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jul 9 13:00:54.858756 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jul 9 13:00:54.858764 kernel: x86/bugs: return thunk changed
Jul 9 13:00:54.858773 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jul 9 13:00:54.858781 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jul 9 13:00:54.858789 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jul 9 13:00:54.858797 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jul 9 13:00:54.858804 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jul 9 13:00:54.858812 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jul 9 13:00:54.858820 kernel: Freeing SMP alternatives memory: 32K
Jul 9 13:00:54.858827 kernel: pid_max: default: 32768 minimum: 301
Jul 9 13:00:54.858837 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jul 9 13:00:54.858845 kernel: landlock: Up and running.
Jul 9 13:00:54.858852 kernel: SELinux: Initializing.
Jul 9 13:00:54.858860 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 9 13:00:54.858870 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 9 13:00:54.858878 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Jul 9 13:00:54.858885 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jul 9 13:00:54.858893 kernel: ... version: 0
Jul 9 13:00:54.858901 kernel: ... bit width: 48
Jul 9 13:00:54.858911 kernel: ... generic registers: 6
Jul 9 13:00:54.858918 kernel: ... value mask: 0000ffffffffffff
Jul 9 13:00:54.858926 kernel: ... max period: 00007fffffffffff
Jul 9 13:00:54.858933 kernel: ... fixed-purpose events: 0
Jul 9 13:00:54.858941 kernel: ... event mask: 000000000000003f
Jul 9 13:00:54.858948 kernel: signal: max sigframe size: 1776
Jul 9 13:00:54.858956 kernel: rcu: Hierarchical SRCU implementation.
Jul 9 13:00:54.858964 kernel: rcu: Max phase no-delay instances is 400.
Jul 9 13:00:54.858971 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jul 9 13:00:54.858979 kernel: smp: Bringing up secondary CPUs ...
Jul 9 13:00:54.858989 kernel: smpboot: x86: Booting SMP configuration:
Jul 9 13:00:54.858997 kernel: .... node #0, CPUs: #1 #2 #3
Jul 9 13:00:54.859004 kernel: smp: Brought up 1 node, 4 CPUs
Jul 9 13:00:54.859012 kernel: smpboot: Total of 4 processors activated (22357.96 BogoMIPS)
Jul 9 13:00:54.859020 kernel: Memory: 2428912K/2571752K available (14336K kernel code, 2430K rwdata, 9956K rodata, 54568K init, 2400K bss, 136904K reserved, 0K cma-reserved)
Jul 9 13:00:54.859028 kernel: devtmpfs: initialized
Jul 9 13:00:54.859035 kernel: x86/mm: Memory block size: 128MB
Jul 9 13:00:54.859043 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 9 13:00:54.859051 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jul 9 13:00:54.859062 kernel: pinctrl core: initialized pinctrl subsystem
Jul 9 13:00:54.859071 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 9 13:00:54.859082 kernel: audit: initializing netlink subsys (disabled)
Jul 9 13:00:54.859092 kernel: audit: type=2000 audit(1752066052.038:1): state=initialized audit_enabled=0 res=1
Jul 9 13:00:54.859100 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 9 13:00:54.859107 kernel: thermal_sys: Registered thermal governor 'user_space'
Jul 9 13:00:54.859115 kernel: cpuidle: using governor menu
Jul 9 13:00:54.859122 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 9 13:00:54.859130 kernel: dca service started, version 1.12.1
Jul 9 13:00:54.859140 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Jul 9 13:00:54.859148 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Jul 9 13:00:54.859155 kernel: PCI: Using configuration type 1 for base access
Jul 9 13:00:54.859163 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jul 9 13:00:54.859171 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 9 13:00:54.859178 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jul 9 13:00:54.859186 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 9 13:00:54.859194 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jul 9 13:00:54.859204 kernel: ACPI: Added _OSI(Module Device)
Jul 9 13:00:54.859211 kernel: ACPI: Added _OSI(Processor Device)
Jul 9 13:00:54.859219 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 9 13:00:54.859227 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 9 13:00:54.859234 kernel: ACPI: Interpreter enabled
Jul 9 13:00:54.859242 kernel: ACPI: PM: (supports S0 S3 S5)
Jul 9 13:00:54.859250 kernel: ACPI: Using IOAPIC for interrupt routing
Jul 9 13:00:54.859257 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jul 9 13:00:54.859265 kernel: PCI: Using E820 reservations for host bridge windows
Jul 9 13:00:54.859273 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jul 9 13:00:54.859282 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 9 13:00:54.859500 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 9 13:00:54.859643 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jul 9 13:00:54.859766 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jul 9 13:00:54.859776 kernel: PCI host bridge to bus 0000:00
Jul 9 13:00:54.859909 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jul 9 13:00:54.860026 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jul 9 13:00:54.860141 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jul 9 13:00:54.860251 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Jul 9 13:00:54.860377 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jul 9 13:00:54.860529 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Jul 9 13:00:54.860675 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 9 13:00:54.860843 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Jul 9 13:00:54.860987 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Jul 9 13:00:54.861110 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
Jul 9 13:00:54.861231 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
Jul 9 13:00:54.861366 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
Jul 9 13:00:54.861507 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jul 9 13:00:54.861694 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Jul 9 13:00:54.861848 kernel: pci 0000:00:02.0: BAR 0 [io 0xc0c0-0xc0df]
Jul 9 13:00:54.862014 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
Jul 9 13:00:54.862141 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
Jul 9 13:00:54.862279 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Jul 9 13:00:54.862426 kernel: pci 0000:00:03.0: BAR 0 [io 0xc000-0xc07f]
Jul 9 13:00:54.862550 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
Jul 9 13:00:54.862682 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
Jul 9 13:00:54.862830 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jul 9 13:00:54.862960 kernel: pci 0000:00:04.0: BAR 0 [io 0xc0e0-0xc0ff]
Jul 9 13:00:54.863095 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebd3000-0xfebd3fff]
Jul 9 13:00:54.863261 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe008000-0xfe00bfff 64bit pref]
Jul 9 13:00:54.863420 kernel: pci 0000:00:04.0: ROM [mem 0xfeb80000-0xfebbffff pref]
Jul 9 13:00:54.863560 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Jul 9 13:00:54.863707 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jul 9 13:00:54.863854 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Jul 9 13:00:54.863983 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc100-0xc11f]
Jul 9 13:00:54.865347 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd4000-0xfebd4fff]
Jul 9 13:00:54.865559 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Jul 9 13:00:54.865698 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Jul 9 13:00:54.865710 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jul 9 13:00:54.865718 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jul 9 13:00:54.865733 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jul 9 13:00:54.865742 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jul 9 13:00:54.865750 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jul 9 13:00:54.865758 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jul 9 13:00:54.865766 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jul 9 13:00:54.865774 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jul 9 13:00:54.865782 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jul 9 13:00:54.865790 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jul 9 13:00:54.865798 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jul 9 13:00:54.865809 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jul 9 13:00:54.865817 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jul 9 13:00:54.865825 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jul 9 13:00:54.865833 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jul 9 13:00:54.865841 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jul 9 13:00:54.865849 kernel: iommu: Default domain type: Translated
Jul 9 13:00:54.865858 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jul 9 13:00:54.865866 kernel: PCI: Using ACPI for IRQ routing
Jul 9 13:00:54.865874 kernel: PCI: pci_cache_line_size set to 64 bytes
Jul 9 13:00:54.865885 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jul 9 13:00:54.865893 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Jul 9 13:00:54.866018 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jul 9 13:00:54.866141 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jul 9 13:00:54.866264 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jul 9 13:00:54.866276 kernel: vgaarb: loaded
Jul 9 13:00:54.866284 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jul 9 13:00:54.866292 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jul 9 13:00:54.866305 kernel: clocksource: Switched to clocksource kvm-clock
Jul 9 13:00:54.866313 kernel: VFS: Disk quotas dquot_6.6.0
Jul 9 13:00:54.866321 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 9 13:00:54.866330 kernel: pnp: PnP ACPI init
Jul 9 13:00:54.866491 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Jul 9 13:00:54.866505 kernel: pnp: PnP ACPI: found 6 devices
Jul 9 13:00:54.866513 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jul 9 13:00:54.866521 kernel: NET: Registered PF_INET protocol family
Jul 9 13:00:54.866534 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 9 13:00:54.866542 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 9 13:00:54.866550 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 9 13:00:54.866559 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 9 13:00:54.866574 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jul 9 13:00:54.866583 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 9 13:00:54.866591 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 9 13:00:54.866599 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 9 13:00:54.866608 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 9 13:00:54.866618 kernel: NET: Registered PF_XDP protocol family
Jul 9 13:00:54.866744 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jul 9 13:00:54.866856 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jul 9 13:00:54.866966 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jul 9 13:00:54.867076 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Jul 9 13:00:54.867185 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jul 9 13:00:54.867294 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Jul 9 13:00:54.867305 kernel: PCI: CLS 0 bytes, default 64
Jul 9 13:00:54.867318 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848ddd4e75, max_idle_ns: 440795346320 ns
Jul 9 13:00:54.867326 kernel: Initialise system trusted keyrings
Jul 9 13:00:54.867334 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 9 13:00:54.867342 kernel: Key type asymmetric registered
Jul 9 13:00:54.867371 kernel: Asymmetric key parser 'x509' registered
Jul 9 13:00:54.867389 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jul 9 13:00:54.867408 kernel: io scheduler mq-deadline registered
Jul 9 13:00:54.867416 kernel: io scheduler kyber registered
Jul 9 13:00:54.867425 kernel: io scheduler bfq registered
Jul 9 13:00:54.867433 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jul 9 13:00:54.867446 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jul 9 13:00:54.867454 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jul 9 13:00:54.867462 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jul 9 13:00:54.867470 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 9 13:00:54.867478 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jul 9 13:00:54.867487 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jul 9 13:00:54.867495 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jul 9 13:00:54.867503 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jul 9 13:00:54.867671 kernel: rtc_cmos 00:04: RTC can wake from S4
Jul 9 13:00:54.867792 kernel: rtc_cmos 00:04: registered as rtc0
Jul 9 13:00:54.867905 kernel: rtc_cmos 00:04: setting system clock to 2025-07-09T13:00:54 UTC (1752066054)
Jul 9 13:00:54.868018 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Jul 9 13:00:54.868028 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jul 9 13:00:54.868037 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Jul 9 13:00:54.868045 kernel: NET: Registered PF_INET6 protocol family
Jul 9 13:00:54.868053 kernel: Segment Routing with IPv6
Jul 9 13:00:54.868065 kernel: In-situ OAM (IOAM) with IPv6
Jul 9 13:00:54.868074 kernel: NET: Registered PF_PACKET protocol family
Jul 9 13:00:54.868082 kernel: Key type dns_resolver registered
Jul 9 13:00:54.868090 kernel: IPI shorthand broadcast: enabled
Jul 9 13:00:54.868098 kernel: sched_clock: Marking stable (3243003447, 109224029)->(3375170992, -22943516)
Jul 9 13:00:54.868106 kernel: registered taskstats version 1
Jul 9 13:00:54.868114 kernel: Loading compiled-in X.509 certificates
Jul 9 13:00:54.868122 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.36-flatcar: 8ba3d283fde4a005aa35ab9394afe8122b8a3878'
Jul 9 13:00:54.868130 kernel: Demotion targets for Node 0: null
Jul 9 13:00:54.868141 kernel: Key type .fscrypt registered
Jul 9 13:00:54.868149 kernel: Key type fscrypt-provisioning registered
Jul 9 13:00:54.868157 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 9 13:00:54.868165 kernel: ima: Allocated hash algorithm: sha1
Jul 9 13:00:54.868173 kernel: ima: No architecture policies found
Jul 9 13:00:54.868181 kernel: clk: Disabling unused clocks
Jul 9 13:00:54.868189 kernel: Warning: unable to open an initial console.
Jul 9 13:00:54.868198 kernel: Freeing unused kernel image (initmem) memory: 54568K
Jul 9 13:00:54.868206 kernel: Write protecting the kernel read-only data: 24576k
Jul 9 13:00:54.868217 kernel: Freeing unused kernel image (rodata/data gap) memory: 284K
Jul 9 13:00:54.868225 kernel: Run /init as init process
Jul 9 13:00:54.868233 kernel: with arguments:
Jul 9 13:00:54.868241 kernel: /init
Jul 9 13:00:54.868249 kernel: with environment:
Jul 9 13:00:54.868257 kernel: HOME=/
Jul 9 13:00:54.868265 kernel: TERM=linux
Jul 9 13:00:54.868273 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 9 13:00:54.868282 systemd[1]: Successfully made /usr/ read-only.
Jul 9 13:00:54.868297 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 9 13:00:54.868322 systemd[1]: Detected virtualization kvm.
Jul 9 13:00:54.868330 systemd[1]: Detected architecture x86-64.
Jul 9 13:00:54.868339 systemd[1]: Running in initrd.
Jul 9 13:00:54.868347 systemd[1]: No hostname configured, using default hostname.
Jul 9 13:00:54.868375 systemd[1]: Hostname set to .
Jul 9 13:00:54.868383 systemd[1]: Initializing machine ID from VM UUID.
Jul 9 13:00:54.868392 systemd[1]: Queued start job for default target initrd.target.
Jul 9 13:00:54.868401 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 9 13:00:54.868410 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 9 13:00:54.868419 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 9 13:00:54.868428 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 9 13:00:54.868437 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 9 13:00:54.868449 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 9 13:00:54.868459 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 9 13:00:54.868468 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 9 13:00:54.868477 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 9 13:00:54.868486 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 9 13:00:54.868495 systemd[1]: Reached target paths.target - Path Units.
Jul 9 13:00:54.868504 systemd[1]: Reached target slices.target - Slice Units.
Jul 9 13:00:54.868515 systemd[1]: Reached target swap.target - Swaps.
Jul 9 13:00:54.868523 systemd[1]: Reached target timers.target - Timer Units.
Jul 9 13:00:54.868532 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 9 13:00:54.868541 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 9 13:00:54.868550 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 9 13:00:54.868559 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jul 9 13:00:54.868574 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 9 13:00:54.868583 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 9 13:00:54.868592 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 9 13:00:54.868603 systemd[1]: Reached target sockets.target - Socket Units.
Jul 9 13:00:54.868612 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 9 13:00:54.868621 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 9 13:00:54.868629 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 9 13:00:54.868639 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Jul 9 13:00:54.868652 systemd[1]: Starting systemd-fsck-usr.service...
Jul 9 13:00:54.868661 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 9 13:00:54.868670 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 9 13:00:54.868679 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 9 13:00:54.868688 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 9 13:00:54.868697 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 9 13:00:54.868708 systemd[1]: Finished systemd-fsck-usr.service.
Jul 9 13:00:54.868717 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 9 13:00:54.868750 systemd-journald[220]: Collecting audit messages is disabled.
Jul 9 13:00:54.868776 systemd-journald[220]: Journal started
Jul 9 13:00:54.868796 systemd-journald[220]: Runtime Journal (/run/log/journal/63c9a145d78248b0a76c48409d357983) is 6M, max 48.6M, 42.5M free.
Jul 9 13:00:54.861996 systemd-modules-load[222]: Inserted module 'overlay'
Jul 9 13:00:54.900891 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 9 13:00:54.900910 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 9 13:00:54.900928 kernel: Bridge firewalling registered
Jul 9 13:00:54.889346 systemd-modules-load[222]: Inserted module 'br_netfilter'
Jul 9 13:00:54.900808 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 9 13:00:54.901181 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 9 13:00:54.905305 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 9 13:00:54.909964 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 9 13:00:54.912598 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 9 13:00:54.924960 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 9 13:00:54.925715 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 9 13:00:54.936080 systemd-tmpfiles[241]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Jul 9 13:00:54.937574 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 9 13:00:54.939849 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 9 13:00:54.941087 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 9 13:00:54.943968 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 9 13:00:54.965493 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 9 13:00:54.967645 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
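[Editor's note] The bridge message in this log ("filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.") is advisory: here systemd-modules-load inserts br_netfilter immediately afterwards. On systems where it is not loaded automatically, the standard way to persist it is a modules-load.d fragment plus the bridge-netfilter sysctls. A hedged sketch (the file names are illustrative; the directories and sysctl keys are the stock systemd/kernel ones):

```
# /etc/modules-load.d/br_netfilter.conf   (illustrative file name)
br_netfilter

# /etc/sysctl.d/99-bridge-nf.conf         (illustrative file name)
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
```

Both fragments take effect on the next boot; `modprobe br_netfilter` and `sysctl --system` apply them immediately.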
Jul 9 13:00:54.991631 dracut-cmdline[263]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f85d3be94c634d7d72fbcd0e670073ce56ae2e0cc763f83b329300b7cea5203d
Jul 9 13:00:55.001720 systemd-resolved[259]: Positive Trust Anchors:
Jul 9 13:00:55.001955 systemd-resolved[259]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 9 13:00:55.001986 systemd-resolved[259]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 9 13:00:55.004703 systemd-resolved[259]: Defaulting to hostname 'linux'.
Jul 9 13:00:55.010512 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 9 13:00:55.012531 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 9 13:00:55.106390 kernel: SCSI subsystem initialized
Jul 9 13:00:55.115383 kernel: Loading iSCSI transport class v2.0-870.
Jul 9 13:00:55.125385 kernel: iscsi: registered transport (tcp)
Jul 9 13:00:55.148647 kernel: iscsi: registered transport (qla4xxx)
Jul 9 13:00:55.148693 kernel: QLogic iSCSI HBA Driver
Jul 9 13:00:55.170496 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 9 13:00:55.189109 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 9 13:00:55.192714 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 9 13:00:55.300235 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 9 13:00:55.303761 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 9 13:00:55.358387 kernel: raid6: avx2x4 gen() 30445 MB/s
Jul 9 13:00:55.375376 kernel: raid6: avx2x2 gen() 31130 MB/s
Jul 9 13:00:55.392434 kernel: raid6: avx2x1 gen() 25926 MB/s
Jul 9 13:00:55.392456 kernel: raid6: using algorithm avx2x2 gen() 31130 MB/s
Jul 9 13:00:55.410430 kernel: raid6: .... xor() 19932 MB/s, rmw enabled
Jul 9 13:00:55.410460 kernel: raid6: using avx2x2 recovery algorithm
Jul 9 13:00:55.430385 kernel: xor: automatically using best checksumming function avx
Jul 9 13:00:55.600388 kernel: Btrfs loaded, zoned=no, fsverity=no
Jul 9 13:00:55.609242 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jul 9 13:00:55.613014 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 9 13:00:55.655786 systemd-udevd[472]: Using default interface naming scheme 'v255'.
Jul 9 13:00:55.661484 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 9 13:00:55.664667 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jul 9 13:00:55.692326 dracut-pre-trigger[480]: rd.md=0: removing MD RAID activation
Jul 9 13:00:55.722605 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 9 13:00:55.725217 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 9 13:00:55.799716 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 9 13:00:55.801318 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jul 9 13:00:55.853374 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Jul 9 13:00:55.861738 kernel: cryptd: max_cpu_qlen set to 1000
Jul 9 13:00:55.868309 kernel: libata version 3.00 loaded.
Jul 9 13:00:55.868335 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jul 9 13:00:55.876287 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jul 9 13:00:55.876323 kernel: GPT:9289727 != 19775487
Jul 9 13:00:55.876335 kernel: GPT:Alternate GPT header not at the end of the disk.
Jul 9 13:00:55.876405 kernel: GPT:9289727 != 19775487
Jul 9 13:00:55.876417 kernel: GPT: Use GNU Parted to correct GPT errors.
Jul 9 13:00:55.876452 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 9 13:00:55.876463 kernel: ahci 0000:00:1f.2: version 3.0
Jul 9 13:00:55.882681 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Jul 9 13:00:55.882700 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Jul 9 13:00:55.882856 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Jul 9 13:00:55.883270 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Jul 9 13:00:55.883432 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Jul 9 13:00:55.874277 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 9 13:00:55.888848 kernel: AES CTR mode by8 optimization enabled
Jul 9 13:00:55.874419 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 9 13:00:55.883324 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 9 13:00:55.892825 kernel: scsi host0: ahci
Jul 9 13:00:55.890083 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 9 13:00:55.895991 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jul 9 13:00:55.898041 kernel: scsi host1: ahci
Jul 9 13:00:55.902377 kernel: scsi host2: ahci
Jul 9 13:00:55.905814 kernel: scsi host3: ahci
Jul 9 13:00:55.910406 kernel: scsi host4: ahci
Jul 9 13:00:55.912383 kernel: scsi host5: ahci
Jul 9 13:00:55.927424 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 lpm-pol 0
Jul 9 13:00:55.927452 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 lpm-pol 0
Jul 9 13:00:55.927464 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 lpm-pol 0
Jul 9 13:00:55.927475 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 lpm-pol 0
Jul 9 13:00:55.927491 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 lpm-pol 0
Jul 9 13:00:55.927502 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 lpm-pol 0
Jul 9 13:00:55.939412 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jul 9 13:00:55.969306 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 9 13:00:55.979329 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jul 9 13:00:55.987413 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jul 9 13:00:55.989029 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jul 9 13:00:56.000375 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 9 13:00:56.002401 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jul 9 13:00:56.026253 disk-uuid[630]: Primary Header is updated.
Jul 9 13:00:56.026253 disk-uuid[630]: Secondary Entries is updated.
Jul 9 13:00:56.026253 disk-uuid[630]: Secondary Header is updated.
Jul 9 13:00:56.029480 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 9 13:00:56.034378 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 9 13:00:56.237384 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Jul 9 13:00:56.237424 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Jul 9 13:00:56.238383 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Jul 9 13:00:56.239385 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Jul 9 13:00:56.239421 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Jul 9 13:00:56.240380 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jul 9 13:00:56.241381 kernel: ata3.00: applying bridge limits
Jul 9 13:00:56.241395 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Jul 9 13:00:56.242386 kernel: ata3.00: configured for UDMA/100
Jul 9 13:00:56.244386 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Jul 9 13:00:56.284886 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jul 9 13:00:56.285098 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jul 9 13:00:56.303386 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Jul 9 13:00:56.651901 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jul 9 13:00:56.653579 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 9 13:00:56.655193 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 9 13:00:56.656329 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 9 13:00:56.659343 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jul 9 13:00:56.688762 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jul 9 13:00:57.035329 disk-uuid[631]: The operation has completed successfully.
Jul 9 13:00:57.036592 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 9 13:00:57.068646 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 9 13:00:57.068771 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jul 9 13:00:57.098162 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jul 9 13:00:57.127110 sh[660]: Success
Jul 9 13:00:57.144845 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 9 13:00:57.144880 kernel: device-mapper: uevent: version 1.0.3
Jul 9 13:00:57.145881 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Jul 9 13:00:57.154406 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Jul 9 13:00:57.186673 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jul 9 13:00:57.189650 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jul 9 13:00:57.202602 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jul 9 13:00:57.210182 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay'
Jul 9 13:00:57.210211 kernel: BTRFS: device fsid 082bcfbc-2c86-46fe-87f4-85dea5450235 devid 1 transid 36 /dev/mapper/usr (253:0) scanned by mount (672)
Jul 9 13:00:57.212471 kernel: BTRFS info (device dm-0): first mount of filesystem 082bcfbc-2c86-46fe-87f4-85dea5450235
Jul 9 13:00:57.212533 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jul 9 13:00:57.212569 kernel: BTRFS info (device dm-0): using free-space-tree
Jul 9 13:00:57.218526 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jul 9 13:00:57.218988 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Jul 9 13:00:57.220262 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jul 9 13:00:57.221088 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
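[Editor's note] The earlier GPT warnings ("Alternate GPT header not at the end of the disk ... Use GNU Parted to correct GPT errors") are the expected result of writing a fixed-size image to a larger virtual disk: the backup header still sits at the image's original end. In this boot, disk-uuid.service repairs it automatically ("Primary Header is updated ... The operation has completed successfully."). The manual equivalent can be sketched with gdisk's `sgdisk`; this is a hedged sketch, not what Flatcar runs, and the device name is assumed from the log:

```shell
# Relocate the backup GPT header and partition entries to the true end
# of the (now larger) disk, then confirm the two headers agree.
sgdisk --move-second-header /dev/vda
sgdisk --verify /dev/vda
```

Run only against the intended device and with the filesystems unmounted; rewriting GPT structures on the wrong disk is destructive.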
Jul 9 13:00:57.222848 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jul 9 13:00:57.253604 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (705)
Jul 9 13:00:57.255658 kernel: BTRFS info (device vda6): first mount of filesystem 87056a6c-ee99-487a-9330-f1335025b841
Jul 9 13:00:57.255682 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jul 9 13:00:57.255698 kernel: BTRFS info (device vda6): using free-space-tree
Jul 9 13:00:57.262389 kernel: BTRFS info (device vda6): last unmount of filesystem 87056a6c-ee99-487a-9330-f1335025b841
Jul 9 13:00:57.263693 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jul 9 13:00:57.266726 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jul 9 13:00:57.376785 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 9 13:00:57.379520 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 9 13:00:57.391791 ignition[748]: Ignition 2.21.0
Jul 9 13:00:57.391805 ignition[748]: Stage: fetch-offline
Jul 9 13:00:57.391851 ignition[748]: no configs at "/usr/lib/ignition/base.d"
Jul 9 13:00:57.391863 ignition[748]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 9 13:00:57.391979 ignition[748]: parsed url from cmdline: ""
Jul 9 13:00:57.391983 ignition[748]: no config URL provided
Jul 9 13:00:57.391988 ignition[748]: reading system config file "/usr/lib/ignition/user.ign"
Jul 9 13:00:57.391998 ignition[748]: no config at "/usr/lib/ignition/user.ign"
Jul 9 13:00:57.392021 ignition[748]: op(1): [started] loading QEMU firmware config module
Jul 9 13:00:57.392026 ignition[748]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jul 9 13:00:57.402044 ignition[748]: op(1): [finished] loading QEMU firmware config module
Jul 9 13:00:57.425405 systemd-networkd[847]: lo: Link UP
Jul 9 13:00:57.425413 systemd-networkd[847]: lo: Gained carrier
Jul 9 13:00:57.427393 systemd-networkd[847]: Enumeration completed
Jul 9 13:00:57.427668 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 9 13:00:57.427868 systemd-networkd[847]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 9 13:00:57.427873 systemd-networkd[847]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 9 13:00:57.429580 systemd-networkd[847]: eth0: Link UP
Jul 9 13:00:57.429585 systemd-networkd[847]: eth0: Gained carrier
Jul 9 13:00:57.429593 systemd-networkd[847]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 9 13:00:57.434233 systemd[1]: Reached target network.target - Network.
Jul 9 13:00:57.446400 systemd-networkd[847]: eth0: DHCPv4 address 10.0.0.14/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jul 9 13:00:57.453913 ignition[748]: parsing config with SHA512: 3e0f8278245aedbafd9596d87f7d38319dd6c57bb9ddf4c343b659ea753f17198a37d568640b1b01ff1a6de34b7a00608c70b6d618c0d1fe0d64740a609c2454
Jul 9 13:00:57.460289 unknown[748]: fetched base config from "system"
Jul 9 13:00:57.460303 unknown[748]: fetched user config from "qemu"
Jul 9 13:00:57.460656 ignition[748]: fetch-offline: fetch-offline passed
Jul 9 13:00:57.460721 ignition[748]: Ignition finished successfully
Jul 9 13:00:57.464057 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 9 13:00:57.466381 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jul 9 13:00:57.468395 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jul 9 13:00:57.517426 ignition[855]: Ignition 2.21.0
Jul 9 13:00:57.517440 ignition[855]: Stage: kargs
Jul 9 13:00:57.517585 ignition[855]: no configs at "/usr/lib/ignition/base.d"
Jul 9 13:00:57.517597 ignition[855]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 9 13:00:57.518266 ignition[855]: kargs: kargs passed
Jul 9 13:00:57.518312 ignition[855]: Ignition finished successfully
Jul 9 13:00:57.525932 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jul 9 13:00:57.526984 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jul 9 13:00:57.565110 ignition[863]: Ignition 2.21.0
Jul 9 13:00:57.565125 ignition[863]: Stage: disks
Jul 9 13:00:57.565252 ignition[863]: no configs at "/usr/lib/ignition/base.d"
Jul 9 13:00:57.565263 ignition[863]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 9 13:00:57.565951 ignition[863]: disks: disks passed
Jul 9 13:00:57.565994 ignition[863]: Ignition finished successfully
Jul 9 13:00:57.569637 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jul 9 13:00:57.571078 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jul 9 13:00:57.572825 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 9 13:00:57.573028 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 9 13:00:57.573350 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 9 13:00:57.573839 systemd[1]: Reached target basic.target - Basic System.
Jul 9 13:00:57.575139 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jul 9 13:00:57.606251 systemd-fsck[873]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Jul 9 13:00:57.614209 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jul 9 13:00:57.616781 systemd[1]: Mounting sysroot.mount - /sysroot...
Jul 9 13:00:57.743385 kernel: EXT4-fs (vda9): mounted filesystem b08a603c-44fa-43af-af80-90bed9b8770a r/w with ordered data mode. Quota mode: none.
Jul 9 13:00:57.744007 systemd[1]: Mounted sysroot.mount - /sysroot.
Jul 9 13:00:57.744711 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jul 9 13:00:57.748365 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 9 13:00:57.750267 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jul 9 13:00:57.750594 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jul 9 13:00:57.750635 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 9 13:00:57.750657 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 9 13:00:57.775661 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jul 9 13:00:57.777109 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jul 9 13:00:57.784397 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (881)
Jul 9 13:00:57.787019 kernel: BTRFS info (device vda6): first mount of filesystem 87056a6c-ee99-487a-9330-f1335025b841
Jul 9 13:00:57.787058 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jul 9 13:00:57.787069 kernel: BTRFS info (device vda6): using free-space-tree
Jul 9 13:00:57.792972 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 9 13:00:57.827283 initrd-setup-root[905]: cut: /sysroot/etc/passwd: No such file or directory
Jul 9 13:00:57.832002 initrd-setup-root[912]: cut: /sysroot/etc/group: No such file or directory
Jul 9 13:00:57.837563 initrd-setup-root[919]: cut: /sysroot/etc/shadow: No such file or directory
Jul 9 13:00:57.841253 initrd-setup-root[926]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 9 13:00:57.936867 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jul 9 13:00:57.940915 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jul 9 13:00:57.942713 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jul 9 13:00:57.965426 kernel: BTRFS info (device vda6): last unmount of filesystem 87056a6c-ee99-487a-9330-f1335025b841
Jul 9 13:00:57.978497 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jul 9 13:00:57.996970 ignition[995]: INFO : Ignition 2.21.0
Jul 9 13:00:57.996970 ignition[995]: INFO : Stage: mount
Jul 9 13:00:57.998657 ignition[995]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 9 13:00:57.998657 ignition[995]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 9 13:00:58.001480 ignition[995]: INFO : mount: mount passed
Jul 9 13:00:58.002219 ignition[995]: INFO : Ignition finished successfully
Jul 9 13:00:58.005278 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jul 9 13:00:58.009132 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jul 9 13:00:58.209687 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jul 9 13:00:58.211434 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 9 13:00:58.237386 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1008)
Jul 9 13:00:58.240757 kernel: BTRFS info (device vda6): first mount of filesystem 87056a6c-ee99-487a-9330-f1335025b841
Jul 9 13:00:58.240787 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jul 9 13:00:58.240803 kernel: BTRFS info (device vda6): using free-space-tree
Jul 9 13:00:58.244953 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 9 13:00:58.280520 ignition[1025]: INFO : Ignition 2.21.0
Jul 9 13:00:58.280520 ignition[1025]: INFO : Stage: files
Jul 9 13:00:58.282744 ignition[1025]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 9 13:00:58.282744 ignition[1025]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 9 13:00:58.285022 ignition[1025]: DEBUG : files: compiled without relabeling support, skipping
Jul 9 13:00:58.285022 ignition[1025]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 9 13:00:58.285022 ignition[1025]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 9 13:00:58.289185 ignition[1025]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 9 13:00:58.289185 ignition[1025]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 9 13:00:58.289185 ignition[1025]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 9 13:00:58.287265 unknown[1025]: wrote ssh authorized keys file for user: core
Jul 9 13:00:58.294244 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jul 9 13:00:58.294244 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Jul 9 13:00:58.344339 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jul 9 13:00:58.702775 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jul 9 13:00:58.705200 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jul 9 13:00:58.705200 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jul 9 13:00:58.705200 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 9 13:00:58.705200 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 9 13:00:58.705200 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 9 13:00:58.705200 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 9 13:00:58.705200 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 9 13:00:58.705200 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 9 13:00:58.719520 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 9 13:00:58.719520 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 9 13:00:58.719520 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jul 9 13:00:58.719520 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jul 9 13:00:58.719520 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jul 9 13:00:58.719520 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1
Jul 9 13:00:58.878512 systemd-networkd[847]: eth0: Gained IPv6LL
Jul 9 13:00:59.394970 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jul 9 13:00:59.783366 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jul 9 13:00:59.785879 ignition[1025]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jul 9 13:00:59.787691 ignition[1025]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 9 13:00:59.792793 ignition[1025]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 9 13:00:59.792793 ignition[1025]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jul 9 13:00:59.792793 ignition[1025]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Jul 9 13:00:59.797339 ignition[1025]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 9 13:00:59.799230 ignition[1025]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 9 13:00:59.799230 ignition[1025]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Jul 9 13:00:59.799230 ignition[1025]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Jul 9 13:00:59.819950 ignition[1025]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jul 9 13:00:59.825312 ignition[1025]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jul 9 13:00:59.827055 ignition[1025]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Jul 9 13:00:59.827055 ignition[1025]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Jul 9 13:00:59.827055 ignition[1025]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Jul 9 13:00:59.827055 ignition[1025]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 9 13:00:59.827055 ignition[1025]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 9 13:00:59.827055 ignition[1025]: INFO : files: files passed
Jul 9 13:00:59.827055 ignition[1025]: INFO : Ignition finished successfully
Jul 9 13:00:59.834184 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 9 13:00:59.837527 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 9 13:00:59.840641 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul 9 13:00:59.860134 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 9 13:00:59.860284 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 9 13:00:59.863246 initrd-setup-root-after-ignition[1054]: grep: /sysroot/oem/oem-release: No such file or directory
Jul 9 13:00:59.866460 initrd-setup-root-after-ignition[1056]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 9 13:00:59.866460 initrd-setup-root-after-ignition[1056]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 9 13:00:59.870602 initrd-setup-root-after-ignition[1060]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 9 13:00:59.872650 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
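[Editor's note] The files-stage entries above (ssh keys for user "core", written files, presets for prepare-helm.service and coreos-metadata.service) are driven by an Ignition config delivered over the QEMU firmware channel ("fetched user config from \"qemu\""). A minimal hedged sketch of the general shape of such a config, per the Ignition v3 spec; the spec version, key, and unit contents are illustrative, not the actual config used in this boot:

```json
{
  "ignition": { "version": "3.4.0" },
  "passwd": {
    "users": [
      {
        "name": "core",
        "sshAuthorizedKeys": ["ssh-ed25519 AAAA... illustrative-key"]
      }
    ]
  },
  "systemd": {
    "units": [
      { "name": "prepare-helm.service", "enabled": true },
      { "name": "coreos-metadata.service", "enabled": false }
    ]
  }
}
```

With QEMU, a config like this is typically passed via `-fw_cfg name=opt/org.flatcar-linux/config,file=config.ign`, which is what the qemu_fw_cfg modprobe in the fetch-offline stage reads.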
Jul 9 13:00:59.875209 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 9 13:00:59.878254 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 9 13:00:59.958957 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 9 13:00:59.960107 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 9 13:00:59.962893 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 9 13:00:59.965010 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 9 13:00:59.967455 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 9 13:00:59.969986 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 9 13:01:00.003894 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 9 13:01:00.006417 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 9 13:01:00.039024 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 9 13:01:00.039286 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 9 13:01:00.042674 systemd[1]: Stopped target timers.target - Timer Units.
Jul 9 13:01:00.043875 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 9 13:01:00.044040 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 9 13:01:00.047791 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 9 13:01:00.048867 systemd[1]: Stopped target basic.target - Basic System.
Jul 9 13:01:00.049178 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jul 9 13:01:00.049684 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 9 13:01:00.050034 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul 9 13:01:00.050392 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Jul 9 13:01:00.051875 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul 9 13:01:00.060007 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 9 13:01:00.060324 systemd[1]: Stopped target sysinit.target - System Initialization.
Jul 9 13:01:00.060839 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jul 9 13:01:00.067913 systemd[1]: Stopped target swap.target - Swaps.
Jul 9 13:01:00.069909 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 9 13:01:00.070040 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jul 9 13:01:00.071116 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jul 9 13:01:00.071474 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 9 13:01:00.071896 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jul 9 13:01:00.072008 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 9 13:01:00.077635 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 9 13:01:00.077937 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jul 9 13:01:00.083898 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 9 13:01:00.084040 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 9 13:01:00.085124 systemd[1]: Stopped target paths.target - Path Units.
Jul 9 13:01:00.085390 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul 9 13:01:00.088668 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 9 13:01:00.092536 systemd[1]: Stopped target slices.target - Slice Units.
Jul 9 13:01:00.092699 systemd[1]: Stopped target sockets.target - Socket Units.
Jul 9 13:01:00.095594 systemd[1]: iscsid.socket: Deactivated successfully.
Jul 9 13:01:00.095707 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jul 9 13:01:00.096639 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jul 9 13:01:00.096729 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 9 13:01:00.099322 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 9 13:01:00.099500 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 9 13:01:00.100346 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 9 13:01:00.100609 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jul 9 13:01:00.105434 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jul 9 13:01:00.107310 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jul 9 13:01:00.110074 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 9 13:01:00.110334 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 9 13:01:00.112265 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 9 13:01:00.112408 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 9 13:01:00.121602 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 9 13:01:00.127623 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jul 9 13:01:00.151843 ignition[1080]: INFO : Ignition 2.21.0
Jul 9 13:01:00.155554 ignition[1080]: INFO : Stage: umount
Jul 9 13:01:00.155554 ignition[1080]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 9 13:01:00.155554 ignition[1080]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 9 13:01:00.155554 ignition[1080]: INFO : umount: umount passed
Jul 9 13:01:00.155554 ignition[1080]: INFO : Ignition finished successfully
Jul 9 13:01:00.153798 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jul 9 13:01:00.158853 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 9 13:01:00.158985 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jul 9 13:01:00.160896 systemd[1]: Stopped target network.target - Network.
Jul 9 13:01:00.163741 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 9 13:01:00.163826 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jul 9 13:01:00.169397 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 9 13:01:00.169467 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jul 9 13:01:00.172514 systemd[1]: ignition-setup.service: Deactivated successfully.
Jul 9 13:01:00.172600 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jul 9 13:01:00.174681 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jul 9 13:01:00.174731 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jul 9 13:01:00.175789 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jul 9 13:01:00.178666 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jul 9 13:01:00.186648 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jul 9 13:01:00.186844 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jul 9 13:01:00.191828 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Jul 9 13:01:00.192062 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jul 9 13:01:00.192214 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jul 9 13:01:00.198700 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Jul 9 13:01:00.200437 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Jul 9 13:01:00.203501 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jul 9 13:01:00.203586 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jul 9 13:01:00.206484 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jul 9 13:01:00.206757 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jul 9 13:01:00.206825 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 9 13:01:00.207375 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 9 13:01:00.207480 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jul 9 13:01:00.213380 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jul 9 13:01:00.213863 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jul 9 13:01:00.214034 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jul 9 13:01:00.214088 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 9 13:01:00.218434 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 9 13:01:00.222654 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jul 9 13:01:00.222745 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jul 9 13:01:00.242513 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jul 9 13:01:00.257740 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 9 13:01:00.260830 systemd[1]: network-cleanup.service: Deactivated successfully.
Jul 9 13:01:00.260959 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jul 9 13:01:00.262566 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jul 9 13:01:00.262648 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jul 9 13:01:00.263786 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jul 9 13:01:00.263827 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 9 13:01:00.264073 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jul 9 13:01:00.264130 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jul 9 13:01:00.265041 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jul 9 13:01:00.265097 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jul 9 13:01:00.271851 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 9 13:01:00.271960 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 9 13:01:00.274683 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jul 9 13:01:00.276640 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Jul 9 13:01:00.276704 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Jul 9 13:01:00.281016 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jul 9 13:01:00.281082 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 9 13:01:00.285603 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jul 9 13:01:00.285689 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 9 13:01:00.288926 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jul 9 13:01:00.288994 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 9 13:01:00.290490 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 9 13:01:00.290543 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 9 13:01:00.296014 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Jul 9 13:01:00.296081 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully.
Jul 9 13:01:00.296124 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Jul 9 13:01:00.296170 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jul 9 13:01:00.296673 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jul 9 13:01:00.296791 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jul 9 13:01:00.315560 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jul 9 13:01:00.315715 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jul 9 13:01:00.316897 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jul 9 13:01:00.318379 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jul 9 13:01:00.318461 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jul 9 13:01:00.321298 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jul 9 13:01:00.345458 systemd[1]: Switching root.
Jul 9 13:01:00.382082 systemd-journald[220]: Journal stopped
Jul 9 13:01:01.515799 systemd-journald[220]: Received SIGTERM from PID 1 (systemd).
Jul 9 13:01:01.515873 kernel: SELinux: policy capability network_peer_controls=1
Jul 9 13:01:01.515888 kernel: SELinux: policy capability open_perms=1
Jul 9 13:01:01.515904 kernel: SELinux: policy capability extended_socket_class=1
Jul 9 13:01:01.515915 kernel: SELinux: policy capability always_check_network=0
Jul 9 13:01:01.515933 kernel: SELinux: policy capability cgroup_seclabel=1
Jul 9 13:01:01.515944 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jul 9 13:01:01.515959 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jul 9 13:01:01.515972 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jul 9 13:01:01.515984 kernel: SELinux: policy capability userspace_initial_context=0
Jul 9 13:01:01.516001 kernel: audit: type=1403 audit(1752066060.730:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jul 9 13:01:01.516020 systemd[1]: Successfully loaded SELinux policy in 60.728ms.
Jul 9 13:01:01.516049 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.371ms.
Jul 9 13:01:01.516063 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 9 13:01:01.516076 systemd[1]: Detected virtualization kvm.
Jul 9 13:01:01.516095 systemd[1]: Detected architecture x86-64.
Jul 9 13:01:01.516110 systemd[1]: Detected first boot.
Jul 9 13:01:01.516122 systemd[1]: Initializing machine ID from VM UUID.
Jul 9 13:01:01.516141 zram_generator::config[1128]: No configuration found.
Jul 9 13:01:01.516154 kernel: Guest personality initialized and is inactive
Jul 9 13:01:01.516166 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Jul 9 13:01:01.516177 kernel: Initialized host personality
Jul 9 13:01:01.516188 kernel: NET: Registered PF_VSOCK protocol family
Jul 9 13:01:01.516200 systemd[1]: Populated /etc with preset unit settings.
Jul 9 13:01:01.516217 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Jul 9 13:01:01.516230 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jul 9 13:01:01.516243 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jul 9 13:01:01.516255 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jul 9 13:01:01.516268 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jul 9 13:01:01.516280 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jul 9 13:01:01.516293 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jul 9 13:01:01.516305 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jul 9 13:01:01.516320 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jul 9 13:01:01.516332 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jul 9 13:01:01.516345 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jul 9 13:01:01.516435 systemd[1]: Created slice user.slice - User and Session Slice.
Jul 9 13:01:01.516448 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 9 13:01:01.516460 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 9 13:01:01.516473 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jul 9 13:01:01.516485 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jul 9 13:01:01.516497 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jul 9 13:01:01.516513 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 9 13:01:01.516525 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jul 9 13:01:01.516538 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 9 13:01:01.516550 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 9 13:01:01.516562 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jul 9 13:01:01.516574 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jul 9 13:01:01.516586 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jul 9 13:01:01.516598 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jul 9 13:01:01.516615 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 9 13:01:01.516629 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 9 13:01:01.516643 systemd[1]: Reached target slices.target - Slice Units.
Jul 9 13:01:01.516656 systemd[1]: Reached target swap.target - Swaps.
Jul 9 13:01:01.516668 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jul 9 13:01:01.516680 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jul 9 13:01:01.516694 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Jul 9 13:01:01.516706 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 9 13:01:01.516718 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 9 13:01:01.516732 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 9 13:01:01.516744 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jul 9 13:01:01.516756 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jul 9 13:01:01.516768 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jul 9 13:01:01.516780 systemd[1]: Mounting media.mount - External Media Directory...
Jul 9 13:01:01.516793 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 9 13:01:01.516806 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jul 9 13:01:01.516818 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jul 9 13:01:01.516836 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jul 9 13:01:01.516852 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jul 9 13:01:01.516864 systemd[1]: Reached target machines.target - Containers.
Jul 9 13:01:01.516876 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jul 9 13:01:01.516888 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 9 13:01:01.516900 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 9 13:01:01.516912 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jul 9 13:01:01.516924 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 9 13:01:01.516936 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 9 13:01:01.516955 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 9 13:01:01.516968 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jul 9 13:01:01.516980 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 9 13:01:01.516992 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jul 9 13:01:01.517005 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jul 9 13:01:01.517017 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jul 9 13:01:01.517029 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jul 9 13:01:01.517041 systemd[1]: Stopped systemd-fsck-usr.service.
Jul 9 13:01:01.517055 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 9 13:01:01.517069 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 9 13:01:01.517084 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 9 13:01:01.517100 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 9 13:01:01.517116 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jul 9 13:01:01.517128 kernel: ACPI: bus type drm_connector registered
Jul 9 13:01:01.517140 kernel: fuse: init (API version 7.41)
Jul 9 13:01:01.517152 kernel: loop: module loaded
Jul 9 13:01:01.517163 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Jul 9 13:01:01.517175 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 9 13:01:01.517190 systemd[1]: verity-setup.service: Deactivated successfully.
Jul 9 13:01:01.517202 systemd[1]: Stopped verity-setup.service.
Jul 9 13:01:01.517215 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 9 13:01:01.517254 systemd-journald[1203]: Collecting audit messages is disabled.
Jul 9 13:01:01.517279 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jul 9 13:01:01.517292 systemd-journald[1203]: Journal started
Jul 9 13:01:01.517313 systemd-journald[1203]: Runtime Journal (/run/log/journal/63c9a145d78248b0a76c48409d357983) is 6M, max 48.6M, 42.5M free.
Jul 9 13:01:01.532524 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jul 9 13:01:01.532589 systemd[1]: Mounted media.mount - External Media Directory.
Jul 9 13:01:01.532606 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jul 9 13:01:01.532620 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jul 9 13:01:01.532635 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jul 9 13:01:01.283047 systemd[1]: Queued start job for default target multi-user.target.
Jul 9 13:01:01.302514 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jul 9 13:01:01.303038 systemd[1]: systemd-journald.service: Deactivated successfully.
Jul 9 13:01:01.534711 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jul 9 13:01:01.536775 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 9 13:01:01.538532 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 9 13:01:01.540471 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jul 9 13:01:01.540761 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jul 9 13:01:01.542295 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 9 13:01:01.542639 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 9 13:01:01.544186 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 9 13:01:01.544472 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 9 13:01:01.545901 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 9 13:01:01.546151 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 9 13:01:01.547775 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jul 9 13:01:01.548007 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jul 9 13:01:01.549529 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 9 13:01:01.549779 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 9 13:01:01.551332 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 9 13:01:01.552927 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 9 13:01:01.554541 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jul 9 13:01:01.556166 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Jul 9 13:01:01.572049 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 9 13:01:01.574660 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jul 9 13:01:01.576814 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jul 9 13:01:01.577942 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jul 9 13:01:01.577973 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 9 13:01:01.579924 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Jul 9 13:01:01.591450 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jul 9 13:01:01.593804 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 9 13:01:01.595275 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jul 9 13:01:01.597538 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jul 9 13:01:01.600168 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 9 13:01:01.602881 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jul 9 13:01:01.604018 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 9 13:01:01.608586 systemd-journald[1203]: Time spent on flushing to /var/log/journal/63c9a145d78248b0a76c48409d357983 is 21.643ms for 977 entries.
Jul 9 13:01:01.608586 systemd-journald[1203]: System Journal (/var/log/journal/63c9a145d78248b0a76c48409d357983) is 8M, max 195.6M, 187.6M free.
Jul 9 13:01:01.659812 systemd-journald[1203]: Received client request to flush runtime journal.
Jul 9 13:01:01.659863 kernel: loop0: detected capacity change from 0 to 114008
Jul 9 13:01:01.606486 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 9 13:01:01.610882 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jul 9 13:01:01.613551 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 9 13:01:01.617402 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jul 9 13:01:01.620529 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jul 9 13:01:01.631412 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 9 13:01:01.660250 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jul 9 13:01:01.662010 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jul 9 13:01:01.665730 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Jul 9 13:01:01.668523 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jul 9 13:01:01.671577 systemd-tmpfiles[1248]: ACLs are not supported, ignoring.
Jul 9 13:01:01.671886 systemd-tmpfiles[1248]: ACLs are not supported, ignoring.
Jul 9 13:01:01.707335 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jul 9 13:01:01.712642 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 9 13:01:01.715513 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 9 13:01:01.744652 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jul 9 13:01:01.754401 kernel: loop1: detected capacity change from 0 to 221472
Jul 9 13:01:01.764820 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Jul 9 13:01:01.821510 kernel: loop2: detected capacity change from 0 to 146480
Jul 9 13:01:01.864421 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jul 9 13:01:01.871586 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 9 13:01:01.893895 kernel: loop3: detected capacity change from 0 to 114008
Jul 9 13:01:01.932386 kernel: loop4: detected capacity change from 0 to 221472
Jul 9 13:01:01.932645 systemd-tmpfiles[1269]: ACLs are not supported, ignoring.
Jul 9 13:01:01.932974 systemd-tmpfiles[1269]: ACLs are not supported, ignoring.
Jul 9 13:01:01.938902 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 9 13:01:01.948396 kernel: loop5: detected capacity change from 0 to 146480
Jul 9 13:01:01.985987 (sd-merge)[1270]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Jul 9 13:01:01.987459 (sd-merge)[1270]: Merged extensions into '/usr'.
Jul 9 13:01:01.992204 systemd[1]: Reload requested from client PID 1247 ('systemd-sysext') (unit systemd-sysext.service)...
Jul 9 13:01:01.992223 systemd[1]: Reloading...
Jul 9 13:01:02.088268 zram_generator::config[1297]: No configuration found.
Jul 9 13:01:02.243307 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 9 13:01:02.257191 ldconfig[1242]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jul 9 13:01:02.334344 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jul 9 13:01:02.334553 systemd[1]: Reloading finished in 341 ms.
Jul 9 13:01:02.365686 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jul 9 13:01:02.367412 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jul 9 13:01:02.392466 systemd[1]: Starting ensure-sysext.service...
Jul 9 13:01:02.400778 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 9 13:01:02.426246 systemd[1]: Reload requested from client PID 1336 ('systemctl') (unit ensure-sysext.service)...
Jul 9 13:01:02.426263 systemd[1]: Reloading...
Jul 9 13:01:02.433644 systemd-tmpfiles[1337]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Jul 9 13:01:02.434041 systemd-tmpfiles[1337]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Jul 9 13:01:02.434384 systemd-tmpfiles[1337]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jul 9 13:01:02.434764 systemd-tmpfiles[1337]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jul 9 13:01:02.435712 systemd-tmpfiles[1337]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jul 9 13:01:02.435995 systemd-tmpfiles[1337]: ACLs are not supported, ignoring.
Jul 9 13:01:02.436073 systemd-tmpfiles[1337]: ACLs are not supported, ignoring.
Jul 9 13:01:02.440419 systemd-tmpfiles[1337]: Detected autofs mount point /boot during canonicalization of boot.
Jul 9 13:01:02.440433 systemd-tmpfiles[1337]: Skipping /boot
Jul 9 13:01:02.452286 systemd-tmpfiles[1337]: Detected autofs mount point /boot during canonicalization of boot.
Jul 9 13:01:02.452402 systemd-tmpfiles[1337]: Skipping /boot
Jul 9 13:01:02.485422 zram_generator::config[1361]: No configuration found.
Jul 9 13:01:02.598998 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 9 13:01:02.695056 systemd[1]: Reloading finished in 268 ms.
Jul 9 13:01:02.719220 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jul 9 13:01:02.750257 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 9 13:01:02.760042 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jul 9 13:01:02.762822 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jul 9 13:01:02.765757 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jul 9 13:01:02.774738 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 9 13:01:02.778430 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 9 13:01:02.782740 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jul 9 13:01:02.787188 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 9 13:01:02.787400 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 9 13:01:02.794433 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 9 13:01:02.798905 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 9 13:01:02.800135 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 9 13:01:02.802766 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 9 13:01:02.802870 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 9 13:01:02.810270 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jul 9 13:01:02.811577 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 9 13:01:02.813379 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jul 9 13:01:02.818234 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 9 13:01:02.818604 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 9 13:01:02.820852 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 9 13:01:02.821106 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 9 13:01:02.823384 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 9 13:01:02.823691 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 9 13:01:02.830121 systemd-udevd[1407]: Using default interface naming scheme 'v255'.
Jul 9 13:01:02.834763 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 9 13:01:02.835000 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 9 13:01:02.838596 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 9 13:01:02.841682 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 9 13:01:02.846634 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 9 13:01:02.847946 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 9 13:01:02.848095 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 9 13:01:02.850722 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jul 9 13:01:02.851949 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 9 13:01:02.853520 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jul 9 13:01:02.856294 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 9 13:01:02.860697 augenrules[1440]: No rules
Jul 9 13:01:02.870239 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jul 9 13:01:02.873612 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jul 9 13:01:02.875470 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 9 13:01:02.875693 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 9 13:01:02.877915 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 9 13:01:02.878127 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 9 13:01:02.880030 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 9 13:01:02.880240 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 9 13:01:02.888631 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jul 9 13:01:02.890494 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 9 13:01:02.890778 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jul 9 13:01:02.900861 systemd[1]: Finished ensure-sysext.service.
Jul 9 13:01:02.906502 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 9 13:01:02.906732 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 9 13:01:02.909443 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 9 13:01:02.910637 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 9 13:01:02.910670 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 9 13:01:02.912714 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 9 13:01:02.913846 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 9 13:01:02.913910 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 9 13:01:02.916712 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jul 9 13:01:02.917847 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 9 13:01:02.917871 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 9 13:01:02.930609 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 9 13:01:02.931517 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 9 13:01:02.944549 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jul 9 13:01:03.027396 kernel: mousedev: PS/2 mouse device common for all mice
Jul 9 13:01:03.052710 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 9 13:01:03.055493 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jul 9 13:01:03.062393 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4
Jul 9 13:01:03.072412 kernel: ACPI: button: Power Button [PWRF]
Jul 9 13:01:03.078520 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jul 9 13:01:03.101155 systemd-resolved[1406]: Positive Trust Anchors:
Jul 9 13:01:03.101175 systemd-resolved[1406]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 9 13:01:03.101214 systemd-resolved[1406]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 9 13:01:03.113755 systemd-resolved[1406]: Defaulting to hostname 'linux'.
Jul 9 13:01:03.117680 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 9 13:01:03.119186 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 9 13:01:03.126069 systemd-networkd[1485]: lo: Link UP
Jul 9 13:01:03.126552 systemd-networkd[1485]: lo: Gained carrier
Jul 9 13:01:03.133686 systemd-networkd[1485]: Enumeration completed
Jul 9 13:01:03.133774 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 9 13:01:03.135036 systemd-networkd[1485]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 9 13:01:03.135041 systemd-networkd[1485]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 9 13:01:03.135502 systemd[1]: Reached target network.target - Network.
Jul 9 13:01:03.138480 systemd-networkd[1485]: eth0: Link UP
Jul 9 13:01:03.139448 systemd-networkd[1485]: eth0: Gained carrier
Jul 9 13:01:03.139597 systemd-networkd[1485]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 9 13:01:03.144077 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Jul 9 13:01:03.147296 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jul 9 13:01:03.157586 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jul 9 13:01:03.159417 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 9 13:01:03.160690 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jul 9 13:01:03.162064 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jul 9 13:01:03.163437 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Jul 9 13:01:03.164704 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jul 9 13:01:03.166433 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jul 9 13:01:03.166462 systemd[1]: Reached target paths.target - Path Units.
Jul 9 13:01:03.167513 systemd[1]: Reached target time-set.target - System Time Set.
Jul 9 13:01:03.170631 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jul 9 13:01:03.172105 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jul 9 13:01:03.174430 systemd[1]: Reached target timers.target - Timer Units.
Jul 9 13:01:03.176712 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jul 9 13:01:03.180140 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jul 9 13:01:03.187050 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Jul 9 13:01:03.188885 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Jul 9 13:01:03.190422 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Jul 9 13:01:03.195661 systemd-networkd[1485]: eth0: DHCPv4 address 10.0.0.14/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jul 9 13:01:03.197204 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jul 9 13:01:03.198836 systemd-timesyncd[1487]: Network configuration changed, trying to establish connection.
Jul 9 13:01:03.199233 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Jul 9 13:01:03.930940 systemd-resolved[1406]: Clock change detected. Flushing caches.
Jul 9 13:01:03.931044 systemd-timesyncd[1487]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Jul 9 13:01:03.931100 systemd-timesyncd[1487]: Initial clock synchronization to Wed 2025-07-09 13:01:03.930868 UTC.
Jul 9 13:01:03.932705 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jul 9 13:01:03.934708 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Jul 9 13:01:03.938856 systemd[1]: Reached target sockets.target - Socket Units.
Jul 9 13:01:03.941397 systemd[1]: Reached target basic.target - Basic System.
Jul 9 13:01:03.942393 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jul 9 13:01:03.942489 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jul 9 13:01:03.945617 systemd[1]: Starting containerd.service - containerd container runtime...
Jul 9 13:01:03.947994 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jul 9 13:01:03.951591 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jul 9 13:01:03.955882 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jul 9 13:01:03.960627 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jul 9 13:01:03.962456 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jul 9 13:01:03.963740 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Jul 9 13:01:03.968578 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Jul 9 13:01:03.969350 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Jul 9 13:01:03.971618 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jul 9 13:01:03.974132 jq[1527]: false
Jul 9 13:01:03.975245 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jul 9 13:01:03.981737 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jul 9 13:01:03.985654 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jul 9 13:01:03.990505 google_oslogin_nss_cache[1529]: oslogin_cache_refresh[1529]: Refreshing passwd entry cache
Jul 9 13:01:03.988923 oslogin_cache_refresh[1529]: Refreshing passwd entry cache
Jul 9 13:01:03.991674 systemd[1]: Starting systemd-logind.service - User Login Management...
Jul 9 13:01:03.996310 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jul 9 13:01:03.996823 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jul 9 13:01:03.999203 oslogin_cache_refresh[1529]: Failure getting users, quitting
Jul 9 13:01:04.000799 google_oslogin_nss_cache[1529]: oslogin_cache_refresh[1529]: Failure getting users, quitting
Jul 9 13:01:04.000799 google_oslogin_nss_cache[1529]: oslogin_cache_refresh[1529]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Jul 9 13:01:04.000799 google_oslogin_nss_cache[1529]: oslogin_cache_refresh[1529]: Refreshing group entry cache
Jul 9 13:01:03.997930 systemd[1]: Starting update-engine.service - Update Engine...
Jul 9 13:01:03.999252 oslogin_cache_refresh[1529]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Jul 9 13:01:03.999395 oslogin_cache_refresh[1529]: Refreshing group entry cache
Jul 9 13:01:04.001444 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jul 9 13:01:04.006416 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jul 9 13:01:04.009419 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jul 9 13:01:04.021424 google_oslogin_nss_cache[1529]: oslogin_cache_refresh[1529]: Failure getting groups, quitting
Jul 9 13:01:04.021424 google_oslogin_nss_cache[1529]: oslogin_cache_refresh[1529]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Jul 9 13:01:04.015697 oslogin_cache_refresh[1529]: Failure getting groups, quitting
Jul 9 13:01:04.016657 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jul 9 13:01:04.015714 oslogin_cache_refresh[1529]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Jul 9 13:01:04.017845 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Jul 9 13:01:04.019453 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Jul 9 13:01:04.026534 extend-filesystems[1528]: Found /dev/vda6
Jul 9 13:01:04.039260 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jul 9 13:01:04.040069 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jul 9 13:01:04.043930 jq[1540]: true
Jul 9 13:01:04.048696 extend-filesystems[1528]: Found /dev/vda9
Jul 9 13:01:04.057530 kernel: kvm_amd: TSC scaling supported
Jul 9 13:01:04.057565 kernel: kvm_amd: Nested Virtualization enabled
Jul 9 13:01:04.057579 kernel: kvm_amd: Nested Paging enabled
Jul 9 13:01:04.057591 kernel: kvm_amd: LBR virtualization supported
Jul 9 13:01:04.059688 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Jul 9 13:01:04.060135 kernel: kvm_amd: Virtual GIF supported
Jul 9 13:01:04.060436 tar[1542]: linux-amd64/helm
Jul 9 13:01:04.074672 extend-filesystems[1528]: Checking size of /dev/vda9
Jul 9 13:01:04.076497 update_engine[1539]: I20250709 13:01:04.072341 1539 main.cc:92] Flatcar Update Engine starting
Jul 9 13:01:04.070978 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 9 13:01:04.085212 jq[1557]: true
Jul 9 13:01:04.105359 systemd[1]: motdgen.service: Deactivated successfully.
Jul 9 13:01:04.105873 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jul 9 13:01:04.132651 dbus-daemon[1525]: [system] SELinux support is enabled
Jul 9 13:01:04.132869 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jul 9 13:01:04.140262 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jul 9 13:01:04.140422 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jul 9 13:01:04.142145 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jul 9 13:01:04.142252 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jul 9 13:01:04.143519 (ntainerd)[1563]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jul 9 13:01:04.154737 systemd[1]: Started update-engine.service - Update Engine.
Jul 9 13:01:04.161113 update_engine[1539]: I20250709 13:01:04.161057 1539 update_check_scheduler.cc:74] Next update check in 5m37s
Jul 9 13:01:04.175662 extend-filesystems[1528]: Resized partition /dev/vda9
Jul 9 13:01:04.179722 extend-filesystems[1583]: resize2fs 1.47.2 (1-Jan-2025)
Jul 9 13:01:04.181049 systemd-logind[1538]: Watching system buttons on /dev/input/event2 (Power Button)
Jul 9 13:01:04.181730 systemd-logind[1538]: New seat seat0.
Jul 9 13:01:04.188405 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Jul 9 13:01:04.189139 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jul 9 13:01:04.190940 systemd[1]: Started systemd-logind.service - User Login Management.
Jul 9 13:01:04.251632 systemd-logind[1538]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jul 9 13:01:04.280530 sshd_keygen[1559]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jul 9 13:01:04.293684 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Jul 9 13:01:04.305449 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jul 9 13:01:04.317705 extend-filesystems[1583]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jul 9 13:01:04.317705 extend-filesystems[1583]: old_desc_blocks = 1, new_desc_blocks = 1
Jul 9 13:01:04.317705 extend-filesystems[1583]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Jul 9 13:01:04.467396 bash[1589]: Updated "/home/core/.ssh/authorized_keys"
Jul 9 13:01:04.343333 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jul 9 13:01:04.467776 extend-filesystems[1528]: Resized filesystem in /dev/vda9
Jul 9 13:01:04.343647 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jul 9 13:01:04.473800 locksmithd[1578]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jul 9 13:01:04.518795 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jul 9 13:01:04.525886 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 9 13:01:04.549179 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jul 9 13:01:04.551880 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Jul 9 13:01:04.606433 kernel: EDAC MC: Ver: 3.0.0
Jul 9 13:01:04.636140 systemd[1]: issuegen.service: Deactivated successfully.
Jul 9 13:01:04.636502 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jul 9 13:01:04.665400 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jul 9 13:01:04.743312 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jul 9 13:01:04.746937 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jul 9 13:01:04.749281 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Jul 9 13:01:04.750778 systemd[1]: Reached target getty.target - Login Prompts.
Jul 9 13:01:04.945146 containerd[1563]: time="2025-07-09T13:01:04Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Jul 9 13:01:04.947583 containerd[1563]: time="2025-07-09T13:01:04.947528480Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5
Jul 9 13:01:04.958924 containerd[1563]: time="2025-07-09T13:01:04.958865078Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="12.102µs"
Jul 9 13:01:04.958924 containerd[1563]: time="2025-07-09T13:01:04.958906115Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Jul 9 13:01:04.958993 containerd[1563]: time="2025-07-09T13:01:04.958940019Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Jul 9 13:01:04.959159 containerd[1563]: time="2025-07-09T13:01:04.959131518Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Jul 9 13:01:04.959159 containerd[1563]: time="2025-07-09T13:01:04.959152447Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Jul 9 13:01:04.959202 containerd[1563]: time="2025-07-09T13:01:04.959180750Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jul 9 13:01:04.959274 containerd[1563]: time="2025-07-09T13:01:04.959247986Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jul 9 13:01:04.959309 containerd[1563]: time="2025-07-09T13:01:04.959295315Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jul 9 13:01:04.959723 containerd[1563]: time="2025-07-09T13:01:04.959683303Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jul 9 13:01:04.959723 containerd[1563]: time="2025-07-09T13:01:04.959704844Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jul 9 13:01:04.959723 containerd[1563]: time="2025-07-09T13:01:04.959716365Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jul 9 13:01:04.959723 containerd[1563]: time="2025-07-09T13:01:04.959724681Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Jul 9 13:01:04.960047 containerd[1563]: time="2025-07-09T13:01:04.959999556Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Jul 9 13:01:04.960622 containerd[1563]: time="2025-07-09T13:01:04.960573643Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jul 9 13:01:04.960840 containerd[1563]: time="2025-07-09T13:01:04.960818974Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jul 9 13:01:04.960935 containerd[1563]: time="2025-07-09T13:01:04.960900236Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Jul 9 13:01:04.961089 containerd[1563]: time="2025-07-09T13:01:04.961067580Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Jul 9 13:01:04.961828 containerd[1563]: time="2025-07-09T13:01:04.961788222Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Jul 9 13:01:04.961940 containerd[1563]: time="2025-07-09T13:01:04.961909850Z" level=info msg="metadata content store policy set" policy=shared
Jul 9 13:01:05.033737 containerd[1563]: time="2025-07-09T13:01:05.033671831Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Jul 9 13:01:05.033818 containerd[1563]: time="2025-07-09T13:01:05.033783450Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Jul 9 13:01:05.033818 containerd[1563]: time="2025-07-09T13:01:05.033808557Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Jul 9 13:01:05.033919 containerd[1563]: time="2025-07-09T13:01:05.033827323Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Jul 9 13:01:05.033919 containerd[1563]: time="2025-07-09T13:01:05.033849985Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Jul 9 13:01:05.033919 containerd[1563]: time="2025-07-09T13:01:05.033866336Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Jul 9 13:01:05.033919 containerd[1563]: time="2025-07-09T13:01:05.033885852Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Jul 9 13:01:05.034011 containerd[1563]: time="2025-07-09T13:01:05.033926549Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Jul 9 13:01:05.034011 containerd[1563]: time="2025-07-09T13:01:05.033949131Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Jul 9 13:01:05.034011 containerd[1563]: time="2025-07-09T13:01:05.033965011Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Jul 9 13:01:05.034011 containerd[1563]: time="2025-07-09T13:01:05.033978727Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Jul 9 13:01:05.034011 containerd[1563]: time="2025-07-09T13:01:05.033996360Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Jul 9 13:01:05.034273 containerd[1563]: time="2025-07-09T13:01:05.034248062Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Jul 9 13:01:05.034314 containerd[1563]: time="2025-07-09T13:01:05.034294800Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Jul 9 13:01:05.034345 containerd[1563]: time="2025-07-09T13:01:05.034318915Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Jul 9 13:01:05.034365 containerd[1563]: time="2025-07-09T13:01:05.034346416Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Jul 9 13:01:05.034365 containerd[1563]: time="2025-07-09T13:01:05.034360553Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Jul 9 13:01:05.034429 containerd[1563]: time="2025-07-09T13:01:05.034399777Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Jul 9 13:01:05.034465 containerd[1563]: time="2025-07-09T13:01:05.034445913Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Jul 9 13:01:05.034493 containerd[1563]: time="2025-07-09T13:01:05.034466201Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Jul 9 13:01:05.034493 containerd[1563]: time="2025-07-09T13:01:05.034482592Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Jul 9 13:01:05.034536 containerd[1563]: time="2025-07-09T13:01:05.034495426Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Jul 9 13:01:05.034536 containerd[1563]: time="2025-07-09T13:01:05.034510504Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Jul 9 13:01:05.034650 containerd[1563]: time="2025-07-09T13:01:05.034627093Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Jul 9 13:01:05.034673 containerd[1563]: time="2025-07-09T13:01:05.034652370Z" level=info msg="Start snapshots syncer"
Jul 9 13:01:05.034710 containerd[1563]: time="2025-07-09T13:01:05.034693668Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Jul 9 13:01:05.035102 containerd[1563]: time="2025-07-09T13:01:05.035046991Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Jul 9 13:01:05.035245 containerd[1563]: time="2025-07-09T13:01:05.035182826Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Jul 9 13:01:05.035295 containerd[1563]: time="2025-07-09T13:01:05.035276782Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Jul 9 13:01:05.035466 containerd[1563]: time="2025-07-09T13:01:05.035443905Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Jul 9 13:01:05.035490 containerd[1563]: time="2025-07-09T13:01:05.035474603Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Jul 9 13:01:05.035522 containerd[1563]: time="2025-07-09T13:01:05.035503547Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Jul 9 13:01:05.035557 containerd[1563]: time="2025-07-09T13:01:05.035539605Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Jul 9 13:01:05.035580 containerd[1563]: time="2025-07-09T13:01:05.035562267Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Jul 9 13:01:05.035580 containerd[1563]: time="2025-07-09T13:01:05.035575633Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Jul 9 13:01:05.035617 containerd[1563]: time="2025-07-09T13:01:05.035602222Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Jul 9 13:01:05.035651 containerd[1563]: time="2025-07-09T13:01:05.035633641Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Jul 9 13:01:05.035673 containerd[1563]: time="2025-07-09T13:01:05.035653509Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Jul 9 13:01:05.035698 containerd[1563]: time="2025-07-09T13:01:05.035669388Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Jul 9 13:01:05.035779 containerd[1563]: time="2025-07-09T13:01:05.035757424Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Jul 9 13:01:05.035813 containerd[1563]: time="2025-07-09T13:01:05.035781218Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Jul 9 13:01:05.035813 containerd[1563]: time="2025-07-09T13:01:05.035793331Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Jul 9 13:01:05.035813 containerd[1563]: time="2025-07-09T13:01:05.035807638Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Jul 9 13:01:05.035880 containerd[1563]: time="2025-07-09T13:01:05.035817707Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Jul 9 13:01:05.035880 containerd[1563]: time="2025-07-09T13:01:05.035830140Z" level=info msg="loading plugin"
id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jul 9 13:01:05.035880 containerd[1563]: time="2025-07-09T13:01:05.035850648Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jul 9 13:01:05.035880 containerd[1563]: time="2025-07-09T13:01:05.035874072Z" level=info msg="runtime interface created" Jul 9 13:01:05.035880 containerd[1563]: time="2025-07-09T13:01:05.035880675Z" level=info msg="created NRI interface" Jul 9 13:01:05.035981 containerd[1563]: time="2025-07-09T13:01:05.035892457Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jul 9 13:01:05.035981 containerd[1563]: time="2025-07-09T13:01:05.035906724Z" level=info msg="Connect containerd service" Jul 9 13:01:05.035981 containerd[1563]: time="2025-07-09T13:01:05.035934406Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 9 13:01:05.039033 containerd[1563]: time="2025-07-09T13:01:05.038991301Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 9 13:01:05.193400 containerd[1563]: time="2025-07-09T13:01:05.192698556Z" level=info msg="Start subscribing containerd event" Jul 9 13:01:05.193400 containerd[1563]: time="2025-07-09T13:01:05.192817540Z" level=info msg="Start recovering state" Jul 9 13:01:05.193400 containerd[1563]: time="2025-07-09T13:01:05.192997467Z" level=info msg="Start event monitor" Jul 9 13:01:05.193400 containerd[1563]: time="2025-07-09T13:01:05.193011904Z" level=info msg="Start cni network conf syncer for default" Jul 9 13:01:05.193400 containerd[1563]: time="2025-07-09T13:01:05.193026552Z" level=info msg="Start streaming server" Jul 9 13:01:05.193400 containerd[1563]: time="2025-07-09T13:01:05.193037342Z" level=info msg="Registered namespace \"k8s.io\" with NRI" 
Jul 9 13:01:05.193400 containerd[1563]: time="2025-07-09T13:01:05.193044666Z" level=info msg="runtime interface starting up..." Jul 9 13:01:05.193400 containerd[1563]: time="2025-07-09T13:01:05.193050647Z" level=info msg="starting plugins..." Jul 9 13:01:05.193400 containerd[1563]: time="2025-07-09T13:01:05.193068821Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jul 9 13:01:05.193400 containerd[1563]: time="2025-07-09T13:01:05.193157217Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 9 13:01:05.193400 containerd[1563]: time="2025-07-09T13:01:05.193232708Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 9 13:01:05.193400 containerd[1563]: time="2025-07-09T13:01:05.193302880Z" level=info msg="containerd successfully booted in 0.248890s" Jul 9 13:01:05.193580 systemd[1]: Started containerd.service - containerd container runtime. Jul 9 13:01:05.198894 tar[1542]: linux-amd64/LICENSE Jul 9 13:01:05.200706 tar[1542]: linux-amd64/README.md Jul 9 13:01:05.233587 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 9 13:01:05.881871 systemd-networkd[1485]: eth0: Gained IPv6LL Jul 9 13:01:05.885725 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 9 13:01:05.887841 systemd[1]: Reached target network-online.target - Network is Online. Jul 9 13:01:05.890748 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jul 9 13:01:05.893407 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 9 13:01:05.895737 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 9 13:01:05.927771 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 9 13:01:05.929784 systemd[1]: coreos-metadata.service: Deactivated successfully. Jul 9 13:01:05.930116 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. 
Jul 9 13:01:05.932763 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 9 13:01:07.350177 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 9 13:01:07.352726 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 9 13:01:07.354340 systemd[1]: Startup finished in 3.301s (kernel) + 6.040s (initrd) + 5.953s (userspace) = 15.294s. Jul 9 13:01:07.367860 (kubelet)[1668]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 9 13:01:08.020890 kubelet[1668]: E0709 13:01:08.020758 1668 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 9 13:01:08.024745 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 9 13:01:08.024956 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 9 13:01:08.025365 systemd[1]: kubelet.service: Consumed 1.901s CPU time, 267.1M memory peak. Jul 9 13:01:08.679316 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 9 13:01:08.680711 systemd[1]: Started sshd@0-10.0.0.14:22-10.0.0.1:48728.service - OpenSSH per-connection server daemon (10.0.0.1:48728). Jul 9 13:01:08.758956 sshd[1681]: Accepted publickey for core from 10.0.0.1 port 48728 ssh2: RSA SHA256:Ehsv9iPAmIJbEnlorOi35d2Kryfd05fXf88yv2g5tlI Jul 9 13:01:08.760974 sshd-session[1681]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 13:01:08.774432 systemd-logind[1538]: New session 1 of user core. Jul 9 13:01:08.775897 systemd[1]: Created slice user-500.slice - User Slice of UID 500. 
Jul 9 13:01:08.777306 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 9 13:01:08.800874 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 9 13:01:08.803656 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 9 13:01:08.824601 (systemd)[1686]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 9 13:01:08.828010 systemd-logind[1538]: New session c1 of user core. Jul 9 13:01:08.980645 systemd[1686]: Queued start job for default target default.target. Jul 9 13:01:08.996676 systemd[1686]: Created slice app.slice - User Application Slice. Jul 9 13:01:08.996704 systemd[1686]: Reached target paths.target - Paths. Jul 9 13:01:08.996747 systemd[1686]: Reached target timers.target - Timers. Jul 9 13:01:08.998332 systemd[1686]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 9 13:01:09.009724 systemd[1686]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 9 13:01:09.009880 systemd[1686]: Reached target sockets.target - Sockets. Jul 9 13:01:09.009933 systemd[1686]: Reached target basic.target - Basic System. Jul 9 13:01:09.009977 systemd[1686]: Reached target default.target - Main User Target. Jul 9 13:01:09.010014 systemd[1686]: Startup finished in 172ms. Jul 9 13:01:09.010251 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 9 13:01:09.011883 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 9 13:01:09.078117 systemd[1]: Started sshd@1-10.0.0.14:22-10.0.0.1:48736.service - OpenSSH per-connection server daemon (10.0.0.1:48736). Jul 9 13:01:09.146035 sshd[1697]: Accepted publickey for core from 10.0.0.1 port 48736 ssh2: RSA SHA256:Ehsv9iPAmIJbEnlorOi35d2Kryfd05fXf88yv2g5tlI Jul 9 13:01:09.147403 sshd-session[1697]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 13:01:09.152321 systemd-logind[1538]: New session 2 of user core. 
Jul 9 13:01:09.165512 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 9 13:01:09.222006 sshd[1700]: Connection closed by 10.0.0.1 port 48736 Jul 9 13:01:09.222531 sshd-session[1697]: pam_unix(sshd:session): session closed for user core Jul 9 13:01:09.236502 systemd[1]: sshd@1-10.0.0.14:22-10.0.0.1:48736.service: Deactivated successfully. Jul 9 13:01:09.238593 systemd[1]: session-2.scope: Deactivated successfully. Jul 9 13:01:09.239418 systemd-logind[1538]: Session 2 logged out. Waiting for processes to exit. Jul 9 13:01:09.242610 systemd[1]: Started sshd@2-10.0.0.14:22-10.0.0.1:48744.service - OpenSSH per-connection server daemon (10.0.0.1:48744). Jul 9 13:01:09.243172 systemd-logind[1538]: Removed session 2. Jul 9 13:01:09.309935 sshd[1706]: Accepted publickey for core from 10.0.0.1 port 48744 ssh2: RSA SHA256:Ehsv9iPAmIJbEnlorOi35d2Kryfd05fXf88yv2g5tlI Jul 9 13:01:09.311511 sshd-session[1706]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 13:01:09.316410 systemd-logind[1538]: New session 3 of user core. Jul 9 13:01:09.332513 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 9 13:01:09.381818 sshd[1710]: Connection closed by 10.0.0.1 port 48744 Jul 9 13:01:09.382182 sshd-session[1706]: pam_unix(sshd:session): session closed for user core Jul 9 13:01:09.391720 systemd[1]: sshd@2-10.0.0.14:22-10.0.0.1:48744.service: Deactivated successfully. Jul 9 13:01:09.393867 systemd[1]: session-3.scope: Deactivated successfully. Jul 9 13:01:09.394634 systemd-logind[1538]: Session 3 logged out. Waiting for processes to exit. Jul 9 13:01:09.397792 systemd[1]: Started sshd@3-10.0.0.14:22-10.0.0.1:48754.service - OpenSSH per-connection server daemon (10.0.0.1:48754). Jul 9 13:01:09.398414 systemd-logind[1538]: Removed session 3. 
Jul 9 13:01:09.453522 sshd[1716]: Accepted publickey for core from 10.0.0.1 port 48754 ssh2: RSA SHA256:Ehsv9iPAmIJbEnlorOi35d2Kryfd05fXf88yv2g5tlI Jul 9 13:01:09.454801 sshd-session[1716]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 13:01:09.459252 systemd-logind[1538]: New session 4 of user core. Jul 9 13:01:09.471518 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 9 13:01:09.550936 systemd[1]: Started sshd@4-10.0.0.14:22-10.0.0.1:48764.service - OpenSSH per-connection server daemon (10.0.0.1:48764). Jul 9 13:01:09.601927 sshd[1722]: Accepted publickey for core from 10.0.0.1 port 48764 ssh2: RSA SHA256:Ehsv9iPAmIJbEnlorOi35d2Kryfd05fXf88yv2g5tlI Jul 9 13:01:09.603985 sshd-session[1722]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 13:01:09.609348 systemd-logind[1538]: New session 5 of user core. Jul 9 13:01:09.623536 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 9 13:01:09.683176 sudo[1726]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 9 13:01:09.683546 sudo[1726]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 9 13:01:09.712069 sudo[1726]: pam_unix(sudo:session): session closed for user root Jul 9 13:01:09.734341 sshd[1719]: Connection closed by 10.0.0.1 port 48754 Jul 9 13:01:09.734762 sshd-session[1716]: pam_unix(sshd:session): session closed for user core Jul 9 13:01:09.738607 systemd[1]: Started sshd@5-10.0.0.14:22-10.0.0.1:48778.service - OpenSSH per-connection server daemon (10.0.0.1:48778). Jul 9 13:01:09.739168 systemd[1]: sshd@3-10.0.0.14:22-10.0.0.1:48754.service: Deactivated successfully. Jul 9 13:01:09.741182 systemd[1]: session-4.scope: Deactivated successfully. Jul 9 13:01:09.741973 systemd-logind[1538]: Session 4 logged out. Waiting for processes to exit. Jul 9 13:01:09.744353 systemd-logind[1538]: Removed session 4. 
Jul 9 13:01:09.794313 sshd[1729]: Accepted publickey for core from 10.0.0.1 port 48778 ssh2: RSA SHA256:Ehsv9iPAmIJbEnlorOi35d2Kryfd05fXf88yv2g5tlI Jul 9 13:01:09.795673 sshd-session[1729]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 13:01:09.801182 systemd-logind[1538]: New session 6 of user core. Jul 9 13:01:09.820710 systemd[1]: Started session-6.scope - Session 6 of User core. Jul 9 13:01:09.876634 sudo[1737]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 9 13:01:09.876962 sudo[1737]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 9 13:01:09.906895 sudo[1737]: pam_unix(sudo:session): session closed for user root Jul 9 13:01:09.913756 sudo[1736]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jul 9 13:01:09.914062 sudo[1736]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 9 13:01:09.925104 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 9 13:01:09.926390 sshd[1725]: Connection closed by 10.0.0.1 port 48764 Jul 9 13:01:09.926764 sshd-session[1722]: pam_unix(sshd:session): session closed for user core Jul 9 13:01:09.931997 systemd[1]: sshd@4-10.0.0.14:22-10.0.0.1:48764.service: Deactivated successfully. Jul 9 13:01:09.944868 systemd[1]: session-5.scope: Deactivated successfully. Jul 9 13:01:09.945665 systemd-logind[1538]: Session 5 logged out. Waiting for processes to exit. Jul 9 13:01:09.947314 systemd-logind[1538]: Removed session 5. Jul 9 13:01:09.972650 augenrules[1762]: No rules Jul 9 13:01:09.973953 systemd[1]: audit-rules.service: Deactivated successfully. Jul 9 13:01:09.974292 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
Jul 9 13:01:09.975451 sudo[1736]: pam_unix(sudo:session): session closed for user root Jul 9 13:01:09.993022 systemd[1]: Started sshd@6-10.0.0.14:22-10.0.0.1:48786.service - OpenSSH per-connection server daemon (10.0.0.1:48786). Jul 9 13:01:10.039576 sshd[1768]: Accepted publickey for core from 10.0.0.1 port 48786 ssh2: RSA SHA256:Ehsv9iPAmIJbEnlorOi35d2Kryfd05fXf88yv2g5tlI Jul 9 13:01:10.041092 sshd-session[1768]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 13:01:10.045653 systemd-logind[1538]: New session 7 of user core. Jul 9 13:01:10.062526 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 9 13:01:10.116854 sudo[1772]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 9 13:01:10.117176 sudo[1772]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 9 13:01:10.190474 sshd[1735]: Connection closed by 10.0.0.1 port 48778 Jul 9 13:01:10.190879 sshd-session[1729]: pam_unix(sshd:session): session closed for user core Jul 9 13:01:10.197027 systemd[1]: sshd@5-10.0.0.14:22-10.0.0.1:48778.service: Deactivated successfully. Jul 9 13:01:10.199209 systemd[1]: session-6.scope: Deactivated successfully. Jul 9 13:01:10.200490 systemd-logind[1538]: Session 6 logged out. Waiting for processes to exit. Jul 9 13:01:10.202418 systemd-logind[1538]: Removed session 6. Jul 9 13:01:10.563718 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Jul 9 13:01:10.591701 (dockerd)[1795]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 9 13:01:11.026097 dockerd[1795]: time="2025-07-09T13:01:11.026013852Z" level=info msg="Starting up" Jul 9 13:01:11.027019 dockerd[1795]: time="2025-07-09T13:01:11.026956711Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jul 9 13:01:11.058883 dockerd[1795]: time="2025-07-09T13:01:11.058777529Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jul 9 13:01:11.573283 dockerd[1795]: time="2025-07-09T13:01:11.573217405Z" level=info msg="Loading containers: start." Jul 9 13:01:11.584411 kernel: Initializing XFRM netlink socket Jul 9 13:01:11.860334 systemd-networkd[1485]: docker0: Link UP Jul 9 13:01:11.865912 dockerd[1795]: time="2025-07-09T13:01:11.865870072Z" level=info msg="Loading containers: done." Jul 9 13:01:11.883458 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck114369482-merged.mount: Deactivated successfully. 
Jul 9 13:01:11.887538 dockerd[1795]: time="2025-07-09T13:01:11.887495604Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 9 13:01:11.887611 dockerd[1795]: time="2025-07-09T13:01:11.887581204Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jul 9 13:01:11.887695 dockerd[1795]: time="2025-07-09T13:01:11.887674449Z" level=info msg="Initializing buildkit" Jul 9 13:01:11.917405 dockerd[1795]: time="2025-07-09T13:01:11.917328953Z" level=info msg="Completed buildkit initialization" Jul 9 13:01:11.921700 dockerd[1795]: time="2025-07-09T13:01:11.921660730Z" level=info msg="Daemon has completed initialization" Jul 9 13:01:11.921803 dockerd[1795]: time="2025-07-09T13:01:11.921735209Z" level=info msg="API listen on /run/docker.sock" Jul 9 13:01:11.921861 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 9 13:01:12.926867 containerd[1563]: time="2025-07-09T13:01:12.926811705Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\"" Jul 9 13:01:13.540178 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2594944489.mount: Deactivated successfully. 
Jul 9 13:01:16.050398 containerd[1563]: time="2025-07-09T13:01:16.050283511Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 13:01:16.050952 containerd[1563]: time="2025-07-09T13:01:16.050882805Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.10: active requests=0, bytes read=28077744" Jul 9 13:01:16.052135 containerd[1563]: time="2025-07-09T13:01:16.052088477Z" level=info msg="ImageCreate event name:\"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 13:01:16.054752 containerd[1563]: time="2025-07-09T13:01:16.054708352Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 13:01:16.055538 containerd[1563]: time="2025-07-09T13:01:16.055490850Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.10\" with image id \"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\", size \"28074544\" in 3.128626497s" Jul 9 13:01:16.055538 containerd[1563]: time="2025-07-09T13:01:16.055535173Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\" returns image reference \"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\"" Jul 9 13:01:16.056715 containerd[1563]: time="2025-07-09T13:01:16.056654934Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\"" Jul 9 13:01:17.800351 containerd[1563]: time="2025-07-09T13:01:17.800282614Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.10\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 13:01:17.801065 containerd[1563]: time="2025-07-09T13:01:17.801038101Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.10: active requests=0, bytes read=24713294" Jul 9 13:01:17.802186 containerd[1563]: time="2025-07-09T13:01:17.802155127Z" level=info msg="ImageCreate event name:\"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 13:01:17.804802 containerd[1563]: time="2025-07-09T13:01:17.804754684Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 13:01:17.805585 containerd[1563]: time="2025-07-09T13:01:17.805540398Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.10\" with image id \"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\", size \"26315128\" in 1.748833216s" Jul 9 13:01:17.805626 containerd[1563]: time="2025-07-09T13:01:17.805593217Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\" returns image reference \"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\"" Jul 9 13:01:17.806168 containerd[1563]: time="2025-07-09T13:01:17.806135775Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\"" Jul 9 13:01:18.265261 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 9 13:01:18.267558 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 9 13:01:18.603531 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 9 13:01:18.621813 (kubelet)[2083]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 9 13:01:18.690851 kubelet[2083]: E0709 13:01:18.690768 2083 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 9 13:01:18.698726 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 9 13:01:18.698968 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 9 13:01:18.699397 systemd[1]: kubelet.service: Consumed 324ms CPU time, 112.9M memory peak. Jul 9 13:01:19.535400 containerd[1563]: time="2025-07-09T13:01:19.535324526Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 13:01:19.536071 containerd[1563]: time="2025-07-09T13:01:19.535975899Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.10: active requests=0, bytes read=18783671" Jul 9 13:01:19.537226 containerd[1563]: time="2025-07-09T13:01:19.537176451Z" level=info msg="ImageCreate event name:\"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 13:01:19.540032 containerd[1563]: time="2025-07-09T13:01:19.540000529Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 13:01:19.540845 containerd[1563]: time="2025-07-09T13:01:19.540803836Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.10\" with image id 
\"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\", size \"20385523\" in 1.734636252s" Jul 9 13:01:19.540880 containerd[1563]: time="2025-07-09T13:01:19.540849542Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\" returns image reference \"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\"" Jul 9 13:01:19.541446 containerd[1563]: time="2025-07-09T13:01:19.541411987Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\"" Jul 9 13:01:20.626105 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1475249840.mount: Deactivated successfully. Jul 9 13:01:21.070237 containerd[1563]: time="2025-07-09T13:01:21.070147975Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 13:01:21.070956 containerd[1563]: time="2025-07-09T13:01:21.070922608Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.10: active requests=0, bytes read=30383943" Jul 9 13:01:21.072099 containerd[1563]: time="2025-07-09T13:01:21.072065011Z" level=info msg="ImageCreate event name:\"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 13:01:21.074170 containerd[1563]: time="2025-07-09T13:01:21.074121339Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 13:01:21.074585 containerd[1563]: time="2025-07-09T13:01:21.074552879Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.10\" with image id \"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\", repo tag 
\"registry.k8s.io/kube-proxy:v1.31.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\", size \"30382962\" in 1.53311353s" Jul 9 13:01:21.074626 containerd[1563]: time="2025-07-09T13:01:21.074585460Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\" returns image reference \"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\"" Jul 9 13:01:21.075198 containerd[1563]: time="2025-07-09T13:01:21.075150450Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jul 9 13:01:21.579460 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1946416497.mount: Deactivated successfully. Jul 9 13:01:22.696543 containerd[1563]: time="2025-07-09T13:01:22.696452789Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 13:01:22.697361 containerd[1563]: time="2025-07-09T13:01:22.697295941Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Jul 9 13:01:22.698471 containerd[1563]: time="2025-07-09T13:01:22.698433094Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 13:01:22.701652 containerd[1563]: time="2025-07-09T13:01:22.701587392Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 13:01:22.702776 containerd[1563]: time="2025-07-09T13:01:22.702738151Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest 
\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.627549579s" Jul 9 13:01:22.702776 containerd[1563]: time="2025-07-09T13:01:22.702770020Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jul 9 13:01:22.703298 containerd[1563]: time="2025-07-09T13:01:22.703270469Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 9 13:01:23.192631 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3872119505.mount: Deactivated successfully. Jul 9 13:01:23.199895 containerd[1563]: time="2025-07-09T13:01:23.199842965Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 9 13:01:23.200645 containerd[1563]: time="2025-07-09T13:01:23.200586329Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jul 9 13:01:23.201859 containerd[1563]: time="2025-07-09T13:01:23.201807120Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 9 13:01:23.204030 containerd[1563]: time="2025-07-09T13:01:23.203993932Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 9 13:01:23.204535 containerd[1563]: time="2025-07-09T13:01:23.204507175Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag 
\"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 501.206278ms" Jul 9 13:01:23.204615 containerd[1563]: time="2025-07-09T13:01:23.204534957Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jul 9 13:01:23.205226 containerd[1563]: time="2025-07-09T13:01:23.205179657Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Jul 9 13:01:23.741886 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2546083468.mount: Deactivated successfully. Jul 9 13:01:25.481238 containerd[1563]: time="2025-07-09T13:01:25.481159928Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 13:01:25.481941 containerd[1563]: time="2025-07-09T13:01:25.481892452Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56780013" Jul 9 13:01:25.483011 containerd[1563]: time="2025-07-09T13:01:25.482982527Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 13:01:25.486024 containerd[1563]: time="2025-07-09T13:01:25.485978308Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 13:01:25.487034 containerd[1563]: time="2025-07-09T13:01:25.486985407Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size 
\"56909194\" in 2.281772117s" Jul 9 13:01:25.487034 containerd[1563]: time="2025-07-09T13:01:25.487018339Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Jul 9 13:01:28.123159 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 9 13:01:28.123440 systemd[1]: kubelet.service: Consumed 324ms CPU time, 112.9M memory peak. Jul 9 13:01:28.127316 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 9 13:01:28.166597 systemd[1]: Reload requested from client PID 2241 ('systemctl') (unit session-7.scope)... Jul 9 13:01:28.166640 systemd[1]: Reloading... Jul 9 13:01:28.261409 zram_generator::config[2283]: No configuration found. Jul 9 13:01:28.361445 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 9 13:01:28.477822 systemd[1]: Reloading finished in 310 ms. Jul 9 13:01:28.541468 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 9 13:01:28.541595 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 9 13:01:28.541955 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 9 13:01:28.543735 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 9 13:01:28.773504 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 9 13:01:28.779114 (kubelet)[2330]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 9 13:01:28.941454 kubelet[2330]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 9 13:01:28.941454 kubelet[2330]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 9 13:01:28.941454 kubelet[2330]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 9 13:01:28.941956 kubelet[2330]: I0709 13:01:28.941544 2330 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 9 13:01:29.138065 kubelet[2330]: I0709 13:01:29.137941 2330 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 9 13:01:29.138065 kubelet[2330]: I0709 13:01:29.137973 2330 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 9 13:01:29.138315 kubelet[2330]: I0709 13:01:29.138285 2330 server.go:934] "Client rotation is on, will bootstrap in background" Jul 9 13:01:29.170743 kubelet[2330]: E0709 13:01:29.170687 2330 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.14:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" Jul 9 13:01:29.171600 kubelet[2330]: I0709 13:01:29.171573 2330 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 9 13:01:29.179814 kubelet[2330]: I0709 13:01:29.179765 2330 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 9 13:01:29.186524 kubelet[2330]: I0709 13:01:29.186487 2330 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 9 13:01:29.187903 kubelet[2330]: I0709 13:01:29.187871 2330 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 9 13:01:29.188042 kubelet[2330]: I0709 13:01:29.188009 2330 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 9 13:01:29.188196 kubelet[2330]: I0709 13:01:29.188028 2330 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOpti
ons":null,"CgroupVersion":2} Jul 9 13:01:29.188327 kubelet[2330]: I0709 13:01:29.188216 2330 topology_manager.go:138] "Creating topology manager with none policy" Jul 9 13:01:29.188327 kubelet[2330]: I0709 13:01:29.188225 2330 container_manager_linux.go:300] "Creating device plugin manager" Jul 9 13:01:29.188403 kubelet[2330]: I0709 13:01:29.188360 2330 state_mem.go:36] "Initialized new in-memory state store" Jul 9 13:01:29.191706 kubelet[2330]: I0709 13:01:29.191669 2330 kubelet.go:408] "Attempting to sync node with API server" Jul 9 13:01:29.191706 kubelet[2330]: I0709 13:01:29.191699 2330 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 9 13:01:29.191795 kubelet[2330]: I0709 13:01:29.191757 2330 kubelet.go:314] "Adding apiserver pod source" Jul 9 13:01:29.191832 kubelet[2330]: I0709 13:01:29.191796 2330 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 9 13:01:29.197396 kubelet[2330]: W0709 13:01:29.196290 2330 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.14:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.14:6443: connect: connection refused Jul 9 13:01:29.198204 kubelet[2330]: W0709 13:01:29.198154 2330 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.14:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.14:6443: connect: connection refused Jul 9 13:01:29.198204 kubelet[2330]: E0709 13:01:29.198189 2330 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.14:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" Jul 9 13:01:29.198291 kubelet[2330]: E0709 
13:01:29.198212 2330 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.14:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" Jul 9 13:01:29.201846 kubelet[2330]: I0709 13:01:29.201814 2330 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Jul 9 13:01:29.202818 kubelet[2330]: I0709 13:01:29.202784 2330 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 9 13:01:29.202974 kubelet[2330]: W0709 13:01:29.202887 2330 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 9 13:01:29.205404 kubelet[2330]: I0709 13:01:29.205362 2330 server.go:1274] "Started kubelet" Jul 9 13:01:29.205519 kubelet[2330]: I0709 13:01:29.205482 2330 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 9 13:01:29.206814 kubelet[2330]: I0709 13:01:29.206784 2330 server.go:449] "Adding debug handlers to kubelet server" Jul 9 13:01:29.207595 kubelet[2330]: I0709 13:01:29.207569 2330 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 9 13:01:29.208973 kubelet[2330]: I0709 13:01:29.208926 2330 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 9 13:01:29.209208 kubelet[2330]: I0709 13:01:29.209177 2330 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 9 13:01:29.209825 kubelet[2330]: I0709 13:01:29.209612 2330 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 9 13:01:29.212731 kubelet[2330]: E0709 13:01:29.212700 2330 
kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 9 13:01:29.212823 kubelet[2330]: E0709 13:01:29.212799 2330 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 9 13:01:29.212881 kubelet[2330]: I0709 13:01:29.212846 2330 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 9 13:01:29.213124 kubelet[2330]: I0709 13:01:29.213101 2330 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 9 13:01:29.213364 kubelet[2330]: I0709 13:01:29.213191 2330 reconciler.go:26] "Reconciler: start to sync state" Jul 9 13:01:29.213364 kubelet[2330]: E0709 13:01:29.213186 2330 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="200ms" Jul 9 13:01:29.213829 kubelet[2330]: W0709 13:01:29.213792 2330 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.14:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.14:6443: connect: connection refused Jul 9 13:01:29.213886 kubelet[2330]: E0709 13:01:29.213836 2330 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.14:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" Jul 9 13:01:29.214198 kubelet[2330]: I0709 13:01:29.214154 2330 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 9 13:01:29.216917 kubelet[2330]: E0709 
13:01:29.215443 2330 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.14:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.14:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.185096d346177405 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-09 13:01:29.205330949 +0000 UTC m=+0.417617151,LastTimestamp:2025-07-09 13:01:29.205330949 +0000 UTC m=+0.417617151,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 9 13:01:29.217246 kubelet[2330]: I0709 13:01:29.217145 2330 factory.go:221] Registration of the containerd container factory successfully Jul 9 13:01:29.217246 kubelet[2330]: I0709 13:01:29.217165 2330 factory.go:221] Registration of the systemd container factory successfully Jul 9 13:01:29.228516 kubelet[2330]: I0709 13:01:29.228470 2330 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 9 13:01:29.230490 kubelet[2330]: I0709 13:01:29.230463 2330 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 9 13:01:29.230556 kubelet[2330]: I0709 13:01:29.230502 2330 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 9 13:01:29.230556 kubelet[2330]: I0709 13:01:29.230537 2330 kubelet.go:2321] "Starting kubelet main sync loop" Jul 9 13:01:29.230613 kubelet[2330]: E0709 13:01:29.230587 2330 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 9 13:01:29.236312 kubelet[2330]: W0709 13:01:29.236261 2330 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.14:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.14:6443: connect: connection refused Jul 9 13:01:29.236407 kubelet[2330]: E0709 13:01:29.236320 2330 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.14:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" Jul 9 13:01:29.236486 kubelet[2330]: I0709 13:01:29.236463 2330 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 9 13:01:29.236486 kubelet[2330]: I0709 13:01:29.236476 2330 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 9 13:01:29.236553 kubelet[2330]: I0709 13:01:29.236494 2330 state_mem.go:36] "Initialized new in-memory state store" Jul 9 13:01:29.271860 kubelet[2330]: I0709 13:01:29.271825 2330 policy_none.go:49] "None policy: Start" Jul 9 13:01:29.272393 kubelet[2330]: I0709 13:01:29.272353 2330 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 9 13:01:29.272442 kubelet[2330]: I0709 13:01:29.272403 2330 state_mem.go:35] "Initializing new in-memory state store" Jul 9 13:01:29.281031 systemd[1]: Created slice kubepods.slice - libcontainer container 
kubepods.slice. Jul 9 13:01:29.294500 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 9 13:01:29.298283 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jul 9 13:01:29.313361 kubelet[2330]: E0709 13:01:29.313336 2330 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 9 13:01:29.315362 kubelet[2330]: I0709 13:01:29.315310 2330 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 9 13:01:29.315618 kubelet[2330]: I0709 13:01:29.315596 2330 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 9 13:01:29.315842 kubelet[2330]: I0709 13:01:29.315624 2330 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 9 13:01:29.315842 kubelet[2330]: I0709 13:01:29.315791 2330 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 9 13:01:29.317468 kubelet[2330]: E0709 13:01:29.317434 2330 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jul 9 13:01:29.342956 systemd[1]: Created slice kubepods-burstable-pod33a46577b83871a1bfcd83db4f81e47b.slice - libcontainer container kubepods-burstable-pod33a46577b83871a1bfcd83db4f81e47b.slice. Jul 9 13:01:29.368150 systemd[1]: Created slice kubepods-burstable-pod3f04709fe51ae4ab5abd58e8da771b74.slice - libcontainer container kubepods-burstable-pod3f04709fe51ae4ab5abd58e8da771b74.slice. Jul 9 13:01:29.389318 systemd[1]: Created slice kubepods-burstable-podb35b56493416c25588cb530e37ffc065.slice - libcontainer container kubepods-burstable-podb35b56493416c25588cb530e37ffc065.slice. 
Jul 9 13:01:29.414598 kubelet[2330]: E0709 13:01:29.414554 2330 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="400ms" Jul 9 13:01:29.417780 kubelet[2330]: I0709 13:01:29.417719 2330 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 9 13:01:29.418153 kubelet[2330]: E0709 13:01:29.418109 2330 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.14:6443/api/v1/nodes\": dial tcp 10.0.0.14:6443: connect: connection refused" node="localhost" Jul 9 13:01:29.514888 kubelet[2330]: I0709 13:01:29.514770 2330 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b35b56493416c25588cb530e37ffc065-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"b35b56493416c25588cb530e37ffc065\") " pod="kube-system/kube-scheduler-localhost" Jul 9 13:01:29.515029 kubelet[2330]: I0709 13:01:29.514900 2330 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/33a46577b83871a1bfcd83db4f81e47b-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"33a46577b83871a1bfcd83db4f81e47b\") " pod="kube-system/kube-apiserver-localhost" Jul 9 13:01:29.515029 kubelet[2330]: I0709 13:01:29.515000 2330 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 9 13:01:29.515139 kubelet[2330]: I0709 13:01:29.515069 2330 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 9 13:01:29.515139 kubelet[2330]: I0709 13:01:29.515121 2330 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 9 13:01:29.515244 kubelet[2330]: I0709 13:01:29.515148 2330 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 9 13:01:29.515244 kubelet[2330]: I0709 13:01:29.515189 2330 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/33a46577b83871a1bfcd83db4f81e47b-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"33a46577b83871a1bfcd83db4f81e47b\") " pod="kube-system/kube-apiserver-localhost" Jul 9 13:01:29.515244 kubelet[2330]: I0709 13:01:29.515238 2330 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/33a46577b83871a1bfcd83db4f81e47b-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"33a46577b83871a1bfcd83db4f81e47b\") " pod="kube-system/kube-apiserver-localhost" Jul 9 13:01:29.515329 kubelet[2330]: I0709 13:01:29.515265 
2330 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 9 13:01:29.620865 kubelet[2330]: I0709 13:01:29.620822 2330 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 9 13:01:29.621340 kubelet[2330]: E0709 13:01:29.621270 2330 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.14:6443/api/v1/nodes\": dial tcp 10.0.0.14:6443: connect: connection refused" node="localhost" Jul 9 13:01:29.664883 kubelet[2330]: E0709 13:01:29.664710 2330 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 13:01:29.665854 containerd[1563]: time="2025-07-09T13:01:29.665794050Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:33a46577b83871a1bfcd83db4f81e47b,Namespace:kube-system,Attempt:0,}" Jul 9 13:01:29.687124 kubelet[2330]: E0709 13:01:29.687067 2330 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 13:01:29.687450 containerd[1563]: time="2025-07-09T13:01:29.687417848Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:3f04709fe51ae4ab5abd58e8da771b74,Namespace:kube-system,Attempt:0,}" Jul 9 13:01:29.692899 kubelet[2330]: E0709 13:01:29.692869 2330 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 13:01:29.693460 containerd[1563]: time="2025-07-09T13:01:29.693426611Z" 
level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b35b56493416c25588cb530e37ffc065,Namespace:kube-system,Attempt:0,}" Jul 9 13:01:29.696504 containerd[1563]: time="2025-07-09T13:01:29.696437590Z" level=info msg="connecting to shim 39c4eb0084ca8364f2cb96ef18addb301d3697d4fd2aff33c672df70aa7d6fea" address="unix:///run/containerd/s/1f6bff975b774d9171317b59227a85d059c204eeff01a169220708aa7740a092" namespace=k8s.io protocol=ttrpc version=3 Jul 9 13:01:29.731591 containerd[1563]: time="2025-07-09T13:01:29.731533432Z" level=info msg="connecting to shim 61dea6f778fa64047da01f5a19ecc6e6b21d7a8cb375dc8cc51109d30b6d8e16" address="unix:///run/containerd/s/e64961d4a9960f278b1b5126980fe49c1c5f956f46a7dded90566445ff8c5302" namespace=k8s.io protocol=ttrpc version=3 Jul 9 13:01:29.743429 containerd[1563]: time="2025-07-09T13:01:29.742327352Z" level=info msg="connecting to shim c5d201f4e3da88ca892ad6f788d0fdae8eae3c3d1f856b4e28b5295bce78e822" address="unix:///run/containerd/s/68c8676884d98b2057d1253c30838424c5dd3b23147144541a61f05773ebc04c" namespace=k8s.io protocol=ttrpc version=3 Jul 9 13:01:29.744612 systemd[1]: Started cri-containerd-39c4eb0084ca8364f2cb96ef18addb301d3697d4fd2aff33c672df70aa7d6fea.scope - libcontainer container 39c4eb0084ca8364f2cb96ef18addb301d3697d4fd2aff33c672df70aa7d6fea. Jul 9 13:01:29.804489 systemd[1]: Started cri-containerd-c5d201f4e3da88ca892ad6f788d0fdae8eae3c3d1f856b4e28b5295bce78e822.scope - libcontainer container c5d201f4e3da88ca892ad6f788d0fdae8eae3c3d1f856b4e28b5295bce78e822. Jul 9 13:01:29.809003 systemd[1]: Started cri-containerd-61dea6f778fa64047da01f5a19ecc6e6b21d7a8cb375dc8cc51109d30b6d8e16.scope - libcontainer container 61dea6f778fa64047da01f5a19ecc6e6b21d7a8cb375dc8cc51109d30b6d8e16. 
Jul 9 13:01:29.816852 kubelet[2330]: E0709 13:01:29.816688 2330 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="800ms" Jul 9 13:01:29.829895 containerd[1563]: time="2025-07-09T13:01:29.829763319Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:33a46577b83871a1bfcd83db4f81e47b,Namespace:kube-system,Attempt:0,} returns sandbox id \"39c4eb0084ca8364f2cb96ef18addb301d3697d4fd2aff33c672df70aa7d6fea\"" Jul 9 13:01:29.831554 kubelet[2330]: E0709 13:01:29.831497 2330 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 13:01:29.834141 containerd[1563]: time="2025-07-09T13:01:29.834100686Z" level=info msg="CreateContainer within sandbox \"39c4eb0084ca8364f2cb96ef18addb301d3697d4fd2aff33c672df70aa7d6fea\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 9 13:01:29.845919 containerd[1563]: time="2025-07-09T13:01:29.845099600Z" level=info msg="Container cef91a69fbd31f067ccf80a93f490c194e6336fd029985f238381f49ef70cb7c: CDI devices from CRI Config.CDIDevices: []" Jul 9 13:01:29.851666 containerd[1563]: time="2025-07-09T13:01:29.851617539Z" level=info msg="CreateContainer within sandbox \"39c4eb0084ca8364f2cb96ef18addb301d3697d4fd2aff33c672df70aa7d6fea\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"cef91a69fbd31f067ccf80a93f490c194e6336fd029985f238381f49ef70cb7c\"" Jul 9 13:01:29.853033 containerd[1563]: time="2025-07-09T13:01:29.853013237Z" level=info msg="StartContainer for \"cef91a69fbd31f067ccf80a93f490c194e6336fd029985f238381f49ef70cb7c\"" Jul 9 13:01:29.860725 containerd[1563]: time="2025-07-09T13:01:29.860695980Z" level=info msg="connecting to shim 
cef91a69fbd31f067ccf80a93f490c194e6336fd029985f238381f49ef70cb7c" address="unix:///run/containerd/s/1f6bff975b774d9171317b59227a85d059c204eeff01a169220708aa7740a092" protocol=ttrpc version=3 Jul 9 13:01:29.882839 containerd[1563]: time="2025-07-09T13:01:29.882807764Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b35b56493416c25588cb530e37ffc065,Namespace:kube-system,Attempt:0,} returns sandbox id \"c5d201f4e3da88ca892ad6f788d0fdae8eae3c3d1f856b4e28b5295bce78e822\"" Jul 9 13:01:29.883616 kubelet[2330]: E0709 13:01:29.883594 2330 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 13:01:29.886297 containerd[1563]: time="2025-07-09T13:01:29.886260942Z" level=info msg="CreateContainer within sandbox \"c5d201f4e3da88ca892ad6f788d0fdae8eae3c3d1f856b4e28b5295bce78e822\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 9 13:01:29.890718 systemd[1]: Started cri-containerd-cef91a69fbd31f067ccf80a93f490c194e6336fd029985f238381f49ef70cb7c.scope - libcontainer container cef91a69fbd31f067ccf80a93f490c194e6336fd029985f238381f49ef70cb7c. 
Jul 9 13:01:29.895787 containerd[1563]: time="2025-07-09T13:01:29.895763109Z" level=info msg="Container 94ad23c7943141d7b425971c1fb3d8d4a22138b0d6abdb3b76bf80d19e356d1c: CDI devices from CRI Config.CDIDevices: []" Jul 9 13:01:29.896156 containerd[1563]: time="2025-07-09T13:01:29.896105892Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:3f04709fe51ae4ab5abd58e8da771b74,Namespace:kube-system,Attempt:0,} returns sandbox id \"61dea6f778fa64047da01f5a19ecc6e6b21d7a8cb375dc8cc51109d30b6d8e16\"" Jul 9 13:01:29.896978 kubelet[2330]: E0709 13:01:29.896949 2330 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 13:01:29.898792 containerd[1563]: time="2025-07-09T13:01:29.898737579Z" level=info msg="CreateContainer within sandbox \"61dea6f778fa64047da01f5a19ecc6e6b21d7a8cb375dc8cc51109d30b6d8e16\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 9 13:01:29.911735 containerd[1563]: time="2025-07-09T13:01:29.911697553Z" level=info msg="CreateContainer within sandbox \"c5d201f4e3da88ca892ad6f788d0fdae8eae3c3d1f856b4e28b5295bce78e822\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"94ad23c7943141d7b425971c1fb3d8d4a22138b0d6abdb3b76bf80d19e356d1c\"" Jul 9 13:01:29.912250 containerd[1563]: time="2025-07-09T13:01:29.912221025Z" level=info msg="StartContainer for \"94ad23c7943141d7b425971c1fb3d8d4a22138b0d6abdb3b76bf80d19e356d1c\"" Jul 9 13:01:29.913531 containerd[1563]: time="2025-07-09T13:01:29.913505365Z" level=info msg="connecting to shim 94ad23c7943141d7b425971c1fb3d8d4a22138b0d6abdb3b76bf80d19e356d1c" address="unix:///run/containerd/s/68c8676884d98b2057d1253c30838424c5dd3b23147144541a61f05773ebc04c" protocol=ttrpc version=3 Jul 9 13:01:29.915438 containerd[1563]: time="2025-07-09T13:01:29.915330028Z" level=info msg="Container 
f4e717134a72b09cf08c61b0999b3f3a584054288b0143e048fbd02476fc04d8: CDI devices from CRI Config.CDIDevices: []" Jul 9 13:01:29.924390 containerd[1563]: time="2025-07-09T13:01:29.924332156Z" level=info msg="CreateContainer within sandbox \"61dea6f778fa64047da01f5a19ecc6e6b21d7a8cb375dc8cc51109d30b6d8e16\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"f4e717134a72b09cf08c61b0999b3f3a584054288b0143e048fbd02476fc04d8\"" Jul 9 13:01:29.924986 containerd[1563]: time="2025-07-09T13:01:29.924950547Z" level=info msg="StartContainer for \"f4e717134a72b09cf08c61b0999b3f3a584054288b0143e048fbd02476fc04d8\"" Jul 9 13:01:29.926208 containerd[1563]: time="2025-07-09T13:01:29.926160166Z" level=info msg="connecting to shim f4e717134a72b09cf08c61b0999b3f3a584054288b0143e048fbd02476fc04d8" address="unix:///run/containerd/s/e64961d4a9960f278b1b5126980fe49c1c5f956f46a7dded90566445ff8c5302" protocol=ttrpc version=3 Jul 9 13:01:29.945522 systemd[1]: Started cri-containerd-94ad23c7943141d7b425971c1fb3d8d4a22138b0d6abdb3b76bf80d19e356d1c.scope - libcontainer container 94ad23c7943141d7b425971c1fb3d8d4a22138b0d6abdb3b76bf80d19e356d1c. Jul 9 13:01:29.949507 systemd[1]: Started cri-containerd-f4e717134a72b09cf08c61b0999b3f3a584054288b0143e048fbd02476fc04d8.scope - libcontainer container f4e717134a72b09cf08c61b0999b3f3a584054288b0143e048fbd02476fc04d8. 
Jul 9 13:01:29.981522 containerd[1563]: time="2025-07-09T13:01:29.981391514Z" level=info msg="StartContainer for \"cef91a69fbd31f067ccf80a93f490c194e6336fd029985f238381f49ef70cb7c\" returns successfully" Jul 9 13:01:30.026561 kubelet[2330]: I0709 13:01:30.026523 2330 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 9 13:01:30.386343 containerd[1563]: time="2025-07-09T13:01:30.386294461Z" level=info msg="StartContainer for \"f4e717134a72b09cf08c61b0999b3f3a584054288b0143e048fbd02476fc04d8\" returns successfully" Jul 9 13:01:30.386583 containerd[1563]: time="2025-07-09T13:01:30.386556102Z" level=info msg="StartContainer for \"94ad23c7943141d7b425971c1fb3d8d4a22138b0d6abdb3b76bf80d19e356d1c\" returns successfully" Jul 9 13:01:30.393043 kubelet[2330]: E0709 13:01:30.393008 2330 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 13:01:30.397393 kubelet[2330]: E0709 13:01:30.397216 2330 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 13:01:30.397587 kubelet[2330]: E0709 13:01:30.397562 2330 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 13:01:31.406403 kubelet[2330]: E0709 13:01:31.401838 2330 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 13:01:31.652623 kubelet[2330]: E0709 13:01:31.652563 2330 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jul 9 13:01:31.731743 kubelet[2330]: I0709 13:01:31.731460 2330 kubelet_node_status.go:75] 
"Successfully registered node" node="localhost" Jul 9 13:01:31.731743 kubelet[2330]: E0709 13:01:31.731522 2330 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jul 9 13:01:32.198853 kubelet[2330]: I0709 13:01:32.198804 2330 apiserver.go:52] "Watching apiserver" Jul 9 13:01:32.213674 kubelet[2330]: I0709 13:01:32.213636 2330 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 9 13:01:32.729699 kubelet[2330]: E0709 13:01:32.729630 2330 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 13:01:33.297999 kubelet[2330]: E0709 13:01:33.297960 2330 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 13:01:33.403391 kubelet[2330]: E0709 13:01:33.403347 2330 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 13:01:33.403575 kubelet[2330]: E0709 13:01:33.403541 2330 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 13:01:34.352204 systemd[1]: Reload requested from client PID 2607 ('systemctl') (unit session-7.scope)... Jul 9 13:01:34.352223 systemd[1]: Reloading... Jul 9 13:01:34.438429 zram_generator::config[2652]: No configuration found. Jul 9 13:01:34.531447 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 9 13:01:34.667318 systemd[1]: Reloading finished in 314 ms. 
Jul 9 13:01:34.693353 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 9 13:01:34.711345 systemd[1]: kubelet.service: Deactivated successfully. Jul 9 13:01:34.711682 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 9 13:01:34.711744 systemd[1]: kubelet.service: Consumed 905ms CPU time, 131.2M memory peak. Jul 9 13:01:34.713752 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 9 13:01:34.946225 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 9 13:01:34.966934 (kubelet)[2695]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 9 13:01:35.040867 kubelet[2695]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 9 13:01:35.040867 kubelet[2695]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 9 13:01:35.040867 kubelet[2695]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 9 13:01:35.041291 kubelet[2695]: I0709 13:01:35.040935 2695 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 9 13:01:35.051149 kubelet[2695]: I0709 13:01:35.051095 2695 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 9 13:01:35.051149 kubelet[2695]: I0709 13:01:35.051128 2695 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 9 13:01:35.051471 kubelet[2695]: I0709 13:01:35.051451 2695 server.go:934] "Client rotation is on, will bootstrap in background" Jul 9 13:01:35.052798 kubelet[2695]: I0709 13:01:35.052771 2695 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 9 13:01:35.054851 kubelet[2695]: I0709 13:01:35.054784 2695 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 9 13:01:35.059009 kubelet[2695]: I0709 13:01:35.058967 2695 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 9 13:01:35.063630 kubelet[2695]: I0709 13:01:35.063606 2695 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 9 13:01:35.063765 kubelet[2695]: I0709 13:01:35.063741 2695 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 9 13:01:35.063917 kubelet[2695]: I0709 13:01:35.063882 2695 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 9 13:01:35.066089 kubelet[2695]: I0709 13:01:35.063909 2695 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOpti
ons":null,"CgroupVersion":2} Jul 9 13:01:35.066089 kubelet[2695]: I0709 13:01:35.065785 2695 topology_manager.go:138] "Creating topology manager with none policy" Jul 9 13:01:35.066089 kubelet[2695]: I0709 13:01:35.065799 2695 container_manager_linux.go:300] "Creating device plugin manager" Jul 9 13:01:35.066089 kubelet[2695]: I0709 13:01:35.065840 2695 state_mem.go:36] "Initialized new in-memory state store" Jul 9 13:01:35.066268 kubelet[2695]: I0709 13:01:35.066066 2695 kubelet.go:408] "Attempting to sync node with API server" Jul 9 13:01:35.066268 kubelet[2695]: I0709 13:01:35.066194 2695 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 9 13:01:35.066268 kubelet[2695]: I0709 13:01:35.066251 2695 kubelet.go:314] "Adding apiserver pod source" Jul 9 13:01:35.066268 kubelet[2695]: I0709 13:01:35.066267 2695 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 9 13:01:35.069458 kubelet[2695]: I0709 13:01:35.069428 2695 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Jul 9 13:01:35.070030 kubelet[2695]: I0709 13:01:35.070002 2695 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 9 13:01:35.071034 kubelet[2695]: I0709 13:01:35.071008 2695 server.go:1274] "Started kubelet" Jul 9 13:01:35.073465 kubelet[2695]: I0709 13:01:35.073433 2695 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 9 13:01:35.073676 kubelet[2695]: I0709 13:01:35.073655 2695 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 9 13:01:35.073874 kubelet[2695]: I0709 13:01:35.073860 2695 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 9 13:01:35.076674 kubelet[2695]: I0709 13:01:35.076656 2695 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 9 13:01:35.077534 
kubelet[2695]: I0709 13:01:35.074090 2695 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 9 13:01:35.077534 kubelet[2695]: I0709 13:01:35.074036 2695 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 9 13:01:35.077534 kubelet[2695]: I0709 13:01:35.077228 2695 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 9 13:01:35.077534 kubelet[2695]: I0709 13:01:35.076973 2695 reconciler.go:26] "Reconciler: start to sync state" Jul 9 13:01:35.077818 kubelet[2695]: E0709 13:01:35.076722 2695 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 9 13:01:35.079332 kubelet[2695]: I0709 13:01:35.079296 2695 server.go:449] "Adding debug handlers to kubelet server" Jul 9 13:01:35.080329 kubelet[2695]: E0709 13:01:35.080203 2695 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 9 13:01:35.080556 kubelet[2695]: I0709 13:01:35.080527 2695 factory.go:221] Registration of the containerd container factory successfully Jul 9 13:01:35.080556 kubelet[2695]: I0709 13:01:35.080548 2695 factory.go:221] Registration of the systemd container factory successfully Jul 9 13:01:35.080682 kubelet[2695]: I0709 13:01:35.080632 2695 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 9 13:01:35.091212 kubelet[2695]: I0709 13:01:35.091156 2695 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 9 13:01:35.092659 kubelet[2695]: I0709 13:01:35.092636 2695 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 9 13:01:35.092758 kubelet[2695]: I0709 13:01:35.092746 2695 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 9 13:01:35.092834 kubelet[2695]: I0709 13:01:35.092824 2695 kubelet.go:2321] "Starting kubelet main sync loop" Jul 9 13:01:35.092938 kubelet[2695]: E0709 13:01:35.092918 2695 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 9 13:01:35.121361 kubelet[2695]: I0709 13:01:35.121326 2695 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 9 13:01:35.121361 kubelet[2695]: I0709 13:01:35.121343 2695 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 9 13:01:35.121361 kubelet[2695]: I0709 13:01:35.121362 2695 state_mem.go:36] "Initialized new in-memory state store" Jul 9 13:01:35.121563 kubelet[2695]: I0709 13:01:35.121537 2695 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 9 13:01:35.121563 kubelet[2695]: I0709 13:01:35.121547 2695 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 9 13:01:35.121620 kubelet[2695]: I0709 13:01:35.121564 2695 policy_none.go:49] "None policy: Start" Jul 9 13:01:35.122150 kubelet[2695]: I0709 13:01:35.122126 2695 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 9 13:01:35.122207 kubelet[2695]: I0709 13:01:35.122159 2695 state_mem.go:35] "Initializing new in-memory state store" Jul 9 13:01:35.122352 kubelet[2695]: I0709 13:01:35.122334 2695 state_mem.go:75] "Updated machine memory state" Jul 9 13:01:35.127599 kubelet[2695]: I0709 13:01:35.127101 2695 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 9 13:01:35.127599 kubelet[2695]: I0709 13:01:35.127281 2695 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 9 13:01:35.127599 kubelet[2695]: I0709 13:01:35.127291 2695 container_log_manager.go:189] "Initializing container 
log rotate workers" workers=1 monitorPeriod="10s" Jul 9 13:01:35.127599 kubelet[2695]: I0709 13:01:35.127471 2695 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 9 13:01:35.233351 kubelet[2695]: I0709 13:01:35.233232 2695 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 9 13:01:35.278576 kubelet[2695]: I0709 13:01:35.278504 2695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 9 13:01:35.278576 kubelet[2695]: I0709 13:01:35.278552 2695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/33a46577b83871a1bfcd83db4f81e47b-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"33a46577b83871a1bfcd83db4f81e47b\") " pod="kube-system/kube-apiserver-localhost" Jul 9 13:01:35.278576 kubelet[2695]: I0709 13:01:35.278573 2695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/33a46577b83871a1bfcd83db4f81e47b-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"33a46577b83871a1bfcd83db4f81e47b\") " pod="kube-system/kube-apiserver-localhost" Jul 9 13:01:35.278825 kubelet[2695]: I0709 13:01:35.278616 2695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 9 13:01:35.278825 kubelet[2695]: I0709 13:01:35.278637 2695 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 9 13:01:35.278825 kubelet[2695]: I0709 13:01:35.278652 2695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 9 13:01:35.278825 kubelet[2695]: I0709 13:01:35.278666 2695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 9 13:01:35.278825 kubelet[2695]: I0709 13:01:35.278683 2695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b35b56493416c25588cb530e37ffc065-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"b35b56493416c25588cb530e37ffc065\") " pod="kube-system/kube-scheduler-localhost" Jul 9 13:01:35.278984 kubelet[2695]: I0709 13:01:35.278697 2695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/33a46577b83871a1bfcd83db4f81e47b-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"33a46577b83871a1bfcd83db4f81e47b\") " pod="kube-system/kube-apiserver-localhost" Jul 9 13:01:35.503823 kubelet[2695]: E0709 13:01:35.503504 2695 kubelet.go:1915] "Failed creating 
a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jul 9 13:01:35.503823 kubelet[2695]: E0709 13:01:35.503713 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 13:01:35.504233 kubelet[2695]: E0709 13:01:35.503887 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 13:01:35.504233 kubelet[2695]: E0709 13:01:35.504075 2695 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Jul 9 13:01:35.504739 kubelet[2695]: E0709 13:01:35.504683 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 13:01:35.505329 kubelet[2695]: I0709 13:01:35.505308 2695 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Jul 9 13:01:35.505401 kubelet[2695]: I0709 13:01:35.505363 2695 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Jul 9 13:01:36.066933 kubelet[2695]: I0709 13:01:36.066880 2695 apiserver.go:52] "Watching apiserver" Jul 9 13:01:36.077781 kubelet[2695]: I0709 13:01:36.077753 2695 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 9 13:01:36.110083 kubelet[2695]: E0709 13:01:36.110026 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 13:01:36.112785 kubelet[2695]: E0709 13:01:36.111685 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have 
been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 13:01:36.122060 kubelet[2695]: E0709 13:01:36.122014 2695 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jul 9 13:01:36.122235 kubelet[2695]: E0709 13:01:36.122212 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 13:01:36.172462 kubelet[2695]: I0709 13:01:36.172174 2695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=4.172137585 podStartE2EDuration="4.172137585s" podCreationTimestamp="2025-07-09 13:01:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-09 13:01:36.172012611 +0000 UTC m=+1.199966394" watchObservedRunningTime="2025-07-09 13:01:36.172137585 +0000 UTC m=+1.200091368" Jul 9 13:01:36.172681 kubelet[2695]: I0709 13:01:36.172534 2695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.172526094 podStartE2EDuration="1.172526094s" podCreationTimestamp="2025-07-09 13:01:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-09 13:01:36.164124342 +0000 UTC m=+1.192078125" watchObservedRunningTime="2025-07-09 13:01:36.172526094 +0000 UTC m=+1.200479877" Jul 9 13:01:36.179713 kubelet[2695]: I0709 13:01:36.179652 2695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.179626164 podStartE2EDuration="3.179626164s" podCreationTimestamp="2025-07-09 13:01:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-09 13:01:36.179599665 +0000 UTC m=+1.207553438" watchObservedRunningTime="2025-07-09 13:01:36.179626164 +0000 UTC m=+1.207579948" Jul 9 13:01:37.111230 kubelet[2695]: E0709 13:01:37.111176 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 13:01:37.111850 kubelet[2695]: E0709 13:01:37.111560 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 13:01:40.344323 kubelet[2695]: I0709 13:01:40.344282 2695 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 9 13:01:40.344786 containerd[1563]: time="2025-07-09T13:01:40.344630170Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 9 13:01:40.345027 kubelet[2695]: I0709 13:01:40.344818 2695 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 9 13:01:41.084350 systemd[1]: Created slice kubepods-besteffort-pod926faaca_55de_48f7_a3b0_1d565ac01b67.slice - libcontainer container kubepods-besteffort-pod926faaca_55de_48f7_a3b0_1d565ac01b67.slice. 
Jul 9 13:01:41.115757 kubelet[2695]: I0709 13:01:41.115714 2695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/926faaca-55de-48f7-a3b0-1d565ac01b67-kube-proxy\") pod \"kube-proxy-4q9vd\" (UID: \"926faaca-55de-48f7-a3b0-1d565ac01b67\") " pod="kube-system/kube-proxy-4q9vd" Jul 9 13:01:41.115757 kubelet[2695]: I0709 13:01:41.115757 2695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/926faaca-55de-48f7-a3b0-1d565ac01b67-xtables-lock\") pod \"kube-proxy-4q9vd\" (UID: \"926faaca-55de-48f7-a3b0-1d565ac01b67\") " pod="kube-system/kube-proxy-4q9vd" Jul 9 13:01:41.115935 kubelet[2695]: I0709 13:01:41.115772 2695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/926faaca-55de-48f7-a3b0-1d565ac01b67-lib-modules\") pod \"kube-proxy-4q9vd\" (UID: \"926faaca-55de-48f7-a3b0-1d565ac01b67\") " pod="kube-system/kube-proxy-4q9vd" Jul 9 13:01:41.115935 kubelet[2695]: I0709 13:01:41.115787 2695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzwz4\" (UniqueName: \"kubernetes.io/projected/926faaca-55de-48f7-a3b0-1d565ac01b67-kube-api-access-gzwz4\") pod \"kube-proxy-4q9vd\" (UID: \"926faaca-55de-48f7-a3b0-1d565ac01b67\") " pod="kube-system/kube-proxy-4q9vd" Jul 9 13:01:41.220687 kubelet[2695]: E0709 13:01:41.220647 2695 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jul 9 13:01:41.220687 kubelet[2695]: E0709 13:01:41.220682 2695 projected.go:194] Error preparing data for projected volume kube-api-access-gzwz4 for pod kube-system/kube-proxy-4q9vd: configmap "kube-root-ca.crt" not found Jul 9 13:01:41.220866 kubelet[2695]: E0709 13:01:41.220749 2695 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/926faaca-55de-48f7-a3b0-1d565ac01b67-kube-api-access-gzwz4 podName:926faaca-55de-48f7-a3b0-1d565ac01b67 nodeName:}" failed. No retries permitted until 2025-07-09 13:01:41.720721839 +0000 UTC m=+6.748675622 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-gzwz4" (UniqueName: "kubernetes.io/projected/926faaca-55de-48f7-a3b0-1d565ac01b67-kube-api-access-gzwz4") pod "kube-proxy-4q9vd" (UID: "926faaca-55de-48f7-a3b0-1d565ac01b67") : configmap "kube-root-ca.crt" not found Jul 9 13:01:41.351754 systemd[1]: Created slice kubepods-besteffort-pod1f426eb3_6fdb_4136_ae71_1409d489610a.slice - libcontainer container kubepods-besteffort-pod1f426eb3_6fdb_4136_ae71_1409d489610a.slice. Jul 9 13:01:41.417604 kubelet[2695]: I0709 13:01:41.417532 2695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/1f426eb3-6fdb-4136-ae71-1409d489610a-var-lib-calico\") pod \"tigera-operator-5bf8dfcb4-c5j7k\" (UID: \"1f426eb3-6fdb-4136-ae71-1409d489610a\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-c5j7k" Jul 9 13:01:41.417604 kubelet[2695]: I0709 13:01:41.417580 2695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fgxm7\" (UniqueName: \"kubernetes.io/projected/1f426eb3-6fdb-4136-ae71-1409d489610a-kube-api-access-fgxm7\") pod \"tigera-operator-5bf8dfcb4-c5j7k\" (UID: \"1f426eb3-6fdb-4136-ae71-1409d489610a\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-c5j7k" Jul 9 13:01:41.655854 containerd[1563]: time="2025-07-09T13:01:41.655694208Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-c5j7k,Uid:1f426eb3-6fdb-4136-ae71-1409d489610a,Namespace:tigera-operator,Attempt:0,}" Jul 9 13:01:41.678100 containerd[1563]: time="2025-07-09T13:01:41.678048387Z" level=info msg="connecting 
to shim 7b20b7373c806c72ffcc4e94d3c8887ea98346d5fb760afd06aaf036ee1a3b3e" address="unix:///run/containerd/s/a265ed927d5ab2a72515c15646195c3feaf5985972f9327e091374ad3306159a" namespace=k8s.io protocol=ttrpc version=3 Jul 9 13:01:41.684898 kubelet[2695]: E0709 13:01:41.684640 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 13:01:41.713503 systemd[1]: Started cri-containerd-7b20b7373c806c72ffcc4e94d3c8887ea98346d5fb760afd06aaf036ee1a3b3e.scope - libcontainer container 7b20b7373c806c72ffcc4e94d3c8887ea98346d5fb760afd06aaf036ee1a3b3e. Jul 9 13:01:41.763271 containerd[1563]: time="2025-07-09T13:01:41.763205619Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-c5j7k,Uid:1f426eb3-6fdb-4136-ae71-1409d489610a,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"7b20b7373c806c72ffcc4e94d3c8887ea98346d5fb760afd06aaf036ee1a3b3e\"" Jul 9 13:01:41.765206 containerd[1563]: time="2025-07-09T13:01:41.765155811Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\"" Jul 9 13:01:41.998635 kubelet[2695]: E0709 13:01:41.998197 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 13:01:41.998775 containerd[1563]: time="2025-07-09T13:01:41.998715155Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4q9vd,Uid:926faaca-55de-48f7-a3b0-1d565ac01b67,Namespace:kube-system,Attempt:0,}" Jul 9 13:01:42.025586 containerd[1563]: time="2025-07-09T13:01:42.025520523Z" level=info msg="connecting to shim 79a85f13994b38aeead39025fd7177bb38cd43ca8911ed598021e2d2ae3eb523" address="unix:///run/containerd/s/101f9490514f3c359fe64269b167a6921edd854064e5e2c24ef9a349a2c8026c" namespace=k8s.io protocol=ttrpc version=3 Jul 9 13:01:42.061508 systemd[1]: Started 
cri-containerd-79a85f13994b38aeead39025fd7177bb38cd43ca8911ed598021e2d2ae3eb523.scope - libcontainer container 79a85f13994b38aeead39025fd7177bb38cd43ca8911ed598021e2d2ae3eb523.
Jul 9 13:01:42.090725 containerd[1563]: time="2025-07-09T13:01:42.090673390Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4q9vd,Uid:926faaca-55de-48f7-a3b0-1d565ac01b67,Namespace:kube-system,Attempt:0,} returns sandbox id \"79a85f13994b38aeead39025fd7177bb38cd43ca8911ed598021e2d2ae3eb523\""
Jul 9 13:01:42.091397 kubelet[2695]: E0709 13:01:42.091330 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 9 13:01:42.093751 containerd[1563]: time="2025-07-09T13:01:42.093714422Z" level=info msg="CreateContainer within sandbox \"79a85f13994b38aeead39025fd7177bb38cd43ca8911ed598021e2d2ae3eb523\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jul 9 13:01:42.106701 containerd[1563]: time="2025-07-09T13:01:42.106650497Z" level=info msg="Container 352ac9ecc1e51011a320ccfb010cf26afd24ebc9aa455912c6c5e78fcefe6599: CDI devices from CRI Config.CDIDevices: []"
Jul 9 13:01:42.115040 containerd[1563]: time="2025-07-09T13:01:42.115002160Z" level=info msg="CreateContainer within sandbox \"79a85f13994b38aeead39025fd7177bb38cd43ca8911ed598021e2d2ae3eb523\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"352ac9ecc1e51011a320ccfb010cf26afd24ebc9aa455912c6c5e78fcefe6599\""
Jul 9 13:01:42.115739 containerd[1563]: time="2025-07-09T13:01:42.115702629Z" level=info msg="StartContainer for \"352ac9ecc1e51011a320ccfb010cf26afd24ebc9aa455912c6c5e78fcefe6599\""
Jul 9 13:01:42.119198 containerd[1563]: time="2025-07-09T13:01:42.119161089Z" level=info msg="connecting to shim 352ac9ecc1e51011a320ccfb010cf26afd24ebc9aa455912c6c5e78fcefe6599" address="unix:///run/containerd/s/101f9490514f3c359fe64269b167a6921edd854064e5e2c24ef9a349a2c8026c" protocol=ttrpc version=3
Jul 9 13:01:42.125156 kubelet[2695]: E0709 13:01:42.125132 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 9 13:01:42.143515 systemd[1]: Started cri-containerd-352ac9ecc1e51011a320ccfb010cf26afd24ebc9aa455912c6c5e78fcefe6599.scope - libcontainer container 352ac9ecc1e51011a320ccfb010cf26afd24ebc9aa455912c6c5e78fcefe6599.
Jul 9 13:01:42.188540 containerd[1563]: time="2025-07-09T13:01:42.188485931Z" level=info msg="StartContainer for \"352ac9ecc1e51011a320ccfb010cf26afd24ebc9aa455912c6c5e78fcefe6599\" returns successfully"
Jul 9 13:01:43.130418 kubelet[2695]: E0709 13:01:43.128920 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 9 13:01:43.300352 kubelet[2695]: I0709 13:01:43.300274 2695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-4q9vd" podStartSLOduration=2.300228078 podStartE2EDuration="2.300228078s" podCreationTimestamp="2025-07-09 13:01:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-09 13:01:43.300075236 +0000 UTC m=+8.328029019" watchObservedRunningTime="2025-07-09 13:01:43.300228078 +0000 UTC m=+8.328181851"
Jul 9 13:01:43.333683 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount780286400.mount: Deactivated successfully.
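The recurring kubelet `dns.go:153` "Nameserver limits exceeded" warnings above come from the glibc resolver's limit of three `nameserver` entries: kubelet drops any extras and logs the "applied nameserver line" that remains. A minimal sketch of that truncation (the helper name is hypothetical, not kubelet's actual code):

```python
# Sketch of the three-nameserver limit behind the dns.go:153 warnings above.
# MAXNS = 3 is the glibc resolver limit; entries beyond it are dropped.
MAXNS = 3

def effective_nameservers(resolv_conf: str) -> list[str]:
    """Return the nameservers that will actually be applied (first MAXNS)."""
    servers = [
        parts[1]
        for line in resolv_conf.splitlines()
        if (parts := line.split()) and parts[0] == "nameserver" and len(parts) > 1
    ]
    return servers[:MAXNS]

conf = """\
nameserver 1.1.1.1
nameserver 1.0.0.1
nameserver 8.8.8.8
nameserver 8.8.4.4
"""
print(" ".join(effective_nameservers(conf)))  # -> 1.1.1.1 1.0.0.1 8.8.8.8
```

With four configured servers, the fourth (`8.8.4.4`) is omitted, which matches the "applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" reported in the log.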
Jul 9 13:01:43.675274 containerd[1563]: time="2025-07-09T13:01:43.675206803Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 13:01:43.676280 containerd[1563]: time="2025-07-09T13:01:43.676152910Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=25056543"
Jul 9 13:01:43.677488 containerd[1563]: time="2025-07-09T13:01:43.677425149Z" level=info msg="ImageCreate event name:\"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 13:01:43.679389 containerd[1563]: time="2025-07-09T13:01:43.679348961Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 13:01:43.679981 containerd[1563]: time="2025-07-09T13:01:43.679934920Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"25052538\" in 1.914741819s"
Jul 9 13:01:43.679981 containerd[1563]: time="2025-07-09T13:01:43.679977832Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\""
Jul 9 13:01:43.683040 containerd[1563]: time="2025-07-09T13:01:43.682999350Z" level=info msg="CreateContainer within sandbox \"7b20b7373c806c72ffcc4e94d3c8887ea98346d5fb760afd06aaf036ee1a3b3e\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Jul 9 13:01:43.690831 containerd[1563]: time="2025-07-09T13:01:43.690781848Z" level=info msg="Container 7496dfe00feb0a3711fbd26ca1ee192c170e6a8a4e3e45e168c91ea7cea92bb7: CDI devices from CRI Config.CDIDevices: []"
Jul 9 13:01:43.701358 containerd[1563]: time="2025-07-09T13:01:43.701212281Z" level=info msg="CreateContainer within sandbox \"7b20b7373c806c72ffcc4e94d3c8887ea98346d5fb760afd06aaf036ee1a3b3e\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"7496dfe00feb0a3711fbd26ca1ee192c170e6a8a4e3e45e168c91ea7cea92bb7\""
Jul 9 13:01:43.704078 containerd[1563]: time="2025-07-09T13:01:43.704018819Z" level=info msg="StartContainer for \"7496dfe00feb0a3711fbd26ca1ee192c170e6a8a4e3e45e168c91ea7cea92bb7\""
Jul 9 13:01:43.705569 containerd[1563]: time="2025-07-09T13:01:43.705502882Z" level=info msg="connecting to shim 7496dfe00feb0a3711fbd26ca1ee192c170e6a8a4e3e45e168c91ea7cea92bb7" address="unix:///run/containerd/s/a265ed927d5ab2a72515c15646195c3feaf5985972f9327e091374ad3306159a" protocol=ttrpc version=3
Jul 9 13:01:43.760529 systemd[1]: Started cri-containerd-7496dfe00feb0a3711fbd26ca1ee192c170e6a8a4e3e45e168c91ea7cea92bb7.scope - libcontainer container 7496dfe00feb0a3711fbd26ca1ee192c170e6a8a4e3e45e168c91ea7cea92bb7.
Jul 9 13:01:43.796199 containerd[1563]: time="2025-07-09T13:01:43.796154947Z" level=info msg="StartContainer for \"7496dfe00feb0a3711fbd26ca1ee192c170e6a8a4e3e45e168c91ea7cea92bb7\" returns successfully"
Jul 9 13:01:44.132734 kubelet[2695]: E0709 13:01:44.132679 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 9 13:01:44.140726 kubelet[2695]: I0709 13:01:44.140657 2695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5bf8dfcb4-c5j7k" podStartSLOduration=1.224406883 podStartE2EDuration="3.140635772s" podCreationTimestamp="2025-07-09 13:01:41 +0000 UTC" firstStartedPulling="2025-07-09 13:01:41.764797084 +0000 UTC m=+6.792750857" lastFinishedPulling="2025-07-09 13:01:43.681025963 +0000 UTC m=+8.708979746" observedRunningTime="2025-07-09 13:01:44.140324468 +0000 UTC m=+9.168278251" watchObservedRunningTime="2025-07-09 13:01:44.140635772 +0000 UTC m=+9.168589545"
Jul 9 13:01:45.101218 kubelet[2695]: E0709 13:01:45.101153 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 9 13:01:45.139208 kubelet[2695]: E0709 13:01:45.139159 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 9 13:01:46.963714 kubelet[2695]: E0709 13:01:46.963660 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 9 13:01:49.064669 update_engine[1539]: I20250709 13:01:49.064574 1539 update_attempter.cc:509] Updating boot flags...
Jul 9 13:01:49.213711 sudo[1772]: pam_unix(sudo:session): session closed for user root
Jul 9 13:01:49.216927 sshd[1771]: Connection closed by 10.0.0.1 port 48786
Jul 9 13:01:49.221502 sshd-session[1768]: pam_unix(sshd:session): session closed for user core
Jul 9 13:01:49.258826 systemd[1]: sshd@6-10.0.0.14:22-10.0.0.1:48786.service: Deactivated successfully.
Jul 9 13:01:49.265341 systemd[1]: session-7.scope: Deactivated successfully.
Jul 9 13:01:49.265969 systemd[1]: session-7.scope: Consumed 4.965s CPU time, 224.2M memory peak.
Jul 9 13:01:49.296575 systemd-logind[1538]: Session 7 logged out. Waiting for processes to exit.
Jul 9 13:01:49.307291 systemd-logind[1538]: Removed session 7.
Jul 9 13:01:52.225804 systemd[1]: Created slice kubepods-besteffort-pod4e439715_a0c9_4f24_aabb_6dc405647f64.slice - libcontainer container kubepods-besteffort-pod4e439715_a0c9_4f24_aabb_6dc405647f64.slice.
Jul 9 13:01:52.386325 kubelet[2695]: I0709 13:01:52.386262 2695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/4e439715-a0c9-4f24-aabb-6dc405647f64-typha-certs\") pod \"calico-typha-59b56bd8db-qvtl6\" (UID: \"4e439715-a0c9-4f24-aabb-6dc405647f64\") " pod="calico-system/calico-typha-59b56bd8db-qvtl6"
Jul 9 13:01:52.386325 kubelet[2695]: I0709 13:01:52.386312 2695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-htbgg\" (UniqueName: \"kubernetes.io/projected/4e439715-a0c9-4f24-aabb-6dc405647f64-kube-api-access-htbgg\") pod \"calico-typha-59b56bd8db-qvtl6\" (UID: \"4e439715-a0c9-4f24-aabb-6dc405647f64\") " pod="calico-system/calico-typha-59b56bd8db-qvtl6"
Jul 9 13:01:52.386325 kubelet[2695]: I0709 13:01:52.386340 2695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4e439715-a0c9-4f24-aabb-6dc405647f64-tigera-ca-bundle\") pod \"calico-typha-59b56bd8db-qvtl6\" (UID: \"4e439715-a0c9-4f24-aabb-6dc405647f64\") " pod="calico-system/calico-typha-59b56bd8db-qvtl6"
Jul 9 13:01:52.531195 kubelet[2695]: E0709 13:01:52.531145 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 9 13:01:52.531816 containerd[1563]: time="2025-07-09T13:01:52.531769874Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-59b56bd8db-qvtl6,Uid:4e439715-a0c9-4f24-aabb-6dc405647f64,Namespace:calico-system,Attempt:0,}"
Jul 9 13:01:52.656057 systemd[1]: Created slice kubepods-besteffort-podf3bd0e9d_a3b7_4b79_a977_f2eedca5af7a.slice - libcontainer container kubepods-besteffort-podf3bd0e9d_a3b7_4b79_a977_f2eedca5af7a.slice.
Jul 9 13:01:52.659472 containerd[1563]: time="2025-07-09T13:01:52.659420927Z" level=info msg="connecting to shim afd5e482e28b600fd62bb4d67932d90d12bb995c24d784a0dc3ec2144cd4de08" address="unix:///run/containerd/s/c6a0bd7ea9d4c0d2fabdf091ac8e767f44820b28fe252b9fb772d7a9e4c6702c" namespace=k8s.io protocol=ttrpc version=3
Jul 9 13:01:52.695509 systemd[1]: Started cri-containerd-afd5e482e28b600fd62bb4d67932d90d12bb995c24d784a0dc3ec2144cd4de08.scope - libcontainer container afd5e482e28b600fd62bb4d67932d90d12bb995c24d784a0dc3ec2144cd4de08.
Jul 9 13:01:52.743234 containerd[1563]: time="2025-07-09T13:01:52.743185834Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-59b56bd8db-qvtl6,Uid:4e439715-a0c9-4f24-aabb-6dc405647f64,Namespace:calico-system,Attempt:0,} returns sandbox id \"afd5e482e28b600fd62bb4d67932d90d12bb995c24d784a0dc3ec2144cd4de08\""
Jul 9 13:01:52.744157 kubelet[2695]: E0709 13:01:52.744107 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 9 13:01:52.745237 containerd[1563]: time="2025-07-09T13:01:52.745205790Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\""
Jul 9 13:01:52.787779 kubelet[2695]: I0709 13:01:52.787636 2695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/f3bd0e9d-a3b7-4b79-a977-f2eedca5af7a-cni-bin-dir\") pod \"calico-node-nsnl6\" (UID: \"f3bd0e9d-a3b7-4b79-a977-f2eedca5af7a\") " pod="calico-system/calico-node-nsnl6"
Jul 9 13:01:52.787779 kubelet[2695]: I0709 13:01:52.787672 2695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/f3bd0e9d-a3b7-4b79-a977-f2eedca5af7a-node-certs\") pod \"calico-node-nsnl6\" (UID: \"f3bd0e9d-a3b7-4b79-a977-f2eedca5af7a\") " pod="calico-system/calico-node-nsnl6"
Jul 9 13:01:52.787779 kubelet[2695]: I0709 13:01:52.787692 2695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f3bd0e9d-a3b7-4b79-a977-f2eedca5af7a-tigera-ca-bundle\") pod \"calico-node-nsnl6\" (UID: \"f3bd0e9d-a3b7-4b79-a977-f2eedca5af7a\") " pod="calico-system/calico-node-nsnl6"
Jul 9 13:01:52.787779 kubelet[2695]: I0709 13:01:52.787714 2695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/f3bd0e9d-a3b7-4b79-a977-f2eedca5af7a-flexvol-driver-host\") pod \"calico-node-nsnl6\" (UID: \"f3bd0e9d-a3b7-4b79-a977-f2eedca5af7a\") " pod="calico-system/calico-node-nsnl6"
Jul 9 13:01:52.787779 kubelet[2695]: I0709 13:01:52.787736 2695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f3bd0e9d-a3b7-4b79-a977-f2eedca5af7a-xtables-lock\") pod \"calico-node-nsnl6\" (UID: \"f3bd0e9d-a3b7-4b79-a977-f2eedca5af7a\") " pod="calico-system/calico-node-nsnl6"
Jul 9 13:01:52.788010 kubelet[2695]: I0709 13:01:52.787905 2695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/f3bd0e9d-a3b7-4b79-a977-f2eedca5af7a-policysync\") pod \"calico-node-nsnl6\" (UID: \"f3bd0e9d-a3b7-4b79-a977-f2eedca5af7a\") " pod="calico-system/calico-node-nsnl6"
Jul 9 13:01:52.788044 kubelet[2695]: I0709 13:01:52.788013 2695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/f3bd0e9d-a3b7-4b79-a977-f2eedca5af7a-var-lib-calico\") pod \"calico-node-nsnl6\" (UID: \"f3bd0e9d-a3b7-4b79-a977-f2eedca5af7a\") " pod="calico-system/calico-node-nsnl6"
Jul 9 13:01:52.788102 kubelet[2695]: I0709 13:01:52.788071 2695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/f3bd0e9d-a3b7-4b79-a977-f2eedca5af7a-cni-net-dir\") pod \"calico-node-nsnl6\" (UID: \"f3bd0e9d-a3b7-4b79-a977-f2eedca5af7a\") " pod="calico-system/calico-node-nsnl6"
Jul 9 13:01:52.788134 kubelet[2695]: I0709 13:01:52.788103 2695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rlgjv\" (UniqueName: \"kubernetes.io/projected/f3bd0e9d-a3b7-4b79-a977-f2eedca5af7a-kube-api-access-rlgjv\") pod \"calico-node-nsnl6\" (UID: \"f3bd0e9d-a3b7-4b79-a977-f2eedca5af7a\") " pod="calico-system/calico-node-nsnl6"
Jul 9 13:01:52.788134 kubelet[2695]: I0709 13:01:52.788124 2695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/f3bd0e9d-a3b7-4b79-a977-f2eedca5af7a-cni-log-dir\") pod \"calico-node-nsnl6\" (UID: \"f3bd0e9d-a3b7-4b79-a977-f2eedca5af7a\") " pod="calico-system/calico-node-nsnl6"
Jul 9 13:01:52.788189 kubelet[2695]: I0709 13:01:52.788140 2695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f3bd0e9d-a3b7-4b79-a977-f2eedca5af7a-lib-modules\") pod \"calico-node-nsnl6\" (UID: \"f3bd0e9d-a3b7-4b79-a977-f2eedca5af7a\") " pod="calico-system/calico-node-nsnl6"
Jul 9 13:01:52.788189 kubelet[2695]: I0709 13:01:52.788155 2695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/f3bd0e9d-a3b7-4b79-a977-f2eedca5af7a-var-run-calico\") pod \"calico-node-nsnl6\" (UID: \"f3bd0e9d-a3b7-4b79-a977-f2eedca5af7a\") " pod="calico-system/calico-node-nsnl6"
Jul 9 13:01:52.851888 kubelet[2695]: E0709 13:01:52.851817 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-x4tjz" podUID="0a1ade6c-7a47-40a7-a38c-b0080894987b"
Jul 9 13:01:52.890488 kubelet[2695]: E0709 13:01:52.890448 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 9 13:01:52.890488 kubelet[2695]: W0709 13:01:52.890475 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 9 13:01:52.890488 kubelet[2695]: E0709 13:01:52.890504 2695 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 9 13:01:52.894487 kubelet[2695]: E0709 13:01:52.894448 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 9 13:01:52.894487 kubelet[2695]: W0709 13:01:52.894473 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 9 13:01:52.894487 kubelet[2695]: E0709 13:01:52.894497 2695 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 9 13:01:52.897238 kubelet[2695]: E0709 13:01:52.897205 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 9 13:01:52.897238 kubelet[2695]: W0709 13:01:52.897227 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 9 13:01:52.897327 kubelet[2695]: E0709 13:01:52.897247 2695 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 9 13:01:52.960063 containerd[1563]: time="2025-07-09T13:01:52.960004909Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-nsnl6,Uid:f3bd0e9d-a3b7-4b79-a977-f2eedca5af7a,Namespace:calico-system,Attempt:0,}"
Jul 9 13:01:52.982309 containerd[1563]: time="2025-07-09T13:01:52.982262848Z" level=info msg="connecting to shim 56306d4a5296ad2ead9699eb437bf8d867321e974de887f71c471b5f746107d3" address="unix:///run/containerd/s/43e8785a741c8350c8666c13b3bc4d8d7470042445d71431d392ddc167f83cec" namespace=k8s.io protocol=ttrpc version=3
Jul 9 13:01:52.989710 kubelet[2695]: E0709 13:01:52.989666 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 9 13:01:52.989710 kubelet[2695]: W0709 13:01:52.989695 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 9 13:01:52.989710 kubelet[2695]: E0709 13:01:52.989719 2695 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 9 13:01:52.989961 kubelet[2695]: I0709 13:01:52.989751 2695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9pljl\" (UniqueName: \"kubernetes.io/projected/0a1ade6c-7a47-40a7-a38c-b0080894987b-kube-api-access-9pljl\") pod \"csi-node-driver-x4tjz\" (UID: \"0a1ade6c-7a47-40a7-a38c-b0080894987b\") " pod="calico-system/csi-node-driver-x4tjz"
Jul 9 13:01:52.990008 kubelet[2695]: E0709 13:01:52.989991 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 9 13:01:52.990008 kubelet[2695]: W0709 13:01:52.990004 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 9 13:01:52.990057 kubelet[2695]: E0709 13:01:52.990018 2695 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 9 13:01:52.990057 kubelet[2695]: I0709 13:01:52.990035 2695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/0a1ade6c-7a47-40a7-a38c-b0080894987b-socket-dir\") pod \"csi-node-driver-x4tjz\" (UID: \"0a1ade6c-7a47-40a7-a38c-b0080894987b\") " pod="calico-system/csi-node-driver-x4tjz"
Jul 9 13:01:52.990257 kubelet[2695]: E0709 13:01:52.990239 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 9 13:01:52.990257 kubelet[2695]: W0709 13:01:52.990251 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 9 13:01:52.990316 kubelet[2695]: E0709 13:01:52.990266 2695 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 9 13:01:52.990316 kubelet[2695]: I0709 13:01:52.990279 2695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0a1ade6c-7a47-40a7-a38c-b0080894987b-kubelet-dir\") pod \"csi-node-driver-x4tjz\" (UID: \"0a1ade6c-7a47-40a7-a38c-b0080894987b\") " pod="calico-system/csi-node-driver-x4tjz"
Jul 9 13:01:52.990598 kubelet[2695]: E0709 13:01:52.990563 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 9 13:01:52.990598 kubelet[2695]: W0709 13:01:52.990587 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 9 13:01:52.990655 kubelet[2695]: E0709 13:01:52.990624 2695 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 9 13:01:52.990848 kubelet[2695]: E0709 13:01:52.990824 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 9 13:01:52.990848 kubelet[2695]: W0709 13:01:52.990837 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 9 13:01:52.990896 kubelet[2695]: E0709 13:01:52.990849 2695 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 9 13:01:52.991088 kubelet[2695]: E0709 13:01:52.991068 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 9 13:01:52.991088 kubelet[2695]: W0709 13:01:52.991083 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 9 13:01:52.991141 kubelet[2695]: E0709 13:01:52.991099 2695 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 9 13:01:52.991427 kubelet[2695]: E0709 13:01:52.991398 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 9 13:01:52.991427 kubelet[2695]: W0709 13:01:52.991412 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 9 13:01:52.991427 kubelet[2695]: E0709 13:01:52.991430 2695 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 9 13:01:52.991708 kubelet[2695]: E0709 13:01:52.991699 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 9 13:01:52.991708 kubelet[2695]: W0709 13:01:52.991708 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 9 13:01:52.991757 kubelet[2695]: E0709 13:01:52.991735 2695 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 9 13:01:52.991757 kubelet[2695]: I0709 13:01:52.991750 2695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/0a1ade6c-7a47-40a7-a38c-b0080894987b-varrun\") pod \"csi-node-driver-x4tjz\" (UID: \"0a1ade6c-7a47-40a7-a38c-b0080894987b\") " pod="calico-system/csi-node-driver-x4tjz"
Jul 9 13:01:52.992033 kubelet[2695]: E0709 13:01:52.992003 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 9 13:01:52.992033 kubelet[2695]: W0709 13:01:52.992020 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 9 13:01:52.992097 kubelet[2695]: E0709 13:01:52.992087 2695 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 9 13:01:52.992123 kubelet[2695]: I0709 13:01:52.992108 2695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/0a1ade6c-7a47-40a7-a38c-b0080894987b-registration-dir\") pod \"csi-node-driver-x4tjz\" (UID: \"0a1ade6c-7a47-40a7-a38c-b0080894987b\") " pod="calico-system/csi-node-driver-x4tjz"
Jul 9 13:01:52.992455 kubelet[2695]: E0709 13:01:52.992436 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 9 13:01:52.992455 kubelet[2695]: W0709 13:01:52.992449 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 9 13:01:52.992601 kubelet[2695]: E0709 13:01:52.992575 2695 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 9 13:01:52.992859 kubelet[2695]: E0709 13:01:52.992829 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 9 13:01:52.992859 kubelet[2695]: W0709 13:01:52.992843 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 9 13:01:52.992923 kubelet[2695]: E0709 13:01:52.992862 2695 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 9 13:01:52.993137 kubelet[2695]: E0709 13:01:52.993112 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 9 13:01:52.993137 kubelet[2695]: W0709 13:01:52.993125 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 9 13:01:52.993188 kubelet[2695]: E0709 13:01:52.993145 2695 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 9 13:01:52.993418 kubelet[2695]: E0709 13:01:52.993401 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 9 13:01:52.993418 kubelet[2695]: W0709 13:01:52.993414 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 9 13:01:52.993477 kubelet[2695]: E0709 13:01:52.993445 2695 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 9 13:01:52.993715 kubelet[2695]: E0709 13:01:52.993690 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 9 13:01:52.993715 kubelet[2695]: W0709 13:01:52.993704 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 9 13:01:52.993715 kubelet[2695]: E0709 13:01:52.993713 2695 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 9 13:01:52.993970 kubelet[2695]: E0709 13:01:52.993942 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 9 13:01:52.993970 kubelet[2695]: W0709 13:01:52.993964 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 9 13:01:52.994027 kubelet[2695]: E0709 13:01:52.993975 2695 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 9 13:01:53.014506 systemd[1]: Started cri-containerd-56306d4a5296ad2ead9699eb437bf8d867321e974de887f71c471b5f746107d3.scope - libcontainer container 56306d4a5296ad2ead9699eb437bf8d867321e974de887f71c471b5f746107d3.
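The repeated FlexVolume `driver-call` failures above occur because kubelet periodically probes its FlexVolume plugin directory before the `nodeagent~uds/uds` binary exists on this node (Calico installs it later via calico-node's flexvol init container), so each probe fails with "executable file not found in $PATH". A hedged sketch of how kubelet derives the executable path from a `vendor~driver` directory name (helper name hypothetical; the paths are taken from the log):

```python
import os

# Kubelet resolves a FlexVolume driver in <plugin-dir>/<vendor>~<driver>/
# to an executable named after the <driver> part. The probes above fail
# because this file has not been installed yet on the node.
def flexvolume_driver_path(plugin_dir: str, vendor_driver: str) -> str:
    """Build the executable path kubelet invokes for a vendor~driver entry."""
    driver = vendor_driver.split("~")[-1]  # "nodeagent~uds" -> "uds"
    return os.path.join(plugin_dir, vendor_driver, driver)

print(flexvolume_driver_path(
    "/opt/libexec/kubernetes/kubelet-plugins/volume/exec", "nodeagent~uds"))
# -> /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds
```

These warnings are typically transient and stop once the driver binary is in place.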
Jul 9 13:01:53.042133 containerd[1563]: time="2025-07-09T13:01:53.042016288Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-nsnl6,Uid:f3bd0e9d-a3b7-4b79-a977-f2eedca5af7a,Namespace:calico-system,Attempt:0,} returns sandbox id \"56306d4a5296ad2ead9699eb437bf8d867321e974de887f71c471b5f746107d3\""
Jul 9 13:01:53.093064 kubelet[2695]: E0709 13:01:53.093009 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 9 13:01:53.093064 kubelet[2695]: W0709 13:01:53.093033 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 9 13:01:53.093064 kubelet[2695]: E0709 13:01:53.093054 2695 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 9 13:01:53.093328 kubelet[2695]: E0709 13:01:53.093258 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 9 13:01:53.093328 kubelet[2695]: W0709 13:01:53.093267 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 9 13:01:53.093328 kubelet[2695]: E0709 13:01:53.093283 2695 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 9 13:01:53.093595 kubelet[2695]: E0709 13:01:53.093559 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 9 13:01:53.093595 kubelet[2695]: W0709 13:01:53.093590 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 9 13:01:53.093693 kubelet[2695]: E0709 13:01:53.093622 2695 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 9 13:01:53.093958 kubelet[2695]: E0709 13:01:53.093938 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 9 13:01:53.093958 kubelet[2695]: W0709 13:01:53.093950 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 9 13:01:53.094032 kubelet[2695]: E0709 13:01:53.093964 2695 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 9 13:01:53.094209 kubelet[2695]: E0709 13:01:53.094189 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 9 13:01:53.094209 kubelet[2695]: W0709 13:01:53.094203 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 9 13:01:53.094302 kubelet[2695]: E0709 13:01:53.094223 2695 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 9 13:01:53.094450 kubelet[2695]: E0709 13:01:53.094429 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 9 13:01:53.094450 kubelet[2695]: W0709 13:01:53.094442 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 9 13:01:53.094540 kubelet[2695]: E0709 13:01:53.094457 2695 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 13:01:53.094711 kubelet[2695]: E0709 13:01:53.094666 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 13:01:53.094711 kubelet[2695]: W0709 13:01:53.094689 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 13:01:53.094790 kubelet[2695]: E0709 13:01:53.094761 2695 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 13:01:53.094904 kubelet[2695]: E0709 13:01:53.094860 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 13:01:53.094904 kubelet[2695]: W0709 13:01:53.094874 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 13:01:53.094973 kubelet[2695]: E0709 13:01:53.094950 2695 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 13:01:53.095088 kubelet[2695]: E0709 13:01:53.095060 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 13:01:53.095088 kubelet[2695]: W0709 13:01:53.095072 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 13:01:53.095275 kubelet[2695]: E0709 13:01:53.095132 2695 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 13:01:53.095275 kubelet[2695]: E0709 13:01:53.095257 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 13:01:53.095275 kubelet[2695]: W0709 13:01:53.095266 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 13:01:53.095339 kubelet[2695]: E0709 13:01:53.095296 2695 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 13:01:53.095604 kubelet[2695]: E0709 13:01:53.095465 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 13:01:53.095604 kubelet[2695]: W0709 13:01:53.095480 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 13:01:53.095604 kubelet[2695]: E0709 13:01:53.095522 2695 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 13:01:53.095772 kubelet[2695]: E0709 13:01:53.095749 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 13:01:53.095772 kubelet[2695]: W0709 13:01:53.095761 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 13:01:53.095883 kubelet[2695]: E0709 13:01:53.095779 2695 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 13:01:53.095972 kubelet[2695]: E0709 13:01:53.095954 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 13:01:53.095972 kubelet[2695]: W0709 13:01:53.095965 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 13:01:53.096107 kubelet[2695]: E0709 13:01:53.095988 2695 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 13:01:53.096182 kubelet[2695]: E0709 13:01:53.096142 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 13:01:53.096182 kubelet[2695]: W0709 13:01:53.096153 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 13:01:53.096182 kubelet[2695]: E0709 13:01:53.096167 2695 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 13:01:53.096422 kubelet[2695]: E0709 13:01:53.096401 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 13:01:53.096422 kubelet[2695]: W0709 13:01:53.096416 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 13:01:53.096499 kubelet[2695]: E0709 13:01:53.096449 2695 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 13:01:53.096617 kubelet[2695]: E0709 13:01:53.096598 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 13:01:53.096617 kubelet[2695]: W0709 13:01:53.096609 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 13:01:53.096693 kubelet[2695]: E0709 13:01:53.096636 2695 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 13:01:53.096789 kubelet[2695]: E0709 13:01:53.096771 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 13:01:53.096789 kubelet[2695]: W0709 13:01:53.096782 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 13:01:53.096858 kubelet[2695]: E0709 13:01:53.096821 2695 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 13:01:53.096963 kubelet[2695]: E0709 13:01:53.096946 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 13:01:53.096963 kubelet[2695]: W0709 13:01:53.096956 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 13:01:53.097024 kubelet[2695]: E0709 13:01:53.096987 2695 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 13:01:53.097137 kubelet[2695]: E0709 13:01:53.097119 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 13:01:53.097137 kubelet[2695]: W0709 13:01:53.097130 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 13:01:53.097212 kubelet[2695]: E0709 13:01:53.097148 2695 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 13:01:53.097450 kubelet[2695]: E0709 13:01:53.097415 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 13:01:53.097450 kubelet[2695]: W0709 13:01:53.097435 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 13:01:53.097549 kubelet[2695]: E0709 13:01:53.097459 2695 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 13:01:53.097667 kubelet[2695]: E0709 13:01:53.097647 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 13:01:53.097667 kubelet[2695]: W0709 13:01:53.097658 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 13:01:53.097741 kubelet[2695]: E0709 13:01:53.097673 2695 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 13:01:53.097850 kubelet[2695]: E0709 13:01:53.097832 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 13:01:53.097850 kubelet[2695]: W0709 13:01:53.097843 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 13:01:53.097910 kubelet[2695]: E0709 13:01:53.097855 2695 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 13:01:53.098166 kubelet[2695]: E0709 13:01:53.098132 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 13:01:53.098166 kubelet[2695]: W0709 13:01:53.098148 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 13:01:53.098166 kubelet[2695]: E0709 13:01:53.098165 2695 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 13:01:53.098386 kubelet[2695]: E0709 13:01:53.098352 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 13:01:53.098386 kubelet[2695]: W0709 13:01:53.098363 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 13:01:53.098458 kubelet[2695]: E0709 13:01:53.098390 2695 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 13:01:53.098775 kubelet[2695]: E0709 13:01:53.098752 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 13:01:53.098775 kubelet[2695]: W0709 13:01:53.098765 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 13:01:53.098775 kubelet[2695]: E0709 13:01:53.098776 2695 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 13:01:53.106325 kubelet[2695]: E0709 13:01:53.106272 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 13:01:53.106325 kubelet[2695]: W0709 13:01:53.106286 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 13:01:53.106325 kubelet[2695]: E0709 13:01:53.106299 2695 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 13:01:55.094043 kubelet[2695]: E0709 13:01:55.093939 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-x4tjz" podUID="0a1ade6c-7a47-40a7-a38c-b0080894987b" Jul 9 13:01:57.093458 kubelet[2695]: E0709 13:01:57.093343 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-x4tjz" podUID="0a1ade6c-7a47-40a7-a38c-b0080894987b" Jul 9 13:01:59.094250 kubelet[2695]: E0709 13:01:59.094180 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-x4tjz" podUID="0a1ade6c-7a47-40a7-a38c-b0080894987b" Jul 9 13:02:01.093955 kubelet[2695]: E0709 13:02:01.093899 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-x4tjz" podUID="0a1ade6c-7a47-40a7-a38c-b0080894987b" Jul 9 13:02:03.094180 kubelet[2695]: E0709 13:02:03.094036 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-x4tjz" podUID="0a1ade6c-7a47-40a7-a38c-b0080894987b" Jul 9 13:02:05.093470 kubelet[2695]: E0709 
13:02:05.093406 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-x4tjz" podUID="0a1ade6c-7a47-40a7-a38c-b0080894987b" Jul 9 13:02:07.094025 kubelet[2695]: E0709 13:02:07.093950 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-x4tjz" podUID="0a1ade6c-7a47-40a7-a38c-b0080894987b" Jul 9 13:02:08.695112 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2581512430.mount: Deactivated successfully. Jul 9 13:02:09.093700 kubelet[2695]: E0709 13:02:09.093615 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-x4tjz" podUID="0a1ade6c-7a47-40a7-a38c-b0080894987b" Jul 9 13:02:11.093996 kubelet[2695]: E0709 13:02:11.093918 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-x4tjz" podUID="0a1ade6c-7a47-40a7-a38c-b0080894987b" Jul 9 13:02:13.094931 kubelet[2695]: E0709 13:02:13.094489 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-x4tjz" podUID="0a1ade6c-7a47-40a7-a38c-b0080894987b" Jul 9 
13:02:13.308991 containerd[1563]: time="2025-07-09T13:02:13.308890143Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 13:02:13.310055 containerd[1563]: time="2025-07-09T13:02:13.310020629Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=35233364" Jul 9 13:02:13.325881 containerd[1563]: time="2025-07-09T13:02:13.325803936Z" level=info msg="ImageCreate event name:\"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 13:02:13.328779 containerd[1563]: time="2025-07-09T13:02:13.328719079Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 13:02:13.329438 containerd[1563]: time="2025-07-09T13:02:13.329366136Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"35233218\" in 20.584125229s" Jul 9 13:02:13.329501 containerd[1563]: time="2025-07-09T13:02:13.329441868Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\"" Jul 9 13:02:13.330680 containerd[1563]: time="2025-07-09T13:02:13.330641494Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Jul 9 13:02:13.342819 containerd[1563]: time="2025-07-09T13:02:13.342767862Z" level=info msg="CreateContainer within sandbox \"afd5e482e28b600fd62bb4d67932d90d12bb995c24d784a0dc3ec2144cd4de08\" for container 
&ContainerMetadata{Name:calico-typha,Attempt:0,}" Jul 9 13:02:13.352544 containerd[1563]: time="2025-07-09T13:02:13.352388317Z" level=info msg="Container 0fbe178877415415ad5d117640ec2e0b596ea91f9e5d89939904f0a2ed763870: CDI devices from CRI Config.CDIDevices: []" Jul 9 13:02:13.366020 containerd[1563]: time="2025-07-09T13:02:13.365954512Z" level=info msg="CreateContainer within sandbox \"afd5e482e28b600fd62bb4d67932d90d12bb995c24d784a0dc3ec2144cd4de08\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"0fbe178877415415ad5d117640ec2e0b596ea91f9e5d89939904f0a2ed763870\"" Jul 9 13:02:13.366772 containerd[1563]: time="2025-07-09T13:02:13.366653407Z" level=info msg="StartContainer for \"0fbe178877415415ad5d117640ec2e0b596ea91f9e5d89939904f0a2ed763870\"" Jul 9 13:02:13.368055 containerd[1563]: time="2025-07-09T13:02:13.368011611Z" level=info msg="connecting to shim 0fbe178877415415ad5d117640ec2e0b596ea91f9e5d89939904f0a2ed763870" address="unix:///run/containerd/s/c6a0bd7ea9d4c0d2fabdf091ac8e767f44820b28fe252b9fb772d7a9e4c6702c" protocol=ttrpc version=3 Jul 9 13:02:13.397569 systemd[1]: Started cri-containerd-0fbe178877415415ad5d117640ec2e0b596ea91f9e5d89939904f0a2ed763870.scope - libcontainer container 0fbe178877415415ad5d117640ec2e0b596ea91f9e5d89939904f0a2ed763870. 
Jul 9 13:02:13.474444 containerd[1563]: time="2025-07-09T13:02:13.474352331Z" level=info msg="StartContainer for \"0fbe178877415415ad5d117640ec2e0b596ea91f9e5d89939904f0a2ed763870\" returns successfully" Jul 9 13:02:14.197159 kubelet[2695]: E0709 13:02:14.197102 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 13:02:14.241403 kubelet[2695]: E0709 13:02:14.241325 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 13:02:14.241651 kubelet[2695]: W0709 13:02:14.241513 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 13:02:14.241651 kubelet[2695]: E0709 13:02:14.241554 2695 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 13:02:14.242537 kubelet[2695]: E0709 13:02:14.242455 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 13:02:14.242537 kubelet[2695]: W0709 13:02:14.242475 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 13:02:14.242537 kubelet[2695]: E0709 13:02:14.242490 2695 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 13:02:14.243170 kubelet[2695]: E0709 13:02:14.242701 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 13:02:14.243170 kubelet[2695]: W0709 13:02:14.242719 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 13:02:14.243170 kubelet[2695]: E0709 13:02:14.242730 2695 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 13:02:14.243170 kubelet[2695]: E0709 13:02:14.242939 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 13:02:14.243170 kubelet[2695]: W0709 13:02:14.242951 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 13:02:14.243170 kubelet[2695]: E0709 13:02:14.242962 2695 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 13:02:14.243420 kubelet[2695]: E0709 13:02:14.243274 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 13:02:14.243420 kubelet[2695]: W0709 13:02:14.243286 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 13:02:14.243420 kubelet[2695]: E0709 13:02:14.243298 2695 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 13:02:14.243620 kubelet[2695]: E0709 13:02:14.243542 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 13:02:14.243620 kubelet[2695]: W0709 13:02:14.243595 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 13:02:14.243620 kubelet[2695]: E0709 13:02:14.243610 2695 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 13:02:14.243891 kubelet[2695]: E0709 13:02:14.243857 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 13:02:14.243891 kubelet[2695]: W0709 13:02:14.243878 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 13:02:14.243972 kubelet[2695]: E0709 13:02:14.243905 2695 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 13:02:14.244396 kubelet[2695]: E0709 13:02:14.244210 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 13:02:14.244396 kubelet[2695]: W0709 13:02:14.244228 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 13:02:14.244396 kubelet[2695]: E0709 13:02:14.244240 2695 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 13:02:14.244879 kubelet[2695]: E0709 13:02:14.244853 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 13:02:14.244879 kubelet[2695]: W0709 13:02:14.244874 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 13:02:14.245035 kubelet[2695]: E0709 13:02:14.245002 2695 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 13:02:14.245656 kubelet[2695]: E0709 13:02:14.245626 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 13:02:14.245656 kubelet[2695]: W0709 13:02:14.245645 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 13:02:14.245656 kubelet[2695]: E0709 13:02:14.245657 2695 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 13:02:14.459839 systemd[1]: Started sshd@7-10.0.0.14:22-10.0.0.1:53494.service - OpenSSH per-connection server daemon (10.0.0.1:53494). Jul 9 13:02:14.523247 sshd[3372]: Accepted publickey for core from 10.0.0.1 port 53494 ssh2: RSA SHA256:Ehsv9iPAmIJbEnlorOi35d2Kryfd05fXf88yv2g5tlI Jul 9 13:02:14.525220 sshd-session[3372]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 13:02:14.530123 systemd-logind[1538]: New session 8 of user core. Jul 9 13:02:14.539544 systemd[1]: Started session-8.scope - Session 8 of User core. Jul 9 13:02:14.668078 sshd[3375]: Connection closed by 10.0.0.1 port 53494 Jul 9 13:02:14.668433 sshd-session[3372]: pam_unix(sshd:session): session closed for user core Jul 9 13:02:14.673691 systemd[1]: sshd@7-10.0.0.14:22-10.0.0.1:53494.service: Deactivated successfully. Jul 9 13:02:14.676276 systemd[1]: session-8.scope: Deactivated successfully. Jul 9 13:02:14.677155 systemd-logind[1538]: Session 8 logged out. Waiting for processes to exit. Jul 9 13:02:14.678432 systemd-logind[1538]: Removed session 8.
Jul 9 13:02:14.915998 containerd[1563]: time="2025-07-09T13:02:14.915931857Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 13:02:14.916811 containerd[1563]: time="2025-07-09T13:02:14.916743883Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4446956" Jul 9 13:02:14.918082 containerd[1563]: time="2025-07-09T13:02:14.918049899Z" level=info msg="ImageCreate event name:\"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 13:02:14.920111 containerd[1563]: time="2025-07-09T13:02:14.920073985Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 13:02:14.920660 containerd[1563]: time="2025-07-09T13:02:14.920598482Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5939619\" in 1.589917343s" Jul 9 13:02:14.920660 containerd[1563]: time="2025-07-09T13:02:14.920645180Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\"" Jul 9 13:02:14.923075 containerd[1563]: time="2025-07-09T13:02:14.923028733Z" level=info msg="CreateContainer within sandbox \"56306d4a5296ad2ead9699eb437bf8d867321e974de887f71c471b5f746107d3\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 9 
13:02:14.930290 containerd[1563]: time="2025-07-09T13:02:14.930252287Z" level=info msg="Container 19f7bc844c909c854a1efb51c03a3b235caeb594ff54d11b85a684d76ee31e79: CDI devices from CRI Config.CDIDevices: []" Jul 9 13:02:14.939851 containerd[1563]: time="2025-07-09T13:02:14.939800994Z" level=info msg="CreateContainer within sandbox \"56306d4a5296ad2ead9699eb437bf8d867321e974de887f71c471b5f746107d3\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"19f7bc844c909c854a1efb51c03a3b235caeb594ff54d11b85a684d76ee31e79\"" Jul 9 13:02:14.940435 containerd[1563]: time="2025-07-09T13:02:14.940364272Z" level=info msg="StartContainer for \"19f7bc844c909c854a1efb51c03a3b235caeb594ff54d11b85a684d76ee31e79\"" Jul 9 13:02:14.941972 containerd[1563]: time="2025-07-09T13:02:14.941889631Z" level=info msg="connecting to shim 19f7bc844c909c854a1efb51c03a3b235caeb594ff54d11b85a684d76ee31e79" address="unix:///run/containerd/s/43e8785a741c8350c8666c13b3bc4d8d7470042445d71431d392ddc167f83cec" protocol=ttrpc version=3 Jul 9 13:02:14.964520 systemd[1]: Started cri-containerd-19f7bc844c909c854a1efb51c03a3b235caeb594ff54d11b85a684d76ee31e79.scope - libcontainer container 19f7bc844c909c854a1efb51c03a3b235caeb594ff54d11b85a684d76ee31e79. Jul 9 13:02:15.035810 systemd[1]: cri-containerd-19f7bc844c909c854a1efb51c03a3b235caeb594ff54d11b85a684d76ee31e79.scope: Deactivated successfully. 
Jul 9 13:02:15.039092 containerd[1563]: time="2025-07-09T13:02:15.039044063Z" level=info msg="TaskExit event in podsandbox handler container_id:\"19f7bc844c909c854a1efb51c03a3b235caeb594ff54d11b85a684d76ee31e79\" id:\"19f7bc844c909c854a1efb51c03a3b235caeb594ff54d11b85a684d76ee31e79\" pid:3427 exited_at:{seconds:1752066135 nanos:38563279}" Jul 9 13:02:15.048216 containerd[1563]: time="2025-07-09T13:02:15.048156367Z" level=info msg="received exit event container_id:\"19f7bc844c909c854a1efb51c03a3b235caeb594ff54d11b85a684d76ee31e79\" id:\"19f7bc844c909c854a1efb51c03a3b235caeb594ff54d11b85a684d76ee31e79\" pid:3427 exited_at:{seconds:1752066135 nanos:38563279}" Jul 9 13:02:15.049985 containerd[1563]: time="2025-07-09T13:02:15.049936404Z" level=info msg="StartContainer for \"19f7bc844c909c854a1efb51c03a3b235caeb594ff54d11b85a684d76ee31e79\" returns successfully" Jul 9 13:02:15.074687 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-19f7bc844c909c854a1efb51c03a3b235caeb594ff54d11b85a684d76ee31e79-rootfs.mount: Deactivated successfully. 
Jul 9 13:02:15.093868 kubelet[2695]: E0709 13:02:15.093775 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-x4tjz" podUID="0a1ade6c-7a47-40a7-a38c-b0080894987b" Jul 9 13:02:15.201246 kubelet[2695]: I0709 13:02:15.201127 2695 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 9 13:02:15.202212 kubelet[2695]: E0709 13:02:15.202196 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 13:02:15.271038 kubelet[2695]: I0709 13:02:15.270598 2695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-59b56bd8db-qvtl6" podStartSLOduration=2.684854557 podStartE2EDuration="23.270580362s" podCreationTimestamp="2025-07-09 13:01:52 +0000 UTC" firstStartedPulling="2025-07-09 13:01:52.744749426 +0000 UTC m=+17.772703209" lastFinishedPulling="2025-07-09 13:02:13.330475231 +0000 UTC m=+38.358429014" observedRunningTime="2025-07-09 13:02:14.207693175 +0000 UTC m=+39.235646958" watchObservedRunningTime="2025-07-09 13:02:15.270580362 +0000 UTC m=+40.298534135" Jul 9 13:02:16.204945 kubelet[2695]: E0709 13:02:16.204906 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 13:02:16.205975 containerd[1563]: time="2025-07-09T13:02:16.205941421Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Jul 9 13:02:17.094252 kubelet[2695]: E0709 13:02:17.094166 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-x4tjz" podUID="0a1ade6c-7a47-40a7-a38c-b0080894987b" Jul 9 13:02:17.206287 kubelet[2695]: E0709 13:02:17.206251 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 13:02:19.093238 kubelet[2695]: E0709 13:02:19.093178 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-x4tjz" podUID="0a1ade6c-7a47-40a7-a38c-b0080894987b" Jul 9 13:02:19.685596 systemd[1]: Started sshd@8-10.0.0.14:22-10.0.0.1:37204.service - OpenSSH per-connection server daemon (10.0.0.1:37204). Jul 9 13:02:19.776733 sshd[3470]: Accepted publickey for core from 10.0.0.1 port 37204 ssh2: RSA SHA256:Ehsv9iPAmIJbEnlorOi35d2Kryfd05fXf88yv2g5tlI Jul 9 13:02:19.778523 sshd-session[3470]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 13:02:19.782823 systemd-logind[1538]: New session 9 of user core. Jul 9 13:02:19.792505 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 9 13:02:19.899306 sshd[3473]: Connection closed by 10.0.0.1 port 37204 Jul 9 13:02:19.899637 sshd-session[3470]: pam_unix(sshd:session): session closed for user core Jul 9 13:02:19.903549 systemd[1]: sshd@8-10.0.0.14:22-10.0.0.1:37204.service: Deactivated successfully. Jul 9 13:02:19.905623 systemd[1]: session-9.scope: Deactivated successfully. Jul 9 13:02:19.906290 systemd-logind[1538]: Session 9 logged out. Waiting for processes to exit. Jul 9 13:02:19.907322 systemd-logind[1538]: Removed session 9. 
Jul 9 13:02:21.093448 kubelet[2695]: E0709 13:02:21.093340 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-x4tjz" podUID="0a1ade6c-7a47-40a7-a38c-b0080894987b" Jul 9 13:02:23.093731 kubelet[2695]: E0709 13:02:23.093677 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-x4tjz" podUID="0a1ade6c-7a47-40a7-a38c-b0080894987b" Jul 9 13:02:23.669301 containerd[1563]: time="2025-07-09T13:02:23.669226626Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 13:02:23.670352 containerd[1563]: time="2025-07-09T13:02:23.670331712Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=70436221" Jul 9 13:02:23.671464 containerd[1563]: time="2025-07-09T13:02:23.671434574Z" level=info msg="ImageCreate event name:\"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 13:02:23.673599 containerd[1563]: time="2025-07-09T13:02:23.673554426Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 13:02:23.674278 containerd[1563]: time="2025-07-09T13:02:23.674251025Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest 
\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"71928924\" in 7.468263326s" Jul 9 13:02:23.674337 containerd[1563]: time="2025-07-09T13:02:23.674279148Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\"" Jul 9 13:02:23.676130 containerd[1563]: time="2025-07-09T13:02:23.676080562Z" level=info msg="CreateContainer within sandbox \"56306d4a5296ad2ead9699eb437bf8d867321e974de887f71c471b5f746107d3\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jul 9 13:02:23.684962 containerd[1563]: time="2025-07-09T13:02:23.683839898Z" level=info msg="Container 4415335b5f0eed1df6b48e632123aa881532be4b4811cb26691bf3db1363c500: CDI devices from CRI Config.CDIDevices: []" Jul 9 13:02:23.698230 containerd[1563]: time="2025-07-09T13:02:23.698169017Z" level=info msg="CreateContainer within sandbox \"56306d4a5296ad2ead9699eb437bf8d867321e974de887f71c471b5f746107d3\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"4415335b5f0eed1df6b48e632123aa881532be4b4811cb26691bf3db1363c500\"" Jul 9 13:02:23.698870 containerd[1563]: time="2025-07-09T13:02:23.698829688Z" level=info msg="StartContainer for \"4415335b5f0eed1df6b48e632123aa881532be4b4811cb26691bf3db1363c500\"" Jul 9 13:02:23.700702 containerd[1563]: time="2025-07-09T13:02:23.700673070Z" level=info msg="connecting to shim 4415335b5f0eed1df6b48e632123aa881532be4b4811cb26691bf3db1363c500" address="unix:///run/containerd/s/43e8785a741c8350c8666c13b3bc4d8d7470042445d71431d392ddc167f83cec" protocol=ttrpc version=3 Jul 9 13:02:23.726578 systemd[1]: Started cri-containerd-4415335b5f0eed1df6b48e632123aa881532be4b4811cb26691bf3db1363c500.scope - libcontainer container 4415335b5f0eed1df6b48e632123aa881532be4b4811cb26691bf3db1363c500. 
Jul 9 13:02:23.769857 containerd[1563]: time="2025-07-09T13:02:23.769803527Z" level=info msg="StartContainer for \"4415335b5f0eed1df6b48e632123aa881532be4b4811cb26691bf3db1363c500\" returns successfully" Jul 9 13:02:24.791024 containerd[1563]: time="2025-07-09T13:02:24.790949980Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 9 13:02:24.794322 systemd[1]: cri-containerd-4415335b5f0eed1df6b48e632123aa881532be4b4811cb26691bf3db1363c500.scope: Deactivated successfully. Jul 9 13:02:24.794763 systemd[1]: cri-containerd-4415335b5f0eed1df6b48e632123aa881532be4b4811cb26691bf3db1363c500.scope: Consumed 678ms CPU time, 180.6M memory peak, 2M read from disk, 171.2M written to disk. Jul 9 13:02:24.796489 containerd[1563]: time="2025-07-09T13:02:24.796436997Z" level=info msg="received exit event container_id:\"4415335b5f0eed1df6b48e632123aa881532be4b4811cb26691bf3db1363c500\" id:\"4415335b5f0eed1df6b48e632123aa881532be4b4811cb26691bf3db1363c500\" pid:3507 exited_at:{seconds:1752066144 nanos:796210632}" Jul 9 13:02:24.796644 containerd[1563]: time="2025-07-09T13:02:24.796509182Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4415335b5f0eed1df6b48e632123aa881532be4b4811cb26691bf3db1363c500\" id:\"4415335b5f0eed1df6b48e632123aa881532be4b4811cb26691bf3db1363c500\" pid:3507 exited_at:{seconds:1752066144 nanos:796210632}" Jul 9 13:02:24.820039 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4415335b5f0eed1df6b48e632123aa881532be4b4811cb26691bf3db1363c500-rootfs.mount: Deactivated successfully. 
Jul 9 13:02:24.831986 kubelet[2695]: I0709 13:02:24.831889 2695 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jul 9 13:02:24.925693 systemd[1]: Started sshd@9-10.0.0.14:22-10.0.0.1:37208.service - OpenSSH per-connection server daemon (10.0.0.1:37208). Jul 9 13:02:25.025620 systemd[1]: Created slice kubepods-burstable-pod6294eb09_ccd2_414a_90bb_afd069984c58.slice - libcontainer container kubepods-burstable-pod6294eb09_ccd2_414a_90bb_afd069984c58.slice. Jul 9 13:02:25.027158 sshd[3539]: Accepted publickey for core from 10.0.0.1 port 37208 ssh2: RSA SHA256:Ehsv9iPAmIJbEnlorOi35d2Kryfd05fXf88yv2g5tlI Jul 9 13:02:25.029338 sshd-session[3539]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 13:02:25.031645 systemd[1]: Created slice kubepods-besteffort-pod83f09b99_c580_4fba_9a5b_8dcbe9457bd1.slice - libcontainer container kubepods-besteffort-pod83f09b99_c580_4fba_9a5b_8dcbe9457bd1.slice. Jul 9 13:02:25.035208 systemd-logind[1538]: New session 10 of user core. Jul 9 13:02:25.045581 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 9 13:02:25.074904 systemd[1]: Created slice kubepods-burstable-pod30510e32_f8e3_41cd_b70c_1ce6900a24ee.slice - libcontainer container kubepods-burstable-pod30510e32_f8e3_41cd_b70c_1ce6900a24ee.slice. Jul 9 13:02:25.084281 systemd[1]: Created slice kubepods-besteffort-podbe0c468f_8558_4c7f_9123_1d550c90de18.slice - libcontainer container kubepods-besteffort-podbe0c468f_8558_4c7f_9123_1d550c90de18.slice. Jul 9 13:02:25.089306 systemd[1]: Created slice kubepods-besteffort-podfedd9e0a_54b7_42a0_9ec1_72f6e99e97c4.slice - libcontainer container kubepods-besteffort-podfedd9e0a_54b7_42a0_9ec1_72f6e99e97c4.slice. Jul 9 13:02:25.094928 systemd[1]: Created slice kubepods-besteffort-podeb6a2504_b349_4acd_ada6_a46fec76361c.slice - libcontainer container kubepods-besteffort-podeb6a2504_b349_4acd_ada6_a46fec76361c.slice. 
Jul 9 13:02:25.101716 systemd[1]: Created slice kubepods-besteffort-pod6ae36fb0_52ed_4533_b91d_fab218944275.slice - libcontainer container kubepods-besteffort-pod6ae36fb0_52ed_4533_b91d_fab218944275.slice. Jul 9 13:02:25.107349 systemd[1]: Created slice kubepods-besteffort-pod0a1ade6c_7a47_40a7_a38c_b0080894987b.slice - libcontainer container kubepods-besteffort-pod0a1ade6c_7a47_40a7_a38c_b0080894987b.slice. Jul 9 13:02:25.109850 containerd[1563]: time="2025-07-09T13:02:25.109805647Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-x4tjz,Uid:0a1ade6c-7a47-40a7-a38c-b0080894987b,Namespace:calico-system,Attempt:0,}" Jul 9 13:02:25.126290 kubelet[2695]: I0709 13:02:25.126239 2695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/83f09b99-c580-4fba-9a5b-8dcbe9457bd1-calico-apiserver-certs\") pod \"calico-apiserver-68df4b8f94-glcsr\" (UID: \"83f09b99-c580-4fba-9a5b-8dcbe9457bd1\") " pod="calico-apiserver/calico-apiserver-68df4b8f94-glcsr" Jul 9 13:02:25.126290 kubelet[2695]: I0709 13:02:25.126281 2695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6lpcl\" (UniqueName: \"kubernetes.io/projected/83f09b99-c580-4fba-9a5b-8dcbe9457bd1-kube-api-access-6lpcl\") pod \"calico-apiserver-68df4b8f94-glcsr\" (UID: \"83f09b99-c580-4fba-9a5b-8dcbe9457bd1\") " pod="calico-apiserver/calico-apiserver-68df4b8f94-glcsr" Jul 9 13:02:25.126410 kubelet[2695]: I0709 13:02:25.126298 2695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6294eb09-ccd2-414a-90bb-afd069984c58-config-volume\") pod \"coredns-7c65d6cfc9-mw6rj\" (UID: \"6294eb09-ccd2-414a-90bb-afd069984c58\") " pod="kube-system/coredns-7c65d6cfc9-mw6rj" Jul 9 13:02:25.126410 kubelet[2695]: I0709 13:02:25.126314 2695 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gcl4b\" (UniqueName: \"kubernetes.io/projected/eb6a2504-b349-4acd-ada6-a46fec76361c-kube-api-access-gcl4b\") pod \"calico-apiserver-68df4b8f94-d7grl\" (UID: \"eb6a2504-b349-4acd-ada6-a46fec76361c\") " pod="calico-apiserver/calico-apiserver-68df4b8f94-d7grl" Jul 9 13:02:25.126410 kubelet[2695]: I0709 13:02:25.126331 2695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/be0c468f-8558-4c7f-9123-1d550c90de18-tigera-ca-bundle\") pod \"calico-kube-controllers-766fbd8c89-f5gnv\" (UID: \"be0c468f-8558-4c7f-9123-1d550c90de18\") " pod="calico-system/calico-kube-controllers-766fbd8c89-f5gnv" Jul 9 13:02:25.126410 kubelet[2695]: I0709 13:02:25.126346 2695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwkgt\" (UniqueName: \"kubernetes.io/projected/be0c468f-8558-4c7f-9123-1d550c90de18-kube-api-access-xwkgt\") pod \"calico-kube-controllers-766fbd8c89-f5gnv\" (UID: \"be0c468f-8558-4c7f-9123-1d550c90de18\") " pod="calico-system/calico-kube-controllers-766fbd8c89-f5gnv" Jul 9 13:02:25.126410 kubelet[2695]: I0709 13:02:25.126361 2695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/eb6a2504-b349-4acd-ada6-a46fec76361c-calico-apiserver-certs\") pod \"calico-apiserver-68df4b8f94-d7grl\" (UID: \"eb6a2504-b349-4acd-ada6-a46fec76361c\") " pod="calico-apiserver/calico-apiserver-68df4b8f94-d7grl" Jul 9 13:02:25.126534 kubelet[2695]: I0709 13:02:25.126392 2695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9gvg\" (UniqueName: \"kubernetes.io/projected/6294eb09-ccd2-414a-90bb-afd069984c58-kube-api-access-j9gvg\") pod 
\"coredns-7c65d6cfc9-mw6rj\" (UID: \"6294eb09-ccd2-414a-90bb-afd069984c58\") " pod="kube-system/coredns-7c65d6cfc9-mw6rj" Jul 9 13:02:25.226806 kubelet[2695]: I0709 13:02:25.226710 2695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fedd9e0a-54b7-42a0-9ec1-72f6e99e97c4-whisker-ca-bundle\") pod \"whisker-6958998458-nhmmk\" (UID: \"fedd9e0a-54b7-42a0-9ec1-72f6e99e97c4\") " pod="calico-system/whisker-6958998458-nhmmk" Jul 9 13:02:25.226806 kubelet[2695]: I0709 13:02:25.226747 2695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjksn\" (UniqueName: \"kubernetes.io/projected/fedd9e0a-54b7-42a0-9ec1-72f6e99e97c4-kube-api-access-rjksn\") pod \"whisker-6958998458-nhmmk\" (UID: \"fedd9e0a-54b7-42a0-9ec1-72f6e99e97c4\") " pod="calico-system/whisker-6958998458-nhmmk" Jul 9 13:02:25.226806 kubelet[2695]: I0709 13:02:25.226773 2695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/fedd9e0a-54b7-42a0-9ec1-72f6e99e97c4-whisker-backend-key-pair\") pod \"whisker-6958998458-nhmmk\" (UID: \"fedd9e0a-54b7-42a0-9ec1-72f6e99e97c4\") " pod="calico-system/whisker-6958998458-nhmmk" Jul 9 13:02:25.226806 kubelet[2695]: I0709 13:02:25.226792 2695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zbhgc\" (UniqueName: \"kubernetes.io/projected/30510e32-f8e3-41cd-b70c-1ce6900a24ee-kube-api-access-zbhgc\") pod \"coredns-7c65d6cfc9-gw8lq\" (UID: \"30510e32-f8e3-41cd-b70c-1ce6900a24ee\") " pod="kube-system/coredns-7c65d6cfc9-gw8lq" Jul 9 13:02:25.227034 kubelet[2695]: I0709 13:02:25.226936 2695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qclg5\" (UniqueName: 
\"kubernetes.io/projected/6ae36fb0-52ed-4533-b91d-fab218944275-kube-api-access-qclg5\") pod \"goldmane-58fd7646b9-6p7m6\" (UID: \"6ae36fb0-52ed-4533-b91d-fab218944275\") " pod="calico-system/goldmane-58fd7646b9-6p7m6" Jul 9 13:02:25.227315 kubelet[2695]: I0709 13:02:25.227056 2695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/30510e32-f8e3-41cd-b70c-1ce6900a24ee-config-volume\") pod \"coredns-7c65d6cfc9-gw8lq\" (UID: \"30510e32-f8e3-41cd-b70c-1ce6900a24ee\") " pod="kube-system/coredns-7c65d6cfc9-gw8lq" Jul 9 13:02:25.227315 kubelet[2695]: I0709 13:02:25.227089 2695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ae36fb0-52ed-4533-b91d-fab218944275-config\") pod \"goldmane-58fd7646b9-6p7m6\" (UID: \"6ae36fb0-52ed-4533-b91d-fab218944275\") " pod="calico-system/goldmane-58fd7646b9-6p7m6" Jul 9 13:02:25.227315 kubelet[2695]: I0709 13:02:25.227107 2695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/6ae36fb0-52ed-4533-b91d-fab218944275-goldmane-key-pair\") pod \"goldmane-58fd7646b9-6p7m6\" (UID: \"6ae36fb0-52ed-4533-b91d-fab218944275\") " pod="calico-system/goldmane-58fd7646b9-6p7m6" Jul 9 13:02:25.228394 kubelet[2695]: I0709 13:02:25.227933 2695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6ae36fb0-52ed-4533-b91d-fab218944275-goldmane-ca-bundle\") pod \"goldmane-58fd7646b9-6p7m6\" (UID: \"6ae36fb0-52ed-4533-b91d-fab218944275\") " pod="calico-system/goldmane-58fd7646b9-6p7m6" Jul 9 13:02:25.381335 kubelet[2695]: E0709 13:02:25.380195 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the 
applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 13:02:25.381634 containerd[1563]: time="2025-07-09T13:02:25.381576676Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-gw8lq,Uid:30510e32-f8e3-41cd-b70c-1ce6900a24ee,Namespace:kube-system,Attempt:0,}" Jul 9 13:02:25.387273 containerd[1563]: time="2025-07-09T13:02:25.387247257Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-766fbd8c89-f5gnv,Uid:be0c468f-8558-4c7f-9123-1d550c90de18,Namespace:calico-system,Attempt:0,}" Jul 9 13:02:25.394188 containerd[1563]: time="2025-07-09T13:02:25.394060825Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6958998458-nhmmk,Uid:fedd9e0a-54b7-42a0-9ec1-72f6e99e97c4,Namespace:calico-system,Attempt:0,}" Jul 9 13:02:25.399210 containerd[1563]: time="2025-07-09T13:02:25.399179229Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68df4b8f94-d7grl,Uid:eb6a2504-b349-4acd-ada6-a46fec76361c,Namespace:calico-apiserver,Attempt:0,}" Jul 9 13:02:25.406601 containerd[1563]: time="2025-07-09T13:02:25.406516931Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-6p7m6,Uid:6ae36fb0-52ed-4533-b91d-fab218944275,Namespace:calico-system,Attempt:0,}" Jul 9 13:02:25.438566 sshd[3542]: Connection closed by 10.0.0.1 port 37208 Jul 9 13:02:25.438851 sshd-session[3539]: pam_unix(sshd:session): session closed for user core Jul 9 13:02:25.451102 systemd[1]: sshd@9-10.0.0.14:22-10.0.0.1:37208.service: Deactivated successfully. Jul 9 13:02:25.455761 systemd[1]: session-10.scope: Deactivated successfully. Jul 9 13:02:25.460491 systemd-logind[1538]: Session 10 logged out. Waiting for processes to exit. 
Jul 9 13:02:25.464897 containerd[1563]: time="2025-07-09T13:02:25.464454874Z" level=error msg="Failed to destroy network for sandbox \"710e5d34aa7a07cfe224b787d9b760238ae34d6bf8d85afe178acba6932353d3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 13:02:25.464501 systemd-logind[1538]: Removed session 10. Jul 9 13:02:25.472671 containerd[1563]: time="2025-07-09T13:02:25.472610152Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-x4tjz,Uid:0a1ade6c-7a47-40a7-a38c-b0080894987b,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"710e5d34aa7a07cfe224b787d9b760238ae34d6bf8d85afe178acba6932353d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 13:02:25.472953 kubelet[2695]: E0709 13:02:25.472898 2695 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"710e5d34aa7a07cfe224b787d9b760238ae34d6bf8d85afe178acba6932353d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 13:02:25.473025 kubelet[2695]: E0709 13:02:25.472995 2695 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"710e5d34aa7a07cfe224b787d9b760238ae34d6bf8d85afe178acba6932353d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-x4tjz" Jul 9 13:02:25.473025 kubelet[2695]: 
E0709 13:02:25.473019 2695 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"710e5d34aa7a07cfe224b787d9b760238ae34d6bf8d85afe178acba6932353d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-x4tjz" Jul 9 13:02:25.473092 kubelet[2695]: E0709 13:02:25.473061 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-x4tjz_calico-system(0a1ade6c-7a47-40a7-a38c-b0080894987b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-x4tjz_calico-system(0a1ade6c-7a47-40a7-a38c-b0080894987b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"710e5d34aa7a07cfe224b787d9b760238ae34d6bf8d85afe178acba6932353d3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-x4tjz" podUID="0a1ade6c-7a47-40a7-a38c-b0080894987b" Jul 9 13:02:25.543881 containerd[1563]: time="2025-07-09T13:02:25.543802710Z" level=error msg="Failed to destroy network for sandbox \"4ed9d2b5afeaa8bf30749563f3747944e799e4bee2dcbc4e1f3ab808899d99ab\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 13:02:25.546657 containerd[1563]: time="2025-07-09T13:02:25.546618440Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-gw8lq,Uid:30510e32-f8e3-41cd-b70c-1ce6900a24ee,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"4ed9d2b5afeaa8bf30749563f3747944e799e4bee2dcbc4e1f3ab808899d99ab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 13:02:25.547152 kubelet[2695]: E0709 13:02:25.547101 2695 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4ed9d2b5afeaa8bf30749563f3747944e799e4bee2dcbc4e1f3ab808899d99ab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 13:02:25.547215 kubelet[2695]: E0709 13:02:25.547180 2695 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4ed9d2b5afeaa8bf30749563f3747944e799e4bee2dcbc4e1f3ab808899d99ab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-gw8lq" Jul 9 13:02:25.547215 kubelet[2695]: E0709 13:02:25.547202 2695 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4ed9d2b5afeaa8bf30749563f3747944e799e4bee2dcbc4e1f3ab808899d99ab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-gw8lq" Jul 9 13:02:25.547299 kubelet[2695]: E0709 13:02:25.547257 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-gw8lq_kube-system(30510e32-f8e3-41cd-b70c-1ce6900a24ee)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"coredns-7c65d6cfc9-gw8lq_kube-system(30510e32-f8e3-41cd-b70c-1ce6900a24ee)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4ed9d2b5afeaa8bf30749563f3747944e799e4bee2dcbc4e1f3ab808899d99ab\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-gw8lq" podUID="30510e32-f8e3-41cd-b70c-1ce6900a24ee" Jul 9 13:02:25.559392 containerd[1563]: time="2025-07-09T13:02:25.559314366Z" level=error msg="Failed to destroy network for sandbox \"0a256275baa48a299e084d41112a71a5f64c2887c228234e60affae8570ba780\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 13:02:25.562271 containerd[1563]: time="2025-07-09T13:02:25.562153779Z" level=error msg="Failed to destroy network for sandbox \"555d2a5f89f574d1e6de5379caf6df9b646981a370f0c74dbbe57ce557d42e5b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 13:02:25.562555 containerd[1563]: time="2025-07-09T13:02:25.562529124Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6958998458-nhmmk,Uid:fedd9e0a-54b7-42a0-9ec1-72f6e99e97c4,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a256275baa48a299e084d41112a71a5f64c2887c228234e60affae8570ba780\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 13:02:25.563327 kubelet[2695]: E0709 13:02:25.563195 2695 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = 
Unknown desc = failed to setup network for sandbox \"0a256275baa48a299e084d41112a71a5f64c2887c228234e60affae8570ba780\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 13:02:25.563327 kubelet[2695]: E0709 13:02:25.563270 2695 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a256275baa48a299e084d41112a71a5f64c2887c228234e60affae8570ba780\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6958998458-nhmmk" Jul 9 13:02:25.563327 kubelet[2695]: E0709 13:02:25.563295 2695 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a256275baa48a299e084d41112a71a5f64c2887c228234e60affae8570ba780\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6958998458-nhmmk" Jul 9 13:02:25.563997 kubelet[2695]: E0709 13:02:25.563950 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6958998458-nhmmk_calico-system(fedd9e0a-54b7-42a0-9ec1-72f6e99e97c4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6958998458-nhmmk_calico-system(fedd9e0a-54b7-42a0-9ec1-72f6e99e97c4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0a256275baa48a299e084d41112a71a5f64c2887c228234e60affae8570ba780\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-system/whisker-6958998458-nhmmk" podUID="fedd9e0a-54b7-42a0-9ec1-72f6e99e97c4" Jul 9 13:02:25.564170 containerd[1563]: time="2025-07-09T13:02:25.564145359Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-766fbd8c89-f5gnv,Uid:be0c468f-8558-4c7f-9123-1d550c90de18,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"555d2a5f89f574d1e6de5379caf6df9b646981a370f0c74dbbe57ce557d42e5b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 13:02:25.565106 kubelet[2695]: E0709 13:02:25.564925 2695 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"555d2a5f89f574d1e6de5379caf6df9b646981a370f0c74dbbe57ce557d42e5b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 13:02:25.565106 kubelet[2695]: E0709 13:02:25.564993 2695 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"555d2a5f89f574d1e6de5379caf6df9b646981a370f0c74dbbe57ce557d42e5b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-766fbd8c89-f5gnv" Jul 9 13:02:25.565106 kubelet[2695]: E0709 13:02:25.565016 2695 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"555d2a5f89f574d1e6de5379caf6df9b646981a370f0c74dbbe57ce557d42e5b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-766fbd8c89-f5gnv" Jul 9 13:02:25.565216 kubelet[2695]: E0709 13:02:25.565061 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-766fbd8c89-f5gnv_calico-system(be0c468f-8558-4c7f-9123-1d550c90de18)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-766fbd8c89-f5gnv_calico-system(be0c468f-8558-4c7f-9123-1d550c90de18)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"555d2a5f89f574d1e6de5379caf6df9b646981a370f0c74dbbe57ce557d42e5b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-766fbd8c89-f5gnv" podUID="be0c468f-8558-4c7f-9123-1d550c90de18" Jul 9 13:02:25.580783 containerd[1563]: time="2025-07-09T13:02:25.580715803Z" level=error msg="Failed to destroy network for sandbox \"efd0bf878393b7cc593ad2fab05b5c710e2be9ac741d4778b8ecb33a62a9800f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 13:02:25.588361 containerd[1563]: time="2025-07-09T13:02:25.588204609Z" level=error msg="Failed to destroy network for sandbox \"9fe4bf3f80e7c970af473359f17e93c050b35007f5164d1aa58d7954f95822bd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 13:02:25.590500 containerd[1563]: time="2025-07-09T13:02:25.590463933Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-68df4b8f94-d7grl,Uid:eb6a2504-b349-4acd-ada6-a46fec76361c,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"efd0bf878393b7cc593ad2fab05b5c710e2be9ac741d4778b8ecb33a62a9800f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 13:02:25.590814 kubelet[2695]: E0709 13:02:25.590761 2695 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"efd0bf878393b7cc593ad2fab05b5c710e2be9ac741d4778b8ecb33a62a9800f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 13:02:25.590881 kubelet[2695]: E0709 13:02:25.590836 2695 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"efd0bf878393b7cc593ad2fab05b5c710e2be9ac741d4778b8ecb33a62a9800f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-68df4b8f94-d7grl" Jul 9 13:02:25.590881 kubelet[2695]: E0709 13:02:25.590868 2695 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"efd0bf878393b7cc593ad2fab05b5c710e2be9ac741d4778b8ecb33a62a9800f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-68df4b8f94-d7grl" Jul 9 13:02:25.590949 kubelet[2695]: E0709 13:02:25.590923 2695 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-68df4b8f94-d7grl_calico-apiserver(eb6a2504-b349-4acd-ada6-a46fec76361c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-68df4b8f94-d7grl_calico-apiserver(eb6a2504-b349-4acd-ada6-a46fec76361c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"efd0bf878393b7cc593ad2fab05b5c710e2be9ac741d4778b8ecb33a62a9800f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-68df4b8f94-d7grl" podUID="eb6a2504-b349-4acd-ada6-a46fec76361c" Jul 9 13:02:25.599400 containerd[1563]: time="2025-07-09T13:02:25.599325087Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-6p7m6,Uid:6ae36fb0-52ed-4533-b91d-fab218944275,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9fe4bf3f80e7c970af473359f17e93c050b35007f5164d1aa58d7954f95822bd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 13:02:25.599501 kubelet[2695]: E0709 13:02:25.599463 2695 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9fe4bf3f80e7c970af473359f17e93c050b35007f5164d1aa58d7954f95822bd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 13:02:25.599501 kubelet[2695]: E0709 13:02:25.599490 2695 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"9fe4bf3f80e7c970af473359f17e93c050b35007f5164d1aa58d7954f95822bd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-6p7m6" Jul 9 13:02:25.599565 kubelet[2695]: E0709 13:02:25.599504 2695 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9fe4bf3f80e7c970af473359f17e93c050b35007f5164d1aa58d7954f95822bd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-6p7m6" Jul 9 13:02:25.599565 kubelet[2695]: E0709 13:02:25.599532 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-58fd7646b9-6p7m6_calico-system(6ae36fb0-52ed-4533-b91d-fab218944275)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-58fd7646b9-6p7m6_calico-system(6ae36fb0-52ed-4533-b91d-fab218944275)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9fe4bf3f80e7c970af473359f17e93c050b35007f5164d1aa58d7954f95822bd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-6p7m6" podUID="6ae36fb0-52ed-4533-b91d-fab218944275" Jul 9 13:02:25.629905 kubelet[2695]: E0709 13:02:25.629782 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 13:02:25.630871 containerd[1563]: time="2025-07-09T13:02:25.630588177Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-7c65d6cfc9-mw6rj,Uid:6294eb09-ccd2-414a-90bb-afd069984c58,Namespace:kube-system,Attempt:0,}" Jul 9 13:02:25.635128 containerd[1563]: time="2025-07-09T13:02:25.633990157Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68df4b8f94-glcsr,Uid:83f09b99-c580-4fba-9a5b-8dcbe9457bd1,Namespace:calico-apiserver,Attempt:0,}" Jul 9 13:02:25.690488 containerd[1563]: time="2025-07-09T13:02:25.690410850Z" level=error msg="Failed to destroy network for sandbox \"bb71160e3f9fb218f1461a445c12c288ffca667703da6352c5c6d44e32326165\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 13:02:25.696059 containerd[1563]: time="2025-07-09T13:02:25.696009176Z" level=error msg="Failed to destroy network for sandbox \"c149c4e6b82698e0a1ebb0a2633a3e3c5fdbcfda2a43d18cd344cfed8b5190ab\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 13:02:25.790750 containerd[1563]: time="2025-07-09T13:02:25.790670805Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-mw6rj,Uid:6294eb09-ccd2-414a-90bb-afd069984c58,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"bb71160e3f9fb218f1461a445c12c288ffca667703da6352c5c6d44e32326165\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 13:02:25.791028 kubelet[2695]: E0709 13:02:25.790933 2695 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bb71160e3f9fb218f1461a445c12c288ffca667703da6352c5c6d44e32326165\": 
plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 13:02:25.791028 kubelet[2695]: E0709 13:02:25.790999 2695 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bb71160e3f9fb218f1461a445c12c288ffca667703da6352c5c6d44e32326165\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-mw6rj" Jul 9 13:02:25.791028 kubelet[2695]: E0709 13:02:25.791023 2695 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bb71160e3f9fb218f1461a445c12c288ffca667703da6352c5c6d44e32326165\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-mw6rj" Jul 9 13:02:25.791113 kubelet[2695]: E0709 13:02:25.791067 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-mw6rj_kube-system(6294eb09-ccd2-414a-90bb-afd069984c58)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-mw6rj_kube-system(6294eb09-ccd2-414a-90bb-afd069984c58)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bb71160e3f9fb218f1461a445c12c288ffca667703da6352c5c6d44e32326165\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-mw6rj" podUID="6294eb09-ccd2-414a-90bb-afd069984c58" Jul 9 13:02:25.809100 containerd[1563]: 
time="2025-07-09T13:02:25.809029167Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68df4b8f94-glcsr,Uid:83f09b99-c580-4fba-9a5b-8dcbe9457bd1,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c149c4e6b82698e0a1ebb0a2633a3e3c5fdbcfda2a43d18cd344cfed8b5190ab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 13:02:25.809561 kubelet[2695]: E0709 13:02:25.809203 2695 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c149c4e6b82698e0a1ebb0a2633a3e3c5fdbcfda2a43d18cd344cfed8b5190ab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 13:02:25.809561 kubelet[2695]: E0709 13:02:25.809243 2695 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c149c4e6b82698e0a1ebb0a2633a3e3c5fdbcfda2a43d18cd344cfed8b5190ab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-68df4b8f94-glcsr" Jul 9 13:02:25.809561 kubelet[2695]: E0709 13:02:25.809262 2695 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c149c4e6b82698e0a1ebb0a2633a3e3c5fdbcfda2a43d18cd344cfed8b5190ab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-68df4b8f94-glcsr" 
Jul 9 13:02:25.809647 kubelet[2695]: E0709 13:02:25.809313 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-68df4b8f94-glcsr_calico-apiserver(83f09b99-c580-4fba-9a5b-8dcbe9457bd1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-68df4b8f94-glcsr_calico-apiserver(83f09b99-c580-4fba-9a5b-8dcbe9457bd1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c149c4e6b82698e0a1ebb0a2633a3e3c5fdbcfda2a43d18cd344cfed8b5190ab\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-68df4b8f94-glcsr" podUID="83f09b99-c580-4fba-9a5b-8dcbe9457bd1" Jul 9 13:02:25.827785 systemd[1]: run-netns-cni\x2d2b3566c9\x2dd049\x2dffaa\x2dece2\x2d22fa7302e5ac.mount: Deactivated successfully. Jul 9 13:02:26.232217 containerd[1563]: time="2025-07-09T13:02:26.232137426Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Jul 9 13:02:30.238166 systemd[1]: Started sshd@10-10.0.0.14:22-10.0.0.1:42586.service - OpenSSH per-connection server daemon (10.0.0.1:42586). Jul 9 13:02:30.303951 sshd[3852]: Accepted publickey for core from 10.0.0.1 port 42586 ssh2: RSA SHA256:Ehsv9iPAmIJbEnlorOi35d2Kryfd05fXf88yv2g5tlI Jul 9 13:02:30.305212 sshd-session[3852]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 13:02:30.309817 systemd-logind[1538]: New session 11 of user core. Jul 9 13:02:30.320491 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 9 13:02:30.432799 sshd[3855]: Connection closed by 10.0.0.1 port 42586 Jul 9 13:02:30.433196 sshd-session[3852]: pam_unix(sshd:session): session closed for user core Jul 9 13:02:30.437706 systemd[1]: sshd@10-10.0.0.14:22-10.0.0.1:42586.service: Deactivated successfully. 
Jul 9 13:02:30.440015 systemd[1]: session-11.scope: Deactivated successfully. Jul 9 13:02:30.440927 systemd-logind[1538]: Session 11 logged out. Waiting for processes to exit. Jul 9 13:02:30.442550 systemd-logind[1538]: Removed session 11. Jul 9 13:02:35.447679 systemd[1]: Started sshd@11-10.0.0.14:22-10.0.0.1:42598.service - OpenSSH per-connection server daemon (10.0.0.1:42598). Jul 9 13:02:35.511360 sshd[3871]: Accepted publickey for core from 10.0.0.1 port 42598 ssh2: RSA SHA256:Ehsv9iPAmIJbEnlorOi35d2Kryfd05fXf88yv2g5tlI Jul 9 13:02:35.513289 sshd-session[3871]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 13:02:35.519352 systemd-logind[1538]: New session 12 of user core. Jul 9 13:02:35.527551 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 9 13:02:35.644584 sshd[3874]: Connection closed by 10.0.0.1 port 42598 Jul 9 13:02:35.645002 sshd-session[3871]: pam_unix(sshd:session): session closed for user core Jul 9 13:02:35.654361 systemd[1]: sshd@11-10.0.0.14:22-10.0.0.1:42598.service: Deactivated successfully. Jul 9 13:02:35.656471 systemd[1]: session-12.scope: Deactivated successfully. Jul 9 13:02:35.657257 systemd-logind[1538]: Session 12 logged out. Waiting for processes to exit. Jul 9 13:02:35.660235 systemd[1]: Started sshd@12-10.0.0.14:22-10.0.0.1:42606.service - OpenSSH per-connection server daemon (10.0.0.1:42606). Jul 9 13:02:35.660996 systemd-logind[1538]: Removed session 12. Jul 9 13:02:35.720697 sshd[3889]: Accepted publickey for core from 10.0.0.1 port 42606 ssh2: RSA SHA256:Ehsv9iPAmIJbEnlorOi35d2Kryfd05fXf88yv2g5tlI Jul 9 13:02:35.723155 sshd-session[3889]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 13:02:35.728227 systemd-logind[1538]: New session 13 of user core. Jul 9 13:02:35.735561 systemd[1]: Started session-13.scope - Session 13 of User core. 
Jul 9 13:02:35.904699 sshd[3892]: Connection closed by 10.0.0.1 port 42606 Jul 9 13:02:35.905434 sshd-session[3889]: pam_unix(sshd:session): session closed for user core Jul 9 13:02:35.916713 systemd[1]: sshd@12-10.0.0.14:22-10.0.0.1:42606.service: Deactivated successfully. Jul 9 13:02:35.919446 systemd[1]: session-13.scope: Deactivated successfully. Jul 9 13:02:35.920478 systemd-logind[1538]: Session 13 logged out. Waiting for processes to exit. Jul 9 13:02:35.925332 systemd[1]: Started sshd@13-10.0.0.14:22-10.0.0.1:42622.service - OpenSSH per-connection server daemon (10.0.0.1:42622). Jul 9 13:02:35.926915 systemd-logind[1538]: Removed session 13. Jul 9 13:02:35.982576 sshd[3903]: Accepted publickey for core from 10.0.0.1 port 42622 ssh2: RSA SHA256:Ehsv9iPAmIJbEnlorOi35d2Kryfd05fXf88yv2g5tlI Jul 9 13:02:35.984415 sshd-session[3903]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 13:02:35.990125 systemd-logind[1538]: New session 14 of user core. Jul 9 13:02:35.998661 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 9 13:02:36.156793 sshd[3906]: Connection closed by 10.0.0.1 port 42622 Jul 9 13:02:36.157236 sshd-session[3903]: pam_unix(sshd:session): session closed for user core Jul 9 13:02:36.163303 systemd[1]: sshd@13-10.0.0.14:22-10.0.0.1:42622.service: Deactivated successfully. Jul 9 13:02:36.166288 systemd[1]: session-14.scope: Deactivated successfully. Jul 9 13:02:36.167672 systemd-logind[1538]: Session 14 logged out. Waiting for processes to exit. Jul 9 13:02:36.169241 systemd-logind[1538]: Removed session 14. 
Jul 9 13:02:37.094312 kubelet[2695]: E0709 13:02:37.093815 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 13:02:37.094877 containerd[1563]: time="2025-07-09T13:02:37.094661773Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6958998458-nhmmk,Uid:fedd9e0a-54b7-42a0-9ec1-72f6e99e97c4,Namespace:calico-system,Attempt:0,}" Jul 9 13:02:37.095126 containerd[1563]: time="2025-07-09T13:02:37.095023782Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-6p7m6,Uid:6ae36fb0-52ed-4533-b91d-fab218944275,Namespace:calico-system,Attempt:0,}" Jul 9 13:02:37.095154 containerd[1563]: time="2025-07-09T13:02:37.095122407Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-gw8lq,Uid:30510e32-f8e3-41cd-b70c-1ce6900a24ee,Namespace:kube-system,Attempt:0,}" Jul 9 13:02:37.095323 containerd[1563]: time="2025-07-09T13:02:37.095285784Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68df4b8f94-d7grl,Uid:eb6a2504-b349-4acd-ada6-a46fec76361c,Namespace:calico-apiserver,Attempt:0,}" Jul 9 13:02:37.095473 containerd[1563]: time="2025-07-09T13:02:37.095408084Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-x4tjz,Uid:0a1ade6c-7a47-40a7-a38c-b0080894987b,Namespace:calico-system,Attempt:0,}" Jul 9 13:02:37.524801 containerd[1563]: time="2025-07-09T13:02:37.524743101Z" level=error msg="Failed to destroy network for sandbox \"7710503a352dc4860f800c93464886913a6dc8a7ff9210fcb096191741db7668\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 13:02:37.527703 containerd[1563]: time="2025-07-09T13:02:37.527659756Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:goldmane-58fd7646b9-6p7m6,Uid:6ae36fb0-52ed-4533-b91d-fab218944275,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7710503a352dc4860f800c93464886913a6dc8a7ff9210fcb096191741db7668\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 13:02:37.529407 kubelet[2695]: E0709 13:02:37.529100 2695 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7710503a352dc4860f800c93464886913a6dc8a7ff9210fcb096191741db7668\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 13:02:37.529407 kubelet[2695]: E0709 13:02:37.529187 2695 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7710503a352dc4860f800c93464886913a6dc8a7ff9210fcb096191741db7668\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-6p7m6" Jul 9 13:02:37.529407 kubelet[2695]: E0709 13:02:37.529210 2695 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7710503a352dc4860f800c93464886913a6dc8a7ff9210fcb096191741db7668\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-6p7m6" Jul 9 13:02:37.533023 kubelet[2695]: E0709 13:02:37.532950 2695 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-58fd7646b9-6p7m6_calico-system(6ae36fb0-52ed-4533-b91d-fab218944275)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-58fd7646b9-6p7m6_calico-system(6ae36fb0-52ed-4533-b91d-fab218944275)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7710503a352dc4860f800c93464886913a6dc8a7ff9210fcb096191741db7668\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-6p7m6" podUID="6ae36fb0-52ed-4533-b91d-fab218944275" Jul 9 13:02:37.550442 containerd[1563]: time="2025-07-09T13:02:37.549763217Z" level=error msg="Failed to destroy network for sandbox \"4933e5481689554dc327cc70dc6a7d44a337772b425b66c1b4ae7df9007e70a8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 13:02:37.553449 containerd[1563]: time="2025-07-09T13:02:37.553407607Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6958998458-nhmmk,Uid:fedd9e0a-54b7-42a0-9ec1-72f6e99e97c4,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4933e5481689554dc327cc70dc6a7d44a337772b425b66c1b4ae7df9007e70a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 13:02:37.553750 containerd[1563]: time="2025-07-09T13:02:37.553723600Z" level=error msg="Failed to destroy network for sandbox \"aadb9fa3cb9416b9ead6b0ceca5d8e03a810d3b3950f6ebd00a0bc090c405a16\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Jul 9 13:02:37.553855 kubelet[2695]: E0709 13:02:37.553803 2695 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4933e5481689554dc327cc70dc6a7d44a337772b425b66c1b4ae7df9007e70a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 13:02:37.553935 kubelet[2695]: E0709 13:02:37.553882 2695 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4933e5481689554dc327cc70dc6a7d44a337772b425b66c1b4ae7df9007e70a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6958998458-nhmmk" Jul 9 13:02:37.553935 kubelet[2695]: E0709 13:02:37.553904 2695 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4933e5481689554dc327cc70dc6a7d44a337772b425b66c1b4ae7df9007e70a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6958998458-nhmmk" Jul 9 13:02:37.554010 kubelet[2695]: E0709 13:02:37.553944 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6958998458-nhmmk_calico-system(fedd9e0a-54b7-42a0-9ec1-72f6e99e97c4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6958998458-nhmmk_calico-system(fedd9e0a-54b7-42a0-9ec1-72f6e99e97c4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"4933e5481689554dc327cc70dc6a7d44a337772b425b66c1b4ae7df9007e70a8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6958998458-nhmmk" podUID="fedd9e0a-54b7-42a0-9ec1-72f6e99e97c4" Jul 9 13:02:37.555198 containerd[1563]: time="2025-07-09T13:02:37.555172379Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-gw8lq,Uid:30510e32-f8e3-41cd-b70c-1ce6900a24ee,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"aadb9fa3cb9416b9ead6b0ceca5d8e03a810d3b3950f6ebd00a0bc090c405a16\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 13:02:37.556268 kubelet[2695]: E0709 13:02:37.556221 2695 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aadb9fa3cb9416b9ead6b0ceca5d8e03a810d3b3950f6ebd00a0bc090c405a16\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 13:02:37.556268 kubelet[2695]: E0709 13:02:37.556255 2695 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aadb9fa3cb9416b9ead6b0ceca5d8e03a810d3b3950f6ebd00a0bc090c405a16\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-gw8lq" Jul 9 13:02:37.556268 kubelet[2695]: E0709 13:02:37.556270 2695 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: 
code = Unknown desc = failed to setup network for sandbox \"aadb9fa3cb9416b9ead6b0ceca5d8e03a810d3b3950f6ebd00a0bc090c405a16\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-gw8lq" Jul 9 13:02:37.556729 kubelet[2695]: E0709 13:02:37.556292 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-gw8lq_kube-system(30510e32-f8e3-41cd-b70c-1ce6900a24ee)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-gw8lq_kube-system(30510e32-f8e3-41cd-b70c-1ce6900a24ee)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"aadb9fa3cb9416b9ead6b0ceca5d8e03a810d3b3950f6ebd00a0bc090c405a16\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-gw8lq" podUID="30510e32-f8e3-41cd-b70c-1ce6900a24ee" Jul 9 13:02:37.596407 containerd[1563]: time="2025-07-09T13:02:37.596308616Z" level=error msg="Failed to destroy network for sandbox \"966885aede94bfbd0c9749c736f50366913c5ffff29cc2a9857da49fca9cc6d4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 13:02:37.598324 containerd[1563]: time="2025-07-09T13:02:37.598250541Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-x4tjz,Uid:0a1ade6c-7a47-40a7-a38c-b0080894987b,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"966885aede94bfbd0c9749c736f50366913c5ffff29cc2a9857da49fca9cc6d4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: 
no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 13:02:37.598710 kubelet[2695]: E0709 13:02:37.598531 2695 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"966885aede94bfbd0c9749c736f50366913c5ffff29cc2a9857da49fca9cc6d4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 13:02:37.598710 kubelet[2695]: E0709 13:02:37.598614 2695 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"966885aede94bfbd0c9749c736f50366913c5ffff29cc2a9857da49fca9cc6d4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-x4tjz" Jul 9 13:02:37.598710 kubelet[2695]: E0709 13:02:37.598636 2695 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"966885aede94bfbd0c9749c736f50366913c5ffff29cc2a9857da49fca9cc6d4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-x4tjz" Jul 9 13:02:37.598977 kubelet[2695]: E0709 13:02:37.598677 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-x4tjz_calico-system(0a1ade6c-7a47-40a7-a38c-b0080894987b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-x4tjz_calico-system(0a1ade6c-7a47-40a7-a38c-b0080894987b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"966885aede94bfbd0c9749c736f50366913c5ffff29cc2a9857da49fca9cc6d4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-x4tjz" podUID="0a1ade6c-7a47-40a7-a38c-b0080894987b" Jul 9 13:02:37.610617 containerd[1563]: time="2025-07-09T13:02:37.610540189Z" level=error msg="Failed to destroy network for sandbox \"ee075736f1510a63b18070b9bbc8416e8b11d29f4028e4210c138546c54499d1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 13:02:37.612178 containerd[1563]: time="2025-07-09T13:02:37.612082335Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68df4b8f94-d7grl,Uid:eb6a2504-b349-4acd-ada6-a46fec76361c,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ee075736f1510a63b18070b9bbc8416e8b11d29f4028e4210c138546c54499d1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 13:02:37.612490 kubelet[2695]: E0709 13:02:37.612436 2695 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ee075736f1510a63b18070b9bbc8416e8b11d29f4028e4210c138546c54499d1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 13:02:37.612490 kubelet[2695]: E0709 13:02:37.612484 2695 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"ee075736f1510a63b18070b9bbc8416e8b11d29f4028e4210c138546c54499d1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-68df4b8f94-d7grl" Jul 9 13:02:37.612490 kubelet[2695]: E0709 13:02:37.612503 2695 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ee075736f1510a63b18070b9bbc8416e8b11d29f4028e4210c138546c54499d1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-68df4b8f94-d7grl" Jul 9 13:02:37.612744 kubelet[2695]: E0709 13:02:37.612632 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-68df4b8f94-d7grl_calico-apiserver(eb6a2504-b349-4acd-ada6-a46fec76361c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-68df4b8f94-d7grl_calico-apiserver(eb6a2504-b349-4acd-ada6-a46fec76361c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ee075736f1510a63b18070b9bbc8416e8b11d29f4028e4210c138546c54499d1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-68df4b8f94-d7grl" podUID="eb6a2504-b349-4acd-ada6-a46fec76361c" Jul 9 13:02:38.094136 kubelet[2695]: E0709 13:02:38.094092 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 13:02:38.094669 containerd[1563]: time="2025-07-09T13:02:38.094632687Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-766fbd8c89-f5gnv,Uid:be0c468f-8558-4c7f-9123-1d550c90de18,Namespace:calico-system,Attempt:0,}" Jul 9 13:02:38.094998 containerd[1563]: time="2025-07-09T13:02:38.094957225Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-mw6rj,Uid:6294eb09-ccd2-414a-90bb-afd069984c58,Namespace:kube-system,Attempt:0,}" Jul 9 13:02:38.095901 containerd[1563]: time="2025-07-09T13:02:38.095879336Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68df4b8f94-glcsr,Uid:83f09b99-c580-4fba-9a5b-8dcbe9457bd1,Namespace:calico-apiserver,Attempt:0,}" Jul 9 13:02:38.437304 systemd[1]: run-netns-cni\x2d0d1e43fd\x2d034c\x2df03b\x2dcc0b\x2d0ec0459fd0ec.mount: Deactivated successfully. Jul 9 13:02:38.437447 systemd[1]: run-netns-cni\x2d4d8a5958\x2dee25\x2daba0\x2d41e8\x2d66803291875e.mount: Deactivated successfully. Jul 9 13:02:38.437520 systemd[1]: run-netns-cni\x2dd82f4b9e\x2db184\x2d7ca6\x2debd2\x2d2c627c6b2449.mount: Deactivated successfully. Jul 9 13:02:38.437600 systemd[1]: run-netns-cni\x2d64678d25\x2d604f\x2d35e3\x2d39a9\x2d6b16510d5cb1.mount: Deactivated successfully. Jul 9 13:02:38.437669 systemd[1]: run-netns-cni\x2d6ebd9b69\x2dafc8\x2d25d9\x2da223\x2d875d37eaf0e7.mount: Deactivated successfully. Jul 9 13:02:39.484238 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount918200476.mount: Deactivated successfully. Jul 9 13:02:40.880538 containerd[1563]: time="2025-07-09T13:02:40.880449505Z" level=error msg="Failed to destroy network for sandbox \"69737aaee78c8512395765f65521f85fd2fc57c4b2934716b9799c510458df89\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 13:02:40.883322 systemd[1]: run-netns-cni\x2d2138d7cb\x2d77ea\x2da50f\x2dbabf\x2de96d616adf53.mount: Deactivated successfully. 
Jul 9 13:02:41.170797 systemd[1]: Started sshd@14-10.0.0.14:22-10.0.0.1:48964.service - OpenSSH per-connection server daemon (10.0.0.1:48964). Jul 9 13:02:41.195396 containerd[1563]: time="2025-07-09T13:02:41.194585064Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-mw6rj,Uid:6294eb09-ccd2-414a-90bb-afd069984c58,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"69737aaee78c8512395765f65521f85fd2fc57c4b2934716b9799c510458df89\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 13:02:41.196138 kubelet[2695]: E0709 13:02:41.195795 2695 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"69737aaee78c8512395765f65521f85fd2fc57c4b2934716b9799c510458df89\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 13:02:41.196138 kubelet[2695]: E0709 13:02:41.195879 2695 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"69737aaee78c8512395765f65521f85fd2fc57c4b2934716b9799c510458df89\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-mw6rj" Jul 9 13:02:41.196138 kubelet[2695]: E0709 13:02:41.195900 2695 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"69737aaee78c8512395765f65521f85fd2fc57c4b2934716b9799c510458df89\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-mw6rj" Jul 9 13:02:41.196663 kubelet[2695]: E0709 13:02:41.195949 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-mw6rj_kube-system(6294eb09-ccd2-414a-90bb-afd069984c58)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-mw6rj_kube-system(6294eb09-ccd2-414a-90bb-afd069984c58)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"69737aaee78c8512395765f65521f85fd2fc57c4b2934716b9799c510458df89\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-mw6rj" podUID="6294eb09-ccd2-414a-90bb-afd069984c58" Jul 9 13:02:41.215653 containerd[1563]: time="2025-07-09T13:02:41.213520363Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 13:02:41.223933 containerd[1563]: time="2025-07-09T13:02:41.222778896Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=158500163" Jul 9 13:02:41.223933 containerd[1563]: time="2025-07-09T13:02:41.223708619Z" level=info msg="ImageCreate event name:\"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 13:02:41.232017 containerd[1563]: time="2025-07-09T13:02:41.231961644Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 13:02:41.233479 containerd[1563]: time="2025-07-09T13:02:41.233443788Z" level=info msg="Pulled image 
\"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"158500025\" in 15.001260885s" Jul 9 13:02:41.233538 containerd[1563]: time="2025-07-09T13:02:41.233483655Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\"" Jul 9 13:02:41.250982 sshd[4119]: Accepted publickey for core from 10.0.0.1 port 48964 ssh2: RSA SHA256:Ehsv9iPAmIJbEnlorOi35d2Kryfd05fXf88yv2g5tlI Jul 9 13:02:41.254037 sshd-session[4119]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 13:02:41.267649 containerd[1563]: time="2025-07-09T13:02:41.267124651Z" level=info msg="CreateContainer within sandbox \"56306d4a5296ad2ead9699eb437bf8d867321e974de887f71c471b5f746107d3\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 9 13:02:41.284622 systemd-logind[1538]: New session 15 of user core. Jul 9 13:02:41.288640 systemd[1]: Started session-15.scope - Session 15 of User core. 
Jul 9 13:02:41.303399 containerd[1563]: time="2025-07-09T13:02:41.303297316Z" level=error msg="Failed to destroy network for sandbox \"cfdb60893294c4cef77186c1e1787baf11eed7cc72b432860e803fc1275bf3c6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 13:02:41.304527 containerd[1563]: time="2025-07-09T13:02:41.304265564Z" level=info msg="Container 47bfc1d85e07abec5b98ded573f4f6af28718c30f5eadbadcc47b52cb90fdefc: CDI devices from CRI Config.CDIDevices: []" Jul 9 13:02:41.333449 containerd[1563]: time="2025-07-09T13:02:41.333355313Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-766fbd8c89-f5gnv,Uid:be0c468f-8558-4c7f-9123-1d550c90de18,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"cfdb60893294c4cef77186c1e1787baf11eed7cc72b432860e803fc1275bf3c6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 13:02:41.334282 kubelet[2695]: E0709 13:02:41.333830 2695 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cfdb60893294c4cef77186c1e1787baf11eed7cc72b432860e803fc1275bf3c6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 13:02:41.334282 kubelet[2695]: E0709 13:02:41.333931 2695 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cfdb60893294c4cef77186c1e1787baf11eed7cc72b432860e803fc1275bf3c6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-766fbd8c89-f5gnv" Jul 9 13:02:41.334282 kubelet[2695]: E0709 13:02:41.333954 2695 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cfdb60893294c4cef77186c1e1787baf11eed7cc72b432860e803fc1275bf3c6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-766fbd8c89-f5gnv" Jul 9 13:02:41.334604 kubelet[2695]: E0709 13:02:41.334017 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-766fbd8c89-f5gnv_calico-system(be0c468f-8558-4c7f-9123-1d550c90de18)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-766fbd8c89-f5gnv_calico-system(be0c468f-8558-4c7f-9123-1d550c90de18)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cfdb60893294c4cef77186c1e1787baf11eed7cc72b432860e803fc1275bf3c6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-766fbd8c89-f5gnv" podUID="be0c468f-8558-4c7f-9123-1d550c90de18" Jul 9 13:02:41.348408 containerd[1563]: time="2025-07-09T13:02:41.347633522Z" level=error msg="Failed to destroy network for sandbox \"da5f7e761e7a5a39c8936f0d1dcdfd845a1e8feff2280b654563412baf29ddbd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 13:02:41.365880 containerd[1563]: time="2025-07-09T13:02:41.365801993Z" level=error 
msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68df4b8f94-glcsr,Uid:83f09b99-c580-4fba-9a5b-8dcbe9457bd1,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"da5f7e761e7a5a39c8936f0d1dcdfd845a1e8feff2280b654563412baf29ddbd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 13:02:41.366648 kubelet[2695]: E0709 13:02:41.366559 2695 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"da5f7e761e7a5a39c8936f0d1dcdfd845a1e8feff2280b654563412baf29ddbd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 13:02:41.366882 kubelet[2695]: E0709 13:02:41.366717 2695 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"da5f7e761e7a5a39c8936f0d1dcdfd845a1e8feff2280b654563412baf29ddbd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-68df4b8f94-glcsr" Jul 9 13:02:41.366882 kubelet[2695]: E0709 13:02:41.366871 2695 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"da5f7e761e7a5a39c8936f0d1dcdfd845a1e8feff2280b654563412baf29ddbd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-68df4b8f94-glcsr" Jul 9 13:02:41.366983 kubelet[2695]: E0709 
13:02:41.366937 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-68df4b8f94-glcsr_calico-apiserver(83f09b99-c580-4fba-9a5b-8dcbe9457bd1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-68df4b8f94-glcsr_calico-apiserver(83f09b99-c580-4fba-9a5b-8dcbe9457bd1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"da5f7e761e7a5a39c8936f0d1dcdfd845a1e8feff2280b654563412baf29ddbd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-68df4b8f94-glcsr" podUID="83f09b99-c580-4fba-9a5b-8dcbe9457bd1" Jul 9 13:02:41.379961 containerd[1563]: time="2025-07-09T13:02:41.379913147Z" level=info msg="CreateContainer within sandbox \"56306d4a5296ad2ead9699eb437bf8d867321e974de887f71c471b5f746107d3\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"47bfc1d85e07abec5b98ded573f4f6af28718c30f5eadbadcc47b52cb90fdefc\"" Jul 9 13:02:41.380752 containerd[1563]: time="2025-07-09T13:02:41.380580271Z" level=info msg="StartContainer for \"47bfc1d85e07abec5b98ded573f4f6af28718c30f5eadbadcc47b52cb90fdefc\"" Jul 9 13:02:41.382472 containerd[1563]: time="2025-07-09T13:02:41.382440588Z" level=info msg="connecting to shim 47bfc1d85e07abec5b98ded573f4f6af28718c30f5eadbadcc47b52cb90fdefc" address="unix:///run/containerd/s/43e8785a741c8350c8666c13b3bc4d8d7470042445d71431d392ddc167f83cec" protocol=ttrpc version=3 Jul 9 13:02:41.416420 systemd[1]: Started cri-containerd-47bfc1d85e07abec5b98ded573f4f6af28718c30f5eadbadcc47b52cb90fdefc.scope - libcontainer container 47bfc1d85e07abec5b98ded573f4f6af28718c30f5eadbadcc47b52cb90fdefc. 
Jul 9 13:02:41.440781 kubelet[2695]: I0709 13:02:41.440599 2695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fedd9e0a-54b7-42a0-9ec1-72f6e99e97c4-whisker-ca-bundle\") pod \"fedd9e0a-54b7-42a0-9ec1-72f6e99e97c4\" (UID: \"fedd9e0a-54b7-42a0-9ec1-72f6e99e97c4\") " Jul 9 13:02:41.440781 kubelet[2695]: I0709 13:02:41.440662 2695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/fedd9e0a-54b7-42a0-9ec1-72f6e99e97c4-whisker-backend-key-pair\") pod \"fedd9e0a-54b7-42a0-9ec1-72f6e99e97c4\" (UID: \"fedd9e0a-54b7-42a0-9ec1-72f6e99e97c4\") " Jul 9 13:02:41.441075 kubelet[2695]: I0709 13:02:41.440689 2695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rjksn\" (UniqueName: \"kubernetes.io/projected/fedd9e0a-54b7-42a0-9ec1-72f6e99e97c4-kube-api-access-rjksn\") pod \"fedd9e0a-54b7-42a0-9ec1-72f6e99e97c4\" (UID: \"fedd9e0a-54b7-42a0-9ec1-72f6e99e97c4\") " Jul 9 13:02:41.441949 kubelet[2695]: I0709 13:02:41.441826 2695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fedd9e0a-54b7-42a0-9ec1-72f6e99e97c4-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "fedd9e0a-54b7-42a0-9ec1-72f6e99e97c4" (UID: "fedd9e0a-54b7-42a0-9ec1-72f6e99e97c4"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 9 13:02:41.446204 kubelet[2695]: I0709 13:02:41.446170 2695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fedd9e0a-54b7-42a0-9ec1-72f6e99e97c4-kube-api-access-rjksn" (OuterVolumeSpecName: "kube-api-access-rjksn") pod "fedd9e0a-54b7-42a0-9ec1-72f6e99e97c4" (UID: "fedd9e0a-54b7-42a0-9ec1-72f6e99e97c4"). InnerVolumeSpecName "kube-api-access-rjksn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 9 13:02:41.446279 kubelet[2695]: I0709 13:02:41.446252 2695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fedd9e0a-54b7-42a0-9ec1-72f6e99e97c4-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "fedd9e0a-54b7-42a0-9ec1-72f6e99e97c4" (UID: "fedd9e0a-54b7-42a0-9ec1-72f6e99e97c4"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 9 13:02:41.447259 sshd[4187]: Connection closed by 10.0.0.1 port 48964 Jul 9 13:02:41.449440 sshd-session[4119]: pam_unix(sshd:session): session closed for user core Jul 9 13:02:41.453018 systemd[1]: sshd@14-10.0.0.14:22-10.0.0.1:48964.service: Deactivated successfully. Jul 9 13:02:41.456003 systemd[1]: session-15.scope: Deactivated successfully. Jul 9 13:02:41.458922 systemd-logind[1538]: Session 15 logged out. Waiting for processes to exit. Jul 9 13:02:41.462363 systemd-logind[1538]: Removed session 15. 
Jul 9 13:02:41.493514 containerd[1563]: time="2025-07-09T13:02:41.493451317Z" level=info msg="StartContainer for \"47bfc1d85e07abec5b98ded573f4f6af28718c30f5eadbadcc47b52cb90fdefc\" returns successfully" Jul 9 13:02:41.542286 kubelet[2695]: I0709 13:02:41.542178 2695 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/fedd9e0a-54b7-42a0-9ec1-72f6e99e97c4-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Jul 9 13:02:41.542286 kubelet[2695]: I0709 13:02:41.542242 2695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rjksn\" (UniqueName: \"kubernetes.io/projected/fedd9e0a-54b7-42a0-9ec1-72f6e99e97c4-kube-api-access-rjksn\") on node \"localhost\" DevicePath \"\"" Jul 9 13:02:41.542286 kubelet[2695]: I0709 13:02:41.542255 2695 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fedd9e0a-54b7-42a0-9ec1-72f6e99e97c4-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Jul 9 13:02:41.565566 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jul 9 13:02:41.566368 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jul 9 13:02:41.620666 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1254629757.mount: Deactivated successfully. Jul 9 13:02:41.620791 systemd[1]: run-netns-cni\x2da360def3\x2d0143\x2dec58\x2db3ff\x2dc3a81e9d32d0.mount: Deactivated successfully. Jul 9 13:02:41.620861 systemd[1]: run-netns-cni\x2d2eaad62f\x2d512c\x2d63e4\x2dd3db\x2d5c6b45b88111.mount: Deactivated successfully. Jul 9 13:02:41.620927 systemd[1]: var-lib-kubelet-pods-fedd9e0a\x2d54b7\x2d42a0\x2d9ec1\x2d72f6e99e97c4-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2drjksn.mount: Deactivated successfully. 
Jul 9 13:02:41.621013 systemd[1]: var-lib-kubelet-pods-fedd9e0a\x2d54b7\x2d42a0\x2d9ec1\x2d72f6e99e97c4-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jul 9 13:02:42.302243 systemd[1]: Removed slice kubepods-besteffort-podfedd9e0a_54b7_42a0_9ec1_72f6e99e97c4.slice - libcontainer container kubepods-besteffort-podfedd9e0a_54b7_42a0_9ec1_72f6e99e97c4.slice. Jul 9 13:02:42.322824 kubelet[2695]: I0709 13:02:42.322554 2695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-nsnl6" podStartSLOduration=2.126546633 podStartE2EDuration="50.322532958s" podCreationTimestamp="2025-07-09 13:01:52 +0000 UTC" firstStartedPulling="2025-07-09 13:01:53.043629562 +0000 UTC m=+18.071583335" lastFinishedPulling="2025-07-09 13:02:41.239615877 +0000 UTC m=+66.267569660" observedRunningTime="2025-07-09 13:02:42.312439322 +0000 UTC m=+67.340393135" watchObservedRunningTime="2025-07-09 13:02:42.322532958 +0000 UTC m=+67.350486741" Jul 9 13:02:43.095202 kubelet[2695]: E0709 13:02:43.094675 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 13:02:43.097130 kubelet[2695]: I0709 13:02:43.096927 2695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fedd9e0a-54b7-42a0-9ec1-72f6e99e97c4" path="/var/lib/kubelet/pods/fedd9e0a-54b7-42a0-9ec1-72f6e99e97c4/volumes" Jul 9 13:02:43.347962 systemd-networkd[1485]: vxlan.calico: Link UP Jul 9 13:02:43.348359 systemd-networkd[1485]: vxlan.calico: Gained carrier Jul 9 13:02:43.682068 containerd[1563]: time="2025-07-09T13:02:43.681915128Z" level=info msg="TaskExit event in podsandbox handler container_id:\"47bfc1d85e07abec5b98ded573f4f6af28718c30f5eadbadcc47b52cb90fdefc\" id:\"748e4db2d59a1edac04858d5955d68ccff4df591e9b934155abed59e0e7f52d4\" pid:4462 exit_status:1 exited_at:{seconds:1752066163 nanos:681509222}" 
Jul 9 13:02:43.767685 containerd[1563]: time="2025-07-09T13:02:43.767640518Z" level=info msg="TaskExit event in podsandbox handler container_id:\"47bfc1d85e07abec5b98ded573f4f6af28718c30f5eadbadcc47b52cb90fdefc\" id:\"c43c167ad06c15c97211269127a7988376365479fece6dc7317e31d299178519\" pid:4500 exit_status:1 exited_at:{seconds:1752066163 nanos:767213181}" Jul 9 13:02:45.209585 systemd-networkd[1485]: vxlan.calico: Gained IPv6LL Jul 9 13:02:46.461993 systemd[1]: Started sshd@15-10.0.0.14:22-10.0.0.1:34222.service - OpenSSH per-connection server daemon (10.0.0.1:34222). Jul 9 13:02:46.526529 sshd[4518]: Accepted publickey for core from 10.0.0.1 port 34222 ssh2: RSA SHA256:Ehsv9iPAmIJbEnlorOi35d2Kryfd05fXf88yv2g5tlI Jul 9 13:02:46.554339 sshd-session[4518]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 13:02:46.559009 systemd-logind[1538]: New session 16 of user core. Jul 9 13:02:46.567524 systemd[1]: Started session-16.scope - Session 16 of User core. Jul 9 13:02:46.709025 sshd[4521]: Connection closed by 10.0.0.1 port 34222 Jul 9 13:02:46.709457 sshd-session[4518]: pam_unix(sshd:session): session closed for user core Jul 9 13:02:46.715280 systemd[1]: sshd@15-10.0.0.14:22-10.0.0.1:34222.service: Deactivated successfully. Jul 9 13:02:46.717557 systemd[1]: session-16.scope: Deactivated successfully. Jul 9 13:02:46.718474 systemd-logind[1538]: Session 16 logged out. Waiting for processes to exit. Jul 9 13:02:46.719866 systemd-logind[1538]: Removed session 16. 
Jul 9 13:02:48.093988 kubelet[2695]: E0709 13:02:48.093919 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 13:02:48.094556 containerd[1563]: time="2025-07-09T13:02:48.094271421Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68df4b8f94-d7grl,Uid:eb6a2504-b349-4acd-ada6-a46fec76361c,Namespace:calico-apiserver,Attempt:0,}" Jul 9 13:02:48.362494 systemd-networkd[1485]: cali0f4e371ec22: Link UP Jul 9 13:02:48.363617 systemd-networkd[1485]: cali0f4e371ec22: Gained carrier Jul 9 13:02:48.382432 containerd[1563]: 2025-07-09 13:02:48.153 [INFO][4534] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--68df4b8f94--d7grl-eth0 calico-apiserver-68df4b8f94- calico-apiserver eb6a2504-b349-4acd-ada6-a46fec76361c 928 0 2025-07-09 13:01:49 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:68df4b8f94 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-68df4b8f94-d7grl eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali0f4e371ec22 [] [] }} ContainerID="4576385dd08fcd1da71c6e8652db40f5cacb274ca81bfd9fa09a27c6f6427447" Namespace="calico-apiserver" Pod="calico-apiserver-68df4b8f94-d7grl" WorkloadEndpoint="localhost-k8s-calico--apiserver--68df4b8f94--d7grl-" Jul 9 13:02:48.382432 containerd[1563]: 2025-07-09 13:02:48.153 [INFO][4534] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4576385dd08fcd1da71c6e8652db40f5cacb274ca81bfd9fa09a27c6f6427447" Namespace="calico-apiserver" Pod="calico-apiserver-68df4b8f94-d7grl" WorkloadEndpoint="localhost-k8s-calico--apiserver--68df4b8f94--d7grl-eth0" Jul 9 
13:02:48.382432 containerd[1563]: 2025-07-09 13:02:48.314 [INFO][4548] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4576385dd08fcd1da71c6e8652db40f5cacb274ca81bfd9fa09a27c6f6427447" HandleID="k8s-pod-network.4576385dd08fcd1da71c6e8652db40f5cacb274ca81bfd9fa09a27c6f6427447" Workload="localhost-k8s-calico--apiserver--68df4b8f94--d7grl-eth0" Jul 9 13:02:48.382758 containerd[1563]: 2025-07-09 13:02:48.317 [INFO][4548] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4576385dd08fcd1da71c6e8652db40f5cacb274ca81bfd9fa09a27c6f6427447" HandleID="k8s-pod-network.4576385dd08fcd1da71c6e8652db40f5cacb274ca81bfd9fa09a27c6f6427447" Workload="localhost-k8s-calico--apiserver--68df4b8f94--d7grl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cd8c0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-68df4b8f94-d7grl", "timestamp":"2025-07-09 13:02:48.314175245 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 9 13:02:48.382758 containerd[1563]: 2025-07-09 13:02:48.317 [INFO][4548] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 9 13:02:48.382758 containerd[1563]: 2025-07-09 13:02:48.319 [INFO][4548] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 9 13:02:48.382758 containerd[1563]: 2025-07-09 13:02:48.319 [INFO][4548] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 9 13:02:48.382758 containerd[1563]: 2025-07-09 13:02:48.330 [INFO][4548] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4576385dd08fcd1da71c6e8652db40f5cacb274ca81bfd9fa09a27c6f6427447" host="localhost" Jul 9 13:02:48.382758 containerd[1563]: 2025-07-09 13:02:48.336 [INFO][4548] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 9 13:02:48.382758 containerd[1563]: 2025-07-09 13:02:48.341 [INFO][4548] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 9 13:02:48.382758 containerd[1563]: 2025-07-09 13:02:48.343 [INFO][4548] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 9 13:02:48.382758 containerd[1563]: 2025-07-09 13:02:48.344 [INFO][4548] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 9 13:02:48.382758 containerd[1563]: 2025-07-09 13:02:48.345 [INFO][4548] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4576385dd08fcd1da71c6e8652db40f5cacb274ca81bfd9fa09a27c6f6427447" host="localhost" Jul 9 13:02:48.383076 containerd[1563]: 2025-07-09 13:02:48.346 [INFO][4548] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.4576385dd08fcd1da71c6e8652db40f5cacb274ca81bfd9fa09a27c6f6427447 Jul 9 13:02:48.383076 containerd[1563]: 2025-07-09 13:02:48.349 [INFO][4548] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4576385dd08fcd1da71c6e8652db40f5cacb274ca81bfd9fa09a27c6f6427447" host="localhost" Jul 9 13:02:48.383076 containerd[1563]: 2025-07-09 13:02:48.354 [INFO][4548] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 
handle="k8s-pod-network.4576385dd08fcd1da71c6e8652db40f5cacb274ca81bfd9fa09a27c6f6427447" host="localhost" Jul 9 13:02:48.383076 containerd[1563]: 2025-07-09 13:02:48.354 [INFO][4548] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.4576385dd08fcd1da71c6e8652db40f5cacb274ca81bfd9fa09a27c6f6427447" host="localhost" Jul 9 13:02:48.383076 containerd[1563]: 2025-07-09 13:02:48.354 [INFO][4548] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 9 13:02:48.383076 containerd[1563]: 2025-07-09 13:02:48.354 [INFO][4548] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="4576385dd08fcd1da71c6e8652db40f5cacb274ca81bfd9fa09a27c6f6427447" HandleID="k8s-pod-network.4576385dd08fcd1da71c6e8652db40f5cacb274ca81bfd9fa09a27c6f6427447" Workload="localhost-k8s-calico--apiserver--68df4b8f94--d7grl-eth0" Jul 9 13:02:48.383248 containerd[1563]: 2025-07-09 13:02:48.357 [INFO][4534] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4576385dd08fcd1da71c6e8652db40f5cacb274ca81bfd9fa09a27c6f6427447" Namespace="calico-apiserver" Pod="calico-apiserver-68df4b8f94-d7grl" WorkloadEndpoint="localhost-k8s-calico--apiserver--68df4b8f94--d7grl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--68df4b8f94--d7grl-eth0", GenerateName:"calico-apiserver-68df4b8f94-", Namespace:"calico-apiserver", SelfLink:"", UID:"eb6a2504-b349-4acd-ada6-a46fec76361c", ResourceVersion:"928", Generation:0, CreationTimestamp:time.Date(2025, time.July, 9, 13, 1, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"68df4b8f94", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-68df4b8f94-d7grl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0f4e371ec22", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 9 13:02:48.383324 containerd[1563]: 2025-07-09 13:02:48.358 [INFO][4534] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="4576385dd08fcd1da71c6e8652db40f5cacb274ca81bfd9fa09a27c6f6427447" Namespace="calico-apiserver" Pod="calico-apiserver-68df4b8f94-d7grl" WorkloadEndpoint="localhost-k8s-calico--apiserver--68df4b8f94--d7grl-eth0" Jul 9 13:02:48.383324 containerd[1563]: 2025-07-09 13:02:48.358 [INFO][4534] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0f4e371ec22 ContainerID="4576385dd08fcd1da71c6e8652db40f5cacb274ca81bfd9fa09a27c6f6427447" Namespace="calico-apiserver" Pod="calico-apiserver-68df4b8f94-d7grl" WorkloadEndpoint="localhost-k8s-calico--apiserver--68df4b8f94--d7grl-eth0" Jul 9 13:02:48.383324 containerd[1563]: 2025-07-09 13:02:48.364 [INFO][4534] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4576385dd08fcd1da71c6e8652db40f5cacb274ca81bfd9fa09a27c6f6427447" Namespace="calico-apiserver" Pod="calico-apiserver-68df4b8f94-d7grl" WorkloadEndpoint="localhost-k8s-calico--apiserver--68df4b8f94--d7grl-eth0" Jul 9 13:02:48.383466 containerd[1563]: 2025-07-09 13:02:48.365 [INFO][4534] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID 
to endpoint ContainerID="4576385dd08fcd1da71c6e8652db40f5cacb274ca81bfd9fa09a27c6f6427447" Namespace="calico-apiserver" Pod="calico-apiserver-68df4b8f94-d7grl" WorkloadEndpoint="localhost-k8s-calico--apiserver--68df4b8f94--d7grl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--68df4b8f94--d7grl-eth0", GenerateName:"calico-apiserver-68df4b8f94-", Namespace:"calico-apiserver", SelfLink:"", UID:"eb6a2504-b349-4acd-ada6-a46fec76361c", ResourceVersion:"928", Generation:0, CreationTimestamp:time.Date(2025, time.July, 9, 13, 1, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"68df4b8f94", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4576385dd08fcd1da71c6e8652db40f5cacb274ca81bfd9fa09a27c6f6427447", Pod:"calico-apiserver-68df4b8f94-d7grl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0f4e371ec22", MAC:"2e:46:36:c6:6e:84", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 9 13:02:48.383545 containerd[1563]: 2025-07-09 13:02:48.378 [INFO][4534] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="4576385dd08fcd1da71c6e8652db40f5cacb274ca81bfd9fa09a27c6f6427447" Namespace="calico-apiserver" Pod="calico-apiserver-68df4b8f94-d7grl" WorkloadEndpoint="localhost-k8s-calico--apiserver--68df4b8f94--d7grl-eth0" Jul 9 13:02:48.505361 containerd[1563]: time="2025-07-09T13:02:48.505299903Z" level=info msg="connecting to shim 4576385dd08fcd1da71c6e8652db40f5cacb274ca81bfd9fa09a27c6f6427447" address="unix:///run/containerd/s/1cb1c77e497f744ec3e3e9d748c955fbd66c2de0145e34586c5f2cc06d8e9936" namespace=k8s.io protocol=ttrpc version=3 Jul 9 13:02:48.530540 systemd[1]: Started cri-containerd-4576385dd08fcd1da71c6e8652db40f5cacb274ca81bfd9fa09a27c6f6427447.scope - libcontainer container 4576385dd08fcd1da71c6e8652db40f5cacb274ca81bfd9fa09a27c6f6427447. Jul 9 13:02:48.545468 systemd-resolved[1406]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 9 13:02:48.609179 containerd[1563]: time="2025-07-09T13:02:48.609124035Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68df4b8f94-d7grl,Uid:eb6a2504-b349-4acd-ada6-a46fec76361c,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"4576385dd08fcd1da71c6e8652db40f5cacb274ca81bfd9fa09a27c6f6427447\"" Jul 9 13:02:48.619278 containerd[1563]: time="2025-07-09T13:02:48.619157720Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 9 13:02:49.093790 kubelet[2695]: E0709 13:02:49.093700 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 13:02:49.689568 systemd-networkd[1485]: cali0f4e371ec22: Gained IPv6LL Jul 9 13:02:50.093905 kubelet[2695]: E0709 13:02:50.093853 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 13:02:50.094423 containerd[1563]: 
time="2025-07-09T13:02:50.094300930Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-gw8lq,Uid:30510e32-f8e3-41cd-b70c-1ce6900a24ee,Namespace:kube-system,Attempt:0,}" Jul 9 13:02:50.298811 systemd-networkd[1485]: calia8119bd6606: Link UP Jul 9 13:02:50.300670 systemd-networkd[1485]: calia8119bd6606: Gained carrier Jul 9 13:02:50.345978 containerd[1563]: 2025-07-09 13:02:50.202 [INFO][4622] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--gw8lq-eth0 coredns-7c65d6cfc9- kube-system 30510e32-f8e3-41cd-b70c-1ce6900a24ee 926 0 2025-07-09 13:01:41 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-gw8lq eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calia8119bd6606 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="3250895cc3cb48aa29ee23cb8c8640f5ce7f5c7563810876ab76a74499113dd6" Namespace="kube-system" Pod="coredns-7c65d6cfc9-gw8lq" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--gw8lq-" Jul 9 13:02:50.345978 containerd[1563]: 2025-07-09 13:02:50.202 [INFO][4622] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3250895cc3cb48aa29ee23cb8c8640f5ce7f5c7563810876ab76a74499113dd6" Namespace="kube-system" Pod="coredns-7c65d6cfc9-gw8lq" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--gw8lq-eth0" Jul 9 13:02:50.345978 containerd[1563]: 2025-07-09 13:02:50.232 [INFO][4636] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3250895cc3cb48aa29ee23cb8c8640f5ce7f5c7563810876ab76a74499113dd6" HandleID="k8s-pod-network.3250895cc3cb48aa29ee23cb8c8640f5ce7f5c7563810876ab76a74499113dd6" Workload="localhost-k8s-coredns--7c65d6cfc9--gw8lq-eth0" Jul 9 13:02:50.346175 containerd[1563]: 2025-07-09 
13:02:50.233 [INFO][4636] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3250895cc3cb48aa29ee23cb8c8640f5ce7f5c7563810876ab76a74499113dd6" HandleID="k8s-pod-network.3250895cc3cb48aa29ee23cb8c8640f5ce7f5c7563810876ab76a74499113dd6" Workload="localhost-k8s-coredns--7c65d6cfc9--gw8lq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000495f30), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-gw8lq", "timestamp":"2025-07-09 13:02:50.232973356 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 9 13:02:50.346175 containerd[1563]: 2025-07-09 13:02:50.233 [INFO][4636] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 9 13:02:50.346175 containerd[1563]: 2025-07-09 13:02:50.233 [INFO][4636] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 9 13:02:50.346175 containerd[1563]: 2025-07-09 13:02:50.233 [INFO][4636] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 9 13:02:50.346175 containerd[1563]: 2025-07-09 13:02:50.240 [INFO][4636] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3250895cc3cb48aa29ee23cb8c8640f5ce7f5c7563810876ab76a74499113dd6" host="localhost" Jul 9 13:02:50.346175 containerd[1563]: 2025-07-09 13:02:50.244 [INFO][4636] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 9 13:02:50.346175 containerd[1563]: 2025-07-09 13:02:50.249 [INFO][4636] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 9 13:02:50.346175 containerd[1563]: 2025-07-09 13:02:50.250 [INFO][4636] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 9 13:02:50.346175 containerd[1563]: 2025-07-09 13:02:50.254 [INFO][4636] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 9 13:02:50.346175 containerd[1563]: 2025-07-09 13:02:50.254 [INFO][4636] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3250895cc3cb48aa29ee23cb8c8640f5ce7f5c7563810876ab76a74499113dd6" host="localhost" Jul 9 13:02:50.346868 containerd[1563]: 2025-07-09 13:02:50.256 [INFO][4636] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.3250895cc3cb48aa29ee23cb8c8640f5ce7f5c7563810876ab76a74499113dd6 Jul 9 13:02:50.346868 containerd[1563]: 2025-07-09 13:02:50.272 [INFO][4636] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3250895cc3cb48aa29ee23cb8c8640f5ce7f5c7563810876ab76a74499113dd6" host="localhost" Jul 9 13:02:50.346868 containerd[1563]: 2025-07-09 13:02:50.290 [INFO][4636] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 
handle="k8s-pod-network.3250895cc3cb48aa29ee23cb8c8640f5ce7f5c7563810876ab76a74499113dd6" host="localhost" Jul 9 13:02:50.346868 containerd[1563]: 2025-07-09 13:02:50.291 [INFO][4636] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.3250895cc3cb48aa29ee23cb8c8640f5ce7f5c7563810876ab76a74499113dd6" host="localhost" Jul 9 13:02:50.346868 containerd[1563]: 2025-07-09 13:02:50.291 [INFO][4636] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 9 13:02:50.346868 containerd[1563]: 2025-07-09 13:02:50.291 [INFO][4636] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="3250895cc3cb48aa29ee23cb8c8640f5ce7f5c7563810876ab76a74499113dd6" HandleID="k8s-pod-network.3250895cc3cb48aa29ee23cb8c8640f5ce7f5c7563810876ab76a74499113dd6" Workload="localhost-k8s-coredns--7c65d6cfc9--gw8lq-eth0" Jul 9 13:02:50.347000 containerd[1563]: 2025-07-09 13:02:50.294 [INFO][4622] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3250895cc3cb48aa29ee23cb8c8640f5ce7f5c7563810876ab76a74499113dd6" Namespace="kube-system" Pod="coredns-7c65d6cfc9-gw8lq" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--gw8lq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--gw8lq-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"30510e32-f8e3-41cd-b70c-1ce6900a24ee", ResourceVersion:"926", Generation:0, CreationTimestamp:time.Date(2025, time.July, 9, 13, 1, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-gw8lq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia8119bd6606", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 9 13:02:50.347075 containerd[1563]: 2025-07-09 13:02:50.294 [INFO][4622] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="3250895cc3cb48aa29ee23cb8c8640f5ce7f5c7563810876ab76a74499113dd6" Namespace="kube-system" Pod="coredns-7c65d6cfc9-gw8lq" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--gw8lq-eth0" Jul 9 13:02:50.347075 containerd[1563]: 2025-07-09 13:02:50.294 [INFO][4622] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia8119bd6606 ContainerID="3250895cc3cb48aa29ee23cb8c8640f5ce7f5c7563810876ab76a74499113dd6" Namespace="kube-system" Pod="coredns-7c65d6cfc9-gw8lq" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--gw8lq-eth0" Jul 9 13:02:50.347075 containerd[1563]: 2025-07-09 13:02:50.299 [INFO][4622] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3250895cc3cb48aa29ee23cb8c8640f5ce7f5c7563810876ab76a74499113dd6" Namespace="kube-system" Pod="coredns-7c65d6cfc9-gw8lq" 
WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--gw8lq-eth0" Jul 9 13:02:50.347228 containerd[1563]: 2025-07-09 13:02:50.302 [INFO][4622] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3250895cc3cb48aa29ee23cb8c8640f5ce7f5c7563810876ab76a74499113dd6" Namespace="kube-system" Pod="coredns-7c65d6cfc9-gw8lq" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--gw8lq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--gw8lq-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"30510e32-f8e3-41cd-b70c-1ce6900a24ee", ResourceVersion:"926", Generation:0, CreationTimestamp:time.Date(2025, time.July, 9, 13, 1, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3250895cc3cb48aa29ee23cb8c8640f5ce7f5c7563810876ab76a74499113dd6", Pod:"coredns-7c65d6cfc9-gw8lq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia8119bd6606", MAC:"66:6e:24:46:aa:ab", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 9 13:02:50.347228 containerd[1563]: 2025-07-09 13:02:50.342 [INFO][4622] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3250895cc3cb48aa29ee23cb8c8640f5ce7f5c7563810876ab76a74499113dd6" Namespace="kube-system" Pod="coredns-7c65d6cfc9-gw8lq" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--gw8lq-eth0" Jul 9 13:02:50.440974 containerd[1563]: time="2025-07-09T13:02:50.440913039Z" level=info msg="connecting to shim 3250895cc3cb48aa29ee23cb8c8640f5ce7f5c7563810876ab76a74499113dd6" address="unix:///run/containerd/s/4a284c80e77c9d4a2940cd1ee8057efb7900b09a8315b60a76f86c2b82f07750" namespace=k8s.io protocol=ttrpc version=3 Jul 9 13:02:50.472554 systemd[1]: Started cri-containerd-3250895cc3cb48aa29ee23cb8c8640f5ce7f5c7563810876ab76a74499113dd6.scope - libcontainer container 3250895cc3cb48aa29ee23cb8c8640f5ce7f5c7563810876ab76a74499113dd6. 
Jul 9 13:02:50.488087 systemd-resolved[1406]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 9 13:02:50.524893 containerd[1563]: time="2025-07-09T13:02:50.524834727Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-gw8lq,Uid:30510e32-f8e3-41cd-b70c-1ce6900a24ee,Namespace:kube-system,Attempt:0,} returns sandbox id \"3250895cc3cb48aa29ee23cb8c8640f5ce7f5c7563810876ab76a74499113dd6\"" Jul 9 13:02:50.525742 kubelet[2695]: E0709 13:02:50.525710 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 13:02:50.545043 containerd[1563]: time="2025-07-09T13:02:50.544982176Z" level=info msg="CreateContainer within sandbox \"3250895cc3cb48aa29ee23cb8c8640f5ce7f5c7563810876ab76a74499113dd6\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 9 13:02:50.561223 containerd[1563]: time="2025-07-09T13:02:50.560487627Z" level=info msg="Container 18335ac91f386b740b64faaaaa4adb54c81cf994e2a864556a72e99c33b62f1a: CDI devices from CRI Config.CDIDevices: []" Jul 9 13:02:50.561943 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2165716361.mount: Deactivated successfully. 
Jul 9 13:02:50.572482 containerd[1563]: time="2025-07-09T13:02:50.572429168Z" level=info msg="CreateContainer within sandbox \"3250895cc3cb48aa29ee23cb8c8640f5ce7f5c7563810876ab76a74499113dd6\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"18335ac91f386b740b64faaaaa4adb54c81cf994e2a864556a72e99c33b62f1a\"" Jul 9 13:02:50.576201 containerd[1563]: time="2025-07-09T13:02:50.576164710Z" level=info msg="StartContainer for \"18335ac91f386b740b64faaaaa4adb54c81cf994e2a864556a72e99c33b62f1a\"" Jul 9 13:02:50.576992 containerd[1563]: time="2025-07-09T13:02:50.576965601Z" level=info msg="connecting to shim 18335ac91f386b740b64faaaaa4adb54c81cf994e2a864556a72e99c33b62f1a" address="unix:///run/containerd/s/4a284c80e77c9d4a2940cd1ee8057efb7900b09a8315b60a76f86c2b82f07750" protocol=ttrpc version=3 Jul 9 13:02:50.601512 systemd[1]: Started cri-containerd-18335ac91f386b740b64faaaaa4adb54c81cf994e2a864556a72e99c33b62f1a.scope - libcontainer container 18335ac91f386b740b64faaaaa4adb54c81cf994e2a864556a72e99c33b62f1a. 
Jul 9 13:02:50.636587 containerd[1563]: time="2025-07-09T13:02:50.636545256Z" level=info msg="StartContainer for \"18335ac91f386b740b64faaaaa4adb54c81cf994e2a864556a72e99c33b62f1a\" returns successfully" Jul 9 13:02:51.094447 containerd[1563]: time="2025-07-09T13:02:51.094343522Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-x4tjz,Uid:0a1ade6c-7a47-40a7-a38c-b0080894987b,Namespace:calico-system,Attempt:0,}" Jul 9 13:02:51.259115 systemd-networkd[1485]: calia450a52b787: Link UP Jul 9 13:02:51.259337 systemd-networkd[1485]: calia450a52b787: Gained carrier Jul 9 13:02:51.277055 containerd[1563]: 2025-07-09 13:02:51.136 [INFO][4731] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--x4tjz-eth0 csi-node-driver- calico-system 0a1ade6c-7a47-40a7-a38c-b0080894987b 705 0 2025-07-09 13:01:52 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:57bd658777 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-x4tjz eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calia450a52b787 [] [] }} ContainerID="68219085944522a54af15e80a34cce31c8929db2a1206898830ab917093e9dca" Namespace="calico-system" Pod="csi-node-driver-x4tjz" WorkloadEndpoint="localhost-k8s-csi--node--driver--x4tjz-" Jul 9 13:02:51.277055 containerd[1563]: 2025-07-09 13:02:51.137 [INFO][4731] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="68219085944522a54af15e80a34cce31c8929db2a1206898830ab917093e9dca" Namespace="calico-system" Pod="csi-node-driver-x4tjz" WorkloadEndpoint="localhost-k8s-csi--node--driver--x4tjz-eth0" Jul 9 13:02:51.277055 containerd[1563]: 2025-07-09 13:02:51.169 [INFO][4746] ipam/ipam_plugin.go 225: Calico CNI IPAM 
request count IPv4=1 IPv6=0 ContainerID="68219085944522a54af15e80a34cce31c8929db2a1206898830ab917093e9dca" HandleID="k8s-pod-network.68219085944522a54af15e80a34cce31c8929db2a1206898830ab917093e9dca" Workload="localhost-k8s-csi--node--driver--x4tjz-eth0" Jul 9 13:02:51.277055 containerd[1563]: 2025-07-09 13:02:51.169 [INFO][4746] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="68219085944522a54af15e80a34cce31c8929db2a1206898830ab917093e9dca" HandleID="k8s-pod-network.68219085944522a54af15e80a34cce31c8929db2a1206898830ab917093e9dca" Workload="localhost-k8s-csi--node--driver--x4tjz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00051b300), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-x4tjz", "timestamp":"2025-07-09 13:02:51.169349547 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 9 13:02:51.277055 containerd[1563]: 2025-07-09 13:02:51.169 [INFO][4746] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 9 13:02:51.277055 containerd[1563]: 2025-07-09 13:02:51.169 [INFO][4746] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 9 13:02:51.277055 containerd[1563]: 2025-07-09 13:02:51.169 [INFO][4746] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 9 13:02:51.277055 containerd[1563]: 2025-07-09 13:02:51.176 [INFO][4746] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.68219085944522a54af15e80a34cce31c8929db2a1206898830ab917093e9dca" host="localhost" Jul 9 13:02:51.277055 containerd[1563]: 2025-07-09 13:02:51.180 [INFO][4746] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 9 13:02:51.277055 containerd[1563]: 2025-07-09 13:02:51.183 [INFO][4746] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 9 13:02:51.277055 containerd[1563]: 2025-07-09 13:02:51.185 [INFO][4746] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 9 13:02:51.277055 containerd[1563]: 2025-07-09 13:02:51.187 [INFO][4746] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 9 13:02:51.277055 containerd[1563]: 2025-07-09 13:02:51.187 [INFO][4746] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.68219085944522a54af15e80a34cce31c8929db2a1206898830ab917093e9dca" host="localhost" Jul 9 13:02:51.277055 containerd[1563]: 2025-07-09 13:02:51.188 [INFO][4746] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.68219085944522a54af15e80a34cce31c8929db2a1206898830ab917093e9dca Jul 9 13:02:51.277055 containerd[1563]: 2025-07-09 13:02:51.243 [INFO][4746] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.68219085944522a54af15e80a34cce31c8929db2a1206898830ab917093e9dca" host="localhost" Jul 9 13:02:51.277055 containerd[1563]: 2025-07-09 13:02:51.253 [INFO][4746] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.68219085944522a54af15e80a34cce31c8929db2a1206898830ab917093e9dca" host="localhost" Jul 9 13:02:51.277055 containerd[1563]: 2025-07-09 13:02:51.253 [INFO][4746] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.68219085944522a54af15e80a34cce31c8929db2a1206898830ab917093e9dca" host="localhost" Jul 9 13:02:51.277055 containerd[1563]: 2025-07-09 13:02:51.253 [INFO][4746] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 9 13:02:51.277055 containerd[1563]: 2025-07-09 13:02:51.253 [INFO][4746] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="68219085944522a54af15e80a34cce31c8929db2a1206898830ab917093e9dca" HandleID="k8s-pod-network.68219085944522a54af15e80a34cce31c8929db2a1206898830ab917093e9dca" Workload="localhost-k8s-csi--node--driver--x4tjz-eth0" Jul 9 13:02:51.277725 containerd[1563]: 2025-07-09 13:02:51.256 [INFO][4731] cni-plugin/k8s.go 418: Populated endpoint ContainerID="68219085944522a54af15e80a34cce31c8929db2a1206898830ab917093e9dca" Namespace="calico-system" Pod="csi-node-driver-x4tjz" WorkloadEndpoint="localhost-k8s-csi--node--driver--x4tjz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--x4tjz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0a1ade6c-7a47-40a7-a38c-b0080894987b", ResourceVersion:"705", Generation:0, CreationTimestamp:time.Date(2025, time.July, 9, 13, 1, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-x4tjz", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia450a52b787", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 9 13:02:51.277725 containerd[1563]: 2025-07-09 13:02:51.256 [INFO][4731] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="68219085944522a54af15e80a34cce31c8929db2a1206898830ab917093e9dca" Namespace="calico-system" Pod="csi-node-driver-x4tjz" WorkloadEndpoint="localhost-k8s-csi--node--driver--x4tjz-eth0" Jul 9 13:02:51.277725 containerd[1563]: 2025-07-09 13:02:51.256 [INFO][4731] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia450a52b787 ContainerID="68219085944522a54af15e80a34cce31c8929db2a1206898830ab917093e9dca" Namespace="calico-system" Pod="csi-node-driver-x4tjz" WorkloadEndpoint="localhost-k8s-csi--node--driver--x4tjz-eth0" Jul 9 13:02:51.277725 containerd[1563]: 2025-07-09 13:02:51.259 [INFO][4731] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="68219085944522a54af15e80a34cce31c8929db2a1206898830ab917093e9dca" Namespace="calico-system" Pod="csi-node-driver-x4tjz" WorkloadEndpoint="localhost-k8s-csi--node--driver--x4tjz-eth0" Jul 9 13:02:51.277725 containerd[1563]: 2025-07-09 13:02:51.260 [INFO][4731] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="68219085944522a54af15e80a34cce31c8929db2a1206898830ab917093e9dca" 
Namespace="calico-system" Pod="csi-node-driver-x4tjz" WorkloadEndpoint="localhost-k8s-csi--node--driver--x4tjz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--x4tjz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0a1ade6c-7a47-40a7-a38c-b0080894987b", ResourceVersion:"705", Generation:0, CreationTimestamp:time.Date(2025, time.July, 9, 13, 1, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"68219085944522a54af15e80a34cce31c8929db2a1206898830ab917093e9dca", Pod:"csi-node-driver-x4tjz", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia450a52b787", MAC:"92:15:66:4b:63:e4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 9 13:02:51.277725 containerd[1563]: 2025-07-09 13:02:51.272 [INFO][4731] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="68219085944522a54af15e80a34cce31c8929db2a1206898830ab917093e9dca" Namespace="calico-system" Pod="csi-node-driver-x4tjz" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--x4tjz-eth0" Jul 9 13:02:51.297205 containerd[1563]: time="2025-07-09T13:02:51.297161707Z" level=info msg="connecting to shim 68219085944522a54af15e80a34cce31c8929db2a1206898830ab917093e9dca" address="unix:///run/containerd/s/445322a6bb92bceb579e193c33049c140b658e98fbac87685e5f5bd4974c7349" namespace=k8s.io protocol=ttrpc version=3 Jul 9 13:02:51.311385 kubelet[2695]: E0709 13:02:51.309809 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 13:02:51.327899 systemd[1]: Started cri-containerd-68219085944522a54af15e80a34cce31c8929db2a1206898830ab917093e9dca.scope - libcontainer container 68219085944522a54af15e80a34cce31c8929db2a1206898830ab917093e9dca. Jul 9 13:02:51.341647 kubelet[2695]: I0709 13:02:51.341511 2695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-gw8lq" podStartSLOduration=70.341470173 podStartE2EDuration="1m10.341470173s" podCreationTimestamp="2025-07-09 13:01:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-09 13:02:51.324967523 +0000 UTC m=+76.352921296" watchObservedRunningTime="2025-07-09 13:02:51.341470173 +0000 UTC m=+76.369423946" Jul 9 13:02:51.348139 systemd-resolved[1406]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 9 13:02:51.368224 containerd[1563]: time="2025-07-09T13:02:51.368168548Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-x4tjz,Uid:0a1ade6c-7a47-40a7-a38c-b0080894987b,Namespace:calico-system,Attempt:0,} returns sandbox id \"68219085944522a54af15e80a34cce31c8929db2a1206898830ab917093e9dca\"" Jul 9 13:02:51.723331 systemd[1]: Started sshd@16-10.0.0.14:22-10.0.0.1:34224.service - OpenSSH per-connection server daemon 
(10.0.0.1:34224). Jul 9 13:02:51.804526 sshd[4814]: Accepted publickey for core from 10.0.0.1 port 34224 ssh2: RSA SHA256:Ehsv9iPAmIJbEnlorOi35d2Kryfd05fXf88yv2g5tlI Jul 9 13:02:51.806355 sshd-session[4814]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 13:02:51.811330 systemd-logind[1538]: New session 17 of user core. Jul 9 13:02:51.820594 systemd[1]: Started session-17.scope - Session 17 of User core. Jul 9 13:02:51.930456 systemd-networkd[1485]: calia8119bd6606: Gained IPv6LL Jul 9 13:02:51.971200 sshd[4817]: Connection closed by 10.0.0.1 port 34224 Jul 9 13:02:51.971611 sshd-session[4814]: pam_unix(sshd:session): session closed for user core Jul 9 13:02:51.978384 systemd[1]: sshd@16-10.0.0.14:22-10.0.0.1:34224.service: Deactivated successfully. Jul 9 13:02:51.980701 systemd[1]: session-17.scope: Deactivated successfully. Jul 9 13:02:51.981646 systemd-logind[1538]: Session 17 logged out. Waiting for processes to exit. Jul 9 13:02:51.983186 systemd-logind[1538]: Removed session 17. 
Jul 9 13:02:52.347225 kubelet[2695]: E0709 13:02:52.347163 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 13:02:52.569631 systemd-networkd[1485]: calia450a52b787: Gained IPv6LL Jul 9 13:02:53.097093 containerd[1563]: time="2025-07-09T13:02:53.097036777Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-6p7m6,Uid:6ae36fb0-52ed-4533-b91d-fab218944275,Namespace:calico-system,Attempt:0,}" Jul 9 13:02:53.343208 kubelet[2695]: E0709 13:02:53.343162 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 13:02:53.382408 systemd-networkd[1485]: calia3561c6f57a: Link UP Jul 9 13:02:53.382620 systemd-networkd[1485]: calia3561c6f57a: Gained carrier Jul 9 13:02:53.412000 containerd[1563]: 2025-07-09 13:02:53.218 [INFO][4834] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--58fd7646b9--6p7m6-eth0 goldmane-58fd7646b9- calico-system 6ae36fb0-52ed-4533-b91d-fab218944275 925 0 2025-07-09 13:01:51 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:58fd7646b9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-58fd7646b9-6p7m6 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calia3561c6f57a [] [] }} ContainerID="dc076935ea61515f227d04170df71c2a1106097f5377c4e425ee5dfa4345e504" Namespace="calico-system" Pod="goldmane-58fd7646b9-6p7m6" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--6p7m6-" Jul 9 13:02:53.412000 containerd[1563]: 2025-07-09 13:02:53.219 [INFO][4834] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="dc076935ea61515f227d04170df71c2a1106097f5377c4e425ee5dfa4345e504" Namespace="calico-system" Pod="goldmane-58fd7646b9-6p7m6" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--6p7m6-eth0" Jul 9 13:02:53.412000 containerd[1563]: 2025-07-09 13:02:53.286 [INFO][4849] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="dc076935ea61515f227d04170df71c2a1106097f5377c4e425ee5dfa4345e504" HandleID="k8s-pod-network.dc076935ea61515f227d04170df71c2a1106097f5377c4e425ee5dfa4345e504" Workload="localhost-k8s-goldmane--58fd7646b9--6p7m6-eth0" Jul 9 13:02:53.412000 containerd[1563]: 2025-07-09 13:02:53.287 [INFO][4849] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="dc076935ea61515f227d04170df71c2a1106097f5377c4e425ee5dfa4345e504" HandleID="k8s-pod-network.dc076935ea61515f227d04170df71c2a1106097f5377c4e425ee5dfa4345e504" Workload="localhost-k8s-goldmane--58fd7646b9--6p7m6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001335f0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-58fd7646b9-6p7m6", "timestamp":"2025-07-09 13:02:53.286411412 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 9 13:02:53.412000 containerd[1563]: 2025-07-09 13:02:53.287 [INFO][4849] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 9 13:02:53.412000 containerd[1563]: 2025-07-09 13:02:53.287 [INFO][4849] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 9 13:02:53.412000 containerd[1563]: 2025-07-09 13:02:53.287 [INFO][4849] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 9 13:02:53.412000 containerd[1563]: 2025-07-09 13:02:53.296 [INFO][4849] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.dc076935ea61515f227d04170df71c2a1106097f5377c4e425ee5dfa4345e504" host="localhost" Jul 9 13:02:53.412000 containerd[1563]: 2025-07-09 13:02:53.302 [INFO][4849] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 9 13:02:53.412000 containerd[1563]: 2025-07-09 13:02:53.309 [INFO][4849] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 9 13:02:53.412000 containerd[1563]: 2025-07-09 13:02:53.311 [INFO][4849] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 9 13:02:53.412000 containerd[1563]: 2025-07-09 13:02:53.314 [INFO][4849] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 9 13:02:53.412000 containerd[1563]: 2025-07-09 13:02:53.314 [INFO][4849] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.dc076935ea61515f227d04170df71c2a1106097f5377c4e425ee5dfa4345e504" host="localhost" Jul 9 13:02:53.412000 containerd[1563]: 2025-07-09 13:02:53.315 [INFO][4849] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.dc076935ea61515f227d04170df71c2a1106097f5377c4e425ee5dfa4345e504 Jul 9 13:02:53.412000 containerd[1563]: 2025-07-09 13:02:53.359 [INFO][4849] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.dc076935ea61515f227d04170df71c2a1106097f5377c4e425ee5dfa4345e504" host="localhost" Jul 9 13:02:53.412000 containerd[1563]: 2025-07-09 13:02:53.370 [INFO][4849] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.dc076935ea61515f227d04170df71c2a1106097f5377c4e425ee5dfa4345e504" host="localhost" Jul 9 13:02:53.412000 containerd[1563]: 2025-07-09 13:02:53.371 [INFO][4849] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.dc076935ea61515f227d04170df71c2a1106097f5377c4e425ee5dfa4345e504" host="localhost" Jul 9 13:02:53.412000 containerd[1563]: 2025-07-09 13:02:53.371 [INFO][4849] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 9 13:02:53.412000 containerd[1563]: 2025-07-09 13:02:53.371 [INFO][4849] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="dc076935ea61515f227d04170df71c2a1106097f5377c4e425ee5dfa4345e504" HandleID="k8s-pod-network.dc076935ea61515f227d04170df71c2a1106097f5377c4e425ee5dfa4345e504" Workload="localhost-k8s-goldmane--58fd7646b9--6p7m6-eth0" Jul 9 13:02:53.413503 containerd[1563]: 2025-07-09 13:02:53.378 [INFO][4834] cni-plugin/k8s.go 418: Populated endpoint ContainerID="dc076935ea61515f227d04170df71c2a1106097f5377c4e425ee5dfa4345e504" Namespace="calico-system" Pod="goldmane-58fd7646b9-6p7m6" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--6p7m6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--58fd7646b9--6p7m6-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"6ae36fb0-52ed-4533-b91d-fab218944275", ResourceVersion:"925", Generation:0, CreationTimestamp:time.Date(2025, time.July, 9, 13, 1, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-58fd7646b9-6p7m6", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calia3561c6f57a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 9 13:02:53.413503 containerd[1563]: 2025-07-09 13:02:53.378 [INFO][4834] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="dc076935ea61515f227d04170df71c2a1106097f5377c4e425ee5dfa4345e504" Namespace="calico-system" Pod="goldmane-58fd7646b9-6p7m6" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--6p7m6-eth0" Jul 9 13:02:53.413503 containerd[1563]: 2025-07-09 13:02:53.378 [INFO][4834] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia3561c6f57a ContainerID="dc076935ea61515f227d04170df71c2a1106097f5377c4e425ee5dfa4345e504" Namespace="calico-system" Pod="goldmane-58fd7646b9-6p7m6" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--6p7m6-eth0" Jul 9 13:02:53.413503 containerd[1563]: 2025-07-09 13:02:53.389 [INFO][4834] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="dc076935ea61515f227d04170df71c2a1106097f5377c4e425ee5dfa4345e504" Namespace="calico-system" Pod="goldmane-58fd7646b9-6p7m6" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--6p7m6-eth0" Jul 9 13:02:53.413503 containerd[1563]: 2025-07-09 13:02:53.392 [INFO][4834] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="dc076935ea61515f227d04170df71c2a1106097f5377c4e425ee5dfa4345e504" Namespace="calico-system" Pod="goldmane-58fd7646b9-6p7m6" 
WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--6p7m6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--58fd7646b9--6p7m6-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"6ae36fb0-52ed-4533-b91d-fab218944275", ResourceVersion:"925", Generation:0, CreationTimestamp:time.Date(2025, time.July, 9, 13, 1, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"dc076935ea61515f227d04170df71c2a1106097f5377c4e425ee5dfa4345e504", Pod:"goldmane-58fd7646b9-6p7m6", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calia3561c6f57a", MAC:"32:28:a1:df:a1:4e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 9 13:02:53.413503 containerd[1563]: 2025-07-09 13:02:53.403 [INFO][4834] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="dc076935ea61515f227d04170df71c2a1106097f5377c4e425ee5dfa4345e504" Namespace="calico-system" Pod="goldmane-58fd7646b9-6p7m6" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--6p7m6-eth0" Jul 9 13:02:53.449008 containerd[1563]: time="2025-07-09T13:02:53.448527913Z" level=info msg="connecting to shim 
dc076935ea61515f227d04170df71c2a1106097f5377c4e425ee5dfa4345e504" address="unix:///run/containerd/s/7f644fd70de3c127991e5b3b3a2f9567e743db6ba8b7aab2633fe3c7295b237e" namespace=k8s.io protocol=ttrpc version=3 Jul 9 13:02:53.468751 containerd[1563]: time="2025-07-09T13:02:53.468699942Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 13:02:53.469934 containerd[1563]: time="2025-07-09T13:02:53.469856274Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=47317977" Jul 9 13:02:53.471072 containerd[1563]: time="2025-07-09T13:02:53.471040610Z" level=info msg="ImageCreate event name:\"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 13:02:53.476975 containerd[1563]: time="2025-07-09T13:02:53.476681969Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 4.857480644s" Jul 9 13:02:53.476975 containerd[1563]: time="2025-07-09T13:02:53.476712888Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Jul 9 13:02:53.478391 containerd[1563]: time="2025-07-09T13:02:53.478327521Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Jul 9 13:02:53.480750 containerd[1563]: time="2025-07-09T13:02:53.479531333Z" level=info msg="CreateContainer within sandbox \"4576385dd08fcd1da71c6e8652db40f5cacb274ca81bfd9fa09a27c6f6427447\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 9 
13:02:53.480544 systemd[1]: Started cri-containerd-dc076935ea61515f227d04170df71c2a1106097f5377c4e425ee5dfa4345e504.scope - libcontainer container dc076935ea61515f227d04170df71c2a1106097f5377c4e425ee5dfa4345e504. Jul 9 13:02:53.482120 containerd[1563]: time="2025-07-09T13:02:53.482057277Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 13:02:53.495786 containerd[1563]: time="2025-07-09T13:02:53.493541212Z" level=info msg="Container f35e1554face647a9d8e4d2b09184e41eb5ebe68c491bbc4d82247e5eb4ecb9b: CDI devices from CRI Config.CDIDevices: []" Jul 9 13:02:53.516450 containerd[1563]: time="2025-07-09T13:02:53.514845216Z" level=info msg="CreateContainer within sandbox \"4576385dd08fcd1da71c6e8652db40f5cacb274ca81bfd9fa09a27c6f6427447\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"f35e1554face647a9d8e4d2b09184e41eb5ebe68c491bbc4d82247e5eb4ecb9b\"" Jul 9 13:02:53.517139 containerd[1563]: time="2025-07-09T13:02:53.517087755Z" level=info msg="StartContainer for \"f35e1554face647a9d8e4d2b09184e41eb5ebe68c491bbc4d82247e5eb4ecb9b\"" Jul 9 13:02:53.518403 containerd[1563]: time="2025-07-09T13:02:53.518355760Z" level=info msg="connecting to shim f35e1554face647a9d8e4d2b09184e41eb5ebe68c491bbc4d82247e5eb4ecb9b" address="unix:///run/containerd/s/1cb1c77e497f744ec3e3e9d748c955fbd66c2de0145e34586c5f2cc06d8e9936" protocol=ttrpc version=3 Jul 9 13:02:53.571673 systemd[1]: Started cri-containerd-f35e1554face647a9d8e4d2b09184e41eb5ebe68c491bbc4d82247e5eb4ecb9b.scope - libcontainer container f35e1554face647a9d8e4d2b09184e41eb5ebe68c491bbc4d82247e5eb4ecb9b. 
Jul 9 13:02:53.592434 systemd-resolved[1406]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 9 13:02:53.636433 containerd[1563]: time="2025-07-09T13:02:53.635532244Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-6p7m6,Uid:6ae36fb0-52ed-4533-b91d-fab218944275,Namespace:calico-system,Attempt:0,} returns sandbox id \"dc076935ea61515f227d04170df71c2a1106097f5377c4e425ee5dfa4345e504\"" Jul 9 13:02:53.651291 containerd[1563]: time="2025-07-09T13:02:53.651256648Z" level=info msg="StartContainer for \"f35e1554face647a9d8e4d2b09184e41eb5ebe68c491bbc4d82247e5eb4ecb9b\" returns successfully" Jul 9 13:02:54.371820 kubelet[2695]: I0709 13:02:54.371716 2695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-68df4b8f94-d7grl" podStartSLOduration=60.512470428 podStartE2EDuration="1m5.371693804s" podCreationTimestamp="2025-07-09 13:01:49 +0000 UTC" firstStartedPulling="2025-07-09 13:02:48.61879991 +0000 UTC m=+73.646753693" lastFinishedPulling="2025-07-09 13:02:53.478023286 +0000 UTC m=+78.505977069" observedRunningTime="2025-07-09 13:02:54.371171962 +0000 UTC m=+79.399125745" watchObservedRunningTime="2025-07-09 13:02:54.371693804 +0000 UTC m=+79.399647587" Jul 9 13:02:54.937636 systemd-networkd[1485]: calia3561c6f57a: Gained IPv6LL Jul 9 13:02:55.094050 kubelet[2695]: E0709 13:02:55.093990 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 13:02:55.094777 containerd[1563]: time="2025-07-09T13:02:55.094438955Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-mw6rj,Uid:6294eb09-ccd2-414a-90bb-afd069984c58,Namespace:kube-system,Attempt:0,}" Jul 9 13:02:55.355525 kubelet[2695]: I0709 13:02:55.355480 2695 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 9 
13:02:56.094722 containerd[1563]: time="2025-07-09T13:02:56.094659370Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68df4b8f94-glcsr,Uid:83f09b99-c580-4fba-9a5b-8dcbe9457bd1,Namespace:calico-apiserver,Attempt:0,}" Jul 9 13:02:56.094893 containerd[1563]: time="2025-07-09T13:02:56.094723612Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-766fbd8c89-f5gnv,Uid:be0c468f-8558-4c7f-9123-1d550c90de18,Namespace:calico-system,Attempt:0,}" Jul 9 13:02:56.168758 systemd-networkd[1485]: caliebb599d1111: Link UP Jul 9 13:02:56.170267 systemd-networkd[1485]: caliebb599d1111: Gained carrier Jul 9 13:02:56.291748 containerd[1563]: 2025-07-09 13:02:55.454 [INFO][4965] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--mw6rj-eth0 coredns-7c65d6cfc9- kube-system 6294eb09-ccd2-414a-90bb-afd069984c58 916 0 2025-07-09 13:01:41 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-mw6rj eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] caliebb599d1111 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="c8fb8aaada84fd07633a4b76ed7615faa8975217c71409e0a91f278e1b3775b8" Namespace="kube-system" Pod="coredns-7c65d6cfc9-mw6rj" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--mw6rj-" Jul 9 13:02:56.291748 containerd[1563]: 2025-07-09 13:02:55.454 [INFO][4965] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c8fb8aaada84fd07633a4b76ed7615faa8975217c71409e0a91f278e1b3775b8" Namespace="kube-system" Pod="coredns-7c65d6cfc9-mw6rj" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--mw6rj-eth0" Jul 9 13:02:56.291748 containerd[1563]: 2025-07-09 13:02:55.480 [INFO][4982] ipam/ipam_plugin.go 225: Calico CNI IPAM 
request count IPv4=1 IPv6=0 ContainerID="c8fb8aaada84fd07633a4b76ed7615faa8975217c71409e0a91f278e1b3775b8" HandleID="k8s-pod-network.c8fb8aaada84fd07633a4b76ed7615faa8975217c71409e0a91f278e1b3775b8" Workload="localhost-k8s-coredns--7c65d6cfc9--mw6rj-eth0" Jul 9 13:02:56.291748 containerd[1563]: 2025-07-09 13:02:55.480 [INFO][4982] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c8fb8aaada84fd07633a4b76ed7615faa8975217c71409e0a91f278e1b3775b8" HandleID="k8s-pod-network.c8fb8aaada84fd07633a4b76ed7615faa8975217c71409e0a91f278e1b3775b8" Workload="localhost-k8s-coredns--7c65d6cfc9--mw6rj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f7b0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-mw6rj", "timestamp":"2025-07-09 13:02:55.48011178 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 9 13:02:56.291748 containerd[1563]: 2025-07-09 13:02:55.480 [INFO][4982] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 9 13:02:56.291748 containerd[1563]: 2025-07-09 13:02:55.480 [INFO][4982] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 9 13:02:56.291748 containerd[1563]: 2025-07-09 13:02:55.480 [INFO][4982] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 9 13:02:56.291748 containerd[1563]: 2025-07-09 13:02:55.659 [INFO][4982] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c8fb8aaada84fd07633a4b76ed7615faa8975217c71409e0a91f278e1b3775b8" host="localhost" Jul 9 13:02:56.291748 containerd[1563]: 2025-07-09 13:02:55.692 [INFO][4982] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 9 13:02:56.291748 containerd[1563]: 2025-07-09 13:02:55.696 [INFO][4982] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 9 13:02:56.291748 containerd[1563]: 2025-07-09 13:02:55.698 [INFO][4982] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 9 13:02:56.291748 containerd[1563]: 2025-07-09 13:02:55.699 [INFO][4982] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 9 13:02:56.291748 containerd[1563]: 2025-07-09 13:02:55.699 [INFO][4982] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c8fb8aaada84fd07633a4b76ed7615faa8975217c71409e0a91f278e1b3775b8" host="localhost" Jul 9 13:02:56.291748 containerd[1563]: 2025-07-09 13:02:55.701 [INFO][4982] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.c8fb8aaada84fd07633a4b76ed7615faa8975217c71409e0a91f278e1b3775b8 Jul 9 13:02:56.291748 containerd[1563]: 2025-07-09 13:02:55.867 [INFO][4982] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c8fb8aaada84fd07633a4b76ed7615faa8975217c71409e0a91f278e1b3775b8" host="localhost" Jul 9 13:02:56.291748 containerd[1563]: 2025-07-09 13:02:56.162 [INFO][4982] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.c8fb8aaada84fd07633a4b76ed7615faa8975217c71409e0a91f278e1b3775b8" host="localhost" Jul 9 13:02:56.291748 containerd[1563]: 2025-07-09 13:02:56.162 [INFO][4982] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.c8fb8aaada84fd07633a4b76ed7615faa8975217c71409e0a91f278e1b3775b8" host="localhost" Jul 9 13:02:56.291748 containerd[1563]: 2025-07-09 13:02:56.162 [INFO][4982] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 9 13:02:56.291748 containerd[1563]: 2025-07-09 13:02:56.162 [INFO][4982] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="c8fb8aaada84fd07633a4b76ed7615faa8975217c71409e0a91f278e1b3775b8" HandleID="k8s-pod-network.c8fb8aaada84fd07633a4b76ed7615faa8975217c71409e0a91f278e1b3775b8" Workload="localhost-k8s-coredns--7c65d6cfc9--mw6rj-eth0" Jul 9 13:02:56.293395 containerd[1563]: 2025-07-09 13:02:56.165 [INFO][4965] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c8fb8aaada84fd07633a4b76ed7615faa8975217c71409e0a91f278e1b3775b8" Namespace="kube-system" Pod="coredns-7c65d6cfc9-mw6rj" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--mw6rj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--mw6rj-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"6294eb09-ccd2-414a-90bb-afd069984c58", ResourceVersion:"916", Generation:0, CreationTimestamp:time.Date(2025, time.July, 9, 13, 1, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-mw6rj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliebb599d1111", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 9 13:02:56.293395 containerd[1563]: 2025-07-09 13:02:56.165 [INFO][4965] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="c8fb8aaada84fd07633a4b76ed7615faa8975217c71409e0a91f278e1b3775b8" Namespace="kube-system" Pod="coredns-7c65d6cfc9-mw6rj" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--mw6rj-eth0" Jul 9 13:02:56.293395 containerd[1563]: 2025-07-09 13:02:56.165 [INFO][4965] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliebb599d1111 ContainerID="c8fb8aaada84fd07633a4b76ed7615faa8975217c71409e0a91f278e1b3775b8" Namespace="kube-system" Pod="coredns-7c65d6cfc9-mw6rj" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--mw6rj-eth0" Jul 9 13:02:56.293395 containerd[1563]: 2025-07-09 13:02:56.169 [INFO][4965] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c8fb8aaada84fd07633a4b76ed7615faa8975217c71409e0a91f278e1b3775b8" Namespace="kube-system" Pod="coredns-7c65d6cfc9-mw6rj" 
WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--mw6rj-eth0" Jul 9 13:02:56.293395 containerd[1563]: 2025-07-09 13:02:56.170 [INFO][4965] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c8fb8aaada84fd07633a4b76ed7615faa8975217c71409e0a91f278e1b3775b8" Namespace="kube-system" Pod="coredns-7c65d6cfc9-mw6rj" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--mw6rj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--mw6rj-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"6294eb09-ccd2-414a-90bb-afd069984c58", ResourceVersion:"916", Generation:0, CreationTimestamp:time.Date(2025, time.July, 9, 13, 1, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c8fb8aaada84fd07633a4b76ed7615faa8975217c71409e0a91f278e1b3775b8", Pod:"coredns-7c65d6cfc9-mw6rj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliebb599d1111", MAC:"7a:da:69:48:2d:44", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 9 13:02:56.293395 containerd[1563]: 2025-07-09 13:02:56.283 [INFO][4965] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c8fb8aaada84fd07633a4b76ed7615faa8975217c71409e0a91f278e1b3775b8" Namespace="kube-system" Pod="coredns-7c65d6cfc9-mw6rj" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--mw6rj-eth0" Jul 9 13:02:56.486783 containerd[1563]: time="2025-07-09T13:02:56.486634918Z" level=info msg="connecting to shim c8fb8aaada84fd07633a4b76ed7615faa8975217c71409e0a91f278e1b3775b8" address="unix:///run/containerd/s/ae8fd0743810b8362f5666505b8a0161f8ddb54bcbca94ad093a8f2f86222c45" namespace=k8s.io protocol=ttrpc version=3 Jul 9 13:02:56.503455 systemd-networkd[1485]: cali563da18f472: Link UP Jul 9 13:02:56.504938 systemd-networkd[1485]: cali563da18f472: Gained carrier Jul 9 13:02:56.520868 systemd[1]: Started cri-containerd-c8fb8aaada84fd07633a4b76ed7615faa8975217c71409e0a91f278e1b3775b8.scope - libcontainer container c8fb8aaada84fd07633a4b76ed7615faa8975217c71409e0a91f278e1b3775b8. 
Jul 9 13:02:56.522939 containerd[1563]: 2025-07-09 13:02:56.368 [INFO][5002] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--68df4b8f94--glcsr-eth0 calico-apiserver-68df4b8f94- calico-apiserver 83f09b99-c580-4fba-9a5b-8dcbe9457bd1 919 0 2025-07-09 13:01:49 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:68df4b8f94 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-68df4b8f94-glcsr eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali563da18f472 [] [] }} ContainerID="3567d9e68c07fad6211a856cc725fd4ae0da863db410a17e6c6dc366119e743c" Namespace="calico-apiserver" Pod="calico-apiserver-68df4b8f94-glcsr" WorkloadEndpoint="localhost-k8s-calico--apiserver--68df4b8f94--glcsr-" Jul 9 13:02:56.522939 containerd[1563]: 2025-07-09 13:02:56.368 [INFO][5002] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3567d9e68c07fad6211a856cc725fd4ae0da863db410a17e6c6dc366119e743c" Namespace="calico-apiserver" Pod="calico-apiserver-68df4b8f94-glcsr" WorkloadEndpoint="localhost-k8s-calico--apiserver--68df4b8f94--glcsr-eth0" Jul 9 13:02:56.522939 containerd[1563]: 2025-07-09 13:02:56.450 [INFO][5019] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3567d9e68c07fad6211a856cc725fd4ae0da863db410a17e6c6dc366119e743c" HandleID="k8s-pod-network.3567d9e68c07fad6211a856cc725fd4ae0da863db410a17e6c6dc366119e743c" Workload="localhost-k8s-calico--apiserver--68df4b8f94--glcsr-eth0" Jul 9 13:02:56.522939 containerd[1563]: 2025-07-09 13:02:56.450 [INFO][5019] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3567d9e68c07fad6211a856cc725fd4ae0da863db410a17e6c6dc366119e743c" 
HandleID="k8s-pod-network.3567d9e68c07fad6211a856cc725fd4ae0da863db410a17e6c6dc366119e743c" Workload="localhost-k8s-calico--apiserver--68df4b8f94--glcsr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000d0470), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-68df4b8f94-glcsr", "timestamp":"2025-07-09 13:02:56.450114234 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 9 13:02:56.522939 containerd[1563]: 2025-07-09 13:02:56.450 [INFO][5019] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 9 13:02:56.522939 containerd[1563]: 2025-07-09 13:02:56.450 [INFO][5019] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 9 13:02:56.522939 containerd[1563]: 2025-07-09 13:02:56.450 [INFO][5019] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 9 13:02:56.522939 containerd[1563]: 2025-07-09 13:02:56.458 [INFO][5019] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3567d9e68c07fad6211a856cc725fd4ae0da863db410a17e6c6dc366119e743c" host="localhost" Jul 9 13:02:56.522939 containerd[1563]: 2025-07-09 13:02:56.464 [INFO][5019] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 9 13:02:56.522939 containerd[1563]: 2025-07-09 13:02:56.471 [INFO][5019] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 9 13:02:56.522939 containerd[1563]: 2025-07-09 13:02:56.473 [INFO][5019] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 9 13:02:56.522939 containerd[1563]: 2025-07-09 13:02:56.477 [INFO][5019] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 9 13:02:56.522939 containerd[1563]: 2025-07-09 
13:02:56.477 [INFO][5019] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3567d9e68c07fad6211a856cc725fd4ae0da863db410a17e6c6dc366119e743c" host="localhost" Jul 9 13:02:56.522939 containerd[1563]: 2025-07-09 13:02:56.479 [INFO][5019] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.3567d9e68c07fad6211a856cc725fd4ae0da863db410a17e6c6dc366119e743c Jul 9 13:02:56.522939 containerd[1563]: 2025-07-09 13:02:56.483 [INFO][5019] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3567d9e68c07fad6211a856cc725fd4ae0da863db410a17e6c6dc366119e743c" host="localhost" Jul 9 13:02:56.522939 containerd[1563]: 2025-07-09 13:02:56.490 [INFO][5019] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.3567d9e68c07fad6211a856cc725fd4ae0da863db410a17e6c6dc366119e743c" host="localhost" Jul 9 13:02:56.522939 containerd[1563]: 2025-07-09 13:02:56.491 [INFO][5019] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.3567d9e68c07fad6211a856cc725fd4ae0da863db410a17e6c6dc366119e743c" host="localhost" Jul 9 13:02:56.522939 containerd[1563]: 2025-07-09 13:02:56.491 [INFO][5019] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 9 13:02:56.522939 containerd[1563]: 2025-07-09 13:02:56.491 [INFO][5019] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="3567d9e68c07fad6211a856cc725fd4ae0da863db410a17e6c6dc366119e743c" HandleID="k8s-pod-network.3567d9e68c07fad6211a856cc725fd4ae0da863db410a17e6c6dc366119e743c" Workload="localhost-k8s-calico--apiserver--68df4b8f94--glcsr-eth0" Jul 9 13:02:56.523956 containerd[1563]: 2025-07-09 13:02:56.498 [INFO][5002] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3567d9e68c07fad6211a856cc725fd4ae0da863db410a17e6c6dc366119e743c" Namespace="calico-apiserver" Pod="calico-apiserver-68df4b8f94-glcsr" WorkloadEndpoint="localhost-k8s-calico--apiserver--68df4b8f94--glcsr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--68df4b8f94--glcsr-eth0", GenerateName:"calico-apiserver-68df4b8f94-", Namespace:"calico-apiserver", SelfLink:"", UID:"83f09b99-c580-4fba-9a5b-8dcbe9457bd1", ResourceVersion:"919", Generation:0, CreationTimestamp:time.Date(2025, time.July, 9, 13, 1, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"68df4b8f94", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-68df4b8f94-glcsr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali563da18f472", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 9 13:02:56.523956 containerd[1563]: 2025-07-09 13:02:56.500 [INFO][5002] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="3567d9e68c07fad6211a856cc725fd4ae0da863db410a17e6c6dc366119e743c" Namespace="calico-apiserver" Pod="calico-apiserver-68df4b8f94-glcsr" WorkloadEndpoint="localhost-k8s-calico--apiserver--68df4b8f94--glcsr-eth0" Jul 9 13:02:56.523956 containerd[1563]: 2025-07-09 13:02:56.500 [INFO][5002] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali563da18f472 ContainerID="3567d9e68c07fad6211a856cc725fd4ae0da863db410a17e6c6dc366119e743c" Namespace="calico-apiserver" Pod="calico-apiserver-68df4b8f94-glcsr" WorkloadEndpoint="localhost-k8s-calico--apiserver--68df4b8f94--glcsr-eth0" Jul 9 13:02:56.523956 containerd[1563]: 2025-07-09 13:02:56.504 [INFO][5002] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3567d9e68c07fad6211a856cc725fd4ae0da863db410a17e6c6dc366119e743c" Namespace="calico-apiserver" Pod="calico-apiserver-68df4b8f94-glcsr" WorkloadEndpoint="localhost-k8s-calico--apiserver--68df4b8f94--glcsr-eth0" Jul 9 13:02:56.523956 containerd[1563]: 2025-07-09 13:02:56.504 [INFO][5002] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3567d9e68c07fad6211a856cc725fd4ae0da863db410a17e6c6dc366119e743c" Namespace="calico-apiserver" Pod="calico-apiserver-68df4b8f94-glcsr" WorkloadEndpoint="localhost-k8s-calico--apiserver--68df4b8f94--glcsr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--68df4b8f94--glcsr-eth0", GenerateName:"calico-apiserver-68df4b8f94-", 
Namespace:"calico-apiserver", SelfLink:"", UID:"83f09b99-c580-4fba-9a5b-8dcbe9457bd1", ResourceVersion:"919", Generation:0, CreationTimestamp:time.Date(2025, time.July, 9, 13, 1, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"68df4b8f94", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3567d9e68c07fad6211a856cc725fd4ae0da863db410a17e6c6dc366119e743c", Pod:"calico-apiserver-68df4b8f94-glcsr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali563da18f472", MAC:"16:1e:6c:b8:ee:72", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 9 13:02:56.523956 containerd[1563]: 2025-07-09 13:02:56.516 [INFO][5002] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3567d9e68c07fad6211a856cc725fd4ae0da863db410a17e6c6dc366119e743c" Namespace="calico-apiserver" Pod="calico-apiserver-68df4b8f94-glcsr" WorkloadEndpoint="localhost-k8s-calico--apiserver--68df4b8f94--glcsr-eth0" Jul 9 13:02:56.543583 systemd-resolved[1406]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 9 13:02:56.557645 containerd[1563]: time="2025-07-09T13:02:56.557531392Z" level=info msg="connecting to shim 
3567d9e68c07fad6211a856cc725fd4ae0da863db410a17e6c6dc366119e743c" address="unix:///run/containerd/s/d374b530f5bba1ca5c7d0eb24daaa67bd8baca90890320a4cb339c92b72474d9" namespace=k8s.io protocol=ttrpc version=3 Jul 9 13:02:56.587730 systemd[1]: Started cri-containerd-3567d9e68c07fad6211a856cc725fd4ae0da863db410a17e6c6dc366119e743c.scope - libcontainer container 3567d9e68c07fad6211a856cc725fd4ae0da863db410a17e6c6dc366119e743c. Jul 9 13:02:56.612161 systemd-networkd[1485]: cali63fc7a3f92a: Link UP Jul 9 13:02:56.612770 systemd-networkd[1485]: cali63fc7a3f92a: Gained carrier Jul 9 13:02:56.617625 systemd-resolved[1406]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 9 13:02:56.696791 containerd[1563]: time="2025-07-09T13:02:56.696733240Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-mw6rj,Uid:6294eb09-ccd2-414a-90bb-afd069984c58,Namespace:kube-system,Attempt:0,} returns sandbox id \"c8fb8aaada84fd07633a4b76ed7615faa8975217c71409e0a91f278e1b3775b8\"" Jul 9 13:02:56.697863 kubelet[2695]: E0709 13:02:56.697770 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 13:02:56.705164 containerd[1563]: time="2025-07-09T13:02:56.704966421Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68df4b8f94-glcsr,Uid:83f09b99-c580-4fba-9a5b-8dcbe9457bd1,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"3567d9e68c07fad6211a856cc725fd4ae0da863db410a17e6c6dc366119e743c\"" Jul 9 13:02:56.705493 containerd[1563]: 2025-07-09 13:02:56.457 [INFO][5018] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--766fbd8c89--f5gnv-eth0 calico-kube-controllers-766fbd8c89- calico-system be0c468f-8558-4c7f-9123-1d550c90de18 927 0 2025-07-09 13:01:52 +0000 UTC 
map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:766fbd8c89 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-766fbd8c89-f5gnv eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali63fc7a3f92a [] [] }} ContainerID="5ac08be41b3a42532512f870f7a880fd73541636ad243316f2723db12cbff998" Namespace="calico-system" Pod="calico-kube-controllers-766fbd8c89-f5gnv" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--766fbd8c89--f5gnv-" Jul 9 13:02:56.705493 containerd[1563]: 2025-07-09 13:02:56.457 [INFO][5018] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5ac08be41b3a42532512f870f7a880fd73541636ad243316f2723db12cbff998" Namespace="calico-system" Pod="calico-kube-controllers-766fbd8c89-f5gnv" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--766fbd8c89--f5gnv-eth0" Jul 9 13:02:56.705493 containerd[1563]: 2025-07-09 13:02:56.518 [INFO][5040] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5ac08be41b3a42532512f870f7a880fd73541636ad243316f2723db12cbff998" HandleID="k8s-pod-network.5ac08be41b3a42532512f870f7a880fd73541636ad243316f2723db12cbff998" Workload="localhost-k8s-calico--kube--controllers--766fbd8c89--f5gnv-eth0" Jul 9 13:02:56.705493 containerd[1563]: 2025-07-09 13:02:56.519 [INFO][5040] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5ac08be41b3a42532512f870f7a880fd73541636ad243316f2723db12cbff998" HandleID="k8s-pod-network.5ac08be41b3a42532512f870f7a880fd73541636ad243316f2723db12cbff998" Workload="localhost-k8s-calico--kube--controllers--766fbd8c89--f5gnv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004eaf0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", 
"pod":"calico-kube-controllers-766fbd8c89-f5gnv", "timestamp":"2025-07-09 13:02:56.518729934 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 9 13:02:56.705493 containerd[1563]: 2025-07-09 13:02:56.519 [INFO][5040] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 9 13:02:56.705493 containerd[1563]: 2025-07-09 13:02:56.519 [INFO][5040] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 9 13:02:56.705493 containerd[1563]: 2025-07-09 13:02:56.519 [INFO][5040] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 9 13:02:56.705493 containerd[1563]: 2025-07-09 13:02:56.559 [INFO][5040] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5ac08be41b3a42532512f870f7a880fd73541636ad243316f2723db12cbff998" host="localhost" Jul 9 13:02:56.705493 containerd[1563]: 2025-07-09 13:02:56.571 [INFO][5040] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 9 13:02:56.705493 containerd[1563]: 2025-07-09 13:02:56.576 [INFO][5040] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 9 13:02:56.705493 containerd[1563]: 2025-07-09 13:02:56.579 [INFO][5040] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 9 13:02:56.705493 containerd[1563]: 2025-07-09 13:02:56.581 [INFO][5040] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 9 13:02:56.705493 containerd[1563]: 2025-07-09 13:02:56.581 [INFO][5040] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5ac08be41b3a42532512f870f7a880fd73541636ad243316f2723db12cbff998" host="localhost" Jul 9 13:02:56.705493 containerd[1563]: 2025-07-09 13:02:56.582 [INFO][5040] ipam/ipam.go 
1764: Creating new handle: k8s-pod-network.5ac08be41b3a42532512f870f7a880fd73541636ad243316f2723db12cbff998 Jul 9 13:02:56.705493 containerd[1563]: 2025-07-09 13:02:56.589 [INFO][5040] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5ac08be41b3a42532512f870f7a880fd73541636ad243316f2723db12cbff998" host="localhost" Jul 9 13:02:56.705493 containerd[1563]: 2025-07-09 13:02:56.602 [INFO][5040] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.5ac08be41b3a42532512f870f7a880fd73541636ad243316f2723db12cbff998" host="localhost" Jul 9 13:02:56.705493 containerd[1563]: 2025-07-09 13:02:56.603 [INFO][5040] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.5ac08be41b3a42532512f870f7a880fd73541636ad243316f2723db12cbff998" host="localhost" Jul 9 13:02:56.705493 containerd[1563]: 2025-07-09 13:02:56.603 [INFO][5040] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 9 13:02:56.705493 containerd[1563]: 2025-07-09 13:02:56.603 [INFO][5040] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="5ac08be41b3a42532512f870f7a880fd73541636ad243316f2723db12cbff998" HandleID="k8s-pod-network.5ac08be41b3a42532512f870f7a880fd73541636ad243316f2723db12cbff998" Workload="localhost-k8s-calico--kube--controllers--766fbd8c89--f5gnv-eth0" Jul 9 13:02:56.706158 containerd[1563]: 2025-07-09 13:02:56.608 [INFO][5018] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5ac08be41b3a42532512f870f7a880fd73541636ad243316f2723db12cbff998" Namespace="calico-system" Pod="calico-kube-controllers-766fbd8c89-f5gnv" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--766fbd8c89--f5gnv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--766fbd8c89--f5gnv-eth0", GenerateName:"calico-kube-controllers-766fbd8c89-", Namespace:"calico-system", SelfLink:"", UID:"be0c468f-8558-4c7f-9123-1d550c90de18", ResourceVersion:"927", Generation:0, CreationTimestamp:time.Date(2025, time.July, 9, 13, 1, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"766fbd8c89", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-766fbd8c89-f5gnv", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali63fc7a3f92a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 9 13:02:56.706158 containerd[1563]: 2025-07-09 13:02:56.608 [INFO][5018] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="5ac08be41b3a42532512f870f7a880fd73541636ad243316f2723db12cbff998" Namespace="calico-system" Pod="calico-kube-controllers-766fbd8c89-f5gnv" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--766fbd8c89--f5gnv-eth0" Jul 9 13:02:56.706158 containerd[1563]: 2025-07-09 13:02:56.608 [INFO][5018] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali63fc7a3f92a ContainerID="5ac08be41b3a42532512f870f7a880fd73541636ad243316f2723db12cbff998" Namespace="calico-system" Pod="calico-kube-controllers-766fbd8c89-f5gnv" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--766fbd8c89--f5gnv-eth0" Jul 9 13:02:56.706158 containerd[1563]: 2025-07-09 13:02:56.610 [INFO][5018] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5ac08be41b3a42532512f870f7a880fd73541636ad243316f2723db12cbff998" Namespace="calico-system" Pod="calico-kube-controllers-766fbd8c89-f5gnv" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--766fbd8c89--f5gnv-eth0" Jul 9 13:02:56.706158 containerd[1563]: 2025-07-09 13:02:56.611 [INFO][5018] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5ac08be41b3a42532512f870f7a880fd73541636ad243316f2723db12cbff998" Namespace="calico-system" Pod="calico-kube-controllers-766fbd8c89-f5gnv" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--766fbd8c89--f5gnv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--766fbd8c89--f5gnv-eth0", GenerateName:"calico-kube-controllers-766fbd8c89-", Namespace:"calico-system", SelfLink:"", UID:"be0c468f-8558-4c7f-9123-1d550c90de18", ResourceVersion:"927", Generation:0, CreationTimestamp:time.Date(2025, time.July, 9, 13, 1, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"766fbd8c89", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5ac08be41b3a42532512f870f7a880fd73541636ad243316f2723db12cbff998", Pod:"calico-kube-controllers-766fbd8c89-f5gnv", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali63fc7a3f92a", MAC:"da:50:06:3e:20:b3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 9 13:02:56.706158 containerd[1563]: 2025-07-09 13:02:56.701 [INFO][5018] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5ac08be41b3a42532512f870f7a880fd73541636ad243316f2723db12cbff998" Namespace="calico-system" Pod="calico-kube-controllers-766fbd8c89-f5gnv" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--766fbd8c89--f5gnv-eth0" Jul 9 13:02:56.706911 containerd[1563]: time="2025-07-09T13:02:56.706872207Z" level=info msg="CreateContainer within sandbox 
\"c8fb8aaada84fd07633a4b76ed7615faa8975217c71409e0a91f278e1b3775b8\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 9 13:02:56.711277 containerd[1563]: time="2025-07-09T13:02:56.710454186Z" level=info msg="CreateContainer within sandbox \"3567d9e68c07fad6211a856cc725fd4ae0da863db410a17e6c6dc366119e743c\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 9 13:02:56.720943 containerd[1563]: time="2025-07-09T13:02:56.720888418Z" level=info msg="Container 07d1142b295ce6420ee40c3f07c29283dc4e7638452bef0406bcd1620ba24539: CDI devices from CRI Config.CDIDevices: []" Jul 9 13:02:56.727028 containerd[1563]: time="2025-07-09T13:02:56.726985231Z" level=info msg="Container e8df827108bb54e35f961473bb51622db911562b4f4508f552e233d33821e8a6: CDI devices from CRI Config.CDIDevices: []" Jul 9 13:02:56.738453 containerd[1563]: time="2025-07-09T13:02:56.738284423Z" level=info msg="CreateContainer within sandbox \"c8fb8aaada84fd07633a4b76ed7615faa8975217c71409e0a91f278e1b3775b8\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"07d1142b295ce6420ee40c3f07c29283dc4e7638452bef0406bcd1620ba24539\"" Jul 9 13:02:56.740116 containerd[1563]: time="2025-07-09T13:02:56.740012106Z" level=info msg="StartContainer for \"07d1142b295ce6420ee40c3f07c29283dc4e7638452bef0406bcd1620ba24539\"" Jul 9 13:02:56.741308 containerd[1563]: time="2025-07-09T13:02:56.741278795Z" level=info msg="connecting to shim 07d1142b295ce6420ee40c3f07c29283dc4e7638452bef0406bcd1620ba24539" address="unix:///run/containerd/s/ae8fd0743810b8362f5666505b8a0161f8ddb54bcbca94ad093a8f2f86222c45" protocol=ttrpc version=3 Jul 9 13:02:56.746308 containerd[1563]: time="2025-07-09T13:02:56.746145340Z" level=info msg="CreateContainer within sandbox \"3567d9e68c07fad6211a856cc725fd4ae0da863db410a17e6c6dc366119e743c\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"e8df827108bb54e35f961473bb51622db911562b4f4508f552e233d33821e8a6\"" Jul 9 
13:02:56.747455 containerd[1563]: time="2025-07-09T13:02:56.747408932Z" level=info msg="StartContainer for \"e8df827108bb54e35f961473bb51622db911562b4f4508f552e233d33821e8a6\"" Jul 9 13:02:56.749076 containerd[1563]: time="2025-07-09T13:02:56.748987290Z" level=info msg="connecting to shim e8df827108bb54e35f961473bb51622db911562b4f4508f552e233d33821e8a6" address="unix:///run/containerd/s/d374b530f5bba1ca5c7d0eb24daaa67bd8baca90890320a4cb339c92b72474d9" protocol=ttrpc version=3 Jul 9 13:02:56.749773 containerd[1563]: time="2025-07-09T13:02:56.749737198Z" level=info msg="connecting to shim 5ac08be41b3a42532512f870f7a880fd73541636ad243316f2723db12cbff998" address="unix:///run/containerd/s/7eb5794b41ce1d3b2aed0a46de72b7ac07a8e717ba4212abc637d530ec26ed97" namespace=k8s.io protocol=ttrpc version=3 Jul 9 13:02:56.770783 systemd[1]: Started cri-containerd-07d1142b295ce6420ee40c3f07c29283dc4e7638452bef0406bcd1620ba24539.scope - libcontainer container 07d1142b295ce6420ee40c3f07c29283dc4e7638452bef0406bcd1620ba24539. Jul 9 13:02:56.783601 systemd[1]: Started cri-containerd-e8df827108bb54e35f961473bb51622db911562b4f4508f552e233d33821e8a6.scope - libcontainer container e8df827108bb54e35f961473bb51622db911562b4f4508f552e233d33821e8a6. Jul 9 13:02:56.803618 systemd[1]: Started cri-containerd-5ac08be41b3a42532512f870f7a880fd73541636ad243316f2723db12cbff998.scope - libcontainer container 5ac08be41b3a42532512f870f7a880fd73541636ad243316f2723db12cbff998. 
Jul 9 13:02:56.825681 systemd-resolved[1406]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 9 13:02:56.841465 containerd[1563]: time="2025-07-09T13:02:56.841346948Z" level=info msg="StartContainer for \"07d1142b295ce6420ee40c3f07c29283dc4e7638452bef0406bcd1620ba24539\" returns successfully" Jul 9 13:02:56.860893 containerd[1563]: time="2025-07-09T13:02:56.860782164Z" level=info msg="StartContainer for \"e8df827108bb54e35f961473bb51622db911562b4f4508f552e233d33821e8a6\" returns successfully" Jul 9 13:02:56.883073 containerd[1563]: time="2025-07-09T13:02:56.883020045Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-766fbd8c89-f5gnv,Uid:be0c468f-8558-4c7f-9123-1d550c90de18,Namespace:calico-system,Attempt:0,} returns sandbox id \"5ac08be41b3a42532512f870f7a880fd73541636ad243316f2723db12cbff998\"" Jul 9 13:02:56.984761 systemd[1]: Started sshd@17-10.0.0.14:22-10.0.0.1:46670.service - OpenSSH per-connection server daemon (10.0.0.1:46670). Jul 9 13:02:57.048644 sshd[5276]: Accepted publickey for core from 10.0.0.1 port 46670 ssh2: RSA SHA256:Ehsv9iPAmIJbEnlorOi35d2Kryfd05fXf88yv2g5tlI Jul 9 13:02:57.050627 sshd-session[5276]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 13:02:57.055273 systemd-logind[1538]: New session 18 of user core. Jul 9 13:02:57.065621 systemd[1]: Started session-18.scope - Session 18 of User core. Jul 9 13:02:57.204399 sshd[5279]: Connection closed by 10.0.0.1 port 46670 Jul 9 13:02:57.204770 sshd-session[5276]: pam_unix(sshd:session): session closed for user core Jul 9 13:02:57.210105 systemd[1]: sshd@17-10.0.0.14:22-10.0.0.1:46670.service: Deactivated successfully. Jul 9 13:02:57.212261 systemd[1]: session-18.scope: Deactivated successfully. Jul 9 13:02:57.213222 systemd-logind[1538]: Session 18 logged out. Waiting for processes to exit. Jul 9 13:02:57.214606 systemd-logind[1538]: Removed session 18. 
Jul 9 13:02:57.365201 kubelet[2695]: E0709 13:02:57.364866 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 13:02:57.660001 kubelet[2695]: I0709 13:02:57.659824 2695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-68df4b8f94-glcsr" podStartSLOduration=68.659796745 podStartE2EDuration="1m8.659796745s" podCreationTimestamp="2025-07-09 13:01:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-09 13:02:57.643850027 +0000 UTC m=+82.671803841" watchObservedRunningTime="2025-07-09 13:02:57.659796745 +0000 UTC m=+82.687750528" Jul 9 13:02:57.817737 systemd-networkd[1485]: cali563da18f472: Gained IPv6LL Jul 9 13:02:58.073638 systemd-networkd[1485]: caliebb599d1111: Gained IPv6LL Jul 9 13:02:58.368705 kubelet[2695]: I0709 13:02:58.368573 2695 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 9 13:02:58.369245 kubelet[2695]: E0709 13:02:58.368894 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 13:02:58.393558 systemd-networkd[1485]: cali63fc7a3f92a: Gained IPv6LL Jul 9 13:02:59.371154 kubelet[2695]: E0709 13:02:59.371106 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 13:03:00.298693 containerd[1563]: time="2025-07-09T13:03:00.298626896Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 13:03:00.299478 containerd[1563]: time="2025-07-09T13:03:00.299444320Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8759190" Jul 9 13:03:00.300676 containerd[1563]: time="2025-07-09T13:03:00.300633124Z" level=info msg="ImageCreate event name:\"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 13:03:00.302902 containerd[1563]: time="2025-07-09T13:03:00.302859485Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 13:03:00.303564 containerd[1563]: time="2025-07-09T13:03:00.303526542Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"10251893\" in 6.825155978s" Jul 9 13:03:00.303602 containerd[1563]: time="2025-07-09T13:03:00.303563793Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\"" Jul 9 13:03:00.304958 containerd[1563]: time="2025-07-09T13:03:00.304670922Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Jul 9 13:03:00.306260 containerd[1563]: time="2025-07-09T13:03:00.306199137Z" level=info msg="CreateContainer within sandbox \"68219085944522a54af15e80a34cce31c8929db2a1206898830ab917093e9dca\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 9 13:03:00.318720 containerd[1563]: time="2025-07-09T13:03:00.318649835Z" level=info msg="Container cde2ad1510d8a6337fe367a96928f1e90406013e0e976e7570c316a79693fa48: CDI devices from CRI Config.CDIDevices: []" Jul 9 13:03:00.330489 containerd[1563]: time="2025-07-09T13:03:00.330448384Z" level=info 
msg="CreateContainer within sandbox \"68219085944522a54af15e80a34cce31c8929db2a1206898830ab917093e9dca\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"cde2ad1510d8a6337fe367a96928f1e90406013e0e976e7570c316a79693fa48\"" Jul 9 13:03:00.331071 containerd[1563]: time="2025-07-09T13:03:00.331039456Z" level=info msg="StartContainer for \"cde2ad1510d8a6337fe367a96928f1e90406013e0e976e7570c316a79693fa48\"" Jul 9 13:03:00.332771 containerd[1563]: time="2025-07-09T13:03:00.332613387Z" level=info msg="connecting to shim cde2ad1510d8a6337fe367a96928f1e90406013e0e976e7570c316a79693fa48" address="unix:///run/containerd/s/445322a6bb92bceb579e193c33049c140b658e98fbac87685e5f5bd4974c7349" protocol=ttrpc version=3 Jul 9 13:03:00.365539 systemd[1]: Started cri-containerd-cde2ad1510d8a6337fe367a96928f1e90406013e0e976e7570c316a79693fa48.scope - libcontainer container cde2ad1510d8a6337fe367a96928f1e90406013e0e976e7570c316a79693fa48. Jul 9 13:03:00.413105 containerd[1563]: time="2025-07-09T13:03:00.413048922Z" level=info msg="StartContainer for \"cde2ad1510d8a6337fe367a96928f1e90406013e0e976e7570c316a79693fa48\" returns successfully" Jul 9 13:03:02.222212 systemd[1]: Started sshd@18-10.0.0.14:22-10.0.0.1:46674.service - OpenSSH per-connection server daemon (10.0.0.1:46674). Jul 9 13:03:02.289588 sshd[5337]: Accepted publickey for core from 10.0.0.1 port 46674 ssh2: RSA SHA256:Ehsv9iPAmIJbEnlorOi35d2Kryfd05fXf88yv2g5tlI Jul 9 13:03:02.291683 sshd-session[5337]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 13:03:02.296492 systemd-logind[1538]: New session 19 of user core. Jul 9 13:03:02.303507 systemd[1]: Started session-19.scope - Session 19 of User core. 
Jul 9 13:03:02.592409 sshd[5340]: Connection closed by 10.0.0.1 port 46674 Jul 9 13:03:02.592749 sshd-session[5337]: pam_unix(sshd:session): session closed for user core Jul 9 13:03:02.597105 systemd[1]: sshd@18-10.0.0.14:22-10.0.0.1:46674.service: Deactivated successfully. Jul 9 13:03:02.599321 systemd[1]: session-19.scope: Deactivated successfully. Jul 9 13:03:02.600186 systemd-logind[1538]: Session 19 logged out. Waiting for processes to exit. Jul 9 13:03:02.601388 systemd-logind[1538]: Removed session 19. Jul 9 13:03:04.132345 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1055928725.mount: Deactivated successfully. Jul 9 13:03:05.152005 containerd[1563]: time="2025-07-09T13:03:05.151944158Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 13:03:05.153387 containerd[1563]: time="2025-07-09T13:03:05.153308012Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=66352308" Jul 9 13:03:05.154754 containerd[1563]: time="2025-07-09T13:03:05.154709838Z" level=info msg="ImageCreate event name:\"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 13:03:05.157191 containerd[1563]: time="2025-07-09T13:03:05.157141151Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 13:03:05.157816 containerd[1563]: time="2025-07-09T13:03:05.157764231Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest 
\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"66352154\" in 4.85305776s" Jul 9 13:03:05.157816 containerd[1563]: time="2025-07-09T13:03:05.157811300Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\"" Jul 9 13:03:05.158584 containerd[1563]: time="2025-07-09T13:03:05.158545020Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Jul 9 13:03:05.159920 containerd[1563]: time="2025-07-09T13:03:05.159882945Z" level=info msg="CreateContainer within sandbox \"dc076935ea61515f227d04170df71c2a1106097f5377c4e425ee5dfa4345e504\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Jul 9 13:03:05.169268 containerd[1563]: time="2025-07-09T13:03:05.169195345Z" level=info msg="Container 13622ed7551121985b2fb9b2d09d4dbd8ef786b791008f6aab516631b960f9c8: CDI devices from CRI Config.CDIDevices: []" Jul 9 13:03:05.181368 containerd[1563]: time="2025-07-09T13:03:05.181215494Z" level=info msg="CreateContainer within sandbox \"dc076935ea61515f227d04170df71c2a1106097f5377c4e425ee5dfa4345e504\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"13622ed7551121985b2fb9b2d09d4dbd8ef786b791008f6aab516631b960f9c8\"" Jul 9 13:03:05.182092 containerd[1563]: time="2025-07-09T13:03:05.182053655Z" level=info msg="StartContainer for \"13622ed7551121985b2fb9b2d09d4dbd8ef786b791008f6aab516631b960f9c8\"" Jul 9 13:03:05.183651 containerd[1563]: time="2025-07-09T13:03:05.183607742Z" level=info msg="connecting to shim 13622ed7551121985b2fb9b2d09d4dbd8ef786b791008f6aab516631b960f9c8" address="unix:///run/containerd/s/7f644fd70de3c127991e5b3b3a2f9567e743db6ba8b7aab2633fe3c7295b237e" protocol=ttrpc version=3 Jul 9 13:03:05.246690 systemd[1]: Started cri-containerd-13622ed7551121985b2fb9b2d09d4dbd8ef786b791008f6aab516631b960f9c8.scope - libcontainer container 
13622ed7551121985b2fb9b2d09d4dbd8ef786b791008f6aab516631b960f9c8. Jul 9 13:03:05.448734 containerd[1563]: time="2025-07-09T13:03:05.448490504Z" level=info msg="StartContainer for \"13622ed7551121985b2fb9b2d09d4dbd8ef786b791008f6aab516631b960f9c8\" returns successfully" Jul 9 13:03:06.467049 kubelet[2695]: I0709 13:03:06.466970 2695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-mw6rj" podStartSLOduration=85.466949142 podStartE2EDuration="1m25.466949142s" podCreationTimestamp="2025-07-09 13:01:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-09 13:02:57.661137895 +0000 UTC m=+82.689091668" watchObservedRunningTime="2025-07-09 13:03:06.466949142 +0000 UTC m=+91.494902925" Jul 9 13:03:06.468699 kubelet[2695]: I0709 13:03:06.467347 2695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-58fd7646b9-6p7m6" podStartSLOduration=63.947852813 podStartE2EDuration="1m15.467340369s" podCreationTimestamp="2025-07-09 13:01:51 +0000 UTC" firstStartedPulling="2025-07-09 13:02:53.638920033 +0000 UTC m=+78.666873816" lastFinishedPulling="2025-07-09 13:03:05.158407588 +0000 UTC m=+90.186361372" observedRunningTime="2025-07-09 13:03:06.466536164 +0000 UTC m=+91.494489947" watchObservedRunningTime="2025-07-09 13:03:06.467340369 +0000 UTC m=+91.495294152" Jul 9 13:03:06.545590 containerd[1563]: time="2025-07-09T13:03:06.545528647Z" level=info msg="TaskExit event in podsandbox handler container_id:\"13622ed7551121985b2fb9b2d09d4dbd8ef786b791008f6aab516631b960f9c8\" id:\"91b2a030f2bef1782bc1ee71cca3b4510e0001d52e2d366d291b2b8a6bd30882\" pid:5420 exit_status:1 exited_at:{seconds:1752066186 nanos:544980470}" Jul 9 13:03:07.536601 containerd[1563]: time="2025-07-09T13:03:07.536534307Z" level=info msg="TaskExit event in podsandbox handler 
container_id:\"13622ed7551121985b2fb9b2d09d4dbd8ef786b791008f6aab516631b960f9c8\" id:\"8467d942ef10638f2518f82460479e41b240c2ece2ef126daa072d621cf81d38\" pid:5448 exit_status:1 exited_at:{seconds:1752066187 nanos:536184289}" Jul 9 13:03:07.614263 systemd[1]: Started sshd@19-10.0.0.14:22-10.0.0.1:46368.service - OpenSSH per-connection server daemon (10.0.0.1:46368). Jul 9 13:03:07.684903 sshd[5461]: Accepted publickey for core from 10.0.0.1 port 46368 ssh2: RSA SHA256:Ehsv9iPAmIJbEnlorOi35d2Kryfd05fXf88yv2g5tlI Jul 9 13:03:07.686918 sshd-session[5461]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 13:03:07.692329 systemd-logind[1538]: New session 20 of user core. Jul 9 13:03:07.703539 systemd[1]: Started session-20.scope - Session 20 of User core. Jul 9 13:03:07.830065 sshd[5464]: Connection closed by 10.0.0.1 port 46368 Jul 9 13:03:07.830560 sshd-session[5461]: pam_unix(sshd:session): session closed for user core Jul 9 13:03:07.840463 systemd[1]: sshd@19-10.0.0.14:22-10.0.0.1:46368.service: Deactivated successfully. Jul 9 13:03:07.842763 systemd[1]: session-20.scope: Deactivated successfully. Jul 9 13:03:07.843618 systemd-logind[1538]: Session 20 logged out. Waiting for processes to exit. Jul 9 13:03:07.847603 systemd[1]: Started sshd@20-10.0.0.14:22-10.0.0.1:46372.service - OpenSSH per-connection server daemon (10.0.0.1:46372). Jul 9 13:03:07.848286 systemd-logind[1538]: Removed session 20. Jul 9 13:03:07.908525 sshd[5478]: Accepted publickey for core from 10.0.0.1 port 46372 ssh2: RSA SHA256:Ehsv9iPAmIJbEnlorOi35d2Kryfd05fXf88yv2g5tlI Jul 9 13:03:07.910500 sshd-session[5478]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 13:03:07.915429 systemd-logind[1538]: New session 21 of user core. Jul 9 13:03:07.927585 systemd[1]: Started session-21.scope - Session 21 of User core. 
Jul 9 13:03:08.094254 kubelet[2695]: E0709 13:03:08.094069 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 13:03:08.229221 sshd[5481]: Connection closed by 10.0.0.1 port 46372 Jul 9 13:03:08.229876 sshd-session[5478]: pam_unix(sshd:session): session closed for user core Jul 9 13:03:08.245270 systemd[1]: sshd@20-10.0.0.14:22-10.0.0.1:46372.service: Deactivated successfully. Jul 9 13:03:08.247393 systemd[1]: session-21.scope: Deactivated successfully. Jul 9 13:03:08.248407 systemd-logind[1538]: Session 21 logged out. Waiting for processes to exit. Jul 9 13:03:08.251238 systemd[1]: Started sshd@21-10.0.0.14:22-10.0.0.1:46382.service - OpenSSH per-connection server daemon (10.0.0.1:46382). Jul 9 13:03:08.252513 systemd-logind[1538]: Removed session 21. Jul 9 13:03:08.317073 sshd[5493]: Accepted publickey for core from 10.0.0.1 port 46382 ssh2: RSA SHA256:Ehsv9iPAmIJbEnlorOi35d2Kryfd05fXf88yv2g5tlI Jul 9 13:03:08.318887 sshd-session[5493]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 13:03:08.324113 systemd-logind[1538]: New session 22 of user core. Jul 9 13:03:08.332545 systemd[1]: Started session-22.scope - Session 22 of User core. Jul 9 13:03:09.669617 containerd[1563]: time="2025-07-09T13:03:09.669569080Z" level=info msg="TaskExit event in podsandbox handler container_id:\"13622ed7551121985b2fb9b2d09d4dbd8ef786b791008f6aab516631b960f9c8\" id:\"e6f09d1f078fd8c054bff6caa4af8e9384455e5d14c42fe49b0cd94211db7048\" pid:5519 exited_at:{seconds:1752066189 nanos:669094364}" Jul 9 13:03:10.388803 sshd[5496]: Connection closed by 10.0.0.1 port 46382 Jul 9 13:03:10.390780 sshd-session[5493]: pam_unix(sshd:session): session closed for user core Jul 9 13:03:10.402739 systemd[1]: sshd@21-10.0.0.14:22-10.0.0.1:46382.service: Deactivated successfully. 
Jul 9 13:03:10.406047 systemd[1]: session-22.scope: Deactivated successfully. Jul 9 13:03:10.406522 systemd[1]: session-22.scope: Consumed 622ms CPU time, 72.1M memory peak. Jul 9 13:03:10.407203 systemd-logind[1538]: Session 22 logged out. Waiting for processes to exit. Jul 9 13:03:10.411091 systemd-logind[1538]: Removed session 22. Jul 9 13:03:10.415799 systemd[1]: Started sshd@22-10.0.0.14:22-10.0.0.1:46394.service - OpenSSH per-connection server daemon (10.0.0.1:46394). Jul 9 13:03:10.470336 sshd[5540]: Accepted publickey for core from 10.0.0.1 port 46394 ssh2: RSA SHA256:Ehsv9iPAmIJbEnlorOi35d2Kryfd05fXf88yv2g5tlI Jul 9 13:03:10.471848 sshd-session[5540]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 13:03:10.476572 systemd-logind[1538]: New session 23 of user core. Jul 9 13:03:10.492547 systemd[1]: Started session-23.scope - Session 23 of User core. Jul 9 13:03:10.711767 sshd[5543]: Connection closed by 10.0.0.1 port 46394 Jul 9 13:03:10.712626 sshd-session[5540]: pam_unix(sshd:session): session closed for user core Jul 9 13:03:10.727102 systemd[1]: sshd@22-10.0.0.14:22-10.0.0.1:46394.service: Deactivated successfully. Jul 9 13:03:10.729891 systemd[1]: session-23.scope: Deactivated successfully. Jul 9 13:03:10.730918 systemd-logind[1538]: Session 23 logged out. Waiting for processes to exit. Jul 9 13:03:10.734321 systemd[1]: Started sshd@23-10.0.0.14:22-10.0.0.1:46408.service - OpenSSH per-connection server daemon (10.0.0.1:46408). Jul 9 13:03:10.735338 systemd-logind[1538]: Removed session 23. Jul 9 13:03:10.796974 sshd[5554]: Accepted publickey for core from 10.0.0.1 port 46408 ssh2: RSA SHA256:Ehsv9iPAmIJbEnlorOi35d2Kryfd05fXf88yv2g5tlI Jul 9 13:03:10.798688 sshd-session[5554]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 13:03:10.803220 systemd-logind[1538]: New session 24 of user core. Jul 9 13:03:10.810561 systemd[1]: Started session-24.scope - Session 24 of User core. 
Jul 9 13:03:10.918716 sshd[5557]: Connection closed by 10.0.0.1 port 46408
Jul 9 13:03:10.919161 sshd-session[5554]: pam_unix(sshd:session): session closed for user core
Jul 9 13:03:10.924352 systemd[1]: sshd@23-10.0.0.14:22-10.0.0.1:46408.service: Deactivated successfully.
Jul 9 13:03:10.926665 systemd[1]: session-24.scope: Deactivated successfully.
Jul 9 13:03:10.927414 systemd-logind[1538]: Session 24 logged out. Waiting for processes to exit.
Jul 9 13:03:10.928611 systemd-logind[1538]: Removed session 24.
Jul 9 13:03:12.001776 kubelet[2695]: I0709 13:03:12.001719 2695 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jul 9 13:03:13.608536 containerd[1563]: time="2025-07-09T13:03:13.608462724Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 13:03:13.609669 containerd[1563]: time="2025-07-09T13:03:13.609614928Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=51276688"
Jul 9 13:03:13.611564 containerd[1563]: time="2025-07-09T13:03:13.611468084Z" level=info msg="ImageCreate event name:\"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 13:03:13.614073 containerd[1563]: time="2025-07-09T13:03:13.614025582Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 13:03:13.614617 containerd[1563]: time="2025-07-09T13:03:13.614590558Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"52769359\" in 8.456016592s"
Jul 9 13:03:13.614667 containerd[1563]: time="2025-07-09T13:03:13.614622559Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\""
Jul 9 13:03:13.616179 containerd[1563]: time="2025-07-09T13:03:13.615762499Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\""
Jul 9 13:03:13.628750 containerd[1563]: time="2025-07-09T13:03:13.628680386Z" level=info msg="CreateContainer within sandbox \"5ac08be41b3a42532512f870f7a880fd73541636ad243316f2723db12cbff998\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}"
Jul 9 13:03:13.649098 containerd[1563]: time="2025-07-09T13:03:13.648966990Z" level=info msg="Container 0d3702d161c899ce7470c7de3a48626193a9821f6471a073b13f8797d63c545b: CDI devices from CRI Config.CDIDevices: []"
Jul 9 13:03:13.655731 containerd[1563]: time="2025-07-09T13:03:13.655684286Z" level=info msg="TaskExit event in podsandbox handler container_id:\"47bfc1d85e07abec5b98ded573f4f6af28718c30f5eadbadcc47b52cb90fdefc\" id:\"0f3ba49f6a69e3d04232c157ee6913b26a4d163bb7bc7950822ad67d1fb8a247\" pid:5590 exited_at:{seconds:1752066193 nanos:655235831}"
Jul 9 13:03:13.660068 containerd[1563]: time="2025-07-09T13:03:13.660015790Z" level=info msg="CreateContainer within sandbox \"5ac08be41b3a42532512f870f7a880fd73541636ad243316f2723db12cbff998\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"0d3702d161c899ce7470c7de3a48626193a9821f6471a073b13f8797d63c545b\""
Jul 9 13:03:13.660746 containerd[1563]: time="2025-07-09T13:03:13.660721913Z" level=info msg="StartContainer for \"0d3702d161c899ce7470c7de3a48626193a9821f6471a073b13f8797d63c545b\""
Jul 9 13:03:13.662186 containerd[1563]: time="2025-07-09T13:03:13.662145653Z" level=info msg="connecting to shim 0d3702d161c899ce7470c7de3a48626193a9821f6471a073b13f8797d63c545b" address="unix:///run/containerd/s/7eb5794b41ce1d3b2aed0a46de72b7ac07a8e717ba4212abc637d530ec26ed97" protocol=ttrpc version=3
Jul 9 13:03:13.690589 systemd[1]: Started cri-containerd-0d3702d161c899ce7470c7de3a48626193a9821f6471a073b13f8797d63c545b.scope - libcontainer container 0d3702d161c899ce7470c7de3a48626193a9821f6471a073b13f8797d63c545b.
Jul 9 13:03:13.766088 containerd[1563]: time="2025-07-09T13:03:13.766016892Z" level=info msg="StartContainer for \"0d3702d161c899ce7470c7de3a48626193a9821f6471a073b13f8797d63c545b\" returns successfully"
Jul 9 13:03:14.515651 containerd[1563]: time="2025-07-09T13:03:14.515606499Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0d3702d161c899ce7470c7de3a48626193a9821f6471a073b13f8797d63c545b\" id:\"bb12602b863cf4634a6e8765457770ffd7d0cf4f4b80dc74d0ae091edf042456\" pid:5663 exited_at:{seconds:1752066194 nanos:515310486}"
Jul 9 13:03:14.559274 kubelet[2695]: I0709 13:03:14.559198 2695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-766fbd8c89-f5gnv" podStartSLOduration=65.828352153 podStartE2EDuration="1m22.559175219s" podCreationTimestamp="2025-07-09 13:01:52 +0000 UTC" firstStartedPulling="2025-07-09 13:02:56.884689787 +0000 UTC m=+81.912643570" lastFinishedPulling="2025-07-09 13:03:13.615512853 +0000 UTC m=+98.643466636" observedRunningTime="2025-07-09 13:03:14.558486389 +0000 UTC m=+99.586440172" watchObservedRunningTime="2025-07-09 13:03:14.559175219 +0000 UTC m=+99.587129002"
Jul 9 13:03:15.933318 systemd[1]: Started sshd@24-10.0.0.14:22-10.0.0.1:39282.service - OpenSSH per-connection server daemon (10.0.0.1:39282).
Jul 9 13:03:16.011410 sshd[5674]: Accepted publickey for core from 10.0.0.1 port 39282 ssh2: RSA SHA256:Ehsv9iPAmIJbEnlorOi35d2Kryfd05fXf88yv2g5tlI
Jul 9 13:03:16.013402 sshd-session[5674]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 13:03:16.019981 systemd-logind[1538]: New session 25 of user core.
Jul 9 13:03:16.027539 systemd[1]: Started session-25.scope - Session 25 of User core.
Jul 9 13:03:16.197073 sshd[5677]: Connection closed by 10.0.0.1 port 39282
Jul 9 13:03:16.197941 sshd-session[5674]: pam_unix(sshd:session): session closed for user core
Jul 9 13:03:16.203617 systemd[1]: sshd@24-10.0.0.14:22-10.0.0.1:39282.service: Deactivated successfully.
Jul 9 13:03:16.205993 systemd[1]: session-25.scope: Deactivated successfully.
Jul 9 13:03:16.206771 systemd-logind[1538]: Session 25 logged out. Waiting for processes to exit.
Jul 9 13:03:16.208232 systemd-logind[1538]: Removed session 25.
Jul 9 13:03:18.656260 containerd[1563]: time="2025-07-09T13:03:18.656175805Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 13:03:18.657054 containerd[1563]: time="2025-07-09T13:03:18.656988530Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=14703784"
Jul 9 13:03:18.658266 containerd[1563]: time="2025-07-09T13:03:18.658217185Z" level=info msg="ImageCreate event name:\"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 13:03:18.660560 containerd[1563]: time="2025-07-09T13:03:18.660518388Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 13:03:18.661294 containerd[1563]: time="2025-07-09T13:03:18.661250459Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"16196439\" in 5.0454504s"
Jul 9 13:03:18.661294 containerd[1563]: time="2025-07-09T13:03:18.661283442Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\""
Jul 9 13:03:18.663805 containerd[1563]: time="2025-07-09T13:03:18.663296558Z" level=info msg="CreateContainer within sandbox \"68219085944522a54af15e80a34cce31c8929db2a1206898830ab917093e9dca\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Jul 9 13:03:18.671539 containerd[1563]: time="2025-07-09T13:03:18.671502673Z" level=info msg="Container 95dddde3eb5e5ca91da9871f91f84fff07aacadaa02995326537123260bae246: CDI devices from CRI Config.CDIDevices: []"
Jul 9 13:03:18.683973 containerd[1563]: time="2025-07-09T13:03:18.683823697Z" level=info msg="CreateContainer within sandbox \"68219085944522a54af15e80a34cce31c8929db2a1206898830ab917093e9dca\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"95dddde3eb5e5ca91da9871f91f84fff07aacadaa02995326537123260bae246\""
Jul 9 13:03:18.684554 containerd[1563]: time="2025-07-09T13:03:18.684520522Z" level=info msg="StartContainer for \"95dddde3eb5e5ca91da9871f91f84fff07aacadaa02995326537123260bae246\""
Jul 9 13:03:18.686577 containerd[1563]: time="2025-07-09T13:03:18.686533888Z" level=info msg="connecting to shim 95dddde3eb5e5ca91da9871f91f84fff07aacadaa02995326537123260bae246" address="unix:///run/containerd/s/445322a6bb92bceb579e193c33049c140b658e98fbac87685e5f5bd4974c7349" protocol=ttrpc version=3
Jul 9 13:03:18.714653 systemd[1]: Started cri-containerd-95dddde3eb5e5ca91da9871f91f84fff07aacadaa02995326537123260bae246.scope - libcontainer container 95dddde3eb5e5ca91da9871f91f84fff07aacadaa02995326537123260bae246.
Jul 9 13:03:18.763030 containerd[1563]: time="2025-07-09T13:03:18.762957033Z" level=info msg="StartContainer for \"95dddde3eb5e5ca91da9871f91f84fff07aacadaa02995326537123260bae246\" returns successfully"
Jul 9 13:03:19.231602 kubelet[2695]: I0709 13:03:19.231552 2695 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Jul 9 13:03:19.232821 kubelet[2695]: I0709 13:03:19.231619 2695 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Jul 9 13:03:19.499349 kubelet[2695]: I0709 13:03:19.499170 2695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-x4tjz" podStartSLOduration=60.207022315 podStartE2EDuration="1m27.499144792s" podCreationTimestamp="2025-07-09 13:01:52 +0000 UTC" firstStartedPulling="2025-07-09 13:02:51.369950005 +0000 UTC m=+76.397903788" lastFinishedPulling="2025-07-09 13:03:18.662072481 +0000 UTC m=+103.690026265" observedRunningTime="2025-07-09 13:03:19.497910347 +0000 UTC m=+104.525864130" watchObservedRunningTime="2025-07-09 13:03:19.499144792 +0000 UTC m=+104.527098575"
Jul 9 13:03:21.219473 systemd[1]: Started sshd@25-10.0.0.14:22-10.0.0.1:39292.service - OpenSSH per-connection server daemon (10.0.0.1:39292).
Jul 9 13:03:21.298501 sshd[5733]: Accepted publickey for core from 10.0.0.1 port 39292 ssh2: RSA SHA256:Ehsv9iPAmIJbEnlorOi35d2Kryfd05fXf88yv2g5tlI
Jul 9 13:03:21.300547 sshd-session[5733]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 13:03:21.307247 systemd-logind[1538]: New session 26 of user core.
Jul 9 13:03:21.314512 systemd[1]: Started session-26.scope - Session 26 of User core.
Jul 9 13:03:21.525014 sshd[5737]: Connection closed by 10.0.0.1 port 39292
Jul 9 13:03:21.525433 sshd-session[5733]: pam_unix(sshd:session): session closed for user core
Jul 9 13:03:21.533416 systemd[1]: sshd@25-10.0.0.14:22-10.0.0.1:39292.service: Deactivated successfully.
Jul 9 13:03:21.536026 systemd[1]: session-26.scope: Deactivated successfully.
Jul 9 13:03:21.537774 systemd-logind[1538]: Session 26 logged out. Waiting for processes to exit.
Jul 9 13:03:21.538952 systemd-logind[1538]: Removed session 26.
Jul 9 13:03:21.694184 kubelet[2695]: I0709 13:03:21.694120 2695 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jul 9 13:03:25.452821 containerd[1563]: time="2025-07-09T13:03:25.452765700Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0d3702d161c899ce7470c7de3a48626193a9821f6471a073b13f8797d63c545b\" id:\"a604afedf742f7e18adad1eaa13e117f6885142ba6d6bfc410c24f5ffcdd5da5\" pid:5772 exited_at:{seconds:1752066205 nanos:452230354}"
Jul 9 13:03:25.508715 containerd[1563]: time="2025-07-09T13:03:25.508657023Z" level=info msg="TaskExit event in podsandbox handler container_id:\"13622ed7551121985b2fb9b2d09d4dbd8ef786b791008f6aab516631b960f9c8\" id:\"bdc061e25dd920a06bf8d497f050de3398b78d31f94e30b4c9c8ba4c40300f48\" pid:5790 exited_at:{seconds:1752066205 nanos:508143900}"
Jul 9 13:03:26.539291 systemd[1]: Started sshd@26-10.0.0.14:22-10.0.0.1:40108.service - OpenSSH per-connection server daemon (10.0.0.1:40108).
Jul 9 13:03:26.601274 sshd[5808]: Accepted publickey for core from 10.0.0.1 port 40108 ssh2: RSA SHA256:Ehsv9iPAmIJbEnlorOi35d2Kryfd05fXf88yv2g5tlI
Jul 9 13:03:26.603083 sshd-session[5808]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 13:03:26.607937 systemd-logind[1538]: New session 27 of user core.
Jul 9 13:03:26.614557 systemd[1]: Started session-27.scope - Session 27 of User core.
Jul 9 13:03:26.778403 sshd[5811]: Connection closed by 10.0.0.1 port 40108
Jul 9 13:03:26.779119 sshd-session[5808]: pam_unix(sshd:session): session closed for user core
Jul 9 13:03:26.787817 systemd[1]: sshd@26-10.0.0.14:22-10.0.0.1:40108.service: Deactivated successfully.
Jul 9 13:03:26.790865 systemd[1]: session-27.scope: Deactivated successfully.
Jul 9 13:03:26.792006 systemd-logind[1538]: Session 27 logged out. Waiting for processes to exit.
Jul 9 13:03:26.793620 systemd-logind[1538]: Removed session 27.