Jul 15 05:11:33.839491 kernel: Linux version 6.12.36-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Tue Jul 15 03:28:48 -00 2025
Jul 15 05:11:33.839526 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=926b029026d98240a9e8b6527b65fc026ae523bea87c3b77ffd7237bcc7be4fb
Jul 15 05:11:33.839539 kernel: BIOS-provided physical RAM map:
Jul 15 05:11:33.839549 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jul 15 05:11:33.839558 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jul 15 05:11:33.839567 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jul 15 05:11:33.839595 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Jul 15 05:11:33.839608 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Jul 15 05:11:33.839622 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jul 15 05:11:33.839631 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Jul 15 05:11:33.839640 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jul 15 05:11:33.839649 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jul 15 05:11:33.839658 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jul 15 05:11:33.839667 kernel: NX (Execute Disable) protection: active
Jul 15 05:11:33.839682 kernel: APIC: Static calls initialized
Jul 15 05:11:33.839691 kernel: SMBIOS 2.8 present.
Jul 15 05:11:33.839706 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Jul 15 05:11:33.839715 kernel: DMI: Memory slots populated: 1/1
Jul 15 05:11:33.839725 kernel: Hypervisor detected: KVM
Jul 15 05:11:33.839735 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jul 15 05:11:33.839745 kernel: kvm-clock: using sched offset of 4401216260 cycles
Jul 15 05:11:33.839756 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jul 15 05:11:33.839765 kernel: tsc: Detected 2794.750 MHz processor
Jul 15 05:11:33.839779 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jul 15 05:11:33.839789 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jul 15 05:11:33.839800 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Jul 15 05:11:33.839810 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jul 15 05:11:33.839820 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jul 15 05:11:33.839831 kernel: Using GB pages for direct mapping
Jul 15 05:11:33.839841 kernel: ACPI: Early table checksum verification disabled
Jul 15 05:11:33.839851 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Jul 15 05:11:33.839861 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 15 05:11:33.839886 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jul 15 05:11:33.839897 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 15 05:11:33.839907 kernel: ACPI: FACS 0x000000009CFE0000 000040
Jul 15 05:11:33.839917 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 15 05:11:33.839926 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 15 05:11:33.839936 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 15 05:11:33.839946 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 15 05:11:33.839957 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Jul 15 05:11:33.839975 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Jul 15 05:11:33.839986 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Jul 15 05:11:33.839996 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Jul 15 05:11:33.840007 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Jul 15 05:11:33.840017 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Jul 15 05:11:33.840027 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Jul 15 05:11:33.840041 kernel: No NUMA configuration found
Jul 15 05:11:33.840052 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Jul 15 05:11:33.840062 kernel: NODE_DATA(0) allocated [mem 0x9cfd4dc0-0x9cfdbfff]
Jul 15 05:11:33.840072 kernel: Zone ranges:
Jul 15 05:11:33.840083 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jul 15 05:11:33.840094 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Jul 15 05:11:33.840104 kernel: Normal empty
Jul 15 05:11:33.840115 kernel: Device empty
Jul 15 05:11:33.840125 kernel: Movable zone start for each node
Jul 15 05:11:33.840135 kernel: Early memory node ranges
Jul 15 05:11:33.840150 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jul 15 05:11:33.840160 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Jul 15 05:11:33.840170 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Jul 15 05:11:33.840181 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jul 15 05:11:33.840191 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jul 15 05:11:33.840201 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Jul 15 05:11:33.840211 kernel: ACPI: PM-Timer IO Port: 0x608
Jul 15 05:11:33.840226 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jul 15 05:11:33.840237 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jul 15 05:11:33.840250 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jul 15 05:11:33.840260 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jul 15 05:11:33.840273 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jul 15 05:11:33.840283 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jul 15 05:11:33.840294 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jul 15 05:11:33.840304 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jul 15 05:11:33.840314 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jul 15 05:11:33.840324 kernel: TSC deadline timer available
Jul 15 05:11:33.840334 kernel: CPU topo: Max. logical packages: 1
Jul 15 05:11:33.840347 kernel: CPU topo: Max. logical dies: 1
Jul 15 05:11:33.840357 kernel: CPU topo: Max. dies per package: 1
Jul 15 05:11:33.840367 kernel: CPU topo: Max. threads per core: 1
Jul 15 05:11:33.840377 kernel: CPU topo: Num. cores per package: 4
Jul 15 05:11:33.840388 kernel: CPU topo: Num. threads per package: 4
Jul 15 05:11:33.840400 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Jul 15 05:11:33.840411 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jul 15 05:11:33.840427 kernel: kvm-guest: KVM setup pv remote TLB flush
Jul 15 05:11:33.840438 kernel: kvm-guest: setup PV sched yield
Jul 15 05:11:33.840448 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Jul 15 05:11:33.840462 kernel: Booting paravirtualized kernel on KVM
Jul 15 05:11:33.840472 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jul 15 05:11:33.840483 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Jul 15 05:11:33.840493 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Jul 15 05:11:33.840504 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Jul 15 05:11:33.840514 kernel: pcpu-alloc: [0] 0 1 2 3
Jul 15 05:11:33.840524 kernel: kvm-guest: PV spinlocks enabled
Jul 15 05:11:33.840534 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jul 15 05:11:33.840547 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=926b029026d98240a9e8b6527b65fc026ae523bea87c3b77ffd7237bcc7be4fb
Jul 15 05:11:33.840561 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 15 05:11:33.840598 kernel: random: crng init done
Jul 15 05:11:33.840609 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 15 05:11:33.840620 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 15 05:11:33.840630 kernel: Fallback order for Node 0: 0
Jul 15 05:11:33.840640 kernel: Built 1 zonelists, mobility grouping on. Total pages: 642938
Jul 15 05:11:33.840650 kernel: Policy zone: DMA32
Jul 15 05:11:33.840658 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 15 05:11:33.840669 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jul 15 05:11:33.840677 kernel: ftrace: allocating 40097 entries in 157 pages
Jul 15 05:11:33.840685 kernel: ftrace: allocated 157 pages with 5 groups
Jul 15 05:11:33.840692 kernel: Dynamic Preempt: voluntary
Jul 15 05:11:33.840701 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 15 05:11:33.840712 kernel: rcu: RCU event tracing is enabled.
Jul 15 05:11:33.840723 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jul 15 05:11:33.840733 kernel: Trampoline variant of Tasks RCU enabled.
Jul 15 05:11:33.840746 kernel: Rude variant of Tasks RCU enabled.
Jul 15 05:11:33.840757 kernel: Tracing variant of Tasks RCU enabled.
Jul 15 05:11:33.840764 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 15 05:11:33.840772 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jul 15 05:11:33.840780 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 15 05:11:33.840788 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 15 05:11:33.840795 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 15 05:11:33.840803 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jul 15 05:11:33.840811 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 15 05:11:33.840827 kernel: Console: colour VGA+ 80x25
Jul 15 05:11:33.840835 kernel: printk: legacy console [ttyS0] enabled
Jul 15 05:11:33.840843 kernel: ACPI: Core revision 20240827
Jul 15 05:11:33.840851 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jul 15 05:11:33.840862 kernel: APIC: Switch to symmetric I/O mode setup
Jul 15 05:11:33.840881 kernel: x2apic enabled
Jul 15 05:11:33.840889 kernel: APIC: Switched APIC routing to: physical x2apic
Jul 15 05:11:33.840900 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jul 15 05:11:33.840908 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jul 15 05:11:33.840918 kernel: kvm-guest: setup PV IPIs
Jul 15 05:11:33.840926 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jul 15 05:11:33.840935 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848e100549, max_idle_ns: 440795215505 ns
Jul 15 05:11:33.840943 kernel: Calibrating delay loop (skipped) preset value.. 5589.50 BogoMIPS (lpj=2794750)
Jul 15 05:11:33.840951 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jul 15 05:11:33.840960 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jul 15 05:11:33.840975 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jul 15 05:11:33.840989 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jul 15 05:11:33.841005 kernel: Spectre V2 : Mitigation: Retpolines
Jul 15 05:11:33.841032 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jul 15 05:11:33.841044 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Jul 15 05:11:33.841056 kernel: RETBleed: Mitigation: untrained return thunk
Jul 15 05:11:33.841081 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jul 15 05:11:33.841115 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jul 15 05:11:33.841148 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jul 15 05:11:33.841163 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jul 15 05:11:33.841175 kernel: x86/bugs: return thunk changed
Jul 15 05:11:33.841204 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jul 15 05:11:33.841238 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jul 15 05:11:33.841260 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jul 15 05:11:33.841271 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jul 15 05:11:33.841282 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jul 15 05:11:33.841291 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jul 15 05:11:33.841299 kernel: Freeing SMP alternatives memory: 32K
Jul 15 05:11:33.841307 kernel: pid_max: default: 32768 minimum: 301
Jul 15 05:11:33.841315 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jul 15 05:11:33.841327 kernel: landlock: Up and running.
Jul 15 05:11:33.841335 kernel: SELinux: Initializing.
Jul 15 05:11:33.841344 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 15 05:11:33.841355 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 15 05:11:33.841364 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Jul 15 05:11:33.841372 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jul 15 05:11:33.841379 kernel: ... version: 0
Jul 15 05:11:33.841387 kernel: ... bit width: 48
Jul 15 05:11:33.841395 kernel: ... generic registers: 6
Jul 15 05:11:33.841405 kernel: ... value mask: 0000ffffffffffff
Jul 15 05:11:33.841413 kernel: ... max period: 00007fffffffffff
Jul 15 05:11:33.841421 kernel: ... fixed-purpose events: 0
Jul 15 05:11:33.841429 kernel: ... event mask: 000000000000003f
Jul 15 05:11:33.841437 kernel: signal: max sigframe size: 1776
Jul 15 05:11:33.841445 kernel: rcu: Hierarchical SRCU implementation.
Jul 15 05:11:33.841453 kernel: rcu: Max phase no-delay instances is 400.
Jul 15 05:11:33.841462 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jul 15 05:11:33.841470 kernel: smp: Bringing up secondary CPUs ...
Jul 15 05:11:33.841480 kernel: smpboot: x86: Booting SMP configuration:
Jul 15 05:11:33.841488 kernel: .... node #0, CPUs: #1 #2 #3
Jul 15 05:11:33.841496 kernel: smp: Brought up 1 node, 4 CPUs
Jul 15 05:11:33.841504 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS)
Jul 15 05:11:33.841513 kernel: Memory: 2428912K/2571752K available (14336K kernel code, 2430K rwdata, 9956K rodata, 54608K init, 2360K bss, 136904K reserved, 0K cma-reserved)
Jul 15 05:11:33.841521 kernel: devtmpfs: initialized
Jul 15 05:11:33.841528 kernel: x86/mm: Memory block size: 128MB
Jul 15 05:11:33.841536 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 15 05:11:33.841544 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jul 15 05:11:33.841555 kernel: pinctrl core: initialized pinctrl subsystem
Jul 15 05:11:33.841563 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 15 05:11:33.841594 kernel: audit: initializing netlink subsys (disabled)
Jul 15 05:11:33.841602 kernel: audit: type=2000 audit(1752556290.899:1): state=initialized audit_enabled=0 res=1
Jul 15 05:11:33.841610 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 15 05:11:33.841618 kernel: thermal_sys: Registered thermal governor 'user_space'
Jul 15 05:11:33.841626 kernel: cpuidle: using governor menu
Jul 15 05:11:33.841634 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 15 05:11:33.841642 kernel: dca service started, version 1.12.1
Jul 15 05:11:33.841654 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Jul 15 05:11:33.841664 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Jul 15 05:11:33.841675 kernel: PCI: Using configuration type 1 for base access
Jul 15 05:11:33.841686 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jul 15 05:11:33.841697 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 15 05:11:33.841709 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jul 15 05:11:33.841720 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 15 05:11:33.841731 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jul 15 05:11:33.841746 kernel: ACPI: Added _OSI(Module Device)
Jul 15 05:11:33.841758 kernel: ACPI: Added _OSI(Processor Device)
Jul 15 05:11:33.841770 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 15 05:11:33.841782 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 15 05:11:33.841793 kernel: ACPI: Interpreter enabled
Jul 15 05:11:33.841805 kernel: ACPI: PM: (supports S0 S3 S5)
Jul 15 05:11:33.841816 kernel: ACPI: Using IOAPIC for interrupt routing
Jul 15 05:11:33.841828 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jul 15 05:11:33.841840 kernel: PCI: Using E820 reservations for host bridge windows
Jul 15 05:11:33.841852 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jul 15 05:11:33.841879 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 15 05:11:33.842113 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 15 05:11:33.842256 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jul 15 05:11:33.842464 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jul 15 05:11:33.842484 kernel: PCI host bridge to bus 0000:00
Jul 15 05:11:33.842752 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jul 15 05:11:33.842927 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jul 15 05:11:33.843087 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jul 15 05:11:33.843223 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Jul 15 05:11:33.843340 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jul 15 05:11:33.843452 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Jul 15 05:11:33.843604 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 15 05:11:33.843858 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Jul 15 05:11:33.844125 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Jul 15 05:11:33.844311 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
Jul 15 05:11:33.844444 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
Jul 15 05:11:33.844598 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
Jul 15 05:11:33.844741 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jul 15 05:11:33.844988 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Jul 15 05:11:33.845169 kernel: pci 0000:00:02.0: BAR 0 [io 0xc0c0-0xc0df]
Jul 15 05:11:33.845333 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
Jul 15 05:11:33.845495 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
Jul 15 05:11:33.845705 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Jul 15 05:11:33.845879 kernel: pci 0000:00:03.0: BAR 0 [io 0xc000-0xc07f]
Jul 15 05:11:33.846045 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
Jul 15 05:11:33.846206 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
Jul 15 05:11:33.846390 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jul 15 05:11:33.846617 kernel: pci 0000:00:04.0: BAR 0 [io 0xc0e0-0xc0ff]
Jul 15 05:11:33.846776 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebd3000-0xfebd3fff]
Jul 15 05:11:33.846951 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe008000-0xfe00bfff 64bit pref]
Jul 15 05:11:33.847111 kernel: pci 0000:00:04.0: ROM [mem 0xfeb80000-0xfebbffff pref]
Jul 15 05:11:33.847295 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Jul 15 05:11:33.847461 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jul 15 05:11:33.847683 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Jul 15 05:11:33.847851 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc100-0xc11f]
Jul 15 05:11:33.848029 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd4000-0xfebd4fff]
Jul 15 05:11:33.848222 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Jul 15 05:11:33.848390 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Jul 15 05:11:33.848407 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jul 15 05:11:33.848419 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jul 15 05:11:33.848439 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jul 15 05:11:33.848453 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jul 15 05:11:33.848467 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jul 15 05:11:33.848481 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jul 15 05:11:33.848495 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jul 15 05:11:33.848508 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jul 15 05:11:33.848522 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jul 15 05:11:33.848536 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jul 15 05:11:33.848550 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jul 15 05:11:33.848603 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jul 15 05:11:33.848618 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jul 15 05:11:33.848629 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jul 15 05:11:33.848637 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jul 15 05:11:33.848645 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jul 15 05:11:33.848653 kernel: iommu: Default domain type: Translated
Jul 15 05:11:33.848662 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jul 15 05:11:33.848670 kernel: PCI: Using ACPI for IRQ routing
Jul 15 05:11:33.848678 kernel: PCI: pci_cache_line_size set to 64 bytes
Jul 15 05:11:33.848690 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jul 15 05:11:33.848698 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Jul 15 05:11:33.848842 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jul 15 05:11:33.849014 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jul 15 05:11:33.849854 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jul 15 05:11:33.849889 kernel: vgaarb: loaded
Jul 15 05:11:33.849902 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jul 15 05:11:33.849913 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jul 15 05:11:33.849930 kernel: clocksource: Switched to clocksource kvm-clock
Jul 15 05:11:33.849941 kernel: VFS: Disk quotas dquot_6.6.0
Jul 15 05:11:33.849952 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 15 05:11:33.849963 kernel: pnp: PnP ACPI init
Jul 15 05:11:33.850163 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Jul 15 05:11:33.850182 kernel: pnp: PnP ACPI: found 6 devices
Jul 15 05:11:33.850194 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jul 15 05:11:33.850205 kernel: NET: Registered PF_INET protocol family
Jul 15 05:11:33.850221 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 15 05:11:33.850232 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 15 05:11:33.850243 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 15 05:11:33.850254 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 15 05:11:33.850266 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jul 15 05:11:33.850277 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 15 05:11:33.850288 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 15 05:11:33.850299 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 15 05:11:33.850309 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 15 05:11:33.850324 kernel: NET: Registered PF_XDP protocol family
Jul 15 05:11:33.850472 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jul 15 05:11:33.850645 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jul 15 05:11:33.850802 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jul 15 05:11:33.850959 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Jul 15 05:11:33.851077 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jul 15 05:11:33.851214 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Jul 15 05:11:33.851231 kernel: PCI: CLS 0 bytes, default 64
Jul 15 05:11:33.851243 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848e100549, max_idle_ns: 440795215505 ns
Jul 15 05:11:33.851261 kernel: Initialise system trusted keyrings
Jul 15 05:11:33.851272 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 15 05:11:33.851283 kernel: Key type asymmetric registered
Jul 15 05:11:33.851294 kernel: Asymmetric key parser 'x509' registered
Jul 15 05:11:33.851305 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jul 15 05:11:33.851316 kernel: io scheduler mq-deadline registered
Jul 15 05:11:33.851327 kernel: io scheduler kyber registered
Jul 15 05:11:33.851337 kernel: io scheduler bfq registered
Jul 15 05:11:33.851348 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jul 15 05:11:33.851362 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jul 15 05:11:33.851371 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jul 15 05:11:33.851379 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jul 15 05:11:33.851388 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 15 05:11:33.851396 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jul 15 05:11:33.851405 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jul 15 05:11:33.851413 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jul 15 05:11:33.851421 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jul 15 05:11:33.851429 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jul 15 05:11:33.851604 kernel: rtc_cmos 00:04: RTC can wake from S4
Jul 15 05:11:33.851753 kernel: rtc_cmos 00:04: registered as rtc0
Jul 15 05:11:33.851910 kernel: rtc_cmos 00:04: setting system clock to 2025-07-15T05:11:33 UTC (1752556293)
Jul 15 05:11:33.852031 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Jul 15 05:11:33.852041 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jul 15 05:11:33.852050 kernel: NET: Registered PF_INET6 protocol family
Jul 15 05:11:33.852058 kernel: Segment Routing with IPv6
Jul 15 05:11:33.852072 kernel: In-situ OAM (IOAM) with IPv6
Jul 15 05:11:33.852080 kernel: NET: Registered PF_PACKET protocol family
Jul 15 05:11:33.852091 kernel: Key type dns_resolver registered
Jul 15 05:11:33.852102 kernel: IPI shorthand broadcast: enabled
Jul 15 05:11:33.852114 kernel: sched_clock: Marking stable (3170006156, 115215055)->(3307955244, -22734033)
Jul 15 05:11:33.852125 kernel: registered taskstats version 1
Jul 15 05:11:33.852136 kernel: Loading compiled-in X.509 certificates
Jul 15 05:11:33.852147 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.36-flatcar: a24478b628e55368911ce1800a2bd6bc158938c7'
Jul 15 05:11:33.852158 kernel: Demotion targets for Node 0: null
Jul 15 05:11:33.852168 kernel: Key type .fscrypt registered
Jul 15 05:11:33.852183 kernel: Key type fscrypt-provisioning registered
Jul 15 05:11:33.852195 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 15 05:11:33.852206 kernel: ima: Allocated hash algorithm: sha1
Jul 15 05:11:33.852217 kernel: ima: No architecture policies found
Jul 15 05:11:33.852227 kernel: clk: Disabling unused clocks
Jul 15 05:11:33.852238 kernel: Warning: unable to open an initial console.
Jul 15 05:11:33.852249 kernel: Freeing unused kernel image (initmem) memory: 54608K
Jul 15 05:11:33.852260 kernel: Write protecting the kernel read-only data: 24576k
Jul 15 05:11:33.852273 kernel: Freeing unused kernel image (rodata/data gap) memory: 284K
Jul 15 05:11:33.852281 kernel: Run /init as init process
Jul 15 05:11:33.852289 kernel: with arguments:
Jul 15 05:11:33.852297 kernel: /init
Jul 15 05:11:33.852305 kernel: with environment:
Jul 15 05:11:33.852313 kernel: HOME=/
Jul 15 05:11:33.852321 kernel: TERM=linux
Jul 15 05:11:33.852329 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 15 05:11:33.852338 systemd[1]: Successfully made /usr/ read-only.
Jul 15 05:11:33.852353 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 15 05:11:33.852375 systemd[1]: Detected virtualization kvm.
Jul 15 05:11:33.852384 systemd[1]: Detected architecture x86-64.
Jul 15 05:11:33.852392 systemd[1]: Running in initrd.
Jul 15 05:11:33.852401 systemd[1]: No hostname configured, using default hostname.
Jul 15 05:11:33.852413 systemd[1]: Hostname set to .
Jul 15 05:11:33.852421 systemd[1]: Initializing machine ID from VM UUID.
Jul 15 05:11:33.852430 systemd[1]: Queued start job for default target initrd.target.
Jul 15 05:11:33.852439 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 15 05:11:33.852448 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 15 05:11:33.852458 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 15 05:11:33.852467 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 15 05:11:33.852476 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 15 05:11:33.852488 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 15 05:11:33.852498 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 15 05:11:33.852508 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 15 05:11:33.852516 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 15 05:11:33.852526 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 15 05:11:33.852537 systemd[1]: Reached target paths.target - Path Units.
Jul 15 05:11:33.852550 systemd[1]: Reached target slices.target - Slice Units.
Jul 15 05:11:33.852565 systemd[1]: Reached target swap.target - Swaps.
Jul 15 05:11:33.852628 systemd[1]: Reached target timers.target - Timer Units.
Jul 15 05:11:33.852640 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 15 05:11:33.852653 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 15 05:11:33.852665 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 15 05:11:33.852677 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jul 15 05:11:33.852689 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 15 05:11:33.852702 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 15 05:11:33.852714 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 15 05:11:33.852730 systemd[1]: Reached target sockets.target - Socket Units. Jul 15 05:11:33.852743 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jul 15 05:11:33.852755 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 15 05:11:33.852767 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jul 15 05:11:33.852780 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jul 15 05:11:33.852799 systemd[1]: Starting systemd-fsck-usr.service... Jul 15 05:11:33.852811 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 15 05:11:33.852824 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 15 05:11:33.852836 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 15 05:11:33.852848 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jul 15 05:11:33.852861 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 15 05:11:33.852888 systemd[1]: Finished systemd-fsck-usr.service. Jul 15 05:11:33.852901 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 15 05:11:33.852947 systemd-journald[220]: Collecting audit messages is disabled. Jul 15 05:11:33.852981 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. 
Jul 15 05:11:33.852993 systemd-journald[220]: Journal started Jul 15 05:11:33.853019 systemd-journald[220]: Runtime Journal (/run/log/journal/317a30269a444832a7cdd19947564ec5) is 6M, max 48.6M, 42.5M free. Jul 15 05:11:33.842772 systemd-modules-load[221]: Inserted module 'overlay' Jul 15 05:11:33.855553 systemd[1]: Started systemd-journald.service - Journal Service. Jul 15 05:11:33.859766 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 15 05:11:33.897964 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jul 15 05:11:33.897996 kernel: Bridge firewalling registered Jul 15 05:11:33.863748 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 15 05:11:33.872695 systemd-modules-load[221]: Inserted module 'br_netfilter' Jul 15 05:11:33.903947 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 15 05:11:33.907731 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 15 05:11:33.911858 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 15 05:11:33.913169 systemd-tmpfiles[240]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jul 15 05:11:33.916750 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 15 05:11:33.926918 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 15 05:11:33.927818 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 15 05:11:33.942969 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 15 05:11:33.946239 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 15 05:11:33.970780 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jul 15 05:11:33.972726 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jul 15 05:11:34.005604 dracut-cmdline[263]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=926b029026d98240a9e8b6527b65fc026ae523bea87c3b77ffd7237bcc7be4fb Jul 15 05:11:34.010529 systemd-resolved[255]: Positive Trust Anchors: Jul 15 05:11:34.010538 systemd-resolved[255]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 15 05:11:34.010591 systemd-resolved[255]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 15 05:11:34.013154 systemd-resolved[255]: Defaulting to hostname 'linux'. Jul 15 05:11:34.014721 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 15 05:11:34.022139 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 15 05:11:34.156613 kernel: SCSI subsystem initialized Jul 15 05:11:34.166614 kernel: Loading iSCSI transport class v2.0-870. 
Jul 15 05:11:34.177601 kernel: iscsi: registered transport (tcp) Jul 15 05:11:34.213971 kernel: iscsi: registered transport (qla4xxx) Jul 15 05:11:34.214048 kernel: QLogic iSCSI HBA Driver Jul 15 05:11:34.240121 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 15 05:11:34.272260 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 15 05:11:34.274427 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 15 05:11:34.346447 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jul 15 05:11:34.348636 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jul 15 05:11:34.410634 kernel: raid6: avx2x4 gen() 29334 MB/s Jul 15 05:11:34.427603 kernel: raid6: avx2x2 gen() 30524 MB/s Jul 15 05:11:34.444733 kernel: raid6: avx2x1 gen() 24464 MB/s Jul 15 05:11:34.444766 kernel: raid6: using algorithm avx2x2 gen() 30524 MB/s Jul 15 05:11:34.462693 kernel: raid6: .... xor() 18112 MB/s, rmw enabled Jul 15 05:11:34.462717 kernel: raid6: using avx2x2 recovery algorithm Jul 15 05:11:34.483611 kernel: xor: automatically using best checksumming function avx Jul 15 05:11:34.658616 kernel: Btrfs loaded, zoned=no, fsverity=no Jul 15 05:11:34.667522 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jul 15 05:11:34.669744 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 15 05:11:34.715318 systemd-udevd[473]: Using default interface naming scheme 'v255'. Jul 15 05:11:34.722810 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 15 05:11:34.727696 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jul 15 05:11:34.763458 dracut-pre-trigger[481]: rd.md=0: removing MD RAID activation Jul 15 05:11:34.795004 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
Jul 15 05:11:34.798767 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 15 05:11:34.878519 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 15 05:11:34.882963 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jul 15 05:11:34.935611 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Jul 15 05:11:34.942941 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jul 15 05:11:34.950927 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Jul 15 05:11:34.950957 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jul 15 05:11:34.950969 kernel: GPT:9289727 != 19775487 Jul 15 05:11:34.950987 kernel: GPT:Alternate GPT header not at the end of the disk. Jul 15 05:11:34.950997 kernel: GPT:9289727 != 19775487 Jul 15 05:11:34.951007 kernel: GPT: Use GNU Parted to correct GPT errors. Jul 15 05:11:34.951018 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 15 05:11:34.956611 kernel: cryptd: max_cpu_qlen set to 1000 Jul 15 05:11:34.962595 kernel: libata version 3.00 loaded. Jul 15 05:11:34.968596 kernel: AES CTR mode by8 optimization enabled Jul 15 05:11:34.973613 kernel: ahci 0000:00:1f.2: version 3.0 Jul 15 05:11:34.973904 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 15 05:11:34.978046 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jul 15 05:11:34.978066 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Jul 15 05:11:34.978259 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Jul 15 05:11:34.974155 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 15 05:11:34.981288 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jul 15 05:11:34.981682 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... 
Jul 15 05:11:34.987323 kernel: scsi host0: ahci Jul 15 05:11:34.985250 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 15 05:11:34.992274 kernel: scsi host1: ahci Jul 15 05:11:34.992446 kernel: scsi host2: ahci Jul 15 05:11:34.993623 kernel: scsi host3: ahci Jul 15 05:11:34.993806 kernel: scsi host4: ahci Jul 15 05:11:34.993968 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jul 15 05:11:35.012601 kernel: scsi host5: ahci Jul 15 05:11:35.012858 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 lpm-pol 0 Jul 15 05:11:35.012877 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 lpm-pol 0 Jul 15 05:11:35.012892 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 lpm-pol 0 Jul 15 05:11:35.012906 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 lpm-pol 0 Jul 15 05:11:35.012920 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 lpm-pol 0 Jul 15 05:11:35.012940 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 lpm-pol 0 Jul 15 05:11:35.035488 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jul 15 05:11:35.072626 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 15 05:11:35.094747 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jul 15 05:11:35.102056 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jul 15 05:11:35.102482 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jul 15 05:11:35.111544 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. 
Jul 15 05:11:35.114031 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jul 15 05:11:35.236113 disk-uuid[634]: Primary Header is updated. Jul 15 05:11:35.236113 disk-uuid[634]: Secondary Entries is updated. Jul 15 05:11:35.236113 disk-uuid[634]: Secondary Header is updated. Jul 15 05:11:35.240585 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 15 05:11:35.245646 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 15 05:11:35.329249 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Jul 15 05:11:35.329324 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jul 15 05:11:35.329340 kernel: ata3.00: applying bridge limits Jul 15 05:11:35.331556 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jul 15 05:11:35.331600 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jul 15 05:11:35.332644 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jul 15 05:11:35.332711 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jul 15 05:11:35.333605 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jul 15 05:11:35.334602 kernel: ata3.00: configured for UDMA/100 Jul 15 05:11:35.336606 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jul 15 05:11:35.389172 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jul 15 05:11:35.389613 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jul 15 05:11:35.401621 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jul 15 05:11:35.826429 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jul 15 05:11:36.135139 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jul 15 05:11:36.136937 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 15 05:11:36.138443 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 15 05:11:36.140779 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... 
Jul 15 05:11:36.162428 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jul 15 05:11:36.289609 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 15 05:11:36.290020 disk-uuid[635]: The operation has completed successfully. Jul 15 05:11:36.326827 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 15 05:11:36.326956 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jul 15 05:11:36.375333 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jul 15 05:11:36.418335 sh[663]: Success Jul 15 05:11:36.483613 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jul 15 05:11:36.483692 kernel: device-mapper: uevent: version 1.0.3 Jul 15 05:11:36.485351 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jul 15 05:11:36.494602 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Jul 15 05:11:36.527117 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jul 15 05:11:36.559500 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jul 15 05:11:36.575211 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jul 15 05:11:36.580598 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' Jul 15 05:11:36.583338 kernel: BTRFS: device fsid eb96c768-dac4-4ca9-ae1d-82815d4ce00b devid 1 transid 36 /dev/mapper/usr (253:0) scanned by mount (675) Jul 15 05:11:36.583361 kernel: BTRFS info (device dm-0): first mount of filesystem eb96c768-dac4-4ca9-ae1d-82815d4ce00b Jul 15 05:11:36.583372 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jul 15 05:11:36.584982 kernel: BTRFS info (device dm-0): using free-space-tree Jul 15 05:11:36.589529 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. 
Jul 15 05:11:36.590430 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jul 15 05:11:36.591972 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jul 15 05:11:36.592890 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jul 15 05:11:36.595173 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jul 15 05:11:36.661491 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (706) Jul 15 05:11:36.661563 kernel: BTRFS info (device vda6): first mount of filesystem 86e7a055-b4ff-48a6-9a0a-c301ff74862f Jul 15 05:11:36.661602 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 15 05:11:36.663097 kernel: BTRFS info (device vda6): using free-space-tree Jul 15 05:11:36.671623 kernel: BTRFS info (device vda6): last unmount of filesystem 86e7a055-b4ff-48a6-9a0a-c301ff74862f Jul 15 05:11:36.671899 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jul 15 05:11:36.675652 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jul 15 05:11:36.736915 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 15 05:11:36.739893 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Jul 15 05:11:36.762265 ignition[797]: Ignition 2.21.0 Jul 15 05:11:36.762276 ignition[797]: Stage: fetch-offline Jul 15 05:11:36.762308 ignition[797]: no configs at "/usr/lib/ignition/base.d" Jul 15 05:11:36.762318 ignition[797]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 15 05:11:36.762411 ignition[797]: parsed url from cmdline: "" Jul 15 05:11:36.762415 ignition[797]: no config URL provided Jul 15 05:11:36.762420 ignition[797]: reading system config file "/usr/lib/ignition/user.ign" Jul 15 05:11:36.762432 ignition[797]: no config at "/usr/lib/ignition/user.ign" Jul 15 05:11:36.762460 ignition[797]: op(1): [started] loading QEMU firmware config module Jul 15 05:11:36.762466 ignition[797]: op(1): executing: "modprobe" "qemu_fw_cfg" Jul 15 05:11:36.771736 ignition[797]: op(1): [finished] loading QEMU firmware config module Jul 15 05:11:36.796722 systemd-networkd[849]: lo: Link UP Jul 15 05:11:36.796732 systemd-networkd[849]: lo: Gained carrier Jul 15 05:11:36.798677 systemd-networkd[849]: Enumeration completed Jul 15 05:11:36.799059 systemd-networkd[849]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 15 05:11:36.799064 systemd-networkd[849]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 15 05:11:36.800407 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 15 05:11:36.801001 systemd-networkd[849]: eth0: Link UP Jul 15 05:11:36.801005 systemd-networkd[849]: eth0: Gained carrier Jul 15 05:11:36.801014 systemd-networkd[849]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 15 05:11:36.810510 systemd[1]: Reached target network.target - Network. 
Jul 15 05:11:36.825130 ignition[797]: parsing config with SHA512: ceb5038b98a1c2d68b9faed2ed297b9a4878e08a6a3db56679a96ac815a1cd95434c19d175f08555234229447390f7160c8e938b216e6326e7163b6db251a5ef Jul 15 05:11:36.828489 unknown[797]: fetched base config from "system" Jul 15 05:11:36.829234 unknown[797]: fetched user config from "qemu" Jul 15 05:11:36.829610 ignition[797]: fetch-offline: fetch-offline passed Jul 15 05:11:36.829666 ignition[797]: Ignition finished successfully Jul 15 05:11:36.830320 systemd-networkd[849]: eth0: DHCPv4 address 10.0.0.51/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 15 05:11:36.834870 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jul 15 05:11:36.835535 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jul 15 05:11:36.839829 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jul 15 05:11:36.881612 ignition[857]: Ignition 2.21.0 Jul 15 05:11:36.881627 ignition[857]: Stage: kargs Jul 15 05:11:36.881762 ignition[857]: no configs at "/usr/lib/ignition/base.d" Jul 15 05:11:36.881773 ignition[857]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 15 05:11:36.883025 ignition[857]: kargs: kargs passed Jul 15 05:11:36.883095 ignition[857]: Ignition finished successfully Jul 15 05:11:36.888407 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jul 15 05:11:36.890262 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Jul 15 05:11:36.924864 ignition[865]: Ignition 2.21.0 Jul 15 05:11:36.924876 ignition[865]: Stage: disks Jul 15 05:11:36.927033 ignition[865]: no configs at "/usr/lib/ignition/base.d" Jul 15 05:11:36.927409 ignition[865]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 15 05:11:36.929720 ignition[865]: disks: disks passed Jul 15 05:11:36.929801 ignition[865]: Ignition finished successfully Jul 15 05:11:36.932844 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jul 15 05:11:36.934964 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jul 15 05:11:36.935246 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jul 15 05:11:36.935597 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 15 05:11:36.936081 systemd[1]: Reached target sysinit.target - System Initialization. Jul 15 05:11:36.936429 systemd[1]: Reached target basic.target - Basic System. Jul 15 05:11:36.947373 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jul 15 05:11:36.975726 systemd-fsck[875]: ROOT: clean, 15/553520 files, 52789/553472 blocks Jul 15 05:11:37.161103 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jul 15 05:11:37.163193 systemd[1]: Mounting sysroot.mount - /sysroot... Jul 15 05:11:37.299710 kernel: EXT4-fs (vda9): mounted filesystem 277c3938-5262-4ab1-8fa3-62fde82f8257 r/w with ordered data mode. Quota mode: none. Jul 15 05:11:37.300287 systemd[1]: Mounted sysroot.mount - /sysroot. Jul 15 05:11:37.301505 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jul 15 05:11:37.305142 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 15 05:11:37.308310 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jul 15 05:11:37.317055 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. 
Jul 15 05:11:37.317130 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 15 05:11:37.317179 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jul 15 05:11:37.332016 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jul 15 05:11:37.334899 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jul 15 05:11:37.343486 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (883) Jul 15 05:11:37.343525 kernel: BTRFS info (device vda6): first mount of filesystem 86e7a055-b4ff-48a6-9a0a-c301ff74862f Jul 15 05:11:37.343538 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 15 05:11:37.343550 kernel: BTRFS info (device vda6): using free-space-tree Jul 15 05:11:37.348500 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 15 05:11:37.413987 initrd-setup-root[907]: cut: /sysroot/etc/passwd: No such file or directory Jul 15 05:11:37.422355 initrd-setup-root[914]: cut: /sysroot/etc/group: No such file or directory Jul 15 05:11:37.426968 initrd-setup-root[921]: cut: /sysroot/etc/shadow: No such file or directory Jul 15 05:11:37.432005 initrd-setup-root[928]: cut: /sysroot/etc/gshadow: No such file or directory Jul 15 05:11:37.543540 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jul 15 05:11:37.545804 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jul 15 05:11:37.547684 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jul 15 05:11:37.573605 kernel: BTRFS info (device vda6): last unmount of filesystem 86e7a055-b4ff-48a6-9a0a-c301ff74862f Jul 15 05:11:37.580839 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jul 15 05:11:37.587248 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Jul 15 05:11:37.605653 ignition[997]: INFO : Ignition 2.21.0 Jul 15 05:11:37.605653 ignition[997]: INFO : Stage: mount Jul 15 05:11:37.607597 ignition[997]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 15 05:11:37.607597 ignition[997]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 15 05:11:37.607597 ignition[997]: INFO : mount: mount passed Jul 15 05:11:37.607597 ignition[997]: INFO : Ignition finished successfully Jul 15 05:11:37.614309 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jul 15 05:11:37.615966 systemd[1]: Starting ignition-files.service - Ignition (files)... Jul 15 05:11:37.640464 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 15 05:11:37.671599 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1009) Jul 15 05:11:37.673734 kernel: BTRFS info (device vda6): first mount of filesystem 86e7a055-b4ff-48a6-9a0a-c301ff74862f Jul 15 05:11:37.673821 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 15 05:11:37.673835 kernel: BTRFS info (device vda6): using free-space-tree Jul 15 05:11:37.678692 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jul 15 05:11:37.720015 ignition[1026]: INFO : Ignition 2.21.0 Jul 15 05:11:37.720015 ignition[1026]: INFO : Stage: files Jul 15 05:11:37.723247 ignition[1026]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 15 05:11:37.723247 ignition[1026]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 15 05:11:37.723247 ignition[1026]: DEBUG : files: compiled without relabeling support, skipping Jul 15 05:11:37.727159 ignition[1026]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 15 05:11:37.727159 ignition[1026]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 15 05:11:37.731614 ignition[1026]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 15 05:11:37.733174 ignition[1026]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 15 05:11:37.735027 unknown[1026]: wrote ssh authorized keys file for user: core Jul 15 05:11:37.736303 ignition[1026]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 15 05:11:37.737905 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jul 15 05:11:37.737905 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Jul 15 05:11:37.778203 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jul 15 05:11:38.036280 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jul 15 05:11:38.036280 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jul 15 05:11:38.040669 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" 
Jul 15 05:11:38.040669 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 15 05:11:38.040669 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 15 05:11:38.040669 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 15 05:11:38.040669 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 15 05:11:38.040669 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 15 05:11:38.040669 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 15 05:11:38.053878 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 15 05:11:38.053878 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 15 05:11:38.053878 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jul 15 05:11:38.053878 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jul 15 05:11:38.053878 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jul 15 05:11:38.053878 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET 
https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Jul 15 05:11:38.353907 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jul 15 05:11:38.659161 systemd-networkd[849]: eth0: Gained IPv6LL Jul 15 05:11:38.741672 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jul 15 05:11:38.741672 ignition[1026]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jul 15 05:11:38.746288 ignition[1026]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 15 05:11:39.027793 ignition[1026]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 15 05:11:39.027793 ignition[1026]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jul 15 05:11:39.027793 ignition[1026]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Jul 15 05:11:39.027793 ignition[1026]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 15 05:11:39.036336 ignition[1026]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 15 05:11:39.036336 ignition[1026]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Jul 15 05:11:39.036336 ignition[1026]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Jul 15 05:11:39.050656 ignition[1026]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Jul 15 05:11:39.056405 ignition[1026]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jul 15 
05:11:39.057997 ignition[1026]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Jul 15 05:11:39.057997 ignition[1026]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Jul 15 05:11:39.057997 ignition[1026]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Jul 15 05:11:39.057997 ignition[1026]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 15 05:11:39.057997 ignition[1026]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 15 05:11:39.057997 ignition[1026]: INFO : files: files passed Jul 15 05:11:39.057997 ignition[1026]: INFO : Ignition finished successfully Jul 15 05:11:39.067612 systemd[1]: Finished ignition-files.service - Ignition (files). Jul 15 05:11:39.070526 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jul 15 05:11:39.072883 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jul 15 05:11:39.095399 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 15 05:11:39.095549 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jul 15 05:11:39.098470 initrd-setup-root-after-ignition[1053]: grep: /sysroot/oem/oem-release: No such file or directory Jul 15 05:11:39.101183 initrd-setup-root-after-ignition[1057]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 15 05:11:39.101183 initrd-setup-root-after-ignition[1057]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jul 15 05:11:39.105136 initrd-setup-root-after-ignition[1061]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 15 05:11:39.108438 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. 
Jul 15 05:11:39.109116 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 15 05:11:39.112153 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 15 05:11:39.171791 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 15 05:11:39.171969 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 15 05:11:39.172796 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 15 05:11:39.173268 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 15 05:11:39.173899 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 15 05:11:39.175145 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 15 05:11:39.198306 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 15 05:11:39.201540 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 15 05:11:39.232595 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 15 05:11:39.233031 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 15 05:11:39.235329 systemd[1]: Stopped target timers.target - Timer Units.
Jul 15 05:11:39.235691 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 15 05:11:39.235857 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 15 05:11:39.236490 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 15 05:11:39.236997 systemd[1]: Stopped target basic.target - Basic System.
Jul 15 05:11:39.237316 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jul 15 05:11:39.237665 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 15 05:11:39.238144 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul 15 05:11:39.238464 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Jul 15 05:11:39.239006 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul 15 05:11:39.239323 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 15 05:11:39.239680 systemd[1]: Stopped target sysinit.target - System Initialization.
Jul 15 05:11:39.240166 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jul 15 05:11:39.240511 systemd[1]: Stopped target swap.target - Swaps.
Jul 15 05:11:39.241015 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 15 05:11:39.241154 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jul 15 05:11:39.265984 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jul 15 05:11:39.266543 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 15 05:11:39.267033 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jul 15 05:11:39.267166 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 15 05:11:39.271991 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 15 05:11:39.272157 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jul 15 05:11:39.278238 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 15 05:11:39.278415 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 15 05:11:39.279328 systemd[1]: Stopped target paths.target - Path Units.
Jul 15 05:11:39.279612 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul 15 05:11:39.283685 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 15 05:11:39.284433 systemd[1]: Stopped target slices.target - Slice Units.
Jul 15 05:11:39.284987 systemd[1]: Stopped target sockets.target - Socket Units.
Jul 15 05:11:39.285349 systemd[1]: iscsid.socket: Deactivated successfully.
Jul 15 05:11:39.285488 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jul 15 05:11:39.291727 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jul 15 05:11:39.291864 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 15 05:11:39.292417 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 15 05:11:39.292614 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 15 05:11:39.295195 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 15 05:11:39.295346 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jul 15 05:11:39.299866 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jul 15 05:11:39.301187 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jul 15 05:11:39.303217 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 15 05:11:39.303379 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 15 05:11:39.305135 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 15 05:11:39.305321 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 15 05:11:39.313810 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 15 05:11:39.313964 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jul 15 05:11:39.329349 ignition[1081]: INFO : Ignition 2.21.0
Jul 15 05:11:39.329349 ignition[1081]: INFO : Stage: umount
Jul 15 05:11:39.331482 ignition[1081]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 15 05:11:39.331482 ignition[1081]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 15 05:11:39.331482 ignition[1081]: INFO : umount: umount passed
Jul 15 05:11:39.331482 ignition[1081]: INFO : Ignition finished successfully
Jul 15 05:11:39.335948 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 15 05:11:39.336091 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jul 15 05:11:39.339032 systemd[1]: Stopped target network.target - Network.
Jul 15 05:11:39.339341 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 15 05:11:39.339420 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jul 15 05:11:39.341033 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 15 05:11:39.341083 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jul 15 05:11:39.343248 systemd[1]: ignition-setup.service: Deactivated successfully.
Jul 15 05:11:39.343304 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jul 15 05:11:39.343613 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jul 15 05:11:39.343659 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jul 15 05:11:39.344249 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jul 15 05:11:39.348670 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jul 15 05:11:39.351715 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jul 15 05:11:39.359557 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jul 15 05:11:39.359734 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jul 15 05:11:39.364481 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Jul 15 05:11:39.364947 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jul 15 05:11:39.365017 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 15 05:11:39.370346 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jul 15 05:11:39.373264 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jul 15 05:11:39.373441 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jul 15 05:11:39.376813 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Jul 15 05:11:39.377074 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Jul 15 05:11:39.377363 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jul 15 05:11:39.377412 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jul 15 05:11:39.382303 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jul 15 05:11:39.382610 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jul 15 05:11:39.382682 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 15 05:11:39.383035 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 15 05:11:39.383097 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jul 15 05:11:39.389919 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jul 15 05:11:39.389972 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jul 15 05:11:39.390366 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 15 05:11:39.391745 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jul 15 05:11:39.411096 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jul 15 05:11:39.416781 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 15 05:11:39.418007 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jul 15 05:11:39.418112 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jul 15 05:11:39.420203 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jul 15 05:11:39.420253 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 15 05:11:39.422483 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jul 15 05:11:39.422549 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jul 15 05:11:39.423416 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jul 15 05:11:39.423482 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jul 15 05:11:39.428885 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 15 05:11:39.428945 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 15 05:11:39.435257 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jul 15 05:11:39.435509 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Jul 15 05:11:39.435566 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Jul 15 05:11:39.439972 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jul 15 05:11:39.440027 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 15 05:11:39.443681 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jul 15 05:11:39.443745 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 15 05:11:39.447342 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jul 15 05:11:39.447396 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 15 05:11:39.448123 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 15 05:11:39.448170 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 15 05:11:39.453644 systemd[1]: network-cleanup.service: Deactivated successfully.
Jul 15 05:11:39.460831 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jul 15 05:11:39.471631 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jul 15 05:11:39.471805 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jul 15 05:11:39.579094 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jul 15 05:11:39.579300 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jul 15 05:11:39.580316 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jul 15 05:11:39.583840 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jul 15 05:11:39.583917 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jul 15 05:11:39.587072 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jul 15 05:11:39.616173 systemd[1]: Switching root.
Jul 15 05:11:39.660731 systemd-journald[220]: Journal stopped
Jul 15 05:11:41.047957 systemd-journald[220]: Received SIGTERM from PID 1 (systemd).
Jul 15 05:11:41.048024 kernel: SELinux: policy capability network_peer_controls=1
Jul 15 05:11:41.048038 kernel: SELinux: policy capability open_perms=1
Jul 15 05:11:41.048052 kernel: SELinux: policy capability extended_socket_class=1
Jul 15 05:11:41.048064 kernel: SELinux: policy capability always_check_network=0
Jul 15 05:11:41.048076 kernel: SELinux: policy capability cgroup_seclabel=1
Jul 15 05:11:41.048093 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jul 15 05:11:41.048105 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jul 15 05:11:41.048120 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jul 15 05:11:41.048131 kernel: SELinux: policy capability userspace_initial_context=0
Jul 15 05:11:41.048143 kernel: audit: type=1403 audit(1752556300.168:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jul 15 05:11:41.048156 systemd[1]: Successfully loaded SELinux policy in 61.041ms.
Jul 15 05:11:41.048182 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.126ms.
Jul 15 05:11:41.048195 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 15 05:11:41.048208 systemd[1]: Detected virtualization kvm.
Jul 15 05:11:41.048220 systemd[1]: Detected architecture x86-64.
Jul 15 05:11:41.048232 systemd[1]: Detected first boot.
Jul 15 05:11:41.048247 systemd[1]: Initializing machine ID from VM UUID.
Jul 15 05:11:41.048260 zram_generator::config[1127]: No configuration found.
Jul 15 05:11:41.048279 kernel: Guest personality initialized and is inactive
Jul 15 05:11:41.048290 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Jul 15 05:11:41.048302 kernel: Initialized host personality
Jul 15 05:11:41.048314 kernel: NET: Registered PF_VSOCK protocol family
Jul 15 05:11:41.048326 systemd[1]: Populated /etc with preset unit settings.
Jul 15 05:11:41.048340 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Jul 15 05:11:41.048355 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jul 15 05:11:41.048372 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jul 15 05:11:41.048385 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jul 15 05:11:41.048397 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jul 15 05:11:41.048409 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jul 15 05:11:41.048421 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jul 15 05:11:41.048434 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jul 15 05:11:41.048446 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jul 15 05:11:41.048459 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jul 15 05:11:41.048474 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jul 15 05:11:41.048486 systemd[1]: Created slice user.slice - User and Session Slice.
Jul 15 05:11:41.048498 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 15 05:11:41.048511 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 15 05:11:41.048523 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jul 15 05:11:41.048536 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jul 15 05:11:41.048549 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jul 15 05:11:41.048563 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 15 05:11:41.049000 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jul 15 05:11:41.049018 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 15 05:11:41.049032 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 15 05:11:41.049044 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jul 15 05:11:41.049056 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jul 15 05:11:41.049075 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jul 15 05:11:41.049087 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jul 15 05:11:41.049100 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 15 05:11:41.049112 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 15 05:11:41.049128 systemd[1]: Reached target slices.target - Slice Units.
Jul 15 05:11:41.049140 systemd[1]: Reached target swap.target - Swaps.
Jul 15 05:11:41.049155 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jul 15 05:11:41.049167 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jul 15 05:11:41.049179 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Jul 15 05:11:41.049191 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 15 05:11:41.049204 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 15 05:11:41.049216 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 15 05:11:41.049228 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jul 15 05:11:41.049300 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jul 15 05:11:41.049365 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jul 15 05:11:41.049379 systemd[1]: Mounting media.mount - External Media Directory...
Jul 15 05:11:41.049391 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 15 05:11:41.049403 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jul 15 05:11:41.049418 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jul 15 05:11:41.049431 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jul 15 05:11:41.049444 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jul 15 05:11:41.049459 systemd[1]: Reached target machines.target - Containers.
Jul 15 05:11:41.049472 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jul 15 05:11:41.049485 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 15 05:11:41.049497 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 15 05:11:41.049509 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jul 15 05:11:41.049522 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 15 05:11:41.049534 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 15 05:11:41.049546 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 15 05:11:41.049558 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jul 15 05:11:41.049586 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 15 05:11:41.049599 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jul 15 05:11:41.049612 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jul 15 05:11:41.049641 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jul 15 05:11:41.049653 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jul 15 05:11:41.049676 systemd[1]: Stopped systemd-fsck-usr.service.
Jul 15 05:11:41.049689 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 15 05:11:41.049702 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 15 05:11:41.049717 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 15 05:11:41.049732 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 15 05:11:41.049744 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jul 15 05:11:41.049798 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Jul 15 05:11:41.049813 kernel: loop: module loaded
Jul 15 05:11:41.049828 kernel: fuse: init (API version 7.41)
Jul 15 05:11:41.049840 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 15 05:11:41.049852 systemd[1]: verity-setup.service: Deactivated successfully.
Jul 15 05:11:41.049865 systemd[1]: Stopped verity-setup.service.
Jul 15 05:11:41.049878 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 15 05:11:41.049890 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jul 15 05:11:41.049902 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jul 15 05:11:41.049914 systemd[1]: Mounted media.mount - External Media Directory.
Jul 15 05:11:41.049927 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jul 15 05:11:41.049941 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jul 15 05:11:41.049960 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jul 15 05:11:41.049997 systemd-journald[1198]: Collecting audit messages is disabled.
Jul 15 05:11:41.050024 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 15 05:11:41.050042 systemd-journald[1198]: Journal started
Jul 15 05:11:41.050064 systemd-journald[1198]: Runtime Journal (/run/log/journal/317a30269a444832a7cdd19947564ec5) is 6M, max 48.6M, 42.5M free.
Jul 15 05:11:40.735157 systemd[1]: Queued start job for default target multi-user.target.
Jul 15 05:11:40.756134 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jul 15 05:11:40.756739 systemd[1]: systemd-journald.service: Deactivated successfully.
Jul 15 05:11:41.051670 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 15 05:11:41.053650 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jul 15 05:11:41.055643 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jul 15 05:11:41.058975 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jul 15 05:11:41.061611 kernel: ACPI: bus type drm_connector registered
Jul 15 05:11:41.061755 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 15 05:11:41.062042 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 15 05:11:41.063674 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 15 05:11:41.063970 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 15 05:11:41.065407 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 15 05:11:41.065749 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 15 05:11:41.067399 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jul 15 05:11:41.067633 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jul 15 05:11:41.069171 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 15 05:11:41.069394 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 15 05:11:41.071090 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 15 05:11:41.072610 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 15 05:11:41.074209 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jul 15 05:11:41.075844 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Jul 15 05:11:41.093203 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 15 05:11:41.096345 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jul 15 05:11:41.099148 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jul 15 05:11:41.100870 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jul 15 05:11:41.100918 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 15 05:11:41.103499 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Jul 15 05:11:41.108637 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jul 15 05:11:41.110041 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 15 05:11:41.112007 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jul 15 05:11:41.116014 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jul 15 05:11:41.118307 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 15 05:11:41.121424 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jul 15 05:11:41.122928 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 15 05:11:41.126768 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 15 05:11:41.130533 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jul 15 05:11:41.134598 systemd-journald[1198]: Time spent on flushing to /var/log/journal/317a30269a444832a7cdd19947564ec5 is 31.554ms for 975 entries.
Jul 15 05:11:41.134598 systemd-journald[1198]: System Journal (/var/log/journal/317a30269a444832a7cdd19947564ec5) is 8M, max 195.6M, 187.6M free.
Jul 15 05:11:41.176751 systemd-journald[1198]: Received client request to flush runtime journal.
Jul 15 05:11:41.176849 kernel: loop0: detected capacity change from 0 to 224512
Jul 15 05:11:41.134922 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 15 05:11:41.139113 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jul 15 05:11:41.140851 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jul 15 05:11:41.160992 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 15 05:11:41.168180 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jul 15 05:11:41.171744 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jul 15 05:11:41.176464 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Jul 15 05:11:41.180374 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jul 15 05:11:41.188281 systemd-tmpfiles[1247]: ACLs are not supported, ignoring.
Jul 15 05:11:41.188304 systemd-tmpfiles[1247]: ACLs are not supported, ignoring.
Jul 15 05:11:41.190243 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 15 05:11:41.197772 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 15 05:11:41.201072 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jul 15 05:11:41.204760 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jul 15 05:11:41.219714 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Jul 15 05:11:41.225610 kernel: loop1: detected capacity change from 0 to 146488
Jul 15 05:11:41.250934 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jul 15 05:11:41.254351 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 15 05:11:41.264603 kernel: loop2: detected capacity change from 0 to 114000
Jul 15 05:11:41.286265 systemd-tmpfiles[1267]: ACLs are not supported, ignoring.
Jul 15 05:11:41.286739 systemd-tmpfiles[1267]: ACLs are not supported, ignoring.
Jul 15 05:11:41.294944 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 15 05:11:41.306618 kernel: loop3: detected capacity change from 0 to 224512
Jul 15 05:11:41.320005 kernel: loop4: detected capacity change from 0 to 146488
Jul 15 05:11:41.334603 kernel: loop5: detected capacity change from 0 to 114000
Jul 15 05:11:41.343319 (sd-merge)[1272]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Jul 15 05:11:41.343969 (sd-merge)[1272]: Merged extensions into '/usr'.
Jul 15 05:11:41.349853 systemd[1]: Reload requested from client PID 1246 ('systemd-sysext') (unit systemd-sysext.service)...
Jul 15 05:11:41.349875 systemd[1]: Reloading...
Jul 15 05:11:41.446614 zram_generator::config[1298]: No configuration found.
Jul 15 05:11:41.529728 ldconfig[1241]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jul 15 05:11:41.546743 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 15 05:11:41.627873 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jul 15 05:11:41.628167 systemd[1]: Reloading finished in 277 ms.
Jul 15 05:11:41.661678 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jul 15 05:11:41.719421 systemd[1]: Starting ensure-sysext.service...
Jul 15 05:11:41.762887 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 15 05:11:41.788347 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jul 15 05:11:41.821777 systemd[1]: Reload requested from client PID 1334 ('systemctl') (unit ensure-sysext.service)...
Jul 15 05:11:41.821945 systemd[1]: Reloading...
Jul 15 05:11:41.829533 systemd-tmpfiles[1335]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Jul 15 05:11:41.829605 systemd-tmpfiles[1335]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Jul 15 05:11:41.829929 systemd-tmpfiles[1335]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jul 15 05:11:41.830189 systemd-tmpfiles[1335]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jul 15 05:11:41.831175 systemd-tmpfiles[1335]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jul 15 05:11:41.831463 systemd-tmpfiles[1335]: ACLs are not supported, ignoring.
Jul 15 05:11:41.831678 systemd-tmpfiles[1335]: ACLs are not supported, ignoring.
Jul 15 05:11:41.836181 systemd-tmpfiles[1335]: Detected autofs mount point /boot during canonicalization of boot.
Jul 15 05:11:41.836271 systemd-tmpfiles[1335]: Skipping /boot
Jul 15 05:11:41.847390 systemd-tmpfiles[1335]: Detected autofs mount point /boot during canonicalization of boot.
Jul 15 05:11:41.847401 systemd-tmpfiles[1335]: Skipping /boot
Jul 15 05:11:41.905690 zram_generator::config[1365]: No configuration found.
Jul 15 05:11:42.069657 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 15 05:11:42.153543 systemd[1]: Reloading finished in 330 ms.
Jul 15 05:11:42.169017 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 15 05:11:42.191330 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jul 15 05:11:42.194183 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jul 15 05:11:42.207843 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jul 15 05:11:42.211688 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 15 05:11:42.214276 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jul 15 05:11:42.218007 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 15 05:11:42.218309 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 15 05:11:42.219734 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 15 05:11:42.233090 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 15 05:11:42.235886 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 15 05:11:42.237054 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 15 05:11:42.237243 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 15 05:11:42.237402 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 15 05:11:42.238965 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 15 05:11:42.239200 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 15 05:11:42.244322 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 15 05:11:42.244557 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 15 05:11:42.246287 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 15 05:11:42.246505 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 15 05:11:42.252690 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Jul 15 05:11:42.252928 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 15 05:11:42.254233 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 15 05:11:42.256349 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 15 05:11:42.258562 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 15 05:11:42.279517 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 15 05:11:42.280781 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 15 05:11:42.280899 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 15 05:11:42.283114 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jul 15 05:11:42.284219 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 15 05:11:42.286152 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 15 05:11:42.286373 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 15 05:11:42.288223 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 15 05:11:42.288483 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 15 05:11:42.290124 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 15 05:11:42.290421 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 15 05:11:42.292351 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 15 05:11:42.292597 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Jul 15 05:11:42.298177 systemd[1]: Finished ensure-sysext.service. Jul 15 05:11:42.304505 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 15 05:11:42.304631 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 15 05:11:42.307630 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jul 15 05:11:42.342222 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jul 15 05:11:42.345794 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jul 15 05:11:42.360291 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jul 15 05:11:42.475535 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jul 15 05:11:42.496085 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 15 05:11:42.496414 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jul 15 05:11:42.497718 systemd[1]: Reached target time-set.target - System Time Set. Jul 15 05:11:42.593804 systemd-resolved[1404]: Positive Trust Anchors: Jul 15 05:11:42.593823 systemd-resolved[1404]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 15 05:11:42.593855 systemd-resolved[1404]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 15 05:11:42.608288 systemd-resolved[1404]: Defaulting to hostname 'linux'. Jul 15 05:11:42.610341 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 15 05:11:42.611664 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 15 05:11:42.616948 augenrules[1452]: No rules Jul 15 05:11:42.617918 systemd[1]: audit-rules.service: Deactivated successfully. Jul 15 05:11:42.618208 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 15 05:11:42.653437 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jul 15 05:11:42.657440 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 15 05:11:42.660449 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jul 15 05:11:42.689442 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 15 05:11:42.707518 systemd-udevd[1459]: Using default interface naming scheme 'v255'. Jul 15 05:11:42.733923 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 15 05:11:42.736116 systemd[1]: Reached target sysinit.target - System Initialization. Jul 15 05:11:42.738895 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. 
Jul 15 05:11:42.740218 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 15 05:11:42.741556 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Jul 15 05:11:42.742982 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 15 05:11:42.745788 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 15 05:11:42.747405 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 15 05:11:42.749198 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 15 05:11:42.749241 systemd[1]: Reached target paths.target - Path Units. Jul 15 05:11:42.750551 systemd[1]: Reached target timers.target - Timer Units. Jul 15 05:11:42.753197 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 15 05:11:42.756974 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 15 05:11:42.762983 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jul 15 05:11:42.764563 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jul 15 05:11:42.766363 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jul 15 05:11:42.778039 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 15 05:11:42.780504 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jul 15 05:11:42.788923 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 15 05:11:42.790864 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 15 05:11:42.803848 systemd[1]: Reached target sockets.target - Socket Units. Jul 15 05:11:42.805102 systemd[1]: Reached target basic.target - Basic System. 
Jul 15 05:11:42.806331 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 15 05:11:42.806376 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 15 05:11:42.810912 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 15 05:11:42.814705 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jul 15 05:11:42.817020 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 15 05:11:42.820070 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 15 05:11:42.821141 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 15 05:11:42.829072 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Jul 15 05:11:42.833753 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 15 05:11:42.836563 jq[1495]: false Jul 15 05:11:42.836688 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 15 05:11:42.840612 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 15 05:11:42.845286 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 15 05:11:42.845745 extend-filesystems[1496]: Found /dev/vda6 Jul 15 05:11:42.853378 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 15 05:11:42.856514 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 15 05:11:42.857516 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. 
Jul 15 05:11:42.857919 oslogin_cache_refresh[1498]: Refreshing passwd entry cache Jul 15 05:11:42.861019 google_oslogin_nss_cache[1498]: oslogin_cache_refresh[1498]: Refreshing passwd entry cache Jul 15 05:11:42.861019 google_oslogin_nss_cache[1498]: oslogin_cache_refresh[1498]: Failure getting users, quitting Jul 15 05:11:42.861019 google_oslogin_nss_cache[1498]: oslogin_cache_refresh[1498]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jul 15 05:11:42.861019 google_oslogin_nss_cache[1498]: oslogin_cache_refresh[1498]: Refreshing group entry cache Jul 15 05:11:42.861019 google_oslogin_nss_cache[1498]: oslogin_cache_refresh[1498]: Failure getting groups, quitting Jul 15 05:11:42.861019 google_oslogin_nss_cache[1498]: oslogin_cache_refresh[1498]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jul 15 05:11:42.860209 oslogin_cache_refresh[1498]: Failure getting users, quitting Jul 15 05:11:42.860224 oslogin_cache_refresh[1498]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jul 15 05:11:42.860272 oslogin_cache_refresh[1498]: Refreshing group entry cache Jul 15 05:11:42.860789 oslogin_cache_refresh[1498]: Failure getting groups, quitting Jul 15 05:11:42.860798 oslogin_cache_refresh[1498]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jul 15 05:11:42.862890 extend-filesystems[1496]: Found /dev/vda9 Jul 15 05:11:42.862057 systemd[1]: Starting update-engine.service - Update Engine... Jul 15 05:11:42.866904 extend-filesystems[1496]: Checking size of /dev/vda9 Jul 15 05:11:42.867912 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 15 05:11:42.872925 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jul 15 05:11:42.874483 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. 
Jul 15 05:11:42.875672 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 15 05:11:42.876010 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jul 15 05:11:42.879816 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Jul 15 05:11:42.883017 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 15 05:11:42.883284 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jul 15 05:11:42.904863 jq[1513]: true Jul 15 05:11:42.915370 update_engine[1510]: I20250715 05:11:42.914636 1510 main.cc:92] Flatcar Update Engine starting Jul 15 05:11:42.916988 systemd[1]: motdgen.service: Deactivated successfully. Jul 15 05:11:42.921815 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 15 05:11:42.926101 jq[1535]: true Jul 15 05:11:42.932917 tar[1522]: linux-amd64/LICENSE Jul 15 05:11:42.933197 tar[1522]: linux-amd64/helm Jul 15 05:11:42.938152 systemd-networkd[1492]: lo: Link UP Jul 15 05:11:42.938165 systemd-networkd[1492]: lo: Gained carrier Jul 15 05:11:42.942365 systemd-networkd[1492]: Enumeration completed Jul 15 05:11:42.943079 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 15 05:11:42.944715 systemd[1]: Reached target network.target - Network. Jul 15 05:11:42.946713 systemd-networkd[1492]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 15 05:11:42.946727 systemd-networkd[1492]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 15 05:11:42.948385 systemd[1]: Starting containerd.service - containerd container runtime... 
Jul 15 05:11:42.949389 systemd-networkd[1492]: eth0: Link UP Jul 15 05:11:42.950674 systemd-networkd[1492]: eth0: Gained carrier Jul 15 05:11:42.950691 systemd-networkd[1492]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 15 05:11:42.952107 dbus-daemon[1493]: [system] SELinux support is enabled Jul 15 05:11:42.952561 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jul 15 05:11:43.019326 update_engine[1510]: I20250715 05:11:43.019165 1510 update_check_scheduler.cc:74] Next update check in 8m46s Jul 15 05:11:43.020280 extend-filesystems[1496]: Resized partition /dev/vda9 Jul 15 05:11:43.027314 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jul 15 05:11:43.045097 systemd-networkd[1492]: eth0: DHCPv4 address 10.0.0.51/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 15 05:11:43.046234 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jul 15 05:11:43.050531 systemd-timesyncd[1422]: Network configuration changed, trying to establish connection. Jul 15 05:11:43.639584 systemd-timesyncd[1422]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jul 15 05:11:43.639637 systemd-timesyncd[1422]: Initial clock synchronization to Tue 2025-07-15 05:11:43.639472 UTC. Jul 15 05:11:43.640093 systemd-resolved[1404]: Clock change detected. Flushing caches. Jul 15 05:11:43.651543 (ntainerd)[1563]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 15 05:11:43.652005 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jul 15 05:11:43.652062 systemd[1]: Started update-engine.service - Update Engine. 
Jul 15 05:11:43.654027 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 15 05:11:43.654047 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 15 05:11:43.655291 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 15 05:11:43.655312 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 15 05:11:43.657875 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 15 05:11:43.703421 systemd-logind[1508]: New seat seat0. Jul 15 05:11:43.705074 systemd[1]: Started systemd-logind.service - User Login Management. Jul 15 05:11:43.711952 extend-filesystems[1571]: resize2fs 1.47.2 (1-Jan-2025) Jul 15 05:11:43.714937 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jul 15 05:11:43.721532 kernel: mousedev: PS/2 mouse device common for all mice Jul 15 05:11:43.753470 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jul 15 05:11:43.756634 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jul 15 05:11:43.757067 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jul 15 05:11:43.776589 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jul 15 05:11:43.785735 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... 
Jul 15 05:11:43.851007 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jul 15 05:11:43.855446 kernel: ACPI: button: Power Button [PWRF] Jul 15 05:11:43.892672 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 15 05:11:43.937972 locksmithd[1565]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 15 05:11:43.947180 systemd-logind[1508]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jul 15 05:11:43.957686 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jul 15 05:11:44.005970 systemd-logind[1508]: Watching system buttons on /dev/input/event2 (Power Button) Jul 15 05:11:44.028494 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jul 15 05:11:44.029949 kernel: kvm_amd: TSC scaling supported Jul 15 05:11:44.029983 kernel: kvm_amd: Nested Virtualization enabled Jul 15 05:11:44.030000 kernel: kvm_amd: Nested Paging enabled Jul 15 05:11:44.030995 kernel: kvm_amd: LBR virtualization supported Jul 15 05:11:44.031028 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Jul 15 05:11:44.032480 kernel: kvm_amd: Virtual GIF supported Jul 15 05:11:44.062968 extend-filesystems[1571]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jul 15 05:11:44.062968 extend-filesystems[1571]: old_desc_blocks = 1, new_desc_blocks = 1 Jul 15 05:11:44.062968 extend-filesystems[1571]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jul 15 05:11:44.066823 extend-filesystems[1496]: Resized filesystem in /dev/vda9 Jul 15 05:11:44.072145 bash[1570]: Updated "/home/core/.ssh/authorized_keys" Jul 15 05:11:44.068503 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 15 05:11:44.068991 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
Jul 15 05:11:44.093487 kernel: EDAC MC: Ver: 3.0.0 Jul 15 05:11:44.120482 containerd[1563]: time="2025-07-15T05:11:44Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jul 15 05:11:44.121553 containerd[1563]: time="2025-07-15T05:11:44.121512319Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Jul 15 05:11:44.137789 containerd[1563]: time="2025-07-15T05:11:44.137712048Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="12.975µs" Jul 15 05:11:44.137835 containerd[1563]: time="2025-07-15T05:11:44.137785796Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jul 15 05:11:44.137888 containerd[1563]: time="2025-07-15T05:11:44.137838404Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jul 15 05:11:44.138176 containerd[1563]: time="2025-07-15T05:11:44.138147183Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jul 15 05:11:44.138204 containerd[1563]: time="2025-07-15T05:11:44.138179754Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jul 15 05:11:44.138248 containerd[1563]: time="2025-07-15T05:11:44.138223536Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 15 05:11:44.138358 containerd[1563]: time="2025-07-15T05:11:44.138328563Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 15 05:11:44.138392 containerd[1563]: time="2025-07-15T05:11:44.138360413Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs 
type=io.containerd.snapshotter.v1 Jul 15 05:11:44.138841 containerd[1563]: time="2025-07-15T05:11:44.138805568Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 15 05:11:44.138868 containerd[1563]: time="2025-07-15T05:11:44.138837668Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 15 05:11:44.138868 containerd[1563]: time="2025-07-15T05:11:44.138858126Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 15 05:11:44.138907 containerd[1563]: time="2025-07-15T05:11:44.138875088Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jul 15 05:11:44.139033 containerd[1563]: time="2025-07-15T05:11:44.139003739Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jul 15 05:11:44.139419 containerd[1563]: time="2025-07-15T05:11:44.139377029Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 15 05:11:44.139493 containerd[1563]: time="2025-07-15T05:11:44.139462920Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 15 05:11:44.139529 containerd[1563]: time="2025-07-15T05:11:44.139491894Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jul 15 05:11:44.139555 containerd[1563]: time="2025-07-15T05:11:44.139545615Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups 
type=io.containerd.monitor.task.v1 Jul 15 05:11:44.140167 containerd[1563]: time="2025-07-15T05:11:44.140130672Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jul 15 05:11:44.140309 containerd[1563]: time="2025-07-15T05:11:44.140279611Z" level=info msg="metadata content store policy set" policy=shared Jul 15 05:11:44.149099 containerd[1563]: time="2025-07-15T05:11:44.149044562Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jul 15 05:11:44.149182 containerd[1563]: time="2025-07-15T05:11:44.149113952Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jul 15 05:11:44.149182 containerd[1563]: time="2025-07-15T05:11:44.149127748Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jul 15 05:11:44.149182 containerd[1563]: time="2025-07-15T05:11:44.149139891Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jul 15 05:11:44.149182 containerd[1563]: time="2025-07-15T05:11:44.149151533Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jul 15 05:11:44.149182 containerd[1563]: time="2025-07-15T05:11:44.149173804Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jul 15 05:11:44.149274 containerd[1563]: time="2025-07-15T05:11:44.149186819Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jul 15 05:11:44.149274 containerd[1563]: time="2025-07-15T05:11:44.149199452Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jul 15 05:11:44.149274 containerd[1563]: time="2025-07-15T05:11:44.149217135Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jul 15 
05:11:44.149274 containerd[1563]: time="2025-07-15T05:11:44.149226914Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jul 15 05:11:44.149274 containerd[1563]: time="2025-07-15T05:11:44.149236883Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jul 15 05:11:44.149274 containerd[1563]: time="2025-07-15T05:11:44.149249496Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jul 15 05:11:44.152431 containerd[1563]: time="2025-07-15T05:11:44.151545841Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jul 15 05:11:44.152431 containerd[1563]: time="2025-07-15T05:11:44.151593691Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jul 15 05:11:44.152431 containerd[1563]: time="2025-07-15T05:11:44.151619980Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jul 15 05:11:44.152431 containerd[1563]: time="2025-07-15T05:11:44.151639286Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jul 15 05:11:44.152431 containerd[1563]: time="2025-07-15T05:11:44.151652551Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jul 15 05:11:44.152431 containerd[1563]: time="2025-07-15T05:11:44.151665325Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jul 15 05:11:44.152431 containerd[1563]: time="2025-07-15T05:11:44.151678610Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jul 15 05:11:44.152431 containerd[1563]: time="2025-07-15T05:11:44.151690051Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jul 15 05:11:44.152431 containerd[1563]: time="2025-07-15T05:11:44.151702905Z" level=info 
msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jul 15 05:11:44.152431 containerd[1563]: time="2025-07-15T05:11:44.151715198Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jul 15 05:11:44.152431 containerd[1563]: time="2025-07-15T05:11:44.151727301Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jul 15 05:11:44.152431 containerd[1563]: time="2025-07-15T05:11:44.151811489Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jul 15 05:11:44.152431 containerd[1563]: time="2025-07-15T05:11:44.151829312Z" level=info msg="Start snapshots syncer" Jul 15 05:11:44.152431 containerd[1563]: time="2025-07-15T05:11:44.151884185Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jul 15 05:11:44.152712 containerd[1563]: time="2025-07-15T05:11:44.152257195Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jul 15 05:11:44.152712 containerd[1563]: time="2025-07-15T05:11:44.152319541Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jul 15 05:11:44.152839 containerd[1563]: time="2025-07-15T05:11:44.152479201Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jul 15 05:11:44.152860 containerd[1563]: time="2025-07-15T05:11:44.152839977Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jul 15 05:11:44.152885 containerd[1563]: time="2025-07-15T05:11:44.152868811Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jul 15 05:11:44.152907 containerd[1563]: time="2025-07-15T05:11:44.152883148Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jul 15 05:11:44.152907 containerd[1563]: time="2025-07-15T05:11:44.152897876Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jul 15 05:11:44.152945 containerd[1563]: time="2025-07-15T05:11:44.152912864Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jul 15 05:11:44.152945 containerd[1563]: time="2025-07-15T05:11:44.152925948Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jul 15 05:11:44.152945 containerd[1563]: time="2025-07-15T05:11:44.152939053Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jul 15 05:11:44.153003 containerd[1563]: time="2025-07-15T05:11:44.152965863Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jul 15 05:11:44.153003 containerd[1563]: time="2025-07-15T05:11:44.152979739Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jul 15 05:11:44.153046 containerd[1563]: time="2025-07-15T05:11:44.153000428Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jul 15 05:11:44.153067 containerd[1563]: time="2025-07-15T05:11:44.153046003Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 15 05:11:44.153088 containerd[1563]: time="2025-07-15T05:11:44.153065189Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 15 05:11:44.153182 containerd[1563]: time="2025-07-15T05:11:44.153077332Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 15 05:11:44.153215 containerd[1563]: time="2025-07-15T05:11:44.153182780Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 15 05:11:44.153215 containerd[1563]: time="2025-07-15T05:11:44.153198068Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jul 15 05:11:44.153254 containerd[1563]: time="2025-07-15T05:11:44.153216563Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jul 15 05:11:44.153254 containerd[1563]: time="2025-07-15T05:11:44.153232423Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jul 15 05:11:44.153292 containerd[1563]: time="2025-07-15T05:11:44.153254805Z" level=info msg="runtime interface created" Jul 15 05:11:44.153292 containerd[1563]: time="2025-07-15T05:11:44.153262890Z" level=info msg="created NRI interface" Jul 15 05:11:44.153292 containerd[1563]: time="2025-07-15T05:11:44.153273971Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jul 15 05:11:44.153292 containerd[1563]: time="2025-07-15T05:11:44.153287065Z" level=info msg="Connect containerd service" Jul 15 05:11:44.153368 containerd[1563]: time="2025-07-15T05:11:44.153316350Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 15 05:11:44.154646 
containerd[1563]: time="2025-07-15T05:11:44.154601229Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 15 05:11:44.234775 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 15 05:11:44.237245 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 15 05:11:44.237692 tar[1522]: linux-amd64/README.md Jul 15 05:11:44.249140 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jul 15 05:11:44.259038 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 15 05:11:44.279023 containerd[1563]: time="2025-07-15T05:11:44.278958414Z" level=info msg="Start subscribing containerd event" Jul 15 05:11:44.279154 containerd[1563]: time="2025-07-15T05:11:44.279046078Z" level=info msg="Start recovering state" Jul 15 05:11:44.279211 containerd[1563]: time="2025-07-15T05:11:44.279157387Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 15 05:11:44.279289 containerd[1563]: time="2025-07-15T05:11:44.279241825Z" level=info msg="Start event monitor" Jul 15 05:11:44.279289 containerd[1563]: time="2025-07-15T05:11:44.279259949Z" level=info msg="Start cni network conf syncer for default" Jul 15 05:11:44.279289 containerd[1563]: time="2025-07-15T05:11:44.279268916Z" level=info msg="Start streaming server" Jul 15 05:11:44.279289 containerd[1563]: time="2025-07-15T05:11:44.279278133Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 15 05:11:44.279384 containerd[1563]: time="2025-07-15T05:11:44.279306827Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jul 15 05:11:44.279384 containerd[1563]: time="2025-07-15T05:11:44.279318319Z" level=info msg="runtime interface starting up..." 
Jul 15 05:11:44.279384 containerd[1563]: time="2025-07-15T05:11:44.279326374Z" level=info msg="starting plugins..." Jul 15 05:11:44.279384 containerd[1563]: time="2025-07-15T05:11:44.279346552Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jul 15 05:11:44.279687 containerd[1563]: time="2025-07-15T05:11:44.279610967Z" level=info msg="containerd successfully booted in 0.159898s" Jul 15 05:11:44.279869 systemd[1]: Started containerd.service - containerd container runtime. Jul 15 05:11:44.310662 sshd_keygen[1533]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 15 05:11:44.338537 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 15 05:11:44.341685 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 15 05:11:44.364435 systemd[1]: issuegen.service: Deactivated successfully. Jul 15 05:11:44.364763 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 15 05:11:44.367879 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 15 05:11:44.397985 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 15 05:11:44.401200 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 15 05:11:44.403399 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jul 15 05:11:44.404880 systemd[1]: Reached target getty.target - Login Prompts. Jul 15 05:11:44.943535 systemd-networkd[1492]: eth0: Gained IPv6LL Jul 15 05:11:44.964785 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 15 05:11:44.984852 systemd[1]: Reached target network-online.target - Network is Online. Jul 15 05:11:44.995140 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jul 15 05:11:45.001901 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 15 05:11:45.011527 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... 
Jul 15 05:11:45.092886 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 15 05:11:45.097527 systemd[1]: coreos-metadata.service: Deactivated successfully. Jul 15 05:11:45.102754 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jul 15 05:11:45.106027 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 15 05:11:46.745010 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 05:11:46.747175 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 15 05:11:46.749575 systemd[1]: Startup finished in 3.241s (kernel) + 6.527s (initrd) + 6.053s (userspace) = 15.822s. Jul 15 05:11:46.781129 (kubelet)[1668]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 15 05:11:47.511448 kubelet[1668]: E0715 05:11:47.511292 1668 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 15 05:11:47.515695 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 15 05:11:47.515953 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 15 05:11:47.516610 systemd[1]: kubelet.service: Consumed 2.207s CPU time, 265.4M memory peak. Jul 15 05:11:47.658793 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 15 05:11:47.660888 systemd[1]: Started sshd@0-10.0.0.51:22-10.0.0.1:39458.service - OpenSSH per-connection server daemon (10.0.0.1:39458). 
Jul 15 05:11:47.782891 sshd[1681]: Accepted publickey for core from 10.0.0.1 port 39458 ssh2: RSA SHA256:xQteBGu1K6SjT/ucc5Duk9MfMFesvWUUvdc6KRmollo Jul 15 05:11:47.785041 sshd-session[1681]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:11:47.792973 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 15 05:11:47.794339 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 15 05:11:47.802279 systemd-logind[1508]: New session 1 of user core. Jul 15 05:11:47.826941 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 15 05:11:47.830772 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 15 05:11:47.846108 (systemd)[1686]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 15 05:11:47.850125 systemd-logind[1508]: New session c1 of user core. Jul 15 05:11:48.071078 systemd[1686]: Queued start job for default target default.target. Jul 15 05:11:48.094699 systemd[1686]: Created slice app.slice - User Application Slice. Jul 15 05:11:48.094737 systemd[1686]: Reached target paths.target - Paths. Jul 15 05:11:48.094804 systemd[1686]: Reached target timers.target - Timers. Jul 15 05:11:48.097163 systemd[1686]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 15 05:11:48.113837 systemd[1686]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 15 05:11:48.114077 systemd[1686]: Reached target sockets.target - Sockets. Jul 15 05:11:48.114211 systemd[1686]: Reached target basic.target - Basic System. Jul 15 05:11:48.114299 systemd[1686]: Reached target default.target - Main User Target. Jul 15 05:11:48.114403 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 15 05:11:48.114430 systemd[1686]: Startup finished in 254ms. Jul 15 05:11:48.117851 systemd[1]: Started session-1.scope - Session 1 of User core. 
Jul 15 05:11:48.184813 systemd[1]: Started sshd@1-10.0.0.51:22-10.0.0.1:42174.service - OpenSSH per-connection server daemon (10.0.0.1:42174). Jul 15 05:11:48.256631 sshd[1697]: Accepted publickey for core from 10.0.0.1 port 42174 ssh2: RSA SHA256:xQteBGu1K6SjT/ucc5Duk9MfMFesvWUUvdc6KRmollo Jul 15 05:11:48.259253 sshd-session[1697]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:11:48.266283 systemd-logind[1508]: New session 2 of user core. Jul 15 05:11:48.276814 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 15 05:11:48.337064 sshd[1700]: Connection closed by 10.0.0.1 port 42174 Jul 15 05:11:48.337640 sshd-session[1697]: pam_unix(sshd:session): session closed for user core Jul 15 05:11:48.350515 systemd[1]: sshd@1-10.0.0.51:22-10.0.0.1:42174.service: Deactivated successfully. Jul 15 05:11:48.353903 systemd[1]: session-2.scope: Deactivated successfully. Jul 15 05:11:48.354912 systemd-logind[1508]: Session 2 logged out. Waiting for processes to exit. Jul 15 05:11:48.360070 systemd[1]: Started sshd@2-10.0.0.51:22-10.0.0.1:42186.service - OpenSSH per-connection server daemon (10.0.0.1:42186). Jul 15 05:11:48.361149 systemd-logind[1508]: Removed session 2. Jul 15 05:11:48.424954 sshd[1706]: Accepted publickey for core from 10.0.0.1 port 42186 ssh2: RSA SHA256:xQteBGu1K6SjT/ucc5Duk9MfMFesvWUUvdc6KRmollo Jul 15 05:11:48.427007 sshd-session[1706]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:11:48.433338 systemd-logind[1508]: New session 3 of user core. Jul 15 05:11:48.443593 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 15 05:11:48.495689 sshd[1709]: Connection closed by 10.0.0.1 port 42186 Jul 15 05:11:48.496154 sshd-session[1706]: pam_unix(sshd:session): session closed for user core Jul 15 05:11:48.510170 systemd[1]: sshd@2-10.0.0.51:22-10.0.0.1:42186.service: Deactivated successfully. 
Jul 15 05:11:48.512384 systemd[1]: session-3.scope: Deactivated successfully. Jul 15 05:11:48.513335 systemd-logind[1508]: Session 3 logged out. Waiting for processes to exit. Jul 15 05:11:48.516438 systemd[1]: Started sshd@3-10.0.0.51:22-10.0.0.1:42188.service - OpenSSH per-connection server daemon (10.0.0.1:42188). Jul 15 05:11:48.517216 systemd-logind[1508]: Removed session 3. Jul 15 05:11:48.586485 sshd[1715]: Accepted publickey for core from 10.0.0.1 port 42188 ssh2: RSA SHA256:xQteBGu1K6SjT/ucc5Duk9MfMFesvWUUvdc6KRmollo Jul 15 05:11:48.588777 sshd-session[1715]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:11:48.595343 systemd-logind[1508]: New session 4 of user core. Jul 15 05:11:48.603844 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 15 05:11:48.663689 sshd[1718]: Connection closed by 10.0.0.1 port 42188 Jul 15 05:11:48.664100 sshd-session[1715]: pam_unix(sshd:session): session closed for user core Jul 15 05:11:48.678790 systemd[1]: sshd@3-10.0.0.51:22-10.0.0.1:42188.service: Deactivated successfully. Jul 15 05:11:48.681081 systemd[1]: session-4.scope: Deactivated successfully. Jul 15 05:11:48.682180 systemd-logind[1508]: Session 4 logged out. Waiting for processes to exit. Jul 15 05:11:48.685507 systemd[1]: Started sshd@4-10.0.0.51:22-10.0.0.1:42190.service - OpenSSH per-connection server daemon (10.0.0.1:42190). Jul 15 05:11:48.687935 systemd-logind[1508]: Removed session 4. Jul 15 05:11:48.740247 sshd[1724]: Accepted publickey for core from 10.0.0.1 port 42190 ssh2: RSA SHA256:xQteBGu1K6SjT/ucc5Duk9MfMFesvWUUvdc6KRmollo Jul 15 05:11:48.741949 sshd-session[1724]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:11:48.746827 systemd-logind[1508]: New session 5 of user core. Jul 15 05:11:48.760672 systemd[1]: Started session-5.scope - Session 5 of User core. 
Jul 15 05:11:48.822194 sudo[1728]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 15 05:11:48.822684 sudo[1728]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 15 05:11:48.844891 sudo[1728]: pam_unix(sudo:session): session closed for user root Jul 15 05:11:48.846944 sshd[1727]: Connection closed by 10.0.0.1 port 42190 Jul 15 05:11:48.847344 sshd-session[1724]: pam_unix(sshd:session): session closed for user core Jul 15 05:11:48.861431 systemd[1]: sshd@4-10.0.0.51:22-10.0.0.1:42190.service: Deactivated successfully. Jul 15 05:11:48.863755 systemd[1]: session-5.scope: Deactivated successfully. Jul 15 05:11:48.864917 systemd-logind[1508]: Session 5 logged out. Waiting for processes to exit. Jul 15 05:11:48.868491 systemd[1]: Started sshd@5-10.0.0.51:22-10.0.0.1:42206.service - OpenSSH per-connection server daemon (10.0.0.1:42206). Jul 15 05:11:48.869680 systemd-logind[1508]: Removed session 5. Jul 15 05:11:48.940756 sshd[1734]: Accepted publickey for core from 10.0.0.1 port 42206 ssh2: RSA SHA256:xQteBGu1K6SjT/ucc5Duk9MfMFesvWUUvdc6KRmollo Jul 15 05:11:48.943439 sshd-session[1734]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:11:48.950005 systemd-logind[1508]: New session 6 of user core. Jul 15 05:11:48.961751 systemd[1]: Started session-6.scope - Session 6 of User core. 
Jul 15 05:11:49.020150 sudo[1739]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 15 05:11:49.020593 sudo[1739]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 15 05:11:49.196780 sudo[1739]: pam_unix(sudo:session): session closed for user root Jul 15 05:11:49.205101 sudo[1738]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jul 15 05:11:49.205537 sudo[1738]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 15 05:11:49.217040 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 15 05:11:49.278153 augenrules[1761]: No rules Jul 15 05:11:49.280226 systemd[1]: audit-rules.service: Deactivated successfully. Jul 15 05:11:49.280599 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 15 05:11:49.282115 sudo[1738]: pam_unix(sudo:session): session closed for user root Jul 15 05:11:49.284143 sshd[1737]: Connection closed by 10.0.0.1 port 42206 Jul 15 05:11:49.284633 sshd-session[1734]: pam_unix(sshd:session): session closed for user core Jul 15 05:11:49.294709 systemd[1]: sshd@5-10.0.0.51:22-10.0.0.1:42206.service: Deactivated successfully. Jul 15 05:11:49.296976 systemd[1]: session-6.scope: Deactivated successfully. Jul 15 05:11:49.297922 systemd-logind[1508]: Session 6 logged out. Waiting for processes to exit. Jul 15 05:11:49.300960 systemd[1]: Started sshd@6-10.0.0.51:22-10.0.0.1:42222.service - OpenSSH per-connection server daemon (10.0.0.1:42222). Jul 15 05:11:49.302189 systemd-logind[1508]: Removed session 6. Jul 15 05:11:49.363563 sshd[1770]: Accepted publickey for core from 10.0.0.1 port 42222 ssh2: RSA SHA256:xQteBGu1K6SjT/ucc5Duk9MfMFesvWUUvdc6KRmollo Jul 15 05:11:49.365821 sshd-session[1770]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:11:49.372589 systemd-logind[1508]: New session 7 of user core. 
Jul 15 05:11:49.382671 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 15 05:11:49.441176 sudo[1774]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 15 05:11:49.441851 sudo[1774]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 15 05:11:49.789112 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 15 05:11:49.810896 (dockerd)[1794]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 15 05:11:50.066285 dockerd[1794]: time="2025-07-15T05:11:50.066089438Z" level=info msg="Starting up" Jul 15 05:11:50.067076 dockerd[1794]: time="2025-07-15T05:11:50.067041743Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jul 15 05:11:50.085706 dockerd[1794]: time="2025-07-15T05:11:50.085621433Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jul 15 05:11:51.627773 dockerd[1794]: time="2025-07-15T05:11:51.627682918Z" level=info msg="Loading containers: start." Jul 15 05:11:51.664476 kernel: Initializing XFRM netlink socket Jul 15 05:11:53.044240 systemd-networkd[1492]: docker0: Link UP Jul 15 05:11:53.278635 dockerd[1794]: time="2025-07-15T05:11:53.278521711Z" level=info msg="Loading containers: done." Jul 15 05:11:53.296638 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3521798025-merged.mount: Deactivated successfully. 
Jul 15 05:11:53.340010 dockerd[1794]: time="2025-07-15T05:11:53.339924730Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 15 05:11:53.340273 dockerd[1794]: time="2025-07-15T05:11:53.340185939Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jul 15 05:11:53.340685 dockerd[1794]: time="2025-07-15T05:11:53.340594215Z" level=info msg="Initializing buildkit" Jul 15 05:11:53.934825 dockerd[1794]: time="2025-07-15T05:11:53.934739735Z" level=info msg="Completed buildkit initialization" Jul 15 05:11:53.943168 dockerd[1794]: time="2025-07-15T05:11:53.943099577Z" level=info msg="Daemon has completed initialization" Jul 15 05:11:53.943322 dockerd[1794]: time="2025-07-15T05:11:53.943172985Z" level=info msg="API listen on /run/docker.sock" Jul 15 05:11:53.943590 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 15 05:11:54.944760 containerd[1563]: time="2025-07-15T05:11:54.944713115Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\"" Jul 15 05:11:55.906176 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1257399219.mount: Deactivated successfully. 
Jul 15 05:11:56.873922 containerd[1563]: time="2025-07-15T05:11:56.873830761Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:11:56.874756 containerd[1563]: time="2025-07-15T05:11:56.874702616Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.6: active requests=0, bytes read=28799045" Jul 15 05:11:56.876167 containerd[1563]: time="2025-07-15T05:11:56.876037598Z" level=info msg="ImageCreate event name:\"sha256:8c5b95b1b5cb4a908fcbbbe81697c57019f9e9d89bfb5e0355235d440b7a6aa9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:11:56.882387 containerd[1563]: time="2025-07-15T05:11:56.882330034Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:11:56.883303 containerd[1563]: time="2025-07-15T05:11:56.883260930Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.6\" with image id \"sha256:8c5b95b1b5cb4a908fcbbbe81697c57019f9e9d89bfb5e0355235d440b7a6aa9\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\", size \"28795845\" in 1.938501237s" Jul 15 05:11:56.883361 containerd[1563]: time="2025-07-15T05:11:56.883302457Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\" returns image reference \"sha256:8c5b95b1b5cb4a908fcbbbe81697c57019f9e9d89bfb5e0355235d440b7a6aa9\"" Jul 15 05:11:56.884114 containerd[1563]: time="2025-07-15T05:11:56.884071549Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\"" Jul 15 05:11:57.612820 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. 
Jul 15 05:11:57.614775 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 15 05:11:57.976859 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 05:11:57.997902 (kubelet)[2075]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 15 05:11:58.133392 kubelet[2075]: E0715 05:11:58.133194 2075 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 15 05:11:58.141079 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 15 05:11:58.141571 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 15 05:11:58.142290 systemd[1]: kubelet.service: Consumed 289ms CPU time, 110.8M memory peak. 
Jul 15 05:11:58.489062 containerd[1563]: time="2025-07-15T05:11:58.488973677Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:11:58.489840 containerd[1563]: time="2025-07-15T05:11:58.489782985Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.6: active requests=0, bytes read=24783912" Jul 15 05:11:58.491069 containerd[1563]: time="2025-07-15T05:11:58.491022999Z" level=info msg="ImageCreate event name:\"sha256:77d0e7de0c6b41e2331c3997698c3f917527cf7bbe462f5c813f514e788436de\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:11:58.493502 containerd[1563]: time="2025-07-15T05:11:58.493464937Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:11:58.494288 containerd[1563]: time="2025-07-15T05:11:58.494253425Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.6\" with image id \"sha256:77d0e7de0c6b41e2331c3997698c3f917527cf7bbe462f5c813f514e788436de\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\", size \"26385746\" in 1.610149736s" Jul 15 05:11:58.494341 containerd[1563]: time="2025-07-15T05:11:58.494288681Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\" returns image reference \"sha256:77d0e7de0c6b41e2331c3997698c3f917527cf7bbe462f5c813f514e788436de\"" Jul 15 05:11:58.495065 containerd[1563]: time="2025-07-15T05:11:58.495033427Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\"" Jul 15 05:12:00.694587 containerd[1563]: time="2025-07-15T05:12:00.694490354Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/kube-scheduler:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:12:00.720678 containerd[1563]: time="2025-07-15T05:12:00.720462953Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.6: active requests=0, bytes read=19176916" Jul 15 05:12:00.733899 containerd[1563]: time="2025-07-15T05:12:00.733827186Z" level=info msg="ImageCreate event name:\"sha256:b34d1cd163151c2491919f315274d85bff904721213f2b19341b403a28a39ae2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:12:00.739175 containerd[1563]: time="2025-07-15T05:12:00.739110631Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:12:00.740294 containerd[1563]: time="2025-07-15T05:12:00.740259154Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.6\" with image id \"sha256:b34d1cd163151c2491919f315274d85bff904721213f2b19341b403a28a39ae2\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\", size \"20778768\" in 2.245194157s" Jul 15 05:12:00.740358 containerd[1563]: time="2025-07-15T05:12:00.740316131Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\" returns image reference \"sha256:b34d1cd163151c2491919f315274d85bff904721213f2b19341b403a28a39ae2\"" Jul 15 05:12:00.741060 containerd[1563]: time="2025-07-15T05:12:00.740897621Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\"" Jul 15 05:12:01.986088 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount61433636.mount: Deactivated successfully. 
Jul 15 05:12:02.676310 containerd[1563]: time="2025-07-15T05:12:02.676215591Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:12:02.677473 containerd[1563]: time="2025-07-15T05:12:02.677432842Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.6: active requests=0, bytes read=30895363" Jul 15 05:12:02.679517 containerd[1563]: time="2025-07-15T05:12:02.679458911Z" level=info msg="ImageCreate event name:\"sha256:63f0cbe3b7339c5d006efc9964228e48271bae73039320037c451b5e8f763e02\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:12:02.682049 containerd[1563]: time="2025-07-15T05:12:02.681993422Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:12:02.682774 containerd[1563]: time="2025-07-15T05:12:02.682720555Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.6\" with image id \"sha256:63f0cbe3b7339c5d006efc9964228e48271bae73039320037c451b5e8f763e02\", repo tag \"registry.k8s.io/kube-proxy:v1.32.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\", size \"30894382\" in 1.941787488s" Jul 15 05:12:02.682774 containerd[1563]: time="2025-07-15T05:12:02.682757444Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\" returns image reference \"sha256:63f0cbe3b7339c5d006efc9964228e48271bae73039320037c451b5e8f763e02\"" Jul 15 05:12:02.683253 containerd[1563]: time="2025-07-15T05:12:02.683227435Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jul 15 05:12:03.548001 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount238306991.mount: Deactivated successfully. 
Jul 15 05:12:05.095744 containerd[1563]: time="2025-07-15T05:12:05.095609220Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:12:05.097674 containerd[1563]: time="2025-07-15T05:12:05.097554957Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Jul 15 05:12:05.099273 containerd[1563]: time="2025-07-15T05:12:05.099220860Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:12:05.102557 containerd[1563]: time="2025-07-15T05:12:05.102499957Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:12:05.103855 containerd[1563]: time="2025-07-15T05:12:05.103774677Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 2.420518608s" Jul 15 05:12:05.103855 containerd[1563]: time="2025-07-15T05:12:05.103830632Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jul 15 05:12:05.104681 containerd[1563]: time="2025-07-15T05:12:05.104638937Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 15 05:12:05.634138 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1288897421.mount: Deactivated successfully. 
Jul 15 05:12:05.639942 containerd[1563]: time="2025-07-15T05:12:05.639876316Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 15 05:12:05.640626 containerd[1563]: time="2025-07-15T05:12:05.640571599Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jul 15 05:12:05.642090 containerd[1563]: time="2025-07-15T05:12:05.642033139Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 15 05:12:05.644821 containerd[1563]: time="2025-07-15T05:12:05.644759701Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 15 05:12:05.645681 containerd[1563]: time="2025-07-15T05:12:05.645639760Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 540.969134ms" Jul 15 05:12:05.645681 containerd[1563]: time="2025-07-15T05:12:05.645670778Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jul 15 05:12:05.646651 containerd[1563]: time="2025-07-15T05:12:05.646511104Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jul 15 05:12:06.198902 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2957499997.mount: 
Deactivated successfully. Jul 15 05:12:08.362926 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 15 05:12:08.364897 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 15 05:12:08.641468 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 05:12:08.665941 (kubelet)[2220]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 15 05:12:08.740219 containerd[1563]: time="2025-07-15T05:12:08.740136336Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:12:08.741133 containerd[1563]: time="2025-07-15T05:12:08.741109481Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551360" Jul 15 05:12:08.743257 containerd[1563]: time="2025-07-15T05:12:08.742886682Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:12:08.746253 containerd[1563]: time="2025-07-15T05:12:08.746207998Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:12:08.747654 containerd[1563]: time="2025-07-15T05:12:08.747617541Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 3.101069167s" Jul 15 05:12:08.747654 containerd[1563]: time="2025-07-15T05:12:08.747645794Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image 
reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Jul 15 05:12:08.763066 kubelet[2220]: E0715 05:12:08.762936 2220 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 15 05:12:08.767657 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 15 05:12:08.767874 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 15 05:12:08.768491 systemd[1]: kubelet.service: Consumed 276ms CPU time, 110M memory peak. Jul 15 05:12:11.737159 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 05:12:11.737453 systemd[1]: kubelet.service: Consumed 276ms CPU time, 110M memory peak. Jul 15 05:12:11.741025 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 15 05:12:11.780558 systemd[1]: Reload requested from client PID 2257 ('systemctl') (unit session-7.scope)... Jul 15 05:12:11.780580 systemd[1]: Reloading... Jul 15 05:12:11.904455 zram_generator::config[2309]: No configuration found. Jul 15 05:12:12.352358 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 15 05:12:12.490959 systemd[1]: Reloading finished in 709 ms. Jul 15 05:12:12.573764 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 15 05:12:12.573936 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 15 05:12:12.574716 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 05:12:12.574793 systemd[1]: kubelet.service: Consumed 189ms CPU time, 98.2M memory peak. 
Jul 15 05:12:12.577556 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 15 05:12:12.802055 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 05:12:12.810849 (kubelet)[2348]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 15 05:12:12.862974 kubelet[2348]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 15 05:12:12.862974 kubelet[2348]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 15 05:12:12.862974 kubelet[2348]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 15 05:12:12.863471 kubelet[2348]: I0715 05:12:12.863217 2348 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 15 05:12:14.053994 kubelet[2348]: I0715 05:12:14.053907 2348 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jul 15 05:12:14.053994 kubelet[2348]: I0715 05:12:14.053963 2348 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 15 05:12:14.054472 kubelet[2348]: I0715 05:12:14.054375 2348 server.go:954] "Client rotation is on, will bootstrap in background" Jul 15 05:12:14.125962 kubelet[2348]: E0715 05:12:14.125905 2348 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.51:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.51:6443: connect: connection refused" logger="UnhandledError" Jul 15 05:12:14.128238 kubelet[2348]: I0715 05:12:14.128185 2348 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 15 05:12:14.359649 kubelet[2348]: I0715 05:12:14.359495 2348 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 15 05:12:14.365769 kubelet[2348]: I0715 05:12:14.365707 2348 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 15 05:12:14.505588 kubelet[2348]: I0715 05:12:14.505442 2348 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 15 05:12:14.505835 kubelet[2348]: I0715 05:12:14.505556 2348 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 15 05:12:14.505972 kubelet[2348]: I0715 05:12:14.505848 2348 topology_manager.go:138] "Creating topology manager with none policy" 
Jul 15 05:12:14.505972 kubelet[2348]: I0715 05:12:14.505867 2348 container_manager_linux.go:304] "Creating device plugin manager" Jul 15 05:12:14.506157 kubelet[2348]: I0715 05:12:14.506127 2348 state_mem.go:36] "Initialized new in-memory state store" Jul 15 05:12:14.537632 kubelet[2348]: I0715 05:12:14.537558 2348 kubelet.go:446] "Attempting to sync node with API server" Jul 15 05:12:14.537632 kubelet[2348]: I0715 05:12:14.537623 2348 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 15 05:12:14.537833 kubelet[2348]: I0715 05:12:14.537670 2348 kubelet.go:352] "Adding apiserver pod source" Jul 15 05:12:14.537833 kubelet[2348]: I0715 05:12:14.537697 2348 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 15 05:12:14.542043 kubelet[2348]: I0715 05:12:14.542004 2348 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Jul 15 05:12:14.542594 kubelet[2348]: I0715 05:12:14.542499 2348 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 15 05:12:14.543918 kubelet[2348]: W0715 05:12:14.543517 2348 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Jul 15 05:12:14.543918 kubelet[2348]: W0715 05:12:14.543788 2348 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.51:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.51:6443: connect: connection refused Jul 15 05:12:14.543918 kubelet[2348]: E0715 05:12:14.543846 2348 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.51:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.51:6443: connect: connection refused" logger="UnhandledError" Jul 15 05:12:14.544241 kubelet[2348]: W0715 05:12:14.544186 2348 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.51:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.51:6443: connect: connection refused Jul 15 05:12:14.544296 kubelet[2348]: E0715 05:12:14.544250 2348 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.51:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.51:6443: connect: connection refused" logger="UnhandledError" Jul 15 05:12:14.546272 kubelet[2348]: I0715 05:12:14.546217 2348 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 15 05:12:14.546272 kubelet[2348]: I0715 05:12:14.546260 2348 server.go:1287] "Started kubelet" Jul 15 05:12:14.546593 kubelet[2348]: I0715 05:12:14.546554 2348 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jul 15 05:12:14.547045 kubelet[2348]: I0715 05:12:14.546960 2348 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 15 05:12:14.547723 kubelet[2348]: I0715 05:12:14.547358 
2348 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 15 05:12:14.547723 kubelet[2348]: I0715 05:12:14.547649 2348 server.go:479] "Adding debug handlers to kubelet server" Jul 15 05:12:14.549649 kubelet[2348]: I0715 05:12:14.549492 2348 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 15 05:12:14.550217 kubelet[2348]: I0715 05:12:14.550191 2348 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 15 05:12:14.554375 kubelet[2348]: E0715 05:12:14.553316 2348 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 15 05:12:14.554375 kubelet[2348]: E0715 05:12:14.553552 2348 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 05:12:14.554375 kubelet[2348]: I0715 05:12:14.553580 2348 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 15 05:12:14.554375 kubelet[2348]: I0715 05:12:14.553788 2348 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 15 05:12:14.554375 kubelet[2348]: I0715 05:12:14.553847 2348 reconciler.go:26] "Reconciler: start to sync state" Jul 15 05:12:14.554375 kubelet[2348]: E0715 05:12:14.549709 2348 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.51:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.51:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.185254b3697d3042 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-15 05:12:14.546235458 +0000 
UTC m=+1.730414800,LastTimestamp:2025-07-15 05:12:14.546235458 +0000 UTC m=+1.730414800,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 15 05:12:14.554375 kubelet[2348]: I0715 05:12:14.554273 2348 factory.go:221] Registration of the systemd container factory successfully Jul 15 05:12:14.554375 kubelet[2348]: W0715 05:12:14.554282 2348 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.51:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.51:6443: connect: connection refused Jul 15 05:12:14.554804 kubelet[2348]: E0715 05:12:14.554329 2348 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.51:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.51:6443: connect: connection refused" logger="UnhandledError" Jul 15 05:12:14.554804 kubelet[2348]: I0715 05:12:14.554349 2348 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 15 05:12:14.554804 kubelet[2348]: E0715 05:12:14.554395 2348 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.51:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.51:6443: connect: connection refused" interval="200ms" Jul 15 05:12:14.555526 kubelet[2348]: I0715 05:12:14.555483 2348 factory.go:221] Registration of the containerd container factory successfully Jul 15 05:12:14.568084 kubelet[2348]: I0715 05:12:14.568033 2348 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 15 05:12:14.568084 kubelet[2348]: I0715 05:12:14.568057 2348 cpu_manager.go:222] "Reconciling" 
reconcilePeriod="10s" Jul 15 05:12:14.568084 kubelet[2348]: I0715 05:12:14.568078 2348 state_mem.go:36] "Initialized new in-memory state store" Jul 15 05:12:14.654116 kubelet[2348]: E0715 05:12:14.654044 2348 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 05:12:14.754602 kubelet[2348]: E0715 05:12:14.754495 2348 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 05:12:14.755041 kubelet[2348]: E0715 05:12:14.755002 2348 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.51:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.51:6443: connect: connection refused" interval="400ms" Jul 15 05:12:14.855300 kubelet[2348]: E0715 05:12:14.855211 2348 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 05:12:14.869347 kubelet[2348]: I0715 05:12:14.869288 2348 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 15 05:12:14.870776 kubelet[2348]: I0715 05:12:14.870733 2348 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 15 05:12:14.871232 kubelet[2348]: I0715 05:12:14.870793 2348 status_manager.go:227] "Starting to sync pod status with apiserver" Jul 15 05:12:14.871232 kubelet[2348]: I0715 05:12:14.870822 2348 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jul 15 05:12:14.871232 kubelet[2348]: I0715 05:12:14.870837 2348 kubelet.go:2382] "Starting kubelet main sync loop" Jul 15 05:12:14.871232 kubelet[2348]: E0715 05:12:14.870901 2348 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 15 05:12:14.871973 kubelet[2348]: W0715 05:12:14.871946 2348 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.51:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.51:6443: connect: connection refused Jul 15 05:12:14.871973 kubelet[2348]: E0715 05:12:14.871979 2348 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.51:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.51:6443: connect: connection refused" logger="UnhandledError" Jul 15 05:12:14.956118 kubelet[2348]: E0715 05:12:14.955953 2348 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 05:12:14.971207 kubelet[2348]: E0715 05:12:14.971152 2348 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 15 05:12:15.056154 kubelet[2348]: E0715 05:12:15.056076 2348 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 05:12:15.156209 kubelet[2348]: E0715 05:12:15.156156 2348 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 05:12:15.156468 kubelet[2348]: E0715 05:12:15.156383 2348 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.51:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 
10.0.0.51:6443: connect: connection refused" interval="800ms" Jul 15 05:12:15.171439 kubelet[2348]: E0715 05:12:15.171354 2348 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 15 05:12:15.257041 kubelet[2348]: E0715 05:12:15.256886 2348 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 05:12:15.333177 kubelet[2348]: I0715 05:12:15.332699 2348 policy_none.go:49] "None policy: Start" Jul 15 05:12:15.333177 kubelet[2348]: I0715 05:12:15.332749 2348 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 15 05:12:15.333177 kubelet[2348]: I0715 05:12:15.332771 2348 state_mem.go:35] "Initializing new in-memory state store" Jul 15 05:12:15.357785 kubelet[2348]: E0715 05:12:15.357705 2348 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 05:12:15.458581 kubelet[2348]: E0715 05:12:15.458504 2348 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 05:12:15.559217 kubelet[2348]: E0715 05:12:15.559033 2348 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 05:12:15.572288 kubelet[2348]: E0715 05:12:15.572200 2348 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 15 05:12:15.660054 kubelet[2348]: E0715 05:12:15.659944 2348 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 05:12:15.760594 kubelet[2348]: E0715 05:12:15.760517 2348 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 05:12:15.861141 kubelet[2348]: E0715 05:12:15.861001 2348 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 
05:12:15.873731 kubelet[2348]: W0715 05:12:15.873648 2348 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.51:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.51:6443: connect: connection refused Jul 15 05:12:15.873731 kubelet[2348]: E0715 05:12:15.873715 2348 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.51:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.51:6443: connect: connection refused" logger="UnhandledError" Jul 15 05:12:15.957198 kubelet[2348]: E0715 05:12:15.957109 2348 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.51:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.51:6443: connect: connection refused" interval="1.6s" Jul 15 05:12:15.961156 kubelet[2348]: E0715 05:12:15.961100 2348 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 05:12:16.031160 kubelet[2348]: W0715 05:12:16.031062 2348 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.51:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.51:6443: connect: connection refused Jul 15 05:12:16.031160 kubelet[2348]: E0715 05:12:16.031148 2348 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.51:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.51:6443: connect: connection refused" logger="UnhandledError" Jul 15 05:12:16.061369 kubelet[2348]: E0715 05:12:16.061270 2348 
kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 05:12:16.092062 kubelet[2348]: W0715 05:12:16.091981 2348 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.51:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.51:6443: connect: connection refused Jul 15 05:12:16.092062 kubelet[2348]: E0715 05:12:16.092062 2348 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.51:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.51:6443: connect: connection refused" logger="UnhandledError" Jul 15 05:12:16.162003 kubelet[2348]: E0715 05:12:16.161939 2348 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 05:12:16.199634 kubelet[2348]: E0715 05:12:16.199554 2348 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.51:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.51:6443: connect: connection refused" logger="UnhandledError" Jul 15 05:12:16.262930 kubelet[2348]: E0715 05:12:16.262868 2348 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 05:12:16.319801 kubelet[2348]: W0715 05:12:16.319728 2348 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.51:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.51:6443: connect: connection refused Jul 15 05:12:16.319801 kubelet[2348]: E0715 05:12:16.319785 2348 reflector.go:166] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.51:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.51:6443: connect: connection refused" logger="UnhandledError" Jul 15 05:12:16.363551 kubelet[2348]: E0715 05:12:16.363467 2348 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 05:12:16.372752 kubelet[2348]: E0715 05:12:16.372689 2348 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 15 05:12:16.454150 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 15 05:12:16.464144 kubelet[2348]: E0715 05:12:16.464061 2348 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 05:12:16.471473 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 15 05:12:16.476025 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jul 15 05:12:16.495120 kubelet[2348]: I0715 05:12:16.495081 2348 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 15 05:12:16.495609 kubelet[2348]: I0715 05:12:16.495589 2348 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 15 05:12:16.495718 kubelet[2348]: I0715 05:12:16.495677 2348 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 15 05:12:16.496086 kubelet[2348]: I0715 05:12:16.496068 2348 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 15 05:12:16.497129 kubelet[2348]: E0715 05:12:16.497097 2348 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jul 15 05:12:16.497185 kubelet[2348]: E0715 05:12:16.497141 2348 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jul 15 05:12:16.598418 kubelet[2348]: I0715 05:12:16.598356 2348 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 15 05:12:16.598905 kubelet[2348]: E0715 05:12:16.598853 2348 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.51:6443/api/v1/nodes\": dial tcp 10.0.0.51:6443: connect: connection refused" node="localhost" Jul 15 05:12:16.800699 kubelet[2348]: I0715 05:12:16.800557 2348 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 15 05:12:16.800980 kubelet[2348]: E0715 05:12:16.800950 2348 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.51:6443/api/v1/nodes\": dial tcp 10.0.0.51:6443: connect: connection refused" node="localhost" Jul 15 05:12:17.203041 kubelet[2348]: I0715 05:12:17.202993 2348 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 15 05:12:17.203594 kubelet[2348]: E0715 05:12:17.203360 2348 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.51:6443/api/v1/nodes\": dial tcp 10.0.0.51:6443: connect: connection refused" node="localhost" Jul 15 05:12:17.272623 kubelet[2348]: E0715 05:12:17.272455 2348 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.51:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.51:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.185254b3697d3042 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting 
kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-15 05:12:14.546235458 +0000 UTC m=+1.730414800,LastTimestamp:2025-07-15 05:12:14.546235458 +0000 UTC m=+1.730414800,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 15 05:12:17.559404 kubelet[2348]: E0715 05:12:17.559207 2348 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.51:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.51:6443: connect: connection refused" interval="3.2s" Jul 15 05:12:17.970443 kubelet[2348]: W0715 05:12:17.970324 2348 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.51:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.51:6443: connect: connection refused Jul 15 05:12:17.970443 kubelet[2348]: E0715 05:12:17.970391 2348 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.51:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.51:6443: connect: connection refused" logger="UnhandledError" Jul 15 05:12:17.985505 systemd[1]: Created slice kubepods-burstable-poda7d369169cd52faf0e66891a6e14b81d.slice - libcontainer container kubepods-burstable-poda7d369169cd52faf0e66891a6e14b81d.slice. 
Jul 15 05:12:18.005621 kubelet[2348]: I0715 05:12:18.005398 2348 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 15 05:12:18.005950 kubelet[2348]: E0715 05:12:18.005900 2348 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.51:6443/api/v1/nodes\": dial tcp 10.0.0.51:6443: connect: connection refused" node="localhost" Jul 15 05:12:18.028093 kubelet[2348]: E0715 05:12:18.028027 2348 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 15 05:12:18.031650 systemd[1]: Created slice kubepods-burstable-podd1af03769b64da1b1e8089a7035018fc.slice - libcontainer container kubepods-burstable-podd1af03769b64da1b1e8089a7035018fc.slice. Jul 15 05:12:18.041920 kubelet[2348]: E0715 05:12:18.041852 2348 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 15 05:12:18.045284 systemd[1]: Created slice kubepods-burstable-pod8a75e163f27396b2168da0f88f85f8a5.slice - libcontainer container kubepods-burstable-pod8a75e163f27396b2168da0f88f85f8a5.slice. 
Jul 15 05:12:18.047476 kubelet[2348]: E0715 05:12:18.047428 2348 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 15 05:12:18.064334 kubelet[2348]: W0715 05:12:18.064245 2348 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.51:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.51:6443: connect: connection refused Jul 15 05:12:18.064334 kubelet[2348]: E0715 05:12:18.064329 2348 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.51:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.51:6443: connect: connection refused" logger="UnhandledError" Jul 15 05:12:18.073947 kubelet[2348]: I0715 05:12:18.073884 2348 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8a75e163f27396b2168da0f88f85f8a5-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"8a75e163f27396b2168da0f88f85f8a5\") " pod="kube-system/kube-scheduler-localhost" Jul 15 05:12:18.073947 kubelet[2348]: I0715 05:12:18.073933 2348 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a7d369169cd52faf0e66891a6e14b81d-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"a7d369169cd52faf0e66891a6e14b81d\") " pod="kube-system/kube-apiserver-localhost" Jul 15 05:12:18.073947 kubelet[2348]: I0715 05:12:18.073953 2348 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a7d369169cd52faf0e66891a6e14b81d-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: 
\"a7d369169cd52faf0e66891a6e14b81d\") " pod="kube-system/kube-apiserver-localhost" Jul 15 05:12:18.073947 kubelet[2348]: I0715 05:12:18.073970 2348 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 05:12:18.074261 kubelet[2348]: I0715 05:12:18.073988 2348 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 05:12:18.074261 kubelet[2348]: I0715 05:12:18.074006 2348 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 05:12:18.074261 kubelet[2348]: I0715 05:12:18.074030 2348 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a7d369169cd52faf0e66891a6e14b81d-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"a7d369169cd52faf0e66891a6e14b81d\") " pod="kube-system/kube-apiserver-localhost" Jul 15 05:12:18.074261 kubelet[2348]: I0715 05:12:18.074046 2348 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-ca-certs\") pod 
\"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 05:12:18.074261 kubelet[2348]: I0715 05:12:18.074061 2348 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 05:12:18.314210 kubelet[2348]: W0715 05:12:18.314063 2348 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.51:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.51:6443: connect: connection refused Jul 15 05:12:18.314210 kubelet[2348]: E0715 05:12:18.314127 2348 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.51:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.51:6443: connect: connection refused" logger="UnhandledError" Jul 15 05:12:18.329496 containerd[1563]: time="2025-07-15T05:12:18.329400298Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:a7d369169cd52faf0e66891a6e14b81d,Namespace:kube-system,Attempt:0,}" Jul 15 05:12:18.343342 containerd[1563]: time="2025-07-15T05:12:18.343279334Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d1af03769b64da1b1e8089a7035018fc,Namespace:kube-system,Attempt:0,}" Jul 15 05:12:18.349404 containerd[1563]: time="2025-07-15T05:12:18.349337429Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8a75e163f27396b2168da0f88f85f8a5,Namespace:kube-system,Attempt:0,}" Jul 15 05:12:18.371202 containerd[1563]: 
time="2025-07-15T05:12:18.371140072Z" level=info msg="connecting to shim c0945d054c9e5c5ce166c231e5a87825cd42c566d5c1d4236a32e58aa1db4fdb" address="unix:///run/containerd/s/fc915ac916850211e5a3a8f50ab95583695528c66c4b6fbb7e9acc6b9dbcc5a7" namespace=k8s.io protocol=ttrpc version=3 Jul 15 05:12:18.398664 containerd[1563]: time="2025-07-15T05:12:18.398597407Z" level=info msg="connecting to shim 3122c8e38a03350fc12d1e4429927f9087fcf185dfb83f2d9c34dc3a36da3cfe" address="unix:///run/containerd/s/2c46bf95f99307695fd436a40e155a1f8dc89f99eadf4be5f41e82e9431116b9" namespace=k8s.io protocol=ttrpc version=3 Jul 15 05:12:18.400757 containerd[1563]: time="2025-07-15T05:12:18.400023587Z" level=info msg="connecting to shim 7f8371056bde220515533c2d249a2487bf5ffec0ac466931955d5fb08c5c4138" address="unix:///run/containerd/s/1fd56b1832bfdd37f2ce3b0c049799befe1c78e8b6a5c05c290e8b471bd1ab17" namespace=k8s.io protocol=ttrpc version=3 Jul 15 05:12:18.406756 systemd[1]: Started cri-containerd-c0945d054c9e5c5ce166c231e5a87825cd42c566d5c1d4236a32e58aa1db4fdb.scope - libcontainer container c0945d054c9e5c5ce166c231e5a87825cd42c566d5c1d4236a32e58aa1db4fdb. Jul 15 05:12:18.437592 systemd[1]: Started cri-containerd-3122c8e38a03350fc12d1e4429927f9087fcf185dfb83f2d9c34dc3a36da3cfe.scope - libcontainer container 3122c8e38a03350fc12d1e4429927f9087fcf185dfb83f2d9c34dc3a36da3cfe. Jul 15 05:12:18.439600 systemd[1]: Started cri-containerd-7f8371056bde220515533c2d249a2487bf5ffec0ac466931955d5fb08c5c4138.scope - libcontainer container 7f8371056bde220515533c2d249a2487bf5ffec0ac466931955d5fb08c5c4138. 
Jul 15 05:12:18.492693 containerd[1563]: time="2025-07-15T05:12:18.492628772Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:a7d369169cd52faf0e66891a6e14b81d,Namespace:kube-system,Attempt:0,} returns sandbox id \"c0945d054c9e5c5ce166c231e5a87825cd42c566d5c1d4236a32e58aa1db4fdb\"" Jul 15 05:12:18.496384 containerd[1563]: time="2025-07-15T05:12:18.496321761Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d1af03769b64da1b1e8089a7035018fc,Namespace:kube-system,Attempt:0,} returns sandbox id \"3122c8e38a03350fc12d1e4429927f9087fcf185dfb83f2d9c34dc3a36da3cfe\"" Jul 15 05:12:18.497630 containerd[1563]: time="2025-07-15T05:12:18.497597483Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8a75e163f27396b2168da0f88f85f8a5,Namespace:kube-system,Attempt:0,} returns sandbox id \"7f8371056bde220515533c2d249a2487bf5ffec0ac466931955d5fb08c5c4138\"" Jul 15 05:12:18.497782 containerd[1563]: time="2025-07-15T05:12:18.497746518Z" level=info msg="CreateContainer within sandbox \"c0945d054c9e5c5ce166c231e5a87825cd42c566d5c1d4236a32e58aa1db4fdb\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 15 05:12:18.500772 containerd[1563]: time="2025-07-15T05:12:18.500725721Z" level=info msg="CreateContainer within sandbox \"3122c8e38a03350fc12d1e4429927f9087fcf185dfb83f2d9c34dc3a36da3cfe\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 15 05:12:18.500931 containerd[1563]: time="2025-07-15T05:12:18.500906657Z" level=info msg="CreateContainer within sandbox \"7f8371056bde220515533c2d249a2487bf5ffec0ac466931955d5fb08c5c4138\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 15 05:12:18.510570 containerd[1563]: time="2025-07-15T05:12:18.510502624Z" level=info msg="Container 1277ae4f70abe44eb88bae3b85285e165aa37e8a97824f078a85ec4d0290b6b9: CDI devices from CRI Config.CDIDevices: []" Jul 15 
05:12:18.517952 containerd[1563]: time="2025-07-15T05:12:18.517880667Z" level=info msg="Container 6e7190d1cf3d2efb9551fc1f2e5532fbbb9a6a8b09949bd549193240c8dde483: CDI devices from CRI Config.CDIDevices: []" Jul 15 05:12:18.524232 containerd[1563]: time="2025-07-15T05:12:18.524153113Z" level=info msg="CreateContainer within sandbox \"c0945d054c9e5c5ce166c231e5a87825cd42c566d5c1d4236a32e58aa1db4fdb\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"1277ae4f70abe44eb88bae3b85285e165aa37e8a97824f078a85ec4d0290b6b9\"" Jul 15 05:12:18.525025 containerd[1563]: time="2025-07-15T05:12:18.524992309Z" level=info msg="StartContainer for \"1277ae4f70abe44eb88bae3b85285e165aa37e8a97824f078a85ec4d0290b6b9\"" Jul 15 05:12:18.527225 containerd[1563]: time="2025-07-15T05:12:18.527168515Z" level=info msg="connecting to shim 1277ae4f70abe44eb88bae3b85285e165aa37e8a97824f078a85ec4d0290b6b9" address="unix:///run/containerd/s/fc915ac916850211e5a3a8f50ab95583695528c66c4b6fbb7e9acc6b9dbcc5a7" protocol=ttrpc version=3 Jul 15 05:12:18.530376 containerd[1563]: time="2025-07-15T05:12:18.530155081Z" level=info msg="Container 3f8f362d016568c802edbcd1d165e7ba6548f6310b2756f0409d4c64f0318227: CDI devices from CRI Config.CDIDevices: []" Jul 15 05:12:18.537573 containerd[1563]: time="2025-07-15T05:12:18.537329884Z" level=info msg="CreateContainer within sandbox \"3122c8e38a03350fc12d1e4429927f9087fcf185dfb83f2d9c34dc3a36da3cfe\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"6e7190d1cf3d2efb9551fc1f2e5532fbbb9a6a8b09949bd549193240c8dde483\"" Jul 15 05:12:18.538326 containerd[1563]: time="2025-07-15T05:12:18.538250156Z" level=info msg="StartContainer for \"6e7190d1cf3d2efb9551fc1f2e5532fbbb9a6a8b09949bd549193240c8dde483\"" Jul 15 05:12:18.539832 containerd[1563]: time="2025-07-15T05:12:18.539804330Z" level=info msg="connecting to shim 6e7190d1cf3d2efb9551fc1f2e5532fbbb9a6a8b09949bd549193240c8dde483" 
address="unix:///run/containerd/s/2c46bf95f99307695fd436a40e155a1f8dc89f99eadf4be5f41e82e9431116b9" protocol=ttrpc version=3 Jul 15 05:12:18.545715 containerd[1563]: time="2025-07-15T05:12:18.545664317Z" level=info msg="CreateContainer within sandbox \"7f8371056bde220515533c2d249a2487bf5ffec0ac466931955d5fb08c5c4138\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"3f8f362d016568c802edbcd1d165e7ba6548f6310b2756f0409d4c64f0318227\"" Jul 15 05:12:18.547338 containerd[1563]: time="2025-07-15T05:12:18.547296140Z" level=info msg="StartContainer for \"3f8f362d016568c802edbcd1d165e7ba6548f6310b2756f0409d4c64f0318227\"" Jul 15 05:12:18.550915 containerd[1563]: time="2025-07-15T05:12:18.550865723Z" level=info msg="connecting to shim 3f8f362d016568c802edbcd1d165e7ba6548f6310b2756f0409d4c64f0318227" address="unix:///run/containerd/s/1fd56b1832bfdd37f2ce3b0c049799befe1c78e8b6a5c05c290e8b471bd1ab17" protocol=ttrpc version=3 Jul 15 05:12:18.551597 systemd[1]: Started cri-containerd-1277ae4f70abe44eb88bae3b85285e165aa37e8a97824f078a85ec4d0290b6b9.scope - libcontainer container 1277ae4f70abe44eb88bae3b85285e165aa37e8a97824f078a85ec4d0290b6b9. Jul 15 05:12:18.557759 systemd[1]: Started cri-containerd-6e7190d1cf3d2efb9551fc1f2e5532fbbb9a6a8b09949bd549193240c8dde483.scope - libcontainer container 6e7190d1cf3d2efb9551fc1f2e5532fbbb9a6a8b09949bd549193240c8dde483. Jul 15 05:12:18.583666 systemd[1]: Started cri-containerd-3f8f362d016568c802edbcd1d165e7ba6548f6310b2756f0409d4c64f0318227.scope - libcontainer container 3f8f362d016568c802edbcd1d165e7ba6548f6310b2756f0409d4c64f0318227. 
Jul 15 05:12:18.623487 containerd[1563]: time="2025-07-15T05:12:18.623439466Z" level=info msg="StartContainer for \"1277ae4f70abe44eb88bae3b85285e165aa37e8a97824f078a85ec4d0290b6b9\" returns successfully" Jul 15 05:12:18.640088 containerd[1563]: time="2025-07-15T05:12:18.640012899Z" level=info msg="StartContainer for \"6e7190d1cf3d2efb9551fc1f2e5532fbbb9a6a8b09949bd549193240c8dde483\" returns successfully" Jul 15 05:12:18.657445 kubelet[2348]: W0715 05:12:18.657318 2348 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.51:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.51:6443: connect: connection refused Jul 15 05:12:18.657445 kubelet[2348]: E0715 05:12:18.657398 2348 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.51:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.51:6443: connect: connection refused" logger="UnhandledError" Jul 15 05:12:18.737667 containerd[1563]: time="2025-07-15T05:12:18.737593307Z" level=info msg="StartContainer for \"3f8f362d016568c802edbcd1d165e7ba6548f6310b2756f0409d4c64f0318227\" returns successfully" Jul 15 05:12:18.887101 kubelet[2348]: E0715 05:12:18.886644 2348 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 15 05:12:18.890849 kubelet[2348]: E0715 05:12:18.890678 2348 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 15 05:12:18.891781 kubelet[2348]: E0715 05:12:18.891760 2348 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 15 05:12:19.608020 kubelet[2348]: I0715 
05:12:19.607961 2348 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 15 05:12:19.894525 kubelet[2348]: E0715 05:12:19.894467 2348 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 15 05:12:19.900050 kubelet[2348]: E0715 05:12:19.899998 2348 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 15 05:12:19.956704 kubelet[2348]: I0715 05:12:19.956643 2348 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jul 15 05:12:19.956704 kubelet[2348]: E0715 05:12:19.956705 2348 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jul 15 05:12:19.967941 kubelet[2348]: E0715 05:12:19.967873 2348 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 05:12:20.021752 kubelet[2348]: E0715 05:12:20.021670 2348 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 15 05:12:20.068911 kubelet[2348]: E0715 05:12:20.068804 2348 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 05:12:20.169360 kubelet[2348]: E0715 05:12:20.169120 2348 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 05:12:20.269452 kubelet[2348]: E0715 05:12:20.269259 2348 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 05:12:20.369783 kubelet[2348]: E0715 05:12:20.369721 2348 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 05:12:20.470603 kubelet[2348]: E0715 
05:12:20.470391 2348 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 05:12:20.571026 kubelet[2348]: E0715 05:12:20.570935 2348 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 05:12:20.671243 kubelet[2348]: E0715 05:12:20.671184 2348 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 05:12:20.772381 kubelet[2348]: E0715 05:12:20.772210 2348 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 05:12:20.873401 kubelet[2348]: E0715 05:12:20.873337 2348 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 05:12:20.973846 kubelet[2348]: E0715 05:12:20.973781 2348 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 05:12:21.074637 kubelet[2348]: E0715 05:12:21.074319 2348 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 05:12:21.175628 kubelet[2348]: E0715 05:12:21.175505 2348 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 05:12:21.276216 kubelet[2348]: E0715 05:12:21.276107 2348 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 05:12:21.376397 kubelet[2348]: E0715 05:12:21.376314 2348 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 05:12:21.477030 kubelet[2348]: E0715 05:12:21.476973 2348 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 05:12:21.654266 kubelet[2348]: I0715 05:12:21.654108 2348 kubelet.go:3194] "Creating a mirror pod for static pod" 
pod="kube-system/kube-apiserver-localhost" Jul 15 05:12:21.661370 kubelet[2348]: I0715 05:12:21.661317 2348 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jul 15 05:12:21.665196 kubelet[2348]: I0715 05:12:21.665166 2348 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 15 05:12:22.257587 systemd[1]: Reload requested from client PID 2625 ('systemctl') (unit session-7.scope)... Jul 15 05:12:22.257609 systemd[1]: Reloading... Jul 15 05:12:22.355475 zram_generator::config[2668]: No configuration found. Jul 15 05:12:22.467675 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 15 05:12:22.546034 kubelet[2348]: I0715 05:12:22.545864 2348 apiserver.go:52] "Watching apiserver" Jul 15 05:12:22.554983 kubelet[2348]: I0715 05:12:22.554905 2348 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 15 05:12:22.632402 systemd[1]: Reloading finished in 374 ms. Jul 15 05:12:22.664356 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 15 05:12:22.692385 systemd[1]: kubelet.service: Deactivated successfully. Jul 15 05:12:22.692833 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 05:12:22.692907 systemd[1]: kubelet.service: Consumed 1.458s CPU time, 133.6M memory peak. Jul 15 05:12:22.695475 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 15 05:12:22.942667 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 15 05:12:22.952916 (kubelet)[2713]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 15 05:12:22.994204 kubelet[2713]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 15 05:12:22.994204 kubelet[2713]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 15 05:12:22.994204 kubelet[2713]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 15 05:12:22.994673 kubelet[2713]: I0715 05:12:22.994343 2713 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 15 05:12:23.001951 kubelet[2713]: I0715 05:12:23.001892 2713 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jul 15 05:12:23.001951 kubelet[2713]: I0715 05:12:23.001923 2713 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 15 05:12:23.002275 kubelet[2713]: I0715 05:12:23.002246 2713 server.go:954] "Client rotation is on, will bootstrap in background" Jul 15 05:12:23.003705 kubelet[2713]: I0715 05:12:23.003679 2713 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Jul 15 05:12:23.006429 kubelet[2713]: I0715 05:12:23.006028 2713 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 15 05:12:23.009988 kubelet[2713]: I0715 05:12:23.009940 2713 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 15 05:12:23.016430 kubelet[2713]: I0715 05:12:23.016362 2713 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jul 15 05:12:23.016708 kubelet[2713]: I0715 05:12:23.016661 2713 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 15 05:12:23.016880 kubelet[2713]: I0715 05:12:23.016698 2713 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManag
erPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 15 05:12:23.016975 kubelet[2713]: I0715 05:12:23.016885 2713 topology_manager.go:138] "Creating topology manager with none policy" Jul 15 05:12:23.016975 kubelet[2713]: I0715 05:12:23.016895 2713 container_manager_linux.go:304] "Creating device plugin manager" Jul 15 05:12:23.016975 kubelet[2713]: I0715 05:12:23.016954 2713 state_mem.go:36] "Initialized new in-memory state store" Jul 15 05:12:23.017170 kubelet[2713]: I0715 05:12:23.017134 2713 kubelet.go:446] "Attempting to sync node with API server" Jul 15 05:12:23.017306 kubelet[2713]: I0715 05:12:23.017198 2713 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 15 05:12:23.017306 kubelet[2713]: I0715 05:12:23.017230 2713 kubelet.go:352] "Adding apiserver pod source" Jul 15 05:12:23.017306 kubelet[2713]: I0715 05:12:23.017244 2713 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 15 05:12:23.018758 kubelet[2713]: I0715 05:12:23.018722 2713 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Jul 15 05:12:23.021435 kubelet[2713]: I0715 05:12:23.020996 2713 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 15 05:12:23.021500 kubelet[2713]: I0715 05:12:23.021478 2713 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 15 05:12:23.021535 kubelet[2713]: I0715 05:12:23.021505 2713 server.go:1287] "Started kubelet" Jul 15 05:12:23.024085 kubelet[2713]: I0715 05:12:23.024010 2713 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 15 05:12:23.024202 
kubelet[2713]: I0715 05:12:23.024180 2713 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 15 05:12:23.025049 kubelet[2713]: I0715 05:12:23.025021 2713 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 15 05:12:23.025553 kubelet[2713]: I0715 05:12:23.025534 2713 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 15 05:12:23.025775 kubelet[2713]: I0715 05:12:23.025725 2713 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jul 15 05:12:23.027850 kubelet[2713]: I0715 05:12:23.027806 2713 server.go:479] "Adding debug handlers to kubelet server" Jul 15 05:12:23.031252 kubelet[2713]: E0715 05:12:23.031177 2713 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 05:12:23.031375 kubelet[2713]: I0715 05:12:23.031277 2713 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 15 05:12:23.031600 kubelet[2713]: I0715 05:12:23.031551 2713 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 15 05:12:23.031774 kubelet[2713]: I0715 05:12:23.031747 2713 reconciler.go:26] "Reconciler: start to sync state" Jul 15 05:12:23.032890 kubelet[2713]: I0715 05:12:23.032863 2713 factory.go:221] Registration of the systemd container factory successfully Jul 15 05:12:23.033033 kubelet[2713]: I0715 05:12:23.032978 2713 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 15 05:12:23.035317 kubelet[2713]: I0715 05:12:23.035252 2713 factory.go:221] Registration of the containerd container factory successfully Jul 15 05:12:23.036591 kubelet[2713]: E0715 05:12:23.036542 2713 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jul 15 05:12:23.048138 kubelet[2713]: I0715 05:12:23.048077 2713 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jul 15 05:12:23.050477 kubelet[2713]: I0715 05:12:23.050361 2713 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jul 15 05:12:23.050477 kubelet[2713]: I0715 05:12:23.050396 2713 status_manager.go:227] "Starting to sync pod status with apiserver"
Jul 15 05:12:23.050477 kubelet[2713]: I0715 05:12:23.050449 2713 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Jul 15 05:12:23.050477 kubelet[2713]: I0715 05:12:23.050460 2713 kubelet.go:2382] "Starting kubelet main sync loop"
Jul 15 05:12:23.050640 kubelet[2713]: E0715 05:12:23.050519 2713 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jul 15 05:12:23.076676 kubelet[2713]: I0715 05:12:23.075970 2713 cpu_manager.go:221] "Starting CPU manager" policy="none"
Jul 15 05:12:23.076676 kubelet[2713]: I0715 05:12:23.076664 2713 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Jul 15 05:12:23.076676 kubelet[2713]: I0715 05:12:23.076692 2713 state_mem.go:36] "Initialized new in-memory state store"
Jul 15 05:12:23.076875 kubelet[2713]: I0715 05:12:23.076846 2713 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Jul 15 05:12:23.076875 kubelet[2713]: I0715 05:12:23.076856 2713 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Jul 15 05:12:23.076875 kubelet[2713]: I0715 05:12:23.076875 2713 policy_none.go:49] "None policy: Start"
Jul 15 05:12:23.076981 kubelet[2713]: I0715 05:12:23.076884 2713 memory_manager.go:186] "Starting memorymanager" policy="None"
Jul 15 05:12:23.076981 kubelet[2713]: I0715 05:12:23.076894 2713 state_mem.go:35] "Initializing new in-memory state store"
Jul 15 05:12:23.077049 kubelet[2713]: I0715 05:12:23.076998 2713 state_mem.go:75] "Updated machine memory state"
Jul 15 05:12:23.082124 kubelet[2713]: I0715 05:12:23.082091 2713 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jul 15 05:12:23.082485 kubelet[2713]: I0715 05:12:23.082303 2713 eviction_manager.go:189] "Eviction manager: starting control loop"
Jul 15 05:12:23.082485 kubelet[2713]: I0715 05:12:23.082321 2713 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jul 15 05:12:23.082596 kubelet[2713]: I0715 05:12:23.082588 2713 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jul 15 05:12:23.083517 kubelet[2713]: E0715 05:12:23.083496 2713 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Jul 15 05:12:23.152189 kubelet[2713]: I0715 05:12:23.151962 2713 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Jul 15 05:12:23.152189 kubelet[2713]: I0715 05:12:23.152134 2713 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Jul 15 05:12:23.152451 kubelet[2713]: I0715 05:12:23.152150 2713 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Jul 15 05:12:23.184858 kubelet[2713]: I0715 05:12:23.184757 2713 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Jul 15 05:12:23.332995 kubelet[2713]: I0715 05:12:23.332846 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost"
Jul 15 05:12:23.332995 kubelet[2713]: I0715 05:12:23.332903 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost"
Jul 15 05:12:23.332995 kubelet[2713]: I0715 05:12:23.332933 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost"
Jul 15 05:12:23.332995 kubelet[2713]: I0715 05:12:23.332961 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8a75e163f27396b2168da0f88f85f8a5-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"8a75e163f27396b2168da0f88f85f8a5\") " pod="kube-system/kube-scheduler-localhost"
Jul 15 05:12:23.332995 kubelet[2713]: I0715 05:12:23.332985 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a7d369169cd52faf0e66891a6e14b81d-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"a7d369169cd52faf0e66891a6e14b81d\") " pod="kube-system/kube-apiserver-localhost"
Jul 15 05:12:23.333243 kubelet[2713]: I0715 05:12:23.333016 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a7d369169cd52faf0e66891a6e14b81d-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"a7d369169cd52faf0e66891a6e14b81d\") " pod="kube-system/kube-apiserver-localhost"
Jul 15 05:12:23.333243 kubelet[2713]: I0715 05:12:23.333051 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a7d369169cd52faf0e66891a6e14b81d-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"a7d369169cd52faf0e66891a6e14b81d\") " pod="kube-system/kube-apiserver-localhost"
Jul 15 05:12:23.333243 kubelet[2713]: I0715 05:12:23.333073 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost"
Jul 15 05:12:23.333243 kubelet[2713]: I0715 05:12:23.333104 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost"
Jul 15 05:12:23.365852 kubelet[2713]: E0715 05:12:23.365692 2713 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Jul 15 05:12:23.365852 kubelet[2713]: E0715 05:12:23.365764 2713 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
Jul 15 05:12:23.433957 kubelet[2713]: E0715 05:12:23.433108 2713 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Jul 15 05:12:23.479164 kubelet[2713]: I0715 05:12:23.479070 2713 kubelet_node_status.go:124] "Node was previously registered" node="localhost"
Jul 15 05:12:23.479359 kubelet[2713]: I0715 05:12:23.479196 2713 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Jul 15 05:12:24.018364 kubelet[2713]: I0715 05:12:24.018244 2713 apiserver.go:52] "Watching apiserver"
Jul 15 05:12:24.032394 kubelet[2713]: I0715 05:12:24.032314 2713 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Jul 15 05:12:24.064592 kubelet[2713]: I0715 05:12:24.064537 2713 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Jul 15 05:12:24.219792 kubelet[2713]: E0715 05:12:24.219718 2713 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Jul 15 05:12:24.583110 kubelet[2713]: I0715 05:12:24.583012 2713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=3.5829794489999998 podStartE2EDuration="3.582979449s" podCreationTimestamp="2025-07-15 05:12:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 05:12:24.569966587 +0000 UTC m=+1.612430635" watchObservedRunningTime="2025-07-15 05:12:24.582979449 +0000 UTC m=+1.625443518"
Jul 15 05:12:24.593613 kubelet[2713]: I0715 05:12:24.593523 2713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=3.593500736 podStartE2EDuration="3.593500736s" podCreationTimestamp="2025-07-15 05:12:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 05:12:24.583586094 +0000 UTC m=+1.626050162" watchObservedRunningTime="2025-07-15 05:12:24.593500736 +0000 UTC m=+1.635964794"
Jul 15 05:12:24.607218 kubelet[2713]: I0715 05:12:24.607108 2713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.607081348 podStartE2EDuration="3.607081348s" podCreationTimestamp="2025-07-15 05:12:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 05:12:24.593728519 +0000 UTC m=+1.636192567" watchObservedRunningTime="2025-07-15 05:12:24.607081348 +0000 UTC m=+1.649545396"
Jul 15 05:12:28.930508 update_engine[1510]: I20250715 05:12:28.930269 1510 update_attempter.cc:509] Updating boot flags...
Jul 15 05:12:29.420055 kubelet[2713]: I0715 05:12:29.419970 2713 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Jul 15 05:12:29.420709 kubelet[2713]: I0715 05:12:29.420559 2713 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jul 15 05:12:29.420753 containerd[1563]: time="2025-07-15T05:12:29.420311243Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jul 15 05:12:30.117629 systemd[1]: Created slice kubepods-besteffort-podefbe3dde_b744_4753_b7c9_7359bb0f35f3.slice - libcontainer container kubepods-besteffort-podefbe3dde_b744_4753_b7c9_7359bb0f35f3.slice.
Jul 15 05:12:30.182732 kubelet[2713]: I0715 05:12:30.182562 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qnfzw\" (UniqueName: \"kubernetes.io/projected/efbe3dde-b744-4753-b7c9-7359bb0f35f3-kube-api-access-qnfzw\") pod \"kube-proxy-4xqn5\" (UID: \"efbe3dde-b744-4753-b7c9-7359bb0f35f3\") " pod="kube-system/kube-proxy-4xqn5"
Jul 15 05:12:30.182732 kubelet[2713]: I0715 05:12:30.182624 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/efbe3dde-b744-4753-b7c9-7359bb0f35f3-lib-modules\") pod \"kube-proxy-4xqn5\" (UID: \"efbe3dde-b744-4753-b7c9-7359bb0f35f3\") " pod="kube-system/kube-proxy-4xqn5"
Jul 15 05:12:30.182732 kubelet[2713]: I0715 05:12:30.182647 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/efbe3dde-b744-4753-b7c9-7359bb0f35f3-kube-proxy\") pod \"kube-proxy-4xqn5\" (UID: \"efbe3dde-b744-4753-b7c9-7359bb0f35f3\") " pod="kube-system/kube-proxy-4xqn5"
Jul 15 05:12:30.182732 kubelet[2713]: I0715 05:12:30.182670 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/efbe3dde-b744-4753-b7c9-7359bb0f35f3-xtables-lock\") pod \"kube-proxy-4xqn5\" (UID: \"efbe3dde-b744-4753-b7c9-7359bb0f35f3\") " pod="kube-system/kube-proxy-4xqn5"
Jul 15 05:12:30.217227 systemd[1]: Created slice kubepods-besteffort-pod49f3e053_b2f1_4320_affa_2b96a224f0fb.slice - libcontainer container kubepods-besteffort-pod49f3e053_b2f1_4320_affa_2b96a224f0fb.slice.
Jul 15 05:12:30.282989 kubelet[2713]: I0715 05:12:30.282887 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tdgcg\" (UniqueName: \"kubernetes.io/projected/49f3e053-b2f1-4320-affa-2b96a224f0fb-kube-api-access-tdgcg\") pod \"tigera-operator-747864d56d-7qglh\" (UID: \"49f3e053-b2f1-4320-affa-2b96a224f0fb\") " pod="tigera-operator/tigera-operator-747864d56d-7qglh"
Jul 15 05:12:30.282989 kubelet[2713]: I0715 05:12:30.282943 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/49f3e053-b2f1-4320-affa-2b96a224f0fb-var-lib-calico\") pod \"tigera-operator-747864d56d-7qglh\" (UID: \"49f3e053-b2f1-4320-affa-2b96a224f0fb\") " pod="tigera-operator/tigera-operator-747864d56d-7qglh"
Jul 15 05:12:30.435866 containerd[1563]: time="2025-07-15T05:12:30.435672611Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4xqn5,Uid:efbe3dde-b744-4753-b7c9-7359bb0f35f3,Namespace:kube-system,Attempt:0,}"
Jul 15 05:12:30.463729 containerd[1563]: time="2025-07-15T05:12:30.463673872Z" level=info msg="connecting to shim eafcb20fdef86125e398fee542bfe3dc954c3920916b997f946a275754ecc134" address="unix:///run/containerd/s/0a4feac26c3d729056101610628d8b44687241a6f18b859c43b0afd7af0fcd3a" namespace=k8s.io protocol=ttrpc version=3
Jul 15 05:12:30.501601 systemd[1]: Started cri-containerd-eafcb20fdef86125e398fee542bfe3dc954c3920916b997f946a275754ecc134.scope - libcontainer container eafcb20fdef86125e398fee542bfe3dc954c3920916b997f946a275754ecc134.
Jul 15 05:12:30.523070 containerd[1563]: time="2025-07-15T05:12:30.523010166Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-7qglh,Uid:49f3e053-b2f1-4320-affa-2b96a224f0fb,Namespace:tigera-operator,Attempt:0,}"
Jul 15 05:12:30.536233 containerd[1563]: time="2025-07-15T05:12:30.536180956Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4xqn5,Uid:efbe3dde-b744-4753-b7c9-7359bb0f35f3,Namespace:kube-system,Attempt:0,} returns sandbox id \"eafcb20fdef86125e398fee542bfe3dc954c3920916b997f946a275754ecc134\""
Jul 15 05:12:30.540350 containerd[1563]: time="2025-07-15T05:12:30.539447609Z" level=info msg="CreateContainer within sandbox \"eafcb20fdef86125e398fee542bfe3dc954c3920916b997f946a275754ecc134\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jul 15 05:12:30.557390 containerd[1563]: time="2025-07-15T05:12:30.557303337Z" level=info msg="connecting to shim 661d6f57d1eb203d4d49b45412b2608a26b71e216b4c5ccea68a1e3e3eae0f49" address="unix:///run/containerd/s/511a15bbba026f1981ddfed0466632f1d2ffa9b27f5c1fd224dc0c214d482d60" namespace=k8s.io protocol=ttrpc version=3
Jul 15 05:12:30.558535 containerd[1563]: time="2025-07-15T05:12:30.558219601Z" level=info msg="Container bca7fa8c155c34dc3d6eaddcd87581cccdf16b7e5a4e89cdbbf0363793749752: CDI devices from CRI Config.CDIDevices: []"
Jul 15 05:12:30.583611 systemd[1]: Started cri-containerd-661d6f57d1eb203d4d49b45412b2608a26b71e216b4c5ccea68a1e3e3eae0f49.scope - libcontainer container 661d6f57d1eb203d4d49b45412b2608a26b71e216b4c5ccea68a1e3e3eae0f49.
Jul 15 05:12:30.769750 containerd[1563]: time="2025-07-15T05:12:30.769597804Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-7qglh,Uid:49f3e053-b2f1-4320-affa-2b96a224f0fb,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"661d6f57d1eb203d4d49b45412b2608a26b71e216b4c5ccea68a1e3e3eae0f49\""
Jul 15 05:12:30.771462 containerd[1563]: time="2025-07-15T05:12:30.771431705Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\""
Jul 15 05:12:30.771864 containerd[1563]: time="2025-07-15T05:12:30.771832945Z" level=info msg="CreateContainer within sandbox \"eafcb20fdef86125e398fee542bfe3dc954c3920916b997f946a275754ecc134\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"bca7fa8c155c34dc3d6eaddcd87581cccdf16b7e5a4e89cdbbf0363793749752\""
Jul 15 05:12:30.772293 containerd[1563]: time="2025-07-15T05:12:30.772269652Z" level=info msg="StartContainer for \"bca7fa8c155c34dc3d6eaddcd87581cccdf16b7e5a4e89cdbbf0363793749752\""
Jul 15 05:12:30.774168 containerd[1563]: time="2025-07-15T05:12:30.774101458Z" level=info msg="connecting to shim bca7fa8c155c34dc3d6eaddcd87581cccdf16b7e5a4e89cdbbf0363793749752" address="unix:///run/containerd/s/0a4feac26c3d729056101610628d8b44687241a6f18b859c43b0afd7af0fcd3a" protocol=ttrpc version=3
Jul 15 05:12:30.805695 systemd[1]: Started cri-containerd-bca7fa8c155c34dc3d6eaddcd87581cccdf16b7e5a4e89cdbbf0363793749752.scope - libcontainer container bca7fa8c155c34dc3d6eaddcd87581cccdf16b7e5a4e89cdbbf0363793749752.
Jul 15 05:12:30.969593 containerd[1563]: time="2025-07-15T05:12:30.969536414Z" level=info msg="StartContainer for \"bca7fa8c155c34dc3d6eaddcd87581cccdf16b7e5a4e89cdbbf0363793749752\" returns successfully"
Jul 15 05:12:31.173911 kubelet[2713]: I0715 05:12:31.173773 2713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-4xqn5" podStartSLOduration=1.173744067 podStartE2EDuration="1.173744067s" podCreationTimestamp="2025-07-15 05:12:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 05:12:31.173280761 +0000 UTC m=+8.215744819" watchObservedRunningTime="2025-07-15 05:12:31.173744067 +0000 UTC m=+8.216208115"
Jul 15 05:12:32.148765 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1740703080.mount: Deactivated successfully.
Jul 15 05:12:32.513914 containerd[1563]: time="2025-07-15T05:12:32.513776951Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 05:12:32.514607 containerd[1563]: time="2025-07-15T05:12:32.514575410Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=25056543"
Jul 15 05:12:32.515649 containerd[1563]: time="2025-07-15T05:12:32.515601269Z" level=info msg="ImageCreate event name:\"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 05:12:32.517653 containerd[1563]: time="2025-07-15T05:12:32.517608164Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 05:12:32.518170 containerd[1563]: time="2025-07-15T05:12:32.518139859Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"25052538\" in 1.746675171s"
Jul 15 05:12:32.518231 containerd[1563]: time="2025-07-15T05:12:32.518170176Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\""
Jul 15 05:12:32.519847 containerd[1563]: time="2025-07-15T05:12:32.519814224Z" level=info msg="CreateContainer within sandbox \"661d6f57d1eb203d4d49b45412b2608a26b71e216b4c5ccea68a1e3e3eae0f49\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Jul 15 05:12:32.531855 containerd[1563]: time="2025-07-15T05:12:32.531795076Z" level=info msg="Container ea56a97bed937088644e465f4a0362a4e923427cc7350dcecc27409c2f0a1cb3: CDI devices from CRI Config.CDIDevices: []"
Jul 15 05:12:32.541914 containerd[1563]: time="2025-07-15T05:12:32.541855746Z" level=info msg="CreateContainer within sandbox \"661d6f57d1eb203d4d49b45412b2608a26b71e216b4c5ccea68a1e3e3eae0f49\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"ea56a97bed937088644e465f4a0362a4e923427cc7350dcecc27409c2f0a1cb3\""
Jul 15 05:12:32.543577 containerd[1563]: time="2025-07-15T05:12:32.542554899Z" level=info msg="StartContainer for \"ea56a97bed937088644e465f4a0362a4e923427cc7350dcecc27409c2f0a1cb3\""
Jul 15 05:12:32.543577 containerd[1563]: time="2025-07-15T05:12:32.543486760Z" level=info msg="connecting to shim ea56a97bed937088644e465f4a0362a4e923427cc7350dcecc27409c2f0a1cb3" address="unix:///run/containerd/s/511a15bbba026f1981ddfed0466632f1d2ffa9b27f5c1fd224dc0c214d482d60" protocol=ttrpc version=3
Jul 15 05:12:32.608734 systemd[1]: Started cri-containerd-ea56a97bed937088644e465f4a0362a4e923427cc7350dcecc27409c2f0a1cb3.scope - libcontainer container ea56a97bed937088644e465f4a0362a4e923427cc7350dcecc27409c2f0a1cb3.
Jul 15 05:12:32.644599 containerd[1563]: time="2025-07-15T05:12:32.644555854Z" level=info msg="StartContainer for \"ea56a97bed937088644e465f4a0362a4e923427cc7350dcecc27409c2f0a1cb3\" returns successfully"
Jul 15 05:12:33.098382 kubelet[2713]: I0715 05:12:33.098237 2713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-747864d56d-7qglh" podStartSLOduration=1.350343543 podStartE2EDuration="3.098214266s" podCreationTimestamp="2025-07-15 05:12:30 +0000 UTC" firstStartedPulling="2025-07-15 05:12:30.770907253 +0000 UTC m=+7.813371301" lastFinishedPulling="2025-07-15 05:12:32.518777975 +0000 UTC m=+9.561242024" observedRunningTime="2025-07-15 05:12:33.098147991 +0000 UTC m=+10.140612039" watchObservedRunningTime="2025-07-15 05:12:33.098214266 +0000 UTC m=+10.140678314"
Jul 15 05:12:40.694191 sudo[1774]: pam_unix(sudo:session): session closed for user root
Jul 15 05:12:40.697001 sshd[1773]: Connection closed by 10.0.0.1 port 42222
Jul 15 05:12:40.698141 sshd-session[1770]: pam_unix(sshd:session): session closed for user core
Jul 15 05:12:40.706341 systemd[1]: sshd@6-10.0.0.51:22-10.0.0.1:42222.service: Deactivated successfully.
Jul 15 05:12:40.710014 systemd[1]: session-7.scope: Deactivated successfully.
Jul 15 05:12:40.710324 systemd[1]: session-7.scope: Consumed 5.477s CPU time, 228.1M memory peak.
Jul 15 05:12:40.712222 systemd-logind[1508]: Session 7 logged out. Waiting for processes to exit.
Jul 15 05:12:40.715311 systemd-logind[1508]: Removed session 7.
Jul 15 05:12:48.387891 systemd[1]: Created slice kubepods-besteffort-poded69a17b_c431_4e4b_826f_abf058a34f46.slice - libcontainer container kubepods-besteffort-poded69a17b_c431_4e4b_826f_abf058a34f46.slice.
Jul 15 05:12:48.412437 kubelet[2713]: I0715 05:12:48.411855 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/ed69a17b-c431-4e4b-826f-abf058a34f46-typha-certs\") pod \"calico-typha-6bf574dc55-d7h52\" (UID: \"ed69a17b-c431-4e4b-826f-abf058a34f46\") " pod="calico-system/calico-typha-6bf574dc55-d7h52"
Jul 15 05:12:48.412437 kubelet[2713]: I0715 05:12:48.411922 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ed69a17b-c431-4e4b-826f-abf058a34f46-tigera-ca-bundle\") pod \"calico-typha-6bf574dc55-d7h52\" (UID: \"ed69a17b-c431-4e4b-826f-abf058a34f46\") " pod="calico-system/calico-typha-6bf574dc55-d7h52"
Jul 15 05:12:48.412437 kubelet[2713]: I0715 05:12:48.411963 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhm98\" (UniqueName: \"kubernetes.io/projected/ed69a17b-c431-4e4b-826f-abf058a34f46-kube-api-access-jhm98\") pod \"calico-typha-6bf574dc55-d7h52\" (UID: \"ed69a17b-c431-4e4b-826f-abf058a34f46\") " pod="calico-system/calico-typha-6bf574dc55-d7h52"
Jul 15 05:12:48.992755 containerd[1563]: time="2025-07-15T05:12:48.992694499Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6bf574dc55-d7h52,Uid:ed69a17b-c431-4e4b-826f-abf058a34f46,Namespace:calico-system,Attempt:0,}"
Jul 15 05:12:50.055897 systemd[1]: Created slice kubepods-besteffort-pod9f71d53d_8361_48a0_ba9e_ef5580c87b62.slice - libcontainer container kubepods-besteffort-pod9f71d53d_8361_48a0_ba9e_ef5580c87b62.slice.
Jul 15 05:12:50.079615 containerd[1563]: time="2025-07-15T05:12:50.079428682Z" level=info msg="connecting to shim 82aac4ee3d3428b5e32d35946e577a110ff5d542f1e405bc08bcc9a00cd4c21c" address="unix:///run/containerd/s/8474472258ff4fbfba6610fb48348999474ad19045965d3541f2e0312ad1896c" namespace=k8s.io protocol=ttrpc version=3
Jul 15 05:12:50.090433 kubelet[2713]: E0715 05:12:50.090345 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2v6vx" podUID="b936ddb1-facc-4e6c-bca6-a08227d61e68"
Jul 15 05:12:50.122207 kubelet[2713]: I0715 05:12:50.121376 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/9f71d53d-8361-48a0-ba9e-ef5580c87b62-policysync\") pod \"calico-node-6t5q8\" (UID: \"9f71d53d-8361-48a0-ba9e-ef5580c87b62\") " pod="calico-system/calico-node-6t5q8"
Jul 15 05:12:50.122736 kubelet[2713]: I0715 05:12:50.122631 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/9f71d53d-8361-48a0-ba9e-ef5580c87b62-var-lib-calico\") pod \"calico-node-6t5q8\" (UID: \"9f71d53d-8361-48a0-ba9e-ef5580c87b62\") " pod="calico-system/calico-node-6t5q8"
Jul 15 05:12:50.122808 kubelet[2713]: I0715 05:12:50.122782 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/b936ddb1-facc-4e6c-bca6-a08227d61e68-varrun\") pod \"csi-node-driver-2v6vx\" (UID: \"b936ddb1-facc-4e6c-bca6-a08227d61e68\") " pod="calico-system/csi-node-driver-2v6vx"
Jul 15 05:12:50.122869 kubelet[2713]: I0715 05:12:50.122804 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/9f71d53d-8361-48a0-ba9e-ef5580c87b62-cni-bin-dir\") pod \"calico-node-6t5q8\" (UID: \"9f71d53d-8361-48a0-ba9e-ef5580c87b62\") " pod="calico-system/calico-node-6t5q8"
Jul 15 05:12:50.122923 kubelet[2713]: I0715 05:12:50.122877 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/9f71d53d-8361-48a0-ba9e-ef5580c87b62-var-run-calico\") pod \"calico-node-6t5q8\" (UID: \"9f71d53d-8361-48a0-ba9e-ef5580c87b62\") " pod="calico-system/calico-node-6t5q8"
Jul 15 05:12:50.122959 kubelet[2713]: I0715 05:12:50.122935 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/b936ddb1-facc-4e6c-bca6-a08227d61e68-registration-dir\") pod \"csi-node-driver-2v6vx\" (UID: \"b936ddb1-facc-4e6c-bca6-a08227d61e68\") " pod="calico-system/csi-node-driver-2v6vx"
Jul 15 05:12:50.122992 kubelet[2713]: I0715 05:12:50.122958 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/b936ddb1-facc-4e6c-bca6-a08227d61e68-socket-dir\") pod \"csi-node-driver-2v6vx\" (UID: \"b936ddb1-facc-4e6c-bca6-a08227d61e68\") " pod="calico-system/csi-node-driver-2v6vx"
Jul 15 05:12:50.123051 kubelet[2713]: I0715 05:12:50.123020 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9f71d53d-8361-48a0-ba9e-ef5580c87b62-xtables-lock\") pod \"calico-node-6t5q8\" (UID: \"9f71d53d-8361-48a0-ba9e-ef5580c87b62\") " pod="calico-system/calico-node-6t5q8"
Jul 15 05:12:50.123127 kubelet[2713]: I0715 05:12:50.123097 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/9f71d53d-8361-48a0-ba9e-ef5580c87b62-cni-log-dir\") pod \"calico-node-6t5q8\" (UID: \"9f71d53d-8361-48a0-ba9e-ef5580c87b62\") " pod="calico-system/calico-node-6t5q8"
Jul 15 05:12:50.123192 kubelet[2713]: I0715 05:12:50.123127 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/9f71d53d-8361-48a0-ba9e-ef5580c87b62-cni-net-dir\") pod \"calico-node-6t5q8\" (UID: \"9f71d53d-8361-48a0-ba9e-ef5580c87b62\") " pod="calico-system/calico-node-6t5q8"
Jul 15 05:12:50.123240 kubelet[2713]: I0715 05:12:50.123196 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9f71d53d-8361-48a0-ba9e-ef5580c87b62-lib-modules\") pod \"calico-node-6t5q8\" (UID: \"9f71d53d-8361-48a0-ba9e-ef5580c87b62\") " pod="calico-system/calico-node-6t5q8"
Jul 15 05:12:50.123278 kubelet[2713]: I0715 05:12:50.123216 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/9f71d53d-8361-48a0-ba9e-ef5580c87b62-node-certs\") pod \"calico-node-6t5q8\" (UID: \"9f71d53d-8361-48a0-ba9e-ef5580c87b62\") " pod="calico-system/calico-node-6t5q8"
Jul 15 05:12:50.123386 kubelet[2713]: I0715 05:12:50.123294 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xf98m\" (UniqueName: \"kubernetes.io/projected/b936ddb1-facc-4e6c-bca6-a08227d61e68-kube-api-access-xf98m\") pod \"csi-node-driver-2v6vx\" (UID: \"b936ddb1-facc-4e6c-bca6-a08227d61e68\") " pod="calico-system/csi-node-driver-2v6vx"
Jul 15 05:12:50.123386 kubelet[2713]: I0715 05:12:50.123366 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4wml\" (UniqueName: \"kubernetes.io/projected/9f71d53d-8361-48a0-ba9e-ef5580c87b62-kube-api-access-k4wml\") pod \"calico-node-6t5q8\" (UID: \"9f71d53d-8361-48a0-ba9e-ef5580c87b62\") " pod="calico-system/calico-node-6t5q8"
Jul 15 05:12:50.123506 kubelet[2713]: I0715 05:12:50.123438 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9f71d53d-8361-48a0-ba9e-ef5580c87b62-tigera-ca-bundle\") pod \"calico-node-6t5q8\" (UID: \"9f71d53d-8361-48a0-ba9e-ef5580c87b62\") " pod="calico-system/calico-node-6t5q8"
Jul 15 05:12:50.123552 kubelet[2713]: I0715 05:12:50.123460 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b936ddb1-facc-4e6c-bca6-a08227d61e68-kubelet-dir\") pod \"csi-node-driver-2v6vx\" (UID: \"b936ddb1-facc-4e6c-bca6-a08227d61e68\") " pod="calico-system/csi-node-driver-2v6vx"
Jul 15 05:12:50.123594 kubelet[2713]: I0715 05:12:50.123547 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/9f71d53d-8361-48a0-ba9e-ef5580c87b62-flexvol-driver-host\") pod \"calico-node-6t5q8\" (UID: \"9f71d53d-8361-48a0-ba9e-ef5580c87b62\") " pod="calico-system/calico-node-6t5q8"
Jul 15 05:12:50.129621 systemd[1]: Started cri-containerd-82aac4ee3d3428b5e32d35946e577a110ff5d542f1e405bc08bcc9a00cd4c21c.scope - libcontainer container 82aac4ee3d3428b5e32d35946e577a110ff5d542f1e405bc08bcc9a00cd4c21c.
Jul 15 05:12:50.231998 kubelet[2713]: E0715 05:12:50.231830 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 05:12:50.232855 kubelet[2713]: W0715 05:12:50.232175 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 05:12:50.233867 kubelet[2713]: E0715 05:12:50.233736 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 05:12:50.234155 kubelet[2713]: E0715 05:12:50.233979 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 05:12:50.234155 kubelet[2713]: W0715 05:12:50.233997 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 05:12:50.234155 kubelet[2713]: E0715 05:12:50.234014 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 05:12:50.237117 kubelet[2713]: E0715 05:12:50.236181 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 05:12:50.237117 kubelet[2713]: W0715 05:12:50.236485 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 05:12:50.237117 kubelet[2713]: E0715 05:12:50.236754 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 05:12:50.238694 kubelet[2713]: E0715 05:12:50.238587 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 05:12:50.238694 kubelet[2713]: W0715 05:12:50.238608 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 05:12:50.238807 kubelet[2713]: E0715 05:12:50.238711 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 05:12:50.243065 containerd[1563]: time="2025-07-15T05:12:50.242993997Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6bf574dc55-d7h52,Uid:ed69a17b-c431-4e4b-826f-abf058a34f46,Namespace:calico-system,Attempt:0,} returns sandbox id \"82aac4ee3d3428b5e32d35946e577a110ff5d542f1e405bc08bcc9a00cd4c21c\""
Jul 15 05:12:50.246662 kubelet[2713]: E0715 05:12:50.245668 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 05:12:50.246662 kubelet[2713]: W0715 05:12:50.245694 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 05:12:50.247581 containerd[1563]: time="2025-07-15T05:12:50.246539231Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\""
Jul 15 05:12:50.247713 kubelet[2713]: E0715 05:12:50.247672 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 05:12:50.249136 kubelet[2713]: E0715 05:12:50.248516 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 05:12:50.249136 kubelet[2713]: W0715 05:12:50.248537 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 05:12:50.249136 kubelet[2713]: E0715 05:12:50.248643 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 05:12:50.253223 kubelet[2713]: E0715 05:12:50.250197 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 05:12:50.253223 kubelet[2713]: W0715 05:12:50.250233 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 05:12:50.253223 kubelet[2713]: E0715 05:12:50.250505 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 05:12:50.254205 kubelet[2713]: E0715 05:12:50.254171 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 05:12:50.254312 kubelet[2713]: W0715 05:12:50.254287 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 05:12:50.254548 kubelet[2713]: E0715 05:12:50.254512 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 05:12:50.254944 kubelet[2713]: E0715 05:12:50.254923 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 05:12:50.255054 kubelet[2713]: W0715 05:12:50.255031 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 05:12:50.255243 kubelet[2713]: E0715 05:12:50.255214 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Jul 15 05:12:50.257797 kubelet[2713]: E0715 05:12:50.257690 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:12:50.257797 kubelet[2713]: W0715 05:12:50.257712 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:12:50.258000 kubelet[2713]: E0715 05:12:50.257986 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:12:50.260311 kubelet[2713]: E0715 05:12:50.260275 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:12:50.260311 kubelet[2713]: W0715 05:12:50.260298 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:12:50.261286 kubelet[2713]: E0715 05:12:50.260708 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 05:12:50.262628 kubelet[2713]: E0715 05:12:50.262095 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:12:50.262628 kubelet[2713]: W0715 05:12:50.262126 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:12:50.262628 kubelet[2713]: E0715 05:12:50.262184 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:12:50.264400 kubelet[2713]: E0715 05:12:50.264360 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:12:50.264400 kubelet[2713]: W0715 05:12:50.264388 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:12:50.264400 kubelet[2713]: E0715 05:12:50.264425 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 05:12:50.277612 kubelet[2713]: E0715 05:12:50.276928 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:12:50.277747 kubelet[2713]: W0715 05:12:50.277617 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:12:50.277747 kubelet[2713]: E0715 05:12:50.277646 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:12:50.317421 kubelet[2713]: E0715 05:12:50.316056 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:12:50.317421 kubelet[2713]: W0715 05:12:50.316089 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:12:50.317421 kubelet[2713]: E0715 05:12:50.316115 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 05:12:50.362781 containerd[1563]: time="2025-07-15T05:12:50.362696465Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-6t5q8,Uid:9f71d53d-8361-48a0-ba9e-ef5580c87b62,Namespace:calico-system,Attempt:0,}" Jul 15 05:12:50.826383 containerd[1563]: time="2025-07-15T05:12:50.826309432Z" level=info msg="connecting to shim ec37d1952c8d90985e833034e870d1089a286b2c05db0da1f2c9e4ff48007285" address="unix:///run/containerd/s/61c59eed9f78ff2e9e95750abcedff52fe973b517f4f532f7cbd1560b491ac2e" namespace=k8s.io protocol=ttrpc version=3 Jul 15 05:12:50.869708 systemd[1]: Started cri-containerd-ec37d1952c8d90985e833034e870d1089a286b2c05db0da1f2c9e4ff48007285.scope - libcontainer container ec37d1952c8d90985e833034e870d1089a286b2c05db0da1f2c9e4ff48007285. Jul 15 05:12:51.017819 containerd[1563]: time="2025-07-15T05:12:51.017746881Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-6t5q8,Uid:9f71d53d-8361-48a0-ba9e-ef5580c87b62,Namespace:calico-system,Attempt:0,} returns sandbox id \"ec37d1952c8d90985e833034e870d1089a286b2c05db0da1f2c9e4ff48007285\"" Jul 15 05:12:52.051510 kubelet[2713]: E0715 05:12:52.051455 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2v6vx" podUID="b936ddb1-facc-4e6c-bca6-a08227d61e68" Jul 15 05:12:54.051265 kubelet[2713]: E0715 05:12:54.051206 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2v6vx" podUID="b936ddb1-facc-4e6c-bca6-a08227d61e68" Jul 15 05:12:55.289936 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount439191630.mount: 
Deactivated successfully. Jul 15 05:12:56.051566 kubelet[2713]: E0715 05:12:56.051503 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2v6vx" podUID="b936ddb1-facc-4e6c-bca6-a08227d61e68" Jul 15 05:12:56.130549 containerd[1563]: time="2025-07-15T05:12:56.130444730Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:12:56.132633 containerd[1563]: time="2025-07-15T05:12:56.132383190Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=35233364" Jul 15 05:12:56.136261 containerd[1563]: time="2025-07-15T05:12:56.136180502Z" level=info msg="ImageCreate event name:\"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:12:56.139550 containerd[1563]: time="2025-07-15T05:12:56.139398014Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:12:56.140123 containerd[1563]: time="2025-07-15T05:12:56.140065377Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"35233218\" in 5.89348518s" Jul 15 05:12:56.140123 containerd[1563]: time="2025-07-15T05:12:56.140109130Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference 
\"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\"" Jul 15 05:12:56.143239 containerd[1563]: time="2025-07-15T05:12:56.143166953Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Jul 15 05:12:56.155768 containerd[1563]: time="2025-07-15T05:12:56.155707535Z" level=info msg="CreateContainer within sandbox \"82aac4ee3d3428b5e32d35946e577a110ff5d542f1e405bc08bcc9a00cd4c21c\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jul 15 05:12:56.169753 containerd[1563]: time="2025-07-15T05:12:56.168648238Z" level=info msg="Container 48f4b0df00d76564f044fef9690680d04c237d36ef2a1c5f6b5759488a0289a3: CDI devices from CRI Config.CDIDevices: []" Jul 15 05:12:56.184355 containerd[1563]: time="2025-07-15T05:12:56.184268182Z" level=info msg="CreateContainer within sandbox \"82aac4ee3d3428b5e32d35946e577a110ff5d542f1e405bc08bcc9a00cd4c21c\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"48f4b0df00d76564f044fef9690680d04c237d36ef2a1c5f6b5759488a0289a3\"" Jul 15 05:12:56.185646 containerd[1563]: time="2025-07-15T05:12:56.185306933Z" level=info msg="StartContainer for \"48f4b0df00d76564f044fef9690680d04c237d36ef2a1c5f6b5759488a0289a3\"" Jul 15 05:12:56.187035 containerd[1563]: time="2025-07-15T05:12:56.186961541Z" level=info msg="connecting to shim 48f4b0df00d76564f044fef9690680d04c237d36ef2a1c5f6b5759488a0289a3" address="unix:///run/containerd/s/8474472258ff4fbfba6610fb48348999474ad19045965d3541f2e0312ad1896c" protocol=ttrpc version=3 Jul 15 05:12:56.223791 systemd[1]: Started cri-containerd-48f4b0df00d76564f044fef9690680d04c237d36ef2a1c5f6b5759488a0289a3.scope - libcontainer container 48f4b0df00d76564f044fef9690680d04c237d36ef2a1c5f6b5759488a0289a3. 
Jul 15 05:12:56.280659 containerd[1563]: time="2025-07-15T05:12:56.280529004Z" level=info msg="StartContainer for \"48f4b0df00d76564f044fef9690680d04c237d36ef2a1c5f6b5759488a0289a3\" returns successfully" Jul 15 05:12:57.163675 kubelet[2713]: I0715 05:12:57.163602 2713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6bf574dc55-d7h52" podStartSLOduration=4.268218608 podStartE2EDuration="10.163576347s" podCreationTimestamp="2025-07-15 05:12:47 +0000 UTC" firstStartedPulling="2025-07-15 05:12:50.245951265 +0000 UTC m=+27.288415313" lastFinishedPulling="2025-07-15 05:12:56.141308984 +0000 UTC m=+33.183773052" observedRunningTime="2025-07-15 05:12:57.162832029 +0000 UTC m=+34.205296107" watchObservedRunningTime="2025-07-15 05:12:57.163576347 +0000 UTC m=+34.206040395" Jul 15 05:12:57.240866 kubelet[2713]: E0715 05:12:57.240800 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:12:57.240866 kubelet[2713]: W0715 05:12:57.240834 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:12:57.240866 kubelet[2713]: E0715 05:12:57.240860 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 05:12:57.241180 kubelet[2713]: E0715 05:12:57.241149 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:12:57.241180 kubelet[2713]: W0715 05:12:57.241167 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:12:57.241180 kubelet[2713]: E0715 05:12:57.241180 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:12:57.241472 kubelet[2713]: E0715 05:12:57.241451 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:12:57.241472 kubelet[2713]: W0715 05:12:57.241465 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:12:57.241563 kubelet[2713]: E0715 05:12:57.241478 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 05:12:57.241744 kubelet[2713]: E0715 05:12:57.241716 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:12:57.241744 kubelet[2713]: W0715 05:12:57.241729 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:12:57.241744 kubelet[2713]: E0715 05:12:57.241738 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:12:57.241939 kubelet[2713]: E0715 05:12:57.241913 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:12:57.241939 kubelet[2713]: W0715 05:12:57.241924 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:12:57.241939 kubelet[2713]: E0715 05:12:57.241931 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 05:12:57.242115 kubelet[2713]: E0715 05:12:57.242098 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:12:57.242115 kubelet[2713]: W0715 05:12:57.242108 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:12:57.242115 kubelet[2713]: E0715 05:12:57.242116 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:12:57.242346 kubelet[2713]: E0715 05:12:57.242316 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:12:57.242346 kubelet[2713]: W0715 05:12:57.242334 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:12:57.242459 kubelet[2713]: E0715 05:12:57.242348 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 05:12:57.242623 kubelet[2713]: E0715 05:12:57.242598 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:12:57.242623 kubelet[2713]: W0715 05:12:57.242615 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:12:57.242706 kubelet[2713]: E0715 05:12:57.242627 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:12:57.242992 kubelet[2713]: E0715 05:12:57.242950 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:12:57.242992 kubelet[2713]: W0715 05:12:57.242980 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:12:57.242992 kubelet[2713]: E0715 05:12:57.242992 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 05:12:57.243246 kubelet[2713]: E0715 05:12:57.243217 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:12:57.243246 kubelet[2713]: W0715 05:12:57.243234 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:12:57.243246 kubelet[2713]: E0715 05:12:57.243246 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:12:57.243518 kubelet[2713]: E0715 05:12:57.243497 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:12:57.243518 kubelet[2713]: W0715 05:12:57.243513 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:12:57.243594 kubelet[2713]: E0715 05:12:57.243527 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 05:12:57.243767 kubelet[2713]: E0715 05:12:57.243745 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:12:57.243767 kubelet[2713]: W0715 05:12:57.243762 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:12:57.243842 kubelet[2713]: E0715 05:12:57.243774 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:12:57.244078 kubelet[2713]: E0715 05:12:57.244043 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:12:57.244078 kubelet[2713]: W0715 05:12:57.244061 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:12:57.244078 kubelet[2713]: E0715 05:12:57.244075 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 05:12:57.244324 kubelet[2713]: E0715 05:12:57.244305 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:12:57.244324 kubelet[2713]: W0715 05:12:57.244321 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:12:57.244373 kubelet[2713]: E0715 05:12:57.244333 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:12:57.244591 kubelet[2713]: E0715 05:12:57.244573 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:12:57.244616 kubelet[2713]: W0715 05:12:57.244590 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:12:57.244616 kubelet[2713]: E0715 05:12:57.244603 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 05:12:57.271252 kubelet[2713]: E0715 05:12:57.271192 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:12:57.271252 kubelet[2713]: W0715 05:12:57.271222 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:12:57.271252 kubelet[2713]: E0715 05:12:57.271249 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:12:57.271504 kubelet[2713]: E0715 05:12:57.271496 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:12:57.271533 kubelet[2713]: W0715 05:12:57.271506 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:12:57.271533 kubelet[2713]: E0715 05:12:57.271523 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 05:12:57.271767 kubelet[2713]: E0715 05:12:57.271741 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:12:57.271767 kubelet[2713]: W0715 05:12:57.271752 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:12:57.271767 kubelet[2713]: E0715 05:12:57.271766 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:12:57.272049 kubelet[2713]: E0715 05:12:57.272006 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:12:57.272049 kubelet[2713]: W0715 05:12:57.272023 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:12:57.272049 kubelet[2713]: E0715 05:12:57.272046 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 05:12:57.272273 kubelet[2713]: E0715 05:12:57.272249 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:12:57.272273 kubelet[2713]: W0715 05:12:57.272261 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:12:57.272349 kubelet[2713]: E0715 05:12:57.272278 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:12:57.272518 kubelet[2713]: E0715 05:12:57.272483 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:12:57.272518 kubelet[2713]: W0715 05:12:57.272496 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:12:57.272518 kubelet[2713]: E0715 05:12:57.272511 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 05:12:57.272748 kubelet[2713]: E0715 05:12:57.272722 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:12:57.272748 kubelet[2713]: W0715 05:12:57.272734 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:12:57.272815 kubelet[2713]: E0715 05:12:57.272755 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:12:57.273159 kubelet[2713]: E0715 05:12:57.273115 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:12:57.273159 kubelet[2713]: W0715 05:12:57.273154 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:12:57.273243 kubelet[2713]: E0715 05:12:57.273186 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 05:12:57.273399 kubelet[2713]: E0715 05:12:57.273381 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:12:57.273399 kubelet[2713]: W0715 05:12:57.273392 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:12:57.273485 kubelet[2713]: E0715 05:12:57.273432 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:12:57.273585 kubelet[2713]: E0715 05:12:57.273568 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:12:57.273585 kubelet[2713]: W0715 05:12:57.273579 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:12:57.273653 kubelet[2713]: E0715 05:12:57.273607 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 05:12:57.273756 kubelet[2713]: E0715 05:12:57.273739 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:12:57.273756 kubelet[2713]: W0715 05:12:57.273750 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:12:57.273821 kubelet[2713]: E0715 05:12:57.273762 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:12:57.273988 kubelet[2713]: E0715 05:12:57.273960 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:12:57.273988 kubelet[2713]: W0715 05:12:57.273983 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:12:57.274055 kubelet[2713]: E0715 05:12:57.274001 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 05:12:58.051123 kubelet[2713]: E0715 05:12:58.051045 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2v6vx" podUID="b936ddb1-facc-4e6c-bca6-a08227d61e68" Jul 15 05:12:58.152279 kubelet[2713]: I0715 05:12:58.152249 2713 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 15 05:12:58.251829 kubelet[2713]: E0715 05:12:58.251772 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:12:58.251829 kubelet[2713]: W0715 05:12:58.251803 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:12:58.251829 kubelet[2713]: E0715 05:12:58.251831 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 05:12:58.494726 containerd[1563]: time="2025-07-15T05:12:58.494644234Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:12:58.500796 containerd[1563]: time="2025-07-15T05:12:58.500683334Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4446956" Jul 15 05:12:58.502994 containerd[1563]: time="2025-07-15T05:12:58.502934080Z" level=info msg="ImageCreate event name:\"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:12:58.506742 containerd[1563]: time="2025-07-15T05:12:58.506298116Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:12:58.507250 containerd[1563]: time="2025-07-15T05:12:58.507198036Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5939619\" in 2.36255435s" Jul 15 05:12:58.507250 containerd[1563]: time="2025-07-15T05:12:58.507237440Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\"" Jul 15 05:12:58.510465 containerd[1563]: time="2025-07-15T05:12:58.509829567Z" level=info msg="CreateContainer within sandbox \"ec37d1952c8d90985e833034e870d1089a286b2c05db0da1f2c9e4ff48007285\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 15 05:12:58.523635 containerd[1563]: time="2025-07-15T05:12:58.523570438Z" level=info msg="Container ed209f4bf0c6dd58ef7c0df2ff252302a93ab9eddeef27d6f78ffc4bf94f8874: CDI devices from CRI Config.CDIDevices: []" Jul 15 05:12:58.750747 containerd[1563]: time="2025-07-15T05:12:58.750592810Z" level=info msg="CreateContainer within sandbox \"ec37d1952c8d90985e833034e870d1089a286b2c05db0da1f2c9e4ff48007285\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"ed209f4bf0c6dd58ef7c0df2ff252302a93ab9eddeef27d6f78ffc4bf94f8874\"" Jul 15 05:12:58.751358 containerd[1563]: time="2025-07-15T05:12:58.751317471Z" level=info msg="StartContainer for \"ed209f4bf0c6dd58ef7c0df2ff252302a93ab9eddeef27d6f78ffc4bf94f8874\"" Jul 15 05:12:58.753064 containerd[1563]: time="2025-07-15T05:12:58.753025968Z" level=info msg="connecting to shim ed209f4bf0c6dd58ef7c0df2ff252302a93ab9eddeef27d6f78ffc4bf94f8874" address="unix:///run/containerd/s/61c59eed9f78ff2e9e95750abcedff52fe973b517f4f532f7cbd1560b491ac2e" protocol=ttrpc version=3 Jul 15 05:12:58.786792 systemd[1]: Started cri-containerd-ed209f4bf0c6dd58ef7c0df2ff252302a93ab9eddeef27d6f78ffc4bf94f8874.scope - libcontainer container 
ed209f4bf0c6dd58ef7c0df2ff252302a93ab9eddeef27d6f78ffc4bf94f8874. Jul 15 05:12:58.879388 systemd[1]: cri-containerd-ed209f4bf0c6dd58ef7c0df2ff252302a93ab9eddeef27d6f78ffc4bf94f8874.scope: Deactivated successfully. Jul 15 05:12:58.879886 systemd[1]: cri-containerd-ed209f4bf0c6dd58ef7c0df2ff252302a93ab9eddeef27d6f78ffc4bf94f8874.scope: Consumed 44ms CPU time, 6.5M memory peak, 4.6M written to disk. Jul 15 05:12:58.883220 containerd[1563]: time="2025-07-15T05:12:58.883178878Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ed209f4bf0c6dd58ef7c0df2ff252302a93ab9eddeef27d6f78ffc4bf94f8874\" id:\"ed209f4bf0c6dd58ef7c0df2ff252302a93ab9eddeef27d6f78ffc4bf94f8874\" pid:3402 exited_at:{seconds:1752556378 nanos:882604730}" Jul 15 05:12:59.281333 containerd[1563]: time="2025-07-15T05:12:59.281256668Z" level=info msg="received exit event container_id:\"ed209f4bf0c6dd58ef7c0df2ff252302a93ab9eddeef27d6f78ffc4bf94f8874\" id:\"ed209f4bf0c6dd58ef7c0df2ff252302a93ab9eddeef27d6f78ffc4bf94f8874\" pid:3402 exited_at:{seconds:1752556378 nanos:882604730}" Jul 15 05:12:59.283125 containerd[1563]: time="2025-07-15T05:12:59.283037922Z" level=info msg="StartContainer for \"ed209f4bf0c6dd58ef7c0df2ff252302a93ab9eddeef27d6f78ffc4bf94f8874\" returns successfully" Jul 15 05:12:59.311666 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ed209f4bf0c6dd58ef7c0df2ff252302a93ab9eddeef27d6f78ffc4bf94f8874-rootfs.mount: Deactivated successfully. 
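The repeated driver-call failures above are two views of one problem: when probing the FlexVolume plugin directory, the kubelet execs each driver binary with the operation name `init` and parses its stdout as a JSON status object. Because `/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds` does not exist yet, the exec fails, stdout is empty, and unmarshalling "" yields "unexpected end of JSON input". A minimal sketch of the response a conforming driver would print (a hypothetical handler for illustration, not Calico's actual uds driver):

```shell
# Hypothetical FlexVolume-style dispatcher: the kubelet execs the driver with
# an operation name ("init" first) and parses stdout as JSON. A missing binary
# means empty stdout, which produces the unmarshal error seen in the log.
flexvol_call() {
  case "$1" in
    init)
      # Mount-only drivers report success and advertise no attach support.
      printf '{"status": "Success", "capabilities": {"attach": false}}\n'
      ;;
    *)
      printf '{"status": "Not supported"}\n'
      return 1
      ;;
  esac
}

out=$(flexvol_call init)
```

The probe errors stop once the flexvol-driver container, whose create, start, and exit are logged around this point, has copied the real uds binary into that directory.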
Jul 15 05:13:00.052112 kubelet[2713]: E0715 05:13:00.052048 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2v6vx" podUID="b936ddb1-facc-4e6c-bca6-a08227d61e68" Jul 15 05:13:02.051603 kubelet[2713]: E0715 05:13:02.051539 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2v6vx" podUID="b936ddb1-facc-4e6c-bca6-a08227d61e68" Jul 15 05:13:02.295402 containerd[1563]: time="2025-07-15T05:13:02.294376745Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Jul 15 05:13:04.051167 kubelet[2713]: E0715 05:13:04.051110 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2v6vx" podUID="b936ddb1-facc-4e6c-bca6-a08227d61e68" Jul 15 05:13:06.051711 kubelet[2713]: E0715 05:13:06.051652 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2v6vx" podUID="b936ddb1-facc-4e6c-bca6-a08227d61e68" Jul 15 05:13:07.059697 containerd[1563]: time="2025-07-15T05:13:07.059623496Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:13:07.061052 containerd[1563]: time="2025-07-15T05:13:07.061023653Z" level=info msg="stop pulling 
image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=70436221" Jul 15 05:13:07.063470 containerd[1563]: time="2025-07-15T05:13:07.063397538Z" level=info msg="ImageCreate event name:\"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:13:07.065834 containerd[1563]: time="2025-07-15T05:13:07.065745473Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:13:07.066261 containerd[1563]: time="2025-07-15T05:13:07.066230723Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"71928924\" in 4.771773928s" Jul 15 05:13:07.066261 containerd[1563]: time="2025-07-15T05:13:07.066259116Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\"" Jul 15 05:13:07.068441 containerd[1563]: time="2025-07-15T05:13:07.068379996Z" level=info msg="CreateContainer within sandbox \"ec37d1952c8d90985e833034e870d1089a286b2c05db0da1f2c9e4ff48007285\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jul 15 05:13:07.079989 containerd[1563]: time="2025-07-15T05:13:07.079939164Z" level=info msg="Container 9a0f07dc785ae5a3c4fd83d90906cb02312043d5f367b8b4ddc154019ae8105f: CDI devices from CRI Config.CDIDevices: []" Jul 15 05:13:07.093855 containerd[1563]: time="2025-07-15T05:13:07.093798328Z" level=info msg="CreateContainer within sandbox \"ec37d1952c8d90985e833034e870d1089a286b2c05db0da1f2c9e4ff48007285\" for 
&ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"9a0f07dc785ae5a3c4fd83d90906cb02312043d5f367b8b4ddc154019ae8105f\"" Jul 15 05:13:07.094343 containerd[1563]: time="2025-07-15T05:13:07.094301663Z" level=info msg="StartContainer for \"9a0f07dc785ae5a3c4fd83d90906cb02312043d5f367b8b4ddc154019ae8105f\"" Jul 15 05:13:07.096139 containerd[1563]: time="2025-07-15T05:13:07.096109926Z" level=info msg="connecting to shim 9a0f07dc785ae5a3c4fd83d90906cb02312043d5f367b8b4ddc154019ae8105f" address="unix:///run/containerd/s/61c59eed9f78ff2e9e95750abcedff52fe973b517f4f532f7cbd1560b491ac2e" protocol=ttrpc version=3 Jul 15 05:13:07.118570 systemd[1]: Started cri-containerd-9a0f07dc785ae5a3c4fd83d90906cb02312043d5f367b8b4ddc154019ae8105f.scope - libcontainer container 9a0f07dc785ae5a3c4fd83d90906cb02312043d5f367b8b4ddc154019ae8105f. Jul 15 05:13:07.168374 containerd[1563]: time="2025-07-15T05:13:07.168336275Z" level=info msg="StartContainer for \"9a0f07dc785ae5a3c4fd83d90906cb02312043d5f367b8b4ddc154019ae8105f\" returns successfully" Jul 15 05:13:08.051329 kubelet[2713]: E0715 05:13:08.051246 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2v6vx" podUID="b936ddb1-facc-4e6c-bca6-a08227d61e68" Jul 15 05:13:09.005951 containerd[1563]: time="2025-07-15T05:13:09.005880607Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 15 05:13:09.010025 systemd[1]: cri-containerd-9a0f07dc785ae5a3c4fd83d90906cb02312043d5f367b8b4ddc154019ae8105f.scope: Deactivated successfully. 
Jul 15 05:13:09.010480 systemd[1]: cri-containerd-9a0f07dc785ae5a3c4fd83d90906cb02312043d5f367b8b4ddc154019ae8105f.scope: Consumed 647ms CPU time, 176.6M memory peak, 3.1M read from disk, 171.2M written to disk. Jul 15 05:13:09.012611 containerd[1563]: time="2025-07-15T05:13:09.012554177Z" level=info msg="received exit event container_id:\"9a0f07dc785ae5a3c4fd83d90906cb02312043d5f367b8b4ddc154019ae8105f\" id:\"9a0f07dc785ae5a3c4fd83d90906cb02312043d5f367b8b4ddc154019ae8105f\" pid:3465 exited_at:{seconds:1752556389 nanos:11864233}" Jul 15 05:13:09.012905 containerd[1563]: time="2025-07-15T05:13:09.012844662Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9a0f07dc785ae5a3c4fd83d90906cb02312043d5f367b8b4ddc154019ae8105f\" id:\"9a0f07dc785ae5a3c4fd83d90906cb02312043d5f367b8b4ddc154019ae8105f\" pid:3465 exited_at:{seconds:1752556389 nanos:11864233}" Jul 15 05:13:09.024516 kubelet[2713]: I0715 05:13:09.024452 2713 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jul 15 05:13:09.044158 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9a0f07dc785ae5a3c4fd83d90906cb02312043d5f367b8b4ddc154019ae8105f-rootfs.mount: Deactivated successfully. Jul 15 05:13:09.081894 systemd[1]: Created slice kubepods-burstable-podb92442d1_6355_4dea_8fde_9117f56f0339.slice - libcontainer container kubepods-burstable-podb92442d1_6355_4dea_8fde_9117f56f0339.slice. Jul 15 05:13:09.092214 systemd[1]: Created slice kubepods-besteffort-podaa661948_8aca_4c83_8e3e_c2b4ecf37e4a.slice - libcontainer container kubepods-besteffort-podaa661948_8aca_4c83_8e3e_c2b4ecf37e4a.slice. Jul 15 05:13:09.101974 systemd[1]: Created slice kubepods-besteffort-pod5b96ba32_0771_4e06_b517_802d100be4c8.slice - libcontainer container kubepods-besteffort-pod5b96ba32_0771_4e06_b517_802d100be4c8.slice. 
Jul 15 05:13:09.122116 kubelet[2713]: I0715 05:13:09.122051 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bc7cg\" (UniqueName: \"kubernetes.io/projected/aa661948-8aca-4c83-8e3e-c2b4ecf37e4a-kube-api-access-bc7cg\") pod \"whisker-545c68f4f4-tnhxk\" (UID: \"aa661948-8aca-4c83-8e3e-c2b4ecf37e4a\") " pod="calico-system/whisker-545c68f4f4-tnhxk" Jul 15 05:13:09.122116 kubelet[2713]: I0715 05:13:09.122094 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tnrp6\" (UniqueName: \"kubernetes.io/projected/b92442d1-6355-4dea-8fde-9117f56f0339-kube-api-access-tnrp6\") pod \"coredns-668d6bf9bc-xfv2c\" (UID: \"b92442d1-6355-4dea-8fde-9117f56f0339\") " pod="kube-system/coredns-668d6bf9bc-xfv2c" Jul 15 05:13:09.122116 kubelet[2713]: I0715 05:13:09.122121 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/ff83fb40-4d11-48a6-9cd9-48b99ded8970-calico-apiserver-certs\") pod \"calico-apiserver-bc7876d79-h8t8t\" (UID: \"ff83fb40-4d11-48a6-9cd9-48b99ded8970\") " pod="calico-apiserver/calico-apiserver-bc7876d79-h8t8t" Jul 15 05:13:09.122770 kubelet[2713]: I0715 05:13:09.122143 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5b96ba32-0771-4e06-b517-802d100be4c8-goldmane-ca-bundle\") pod \"goldmane-768f4c5c69-phzb2\" (UID: \"5b96ba32-0771-4e06-b517-802d100be4c8\") " pod="calico-system/goldmane-768f4c5c69-phzb2" Jul 15 05:13:09.122770 kubelet[2713]: I0715 05:13:09.122163 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aa661948-8aca-4c83-8e3e-c2b4ecf37e4a-whisker-ca-bundle\") pod \"whisker-545c68f4f4-tnhxk\" (UID: 
\"aa661948-8aca-4c83-8e3e-c2b4ecf37e4a\") " pod="calico-system/whisker-545c68f4f4-tnhxk" Jul 15 05:13:09.122770 kubelet[2713]: I0715 05:13:09.122179 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/5b96ba32-0771-4e06-b517-802d100be4c8-goldmane-key-pair\") pod \"goldmane-768f4c5c69-phzb2\" (UID: \"5b96ba32-0771-4e06-b517-802d100be4c8\") " pod="calico-system/goldmane-768f4c5c69-phzb2" Jul 15 05:13:09.122770 kubelet[2713]: I0715 05:13:09.122203 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/aa661948-8aca-4c83-8e3e-c2b4ecf37e4a-whisker-backend-key-pair\") pod \"whisker-545c68f4f4-tnhxk\" (UID: \"aa661948-8aca-4c83-8e3e-c2b4ecf37e4a\") " pod="calico-system/whisker-545c68f4f4-tnhxk" Jul 15 05:13:09.122770 kubelet[2713]: I0715 05:13:09.122222 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5cl5\" (UniqueName: \"kubernetes.io/projected/1fcf869b-4236-4d45-bdd7-01e1ea38a06c-kube-api-access-d5cl5\") pod \"calico-apiserver-bc7876d79-7ql6g\" (UID: \"1fcf869b-4236-4d45-bdd7-01e1ea38a06c\") " pod="calico-apiserver/calico-apiserver-bc7876d79-7ql6g" Jul 15 05:13:09.122896 kubelet[2713]: I0715 05:13:09.122244 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rdvrm\" (UniqueName: \"kubernetes.io/projected/ff83fb40-4d11-48a6-9cd9-48b99ded8970-kube-api-access-rdvrm\") pod \"calico-apiserver-bc7876d79-h8t8t\" (UID: \"ff83fb40-4d11-48a6-9cd9-48b99ded8970\") " pod="calico-apiserver/calico-apiserver-bc7876d79-h8t8t" Jul 15 05:13:09.122896 kubelet[2713]: I0715 05:13:09.122267 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/a7a25b3b-0fb7-4bf7-8050-6cc5c3635878-tigera-ca-bundle\") pod \"calico-kube-controllers-9fbb49d58-4nnjz\" (UID: \"a7a25b3b-0fb7-4bf7-8050-6cc5c3635878\") " pod="calico-system/calico-kube-controllers-9fbb49d58-4nnjz" Jul 15 05:13:09.122896 kubelet[2713]: I0715 05:13:09.122293 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pctzr\" (UniqueName: \"kubernetes.io/projected/a7a25b3b-0fb7-4bf7-8050-6cc5c3635878-kube-api-access-pctzr\") pod \"calico-kube-controllers-9fbb49d58-4nnjz\" (UID: \"a7a25b3b-0fb7-4bf7-8050-6cc5c3635878\") " pod="calico-system/calico-kube-controllers-9fbb49d58-4nnjz" Jul 15 05:13:09.122896 kubelet[2713]: I0715 05:13:09.122315 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7dmc\" (UniqueName: \"kubernetes.io/projected/5b96ba32-0771-4e06-b517-802d100be4c8-kube-api-access-d7dmc\") pod \"goldmane-768f4c5c69-phzb2\" (UID: \"5b96ba32-0771-4e06-b517-802d100be4c8\") " pod="calico-system/goldmane-768f4c5c69-phzb2" Jul 15 05:13:09.122896 kubelet[2713]: I0715 05:13:09.122336 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b92442d1-6355-4dea-8fde-9117f56f0339-config-volume\") pod \"coredns-668d6bf9bc-xfv2c\" (UID: \"b92442d1-6355-4dea-8fde-9117f56f0339\") " pod="kube-system/coredns-668d6bf9bc-xfv2c" Jul 15 05:13:09.123025 kubelet[2713]: I0715 05:13:09.122379 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b96ba32-0771-4e06-b517-802d100be4c8-config\") pod \"goldmane-768f4c5c69-phzb2\" (UID: \"5b96ba32-0771-4e06-b517-802d100be4c8\") " pod="calico-system/goldmane-768f4c5c69-phzb2" Jul 15 05:13:09.123025 kubelet[2713]: I0715 05:13:09.122404 2713 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fec24b87-b14c-4e24-8756-b676ee924b20-config-volume\") pod \"coredns-668d6bf9bc-8smjf\" (UID: \"fec24b87-b14c-4e24-8756-b676ee924b20\") " pod="kube-system/coredns-668d6bf9bc-8smjf" Jul 15 05:13:09.123025 kubelet[2713]: I0715 05:13:09.122477 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bq42\" (UniqueName: \"kubernetes.io/projected/fec24b87-b14c-4e24-8756-b676ee924b20-kube-api-access-8bq42\") pod \"coredns-668d6bf9bc-8smjf\" (UID: \"fec24b87-b14c-4e24-8756-b676ee924b20\") " pod="kube-system/coredns-668d6bf9bc-8smjf" Jul 15 05:13:09.123025 kubelet[2713]: I0715 05:13:09.122503 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/1fcf869b-4236-4d45-bdd7-01e1ea38a06c-calico-apiserver-certs\") pod \"calico-apiserver-bc7876d79-7ql6g\" (UID: \"1fcf869b-4236-4d45-bdd7-01e1ea38a06c\") " pod="calico-apiserver/calico-apiserver-bc7876d79-7ql6g" Jul 15 05:13:09.291430 systemd[1]: Created slice kubepods-besteffort-poda7a25b3b_0fb7_4bf7_8050_6cc5c3635878.slice - libcontainer container kubepods-besteffort-poda7a25b3b_0fb7_4bf7_8050_6cc5c3635878.slice. Jul 15 05:13:09.336121 containerd[1563]: time="2025-07-15T05:13:09.336067124Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-9fbb49d58-4nnjz,Uid:a7a25b3b-0fb7-4bf7-8050-6cc5c3635878,Namespace:calico-system,Attempt:0,}" Jul 15 05:13:09.339135 systemd[1]: Created slice kubepods-burstable-podfec24b87_b14c_4e24_8756_b676ee924b20.slice - libcontainer container kubepods-burstable-podfec24b87_b14c_4e24_8756_b676ee924b20.slice. 
Jul 15 05:13:09.342539 containerd[1563]: time="2025-07-15T05:13:09.342064545Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-8smjf,Uid:fec24b87-b14c-4e24-8756-b676ee924b20,Namespace:kube-system,Attempt:0,}" Jul 15 05:13:09.344781 systemd[1]: Created slice kubepods-besteffort-podff83fb40_4d11_48a6_9cd9_48b99ded8970.slice - libcontainer container kubepods-besteffort-podff83fb40_4d11_48a6_9cd9_48b99ded8970.slice. Jul 15 05:13:09.347900 containerd[1563]: time="2025-07-15T05:13:09.347770009Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-bc7876d79-h8t8t,Uid:ff83fb40-4d11-48a6-9cd9-48b99ded8970,Namespace:calico-apiserver,Attempt:0,}" Jul 15 05:13:09.350021 systemd[1]: Created slice kubepods-besteffort-pod1fcf869b_4236_4d45_bdd7_01e1ea38a06c.slice - libcontainer container kubepods-besteffort-pod1fcf869b_4236_4d45_bdd7_01e1ea38a06c.slice. Jul 15 05:13:09.352795 containerd[1563]: time="2025-07-15T05:13:09.352760141Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-bc7876d79-7ql6g,Uid:1fcf869b-4236-4d45-bdd7-01e1ea38a06c,Namespace:calico-apiserver,Attempt:0,}" Jul 15 05:13:09.584292 containerd[1563]: time="2025-07-15T05:13:09.584129095Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-545c68f4f4-tnhxk,Uid:aa661948-8aca-4c83-8e3e-c2b4ecf37e4a,Namespace:calico-system,Attempt:0,}" Jul 15 05:13:09.584449 containerd[1563]: time="2025-07-15T05:13:09.584128945Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-xfv2c,Uid:b92442d1-6355-4dea-8fde-9117f56f0339,Namespace:kube-system,Attempt:0,}" Jul 15 05:13:09.585100 containerd[1563]: time="2025-07-15T05:13:09.585035977Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-phzb2,Uid:5b96ba32-0771-4e06-b517-802d100be4c8,Namespace:calico-system,Attempt:0,}" Jul 15 05:13:09.939195 containerd[1563]: time="2025-07-15T05:13:09.939017816Z" level=error msg="Failed to destroy 
network for sandbox \"c09d7b4de35e93b08cbe62d12d5d9e52f00530f6845378ef3ec51d8169446523\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:13:09.942650 containerd[1563]: time="2025-07-15T05:13:09.942573918Z" level=error msg="Failed to destroy network for sandbox \"44a49690a0a293eef7a64b62b99f897d7feb7b1d8a7218c2e761e46ced29b767\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:13:09.946300 containerd[1563]: time="2025-07-15T05:13:09.945214932Z" level=error msg="Failed to destroy network for sandbox \"e327098c1f3d185148f7bb6595447ddb41f55d40d2a8789a585819d6d9455f5c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:13:09.958737 containerd[1563]: time="2025-07-15T05:13:09.958675616Z" level=error msg="Failed to destroy network for sandbox \"a0851073e77b6097221d3e68338491458aed5607280630b18cce3804bf0acde3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:13:09.964619 containerd[1563]: time="2025-07-15T05:13:09.964539978Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-8smjf,Uid:fec24b87-b14c-4e24-8756-b676ee924b20,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a0851073e77b6097221d3e68338491458aed5607280630b18cce3804bf0acde3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/" Jul 15 05:13:09.964987 containerd[1563]: time="2025-07-15T05:13:09.964557280Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-bc7876d79-h8t8t,Uid:ff83fb40-4d11-48a6-9cd9-48b99ded8970,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c09d7b4de35e93b08cbe62d12d5d9e52f00530f6845378ef3ec51d8169446523\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:13:09.965147 containerd[1563]: time="2025-07-15T05:13:09.965114987Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-9fbb49d58-4nnjz,Uid:a7a25b3b-0fb7-4bf7-8050-6cc5c3635878,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e327098c1f3d185148f7bb6595447ddb41f55d40d2a8789a585819d6d9455f5c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:13:09.965302 containerd[1563]: time="2025-07-15T05:13:09.965270428Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-545c68f4f4-tnhxk,Uid:aa661948-8aca-4c83-8e3e-c2b4ecf37e4a,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"44a49690a0a293eef7a64b62b99f897d7feb7b1d8a7218c2e761e46ced29b767\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:13:09.981355 containerd[1563]: time="2025-07-15T05:13:09.980243199Z" level=error msg="Failed to destroy network for sandbox \"b4a0c3663579b14d8e766f12d20aa0e44b0984daa39fcd1ef77d851b63328dec\"" 
error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:13:09.981989 kubelet[2713]: E0715 05:13:09.981911 2713 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"44a49690a0a293eef7a64b62b99f897d7feb7b1d8a7218c2e761e46ced29b767\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:13:09.982069 kubelet[2713]: E0715 05:13:09.981985 2713 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c09d7b4de35e93b08cbe62d12d5d9e52f00530f6845378ef3ec51d8169446523\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:13:09.982069 kubelet[2713]: E0715 05:13:09.982037 2713 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c09d7b4de35e93b08cbe62d12d5d9e52f00530f6845378ef3ec51d8169446523\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-bc7876d79-h8t8t" Jul 15 05:13:09.982069 kubelet[2713]: E0715 05:13:09.982037 2713 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"44a49690a0a293eef7a64b62b99f897d7feb7b1d8a7218c2e761e46ced29b767\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" pod="calico-system/whisker-545c68f4f4-tnhxk" Jul 15 05:13:09.982069 kubelet[2713]: E0715 05:13:09.982063 2713 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c09d7b4de35e93b08cbe62d12d5d9e52f00530f6845378ef3ec51d8169446523\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-bc7876d79-h8t8t" Jul 15 05:13:09.982342 kubelet[2713]: E0715 05:13:09.982070 2713 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"44a49690a0a293eef7a64b62b99f897d7feb7b1d8a7218c2e761e46ced29b767\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-545c68f4f4-tnhxk" Jul 15 05:13:09.982342 kubelet[2713]: E0715 05:13:09.982133 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-bc7876d79-h8t8t_calico-apiserver(ff83fb40-4d11-48a6-9cd9-48b99ded8970)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-bc7876d79-h8t8t_calico-apiserver(ff83fb40-4d11-48a6-9cd9-48b99ded8970)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c09d7b4de35e93b08cbe62d12d5d9e52f00530f6845378ef3ec51d8169446523\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-bc7876d79-h8t8t" podUID="ff83fb40-4d11-48a6-9cd9-48b99ded8970" Jul 15 05:13:09.982342 kubelet[2713]: E0715 05:13:09.982136 2713 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-545c68f4f4-tnhxk_calico-system(aa661948-8aca-4c83-8e3e-c2b4ecf37e4a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-545c68f4f4-tnhxk_calico-system(aa661948-8aca-4c83-8e3e-c2b4ecf37e4a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"44a49690a0a293eef7a64b62b99f897d7feb7b1d8a7218c2e761e46ced29b767\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-545c68f4f4-tnhxk" podUID="aa661948-8aca-4c83-8e3e-c2b4ecf37e4a" Jul 15 05:13:09.982562 kubelet[2713]: E0715 05:13:09.981939 2713 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a0851073e77b6097221d3e68338491458aed5607280630b18cce3804bf0acde3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:13:09.982562 kubelet[2713]: E0715 05:13:09.982210 2713 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a0851073e77b6097221d3e68338491458aed5607280630b18cce3804bf0acde3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-8smjf" Jul 15 05:13:09.982562 kubelet[2713]: E0715 05:13:09.982353 2713 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a0851073e77b6097221d3e68338491458aed5607280630b18cce3804bf0acde3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-8smjf" Jul 15 05:13:09.982691 kubelet[2713]: E0715 05:13:09.982387 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-8smjf_kube-system(fec24b87-b14c-4e24-8756-b676ee924b20)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-8smjf_kube-system(fec24b87-b14c-4e24-8756-b676ee924b20)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a0851073e77b6097221d3e68338491458aed5607280630b18cce3804bf0acde3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-8smjf" podUID="fec24b87-b14c-4e24-8756-b676ee924b20" Jul 15 05:13:09.982691 kubelet[2713]: E0715 05:13:09.982182 2713 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e327098c1f3d185148f7bb6595447ddb41f55d40d2a8789a585819d6d9455f5c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:13:09.982691 kubelet[2713]: E0715 05:13:09.982457 2713 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e327098c1f3d185148f7bb6595447ddb41f55d40d2a8789a585819d6d9455f5c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-9fbb49d58-4nnjz" Jul 15 05:13:09.982823 kubelet[2713]: E0715 05:13:09.982475 2713 kuberuntime_manager.go:1237] "CreatePodSandbox for 
pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e327098c1f3d185148f7bb6595447ddb41f55d40d2a8789a585819d6d9455f5c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-9fbb49d58-4nnjz" Jul 15 05:13:09.982823 kubelet[2713]: E0715 05:13:09.982509 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-9fbb49d58-4nnjz_calico-system(a7a25b3b-0fb7-4bf7-8050-6cc5c3635878)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-9fbb49d58-4nnjz_calico-system(a7a25b3b-0fb7-4bf7-8050-6cc5c3635878)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e327098c1f3d185148f7bb6595447ddb41f55d40d2a8789a585819d6d9455f5c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-9fbb49d58-4nnjz" podUID="a7a25b3b-0fb7-4bf7-8050-6cc5c3635878" Jul 15 05:13:09.988201 containerd[1563]: time="2025-07-15T05:13:09.988052000Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-xfv2c,Uid:b92442d1-6355-4dea-8fde-9117f56f0339,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b4a0c3663579b14d8e766f12d20aa0e44b0984daa39fcd1ef77d851b63328dec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:13:09.989072 kubelet[2713]: E0715 05:13:09.989010 2713 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network 
for sandbox \"b4a0c3663579b14d8e766f12d20aa0e44b0984daa39fcd1ef77d851b63328dec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:13:09.989142 kubelet[2713]: E0715 05:13:09.989094 2713 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b4a0c3663579b14d8e766f12d20aa0e44b0984daa39fcd1ef77d851b63328dec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-xfv2c" Jul 15 05:13:09.989142 kubelet[2713]: E0715 05:13:09.989125 2713 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b4a0c3663579b14d8e766f12d20aa0e44b0984daa39fcd1ef77d851b63328dec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-xfv2c" Jul 15 05:13:09.989245 kubelet[2713]: E0715 05:13:09.989177 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-xfv2c_kube-system(b92442d1-6355-4dea-8fde-9117f56f0339)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-xfv2c_kube-system(b92442d1-6355-4dea-8fde-9117f56f0339)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b4a0c3663579b14d8e766f12d20aa0e44b0984daa39fcd1ef77d851b63328dec\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-xfv2c" 
podUID="b92442d1-6355-4dea-8fde-9117f56f0339" Jul 15 05:13:09.993776 containerd[1563]: time="2025-07-15T05:13:09.993697551Z" level=error msg="Failed to destroy network for sandbox \"a5fb2d7dc6d31cd20e6ff5dd05431a479924120d2ab84913305fd5eb35154de4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:13:09.994905 containerd[1563]: time="2025-07-15T05:13:09.994859842Z" level=error msg="Failed to destroy network for sandbox \"4da11f93ac4dcddc21cb5f0468df797847140db80e571b720ebc6f8fd07a5580\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:13:09.996701 containerd[1563]: time="2025-07-15T05:13:09.996622438Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-bc7876d79-7ql6g,Uid:1fcf869b-4236-4d45-bdd7-01e1ea38a06c,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a5fb2d7dc6d31cd20e6ff5dd05431a479924120d2ab84913305fd5eb35154de4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:13:09.997160 kubelet[2713]: E0715 05:13:09.997059 2713 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a5fb2d7dc6d31cd20e6ff5dd05431a479924120d2ab84913305fd5eb35154de4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:13:09.997244 kubelet[2713]: E0715 05:13:09.997194 2713 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc 
error: code = Unknown desc = failed to setup network for sandbox \"a5fb2d7dc6d31cd20e6ff5dd05431a479924120d2ab84913305fd5eb35154de4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-bc7876d79-7ql6g" Jul 15 05:13:09.997244 kubelet[2713]: E0715 05:13:09.997216 2713 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a5fb2d7dc6d31cd20e6ff5dd05431a479924120d2ab84913305fd5eb35154de4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-bc7876d79-7ql6g" Jul 15 05:13:09.997317 kubelet[2713]: E0715 05:13:09.997254 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-bc7876d79-7ql6g_calico-apiserver(1fcf869b-4236-4d45-bdd7-01e1ea38a06c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-bc7876d79-7ql6g_calico-apiserver(1fcf869b-4236-4d45-bdd7-01e1ea38a06c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a5fb2d7dc6d31cd20e6ff5dd05431a479924120d2ab84913305fd5eb35154de4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-bc7876d79-7ql6g" podUID="1fcf869b-4236-4d45-bdd7-01e1ea38a06c" Jul 15 05:13:09.999483 containerd[1563]: time="2025-07-15T05:13:09.999455553Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-phzb2,Uid:5b96ba32-0771-4e06-b517-802d100be4c8,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown 
desc = failed to setup network for sandbox \"4da11f93ac4dcddc21cb5f0468df797847140db80e571b720ebc6f8fd07a5580\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:13:09.999676 kubelet[2713]: E0715 05:13:09.999582 2713 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4da11f93ac4dcddc21cb5f0468df797847140db80e571b720ebc6f8fd07a5580\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:13:09.999676 kubelet[2713]: E0715 05:13:09.999614 2713 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4da11f93ac4dcddc21cb5f0468df797847140db80e571b720ebc6f8fd07a5580\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-phzb2" Jul 15 05:13:09.999676 kubelet[2713]: E0715 05:13:09.999643 2713 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4da11f93ac4dcddc21cb5f0468df797847140db80e571b720ebc6f8fd07a5580\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-phzb2" Jul 15 05:13:09.999782 kubelet[2713]: E0715 05:13:09.999671 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-768f4c5c69-phzb2_calico-system(5b96ba32-0771-4e06-b517-802d100be4c8)\" with CreatePodSandboxError: \"Failed to 
create sandbox for pod \\\"goldmane-768f4c5c69-phzb2_calico-system(5b96ba32-0771-4e06-b517-802d100be4c8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4da11f93ac4dcddc21cb5f0468df797847140db80e571b720ebc6f8fd07a5580\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-phzb2" podUID="5b96ba32-0771-4e06-b517-802d100be4c8" Jul 15 05:13:10.044742 systemd[1]: run-netns-cni\x2d127a7bda\x2d4ddc\x2d5161\x2dead2\x2dcf0efeb6623a.mount: Deactivated successfully. Jul 15 05:13:10.061825 systemd[1]: Created slice kubepods-besteffort-podb936ddb1_facc_4e6c_bca6_a08227d61e68.slice - libcontainer container kubepods-besteffort-podb936ddb1_facc_4e6c_bca6_a08227d61e68.slice. Jul 15 05:13:10.064852 containerd[1563]: time="2025-07-15T05:13:10.064799410Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2v6vx,Uid:b936ddb1-facc-4e6c-bca6-a08227d61e68,Namespace:calico-system,Attempt:0,}" Jul 15 05:13:10.130253 containerd[1563]: time="2025-07-15T05:13:10.130171860Z" level=error msg="Failed to destroy network for sandbox \"031a7d8164dde548931a34a6bf53c8d5552f0a27484b29c3d2fdbd8e249a4fd2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:13:10.133595 systemd[1]: run-netns-cni\x2d4067cbbd\x2d6e90\x2dc6cf\x2dbeed\x2d4d34c742fcf1.mount: Deactivated successfully. 
Jul 15 05:13:10.135008 containerd[1563]: time="2025-07-15T05:13:10.134318520Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2v6vx,Uid:b936ddb1-facc-4e6c-bca6-a08227d61e68,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"031a7d8164dde548931a34a6bf53c8d5552f0a27484b29c3d2fdbd8e249a4fd2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:13:10.135149 kubelet[2713]: E0715 05:13:10.134787 2713 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"031a7d8164dde548931a34a6bf53c8d5552f0a27484b29c3d2fdbd8e249a4fd2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:13:10.135149 kubelet[2713]: E0715 05:13:10.134859 2713 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"031a7d8164dde548931a34a6bf53c8d5552f0a27484b29c3d2fdbd8e249a4fd2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2v6vx" Jul 15 05:13:10.135149 kubelet[2713]: E0715 05:13:10.134887 2713 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"031a7d8164dde548931a34a6bf53c8d5552f0a27484b29c3d2fdbd8e249a4fd2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2v6vx" 
Jul 15 05:13:10.135694 kubelet[2713]: E0715 05:13:10.134948 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-2v6vx_calico-system(b936ddb1-facc-4e6c-bca6-a08227d61e68)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-2v6vx_calico-system(b936ddb1-facc-4e6c-bca6-a08227d61e68)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"031a7d8164dde548931a34a6bf53c8d5552f0a27484b29c3d2fdbd8e249a4fd2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-2v6vx" podUID="b936ddb1-facc-4e6c-bca6-a08227d61e68" Jul 15 05:13:10.323461 containerd[1563]: time="2025-07-15T05:13:10.321567723Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Jul 15 05:13:10.541493 kubelet[2713]: I0715 05:13:10.541395 2713 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 15 05:13:20.866686 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount398524675.mount: Deactivated successfully. 
Jul 15 05:13:21.052494 containerd[1563]: time="2025-07-15T05:13:21.052397090Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-545c68f4f4-tnhxk,Uid:aa661948-8aca-4c83-8e3e-c2b4ecf37e4a,Namespace:calico-system,Attempt:0,}" Jul 15 05:13:23.052273 containerd[1563]: time="2025-07-15T05:13:23.052188567Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-bc7876d79-7ql6g,Uid:1fcf869b-4236-4d45-bdd7-01e1ea38a06c,Namespace:calico-apiserver,Attempt:0,}" Jul 15 05:13:23.053030 containerd[1563]: time="2025-07-15T05:13:23.052192445Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-9fbb49d58-4nnjz,Uid:a7a25b3b-0fb7-4bf7-8050-6cc5c3635878,Namespace:calico-system,Attempt:0,}" Jul 15 05:13:23.571436 containerd[1563]: time="2025-07-15T05:13:23.571327114Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:13:23.601862 containerd[1563]: time="2025-07-15T05:13:23.601716139Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=158500163" Jul 15 05:13:23.610865 containerd[1563]: time="2025-07-15T05:13:23.609127590Z" level=info msg="ImageCreate event name:\"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:13:23.621579 containerd[1563]: time="2025-07-15T05:13:23.621520252Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:13:23.622163 containerd[1563]: time="2025-07-15T05:13:23.622117993Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest 
\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"158500025\" in 13.300474298s" Jul 15 05:13:23.622163 containerd[1563]: time="2025-07-15T05:13:23.622157289Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\"" Jul 15 05:13:23.653643 containerd[1563]: time="2025-07-15T05:13:23.653600733Z" level=info msg="CreateContainer within sandbox \"ec37d1952c8d90985e833034e870d1089a286b2c05db0da1f2c9e4ff48007285\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 15 05:13:23.696448 containerd[1563]: time="2025-07-15T05:13:23.695702670Z" level=info msg="Container 2c5b087de4259ae4e44ef01a63bd937e0664272fd4d9eb6c5253344a4110cb91: CDI devices from CRI Config.CDIDevices: []" Jul 15 05:13:23.710960 containerd[1563]: time="2025-07-15T05:13:23.710900310Z" level=error msg="Failed to destroy network for sandbox \"d0524a5a2695656f6200f4b4ebb51f2d7898e64e31109616d127853a2432016d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:13:23.712741 containerd[1563]: time="2025-07-15T05:13:23.712596393Z" level=error msg="Failed to destroy network for sandbox \"7ba6a3e9f4323dd305cab8711955b1ec27644e8a5de527dd8bd4bab93e8da507\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:13:23.712992 containerd[1563]: time="2025-07-15T05:13:23.712871617Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-bc7876d79-7ql6g,Uid:1fcf869b-4236-4d45-bdd7-01e1ea38a06c,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"d0524a5a2695656f6200f4b4ebb51f2d7898e64e31109616d127853a2432016d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:13:23.713207 kubelet[2713]: E0715 05:13:23.713162 2713 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d0524a5a2695656f6200f4b4ebb51f2d7898e64e31109616d127853a2432016d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:13:23.713634 kubelet[2713]: E0715 05:13:23.713233 2713 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d0524a5a2695656f6200f4b4ebb51f2d7898e64e31109616d127853a2432016d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-bc7876d79-7ql6g" Jul 15 05:13:23.713634 kubelet[2713]: E0715 05:13:23.713257 2713 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d0524a5a2695656f6200f4b4ebb51f2d7898e64e31109616d127853a2432016d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-bc7876d79-7ql6g" Jul 15 05:13:23.713634 kubelet[2713]: E0715 05:13:23.713304 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-bc7876d79-7ql6g_calico-apiserver(1fcf869b-4236-4d45-bdd7-01e1ea38a06c)\" with CreatePodSandboxError: \"Failed to create sandbox 
for pod \\\"calico-apiserver-bc7876d79-7ql6g_calico-apiserver(1fcf869b-4236-4d45-bdd7-01e1ea38a06c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d0524a5a2695656f6200f4b4ebb51f2d7898e64e31109616d127853a2432016d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-bc7876d79-7ql6g" podUID="1fcf869b-4236-4d45-bdd7-01e1ea38a06c" Jul 15 05:13:23.714978 containerd[1563]: time="2025-07-15T05:13:23.714867091Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-545c68f4f4-tnhxk,Uid:aa661948-8aca-4c83-8e3e-c2b4ecf37e4a,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7ba6a3e9f4323dd305cab8711955b1ec27644e8a5de527dd8bd4bab93e8da507\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:13:23.715262 kubelet[2713]: E0715 05:13:23.715232 2713 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7ba6a3e9f4323dd305cab8711955b1ec27644e8a5de527dd8bd4bab93e8da507\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:13:23.715300 kubelet[2713]: E0715 05:13:23.715286 2713 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7ba6a3e9f4323dd305cab8711955b1ec27644e8a5de527dd8bd4bab93e8da507\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="calico-system/whisker-545c68f4f4-tnhxk" Jul 15 05:13:23.715333 kubelet[2713]: E0715 05:13:23.715304 2713 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7ba6a3e9f4323dd305cab8711955b1ec27644e8a5de527dd8bd4bab93e8da507\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-545c68f4f4-tnhxk" Jul 15 05:13:23.715375 kubelet[2713]: E0715 05:13:23.715341 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-545c68f4f4-tnhxk_calico-system(aa661948-8aca-4c83-8e3e-c2b4ecf37e4a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-545c68f4f4-tnhxk_calico-system(aa661948-8aca-4c83-8e3e-c2b4ecf37e4a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7ba6a3e9f4323dd305cab8711955b1ec27644e8a5de527dd8bd4bab93e8da507\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-545c68f4f4-tnhxk" podUID="aa661948-8aca-4c83-8e3e-c2b4ecf37e4a" Jul 15 05:13:23.720050 containerd[1563]: time="2025-07-15T05:13:23.719851470Z" level=info msg="CreateContainer within sandbox \"ec37d1952c8d90985e833034e870d1089a286b2c05db0da1f2c9e4ff48007285\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"2c5b087de4259ae4e44ef01a63bd937e0664272fd4d9eb6c5253344a4110cb91\"" Jul 15 05:13:23.720482 containerd[1563]: time="2025-07-15T05:13:23.720397269Z" level=info msg="StartContainer for \"2c5b087de4259ae4e44ef01a63bd937e0664272fd4d9eb6c5253344a4110cb91\"" Jul 15 05:13:23.725860 containerd[1563]: time="2025-07-15T05:13:23.725806052Z" level=error msg="Failed to destroy 
network for sandbox \"770513b46edc43a463cb1d298d89816455a1e775f13e888910f665c52656f5ac\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:13:23.729037 containerd[1563]: time="2025-07-15T05:13:23.729005184Z" level=info msg="connecting to shim 2c5b087de4259ae4e44ef01a63bd937e0664272fd4d9eb6c5253344a4110cb91" address="unix:///run/containerd/s/61c59eed9f78ff2e9e95750abcedff52fe973b517f4f532f7cbd1560b491ac2e" protocol=ttrpc version=3 Jul 15 05:13:23.813565 systemd[1]: Started cri-containerd-2c5b087de4259ae4e44ef01a63bd937e0664272fd4d9eb6c5253344a4110cb91.scope - libcontainer container 2c5b087de4259ae4e44ef01a63bd937e0664272fd4d9eb6c5253344a4110cb91. Jul 15 05:13:23.824907 containerd[1563]: time="2025-07-15T05:13:23.824728066Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-9fbb49d58-4nnjz,Uid:a7a25b3b-0fb7-4bf7-8050-6cc5c3635878,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"770513b46edc43a463cb1d298d89816455a1e775f13e888910f665c52656f5ac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:13:23.825225 kubelet[2713]: E0715 05:13:23.824994 2713 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"770513b46edc43a463cb1d298d89816455a1e775f13e888910f665c52656f5ac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:13:23.825225 kubelet[2713]: E0715 05:13:23.825052 2713 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc 
= failed to setup network for sandbox \"770513b46edc43a463cb1d298d89816455a1e775f13e888910f665c52656f5ac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-9fbb49d58-4nnjz" Jul 15 05:13:23.825225 kubelet[2713]: E0715 05:13:23.825076 2713 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"770513b46edc43a463cb1d298d89816455a1e775f13e888910f665c52656f5ac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-9fbb49d58-4nnjz" Jul 15 05:13:23.825614 kubelet[2713]: E0715 05:13:23.825122 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-9fbb49d58-4nnjz_calico-system(a7a25b3b-0fb7-4bf7-8050-6cc5c3635878)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-9fbb49d58-4nnjz_calico-system(a7a25b3b-0fb7-4bf7-8050-6cc5c3635878)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"770513b46edc43a463cb1d298d89816455a1e775f13e888910f665c52656f5ac\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-9fbb49d58-4nnjz" podUID="a7a25b3b-0fb7-4bf7-8050-6cc5c3635878" Jul 15 05:13:23.985285 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jul 15 05:13:23.986115 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Jul 15 05:13:24.034648 containerd[1563]: time="2025-07-15T05:13:24.034589903Z" level=info msg="StartContainer for \"2c5b087de4259ae4e44ef01a63bd937e0664272fd4d9eb6c5253344a4110cb91\" returns successfully" Jul 15 05:13:24.052751 containerd[1563]: time="2025-07-15T05:13:24.052693391Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-bc7876d79-h8t8t,Uid:ff83fb40-4d11-48a6-9cd9-48b99ded8970,Namespace:calico-apiserver,Attempt:0,}" Jul 15 05:13:24.053388 containerd[1563]: time="2025-07-15T05:13:24.053358030Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-xfv2c,Uid:b92442d1-6355-4dea-8fde-9117f56f0339,Namespace:kube-system,Attempt:0,}" Jul 15 05:13:24.053462 containerd[1563]: time="2025-07-15T05:13:24.053397086Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-phzb2,Uid:5b96ba32-0771-4e06-b517-802d100be4c8,Namespace:calico-system,Attempt:0,}" Jul 15 05:13:24.053525 containerd[1563]: time="2025-07-15T05:13:24.053365535Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2v6vx,Uid:b936ddb1-facc-4e6c-bca6-a08227d61e68,Namespace:calico-system,Attempt:0,}" Jul 15 05:13:24.410201 systemd[1]: Started sshd@7-10.0.0.51:22-10.0.0.1:39248.service - OpenSSH per-connection server daemon (10.0.0.1:39248). 
Jul 15 05:13:24.504594 containerd[1563]: time="2025-07-15T05:13:24.504519623Z" level=error msg="Failed to destroy network for sandbox \"7fd3e66acd29ed8ac36806326b807e1e4acadda4f43dfcc26fdfcbbaf58dfbf6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:13:24.504923 containerd[1563]: time="2025-07-15T05:13:24.504707288Z" level=error msg="Failed to destroy network for sandbox \"09a1b7b13d00e373596a801f16cafaa3a7833b8a1892f51ef68dc3d4df6f87dd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:13:24.508716 sshd[3971]: Accepted publickey for core from 10.0.0.1 port 39248 ssh2: RSA SHA256:xQteBGu1K6SjT/ucc5Duk9MfMFesvWUUvdc6KRmollo Jul 15 05:13:24.512533 sshd-session[3971]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:13:24.523246 containerd[1563]: time="2025-07-15T05:13:24.523179711Z" level=error msg="Failed to destroy network for sandbox \"2731980bc87bfcfdadab211d97a170631f4a3b6cb8ad3b5d489b248fac881fc8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:13:24.525108 containerd[1563]: time="2025-07-15T05:13:24.525051402Z" level=error msg="Failed to destroy network for sandbox \"cc4d9ac9d785e10e86c7944ba58762c617525894c85fac239455d9f22458654d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:13:24.561475 systemd-logind[1508]: New session 8 of user core. Jul 15 05:13:24.568681 systemd[1]: Started session-8.scope - Session 8 of User core. 
Jul 15 05:13:24.572194 systemd[1]: run-netns-cni\x2d1b2a90e5\x2d0a0c\x2da453\x2d1d7e\x2d42fd6e0a2eff.mount: Deactivated successfully. Jul 15 05:13:24.572324 systemd[1]: run-netns-cni\x2d4dc8a2d6\x2df644\x2d8c2b\x2d9c61\x2d82884201e67b.mount: Deactivated successfully. Jul 15 05:13:24.572440 systemd[1]: run-netns-cni\x2dc87aa76a\x2d616e\x2d05f6\x2dc208\x2ddd5e93f31869.mount: Deactivated successfully. Jul 15 05:13:24.616008 containerd[1563]: time="2025-07-15T05:13:24.615921033Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-xfv2c,Uid:b92442d1-6355-4dea-8fde-9117f56f0339,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7fd3e66acd29ed8ac36806326b807e1e4acadda4f43dfcc26fdfcbbaf58dfbf6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:13:24.616318 kubelet[2713]: E0715 05:13:24.616269 2713 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7fd3e66acd29ed8ac36806326b807e1e4acadda4f43dfcc26fdfcbbaf58dfbf6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:13:24.617234 kubelet[2713]: E0715 05:13:24.616380 2713 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7fd3e66acd29ed8ac36806326b807e1e4acadda4f43dfcc26fdfcbbaf58dfbf6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-xfv2c" Jul 15 05:13:24.617234 kubelet[2713]: E0715 05:13:24.616453 2713 
kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7fd3e66acd29ed8ac36806326b807e1e4acadda4f43dfcc26fdfcbbaf58dfbf6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-xfv2c" Jul 15 05:13:24.617234 kubelet[2713]: E0715 05:13:24.616552 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-xfv2c_kube-system(b92442d1-6355-4dea-8fde-9117f56f0339)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-xfv2c_kube-system(b92442d1-6355-4dea-8fde-9117f56f0339)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7fd3e66acd29ed8ac36806326b807e1e4acadda4f43dfcc26fdfcbbaf58dfbf6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-xfv2c" podUID="b92442d1-6355-4dea-8fde-9117f56f0339" Jul 15 05:13:24.662456 containerd[1563]: time="2025-07-15T05:13:24.662220776Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-bc7876d79-h8t8t,Uid:ff83fb40-4d11-48a6-9cd9-48b99ded8970,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"09a1b7b13d00e373596a801f16cafaa3a7833b8a1892f51ef68dc3d4df6f87dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:13:24.663078 kubelet[2713]: E0715 05:13:24.663006 2713 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"09a1b7b13d00e373596a801f16cafaa3a7833b8a1892f51ef68dc3d4df6f87dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:13:24.663174 kubelet[2713]: E0715 05:13:24.663096 2713 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"09a1b7b13d00e373596a801f16cafaa3a7833b8a1892f51ef68dc3d4df6f87dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-bc7876d79-h8t8t" Jul 15 05:13:24.663174 kubelet[2713]: E0715 05:13:24.663125 2713 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"09a1b7b13d00e373596a801f16cafaa3a7833b8a1892f51ef68dc3d4df6f87dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-bc7876d79-h8t8t" Jul 15 05:13:24.663243 kubelet[2713]: E0715 05:13:24.663187 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-bc7876d79-h8t8t_calico-apiserver(ff83fb40-4d11-48a6-9cd9-48b99ded8970)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-bc7876d79-h8t8t_calico-apiserver(ff83fb40-4d11-48a6-9cd9-48b99ded8970)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"09a1b7b13d00e373596a801f16cafaa3a7833b8a1892f51ef68dc3d4df6f87dd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-apiserver/calico-apiserver-bc7876d79-h8t8t" podUID="ff83fb40-4d11-48a6-9cd9-48b99ded8970" Jul 15 05:13:24.728937 containerd[1563]: time="2025-07-15T05:13:24.728875755Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2c5b087de4259ae4e44ef01a63bd937e0664272fd4d9eb6c5253344a4110cb91\" id:\"927f55f7bdd567b8adfdbbc52bd363c4319f1ea2acae1aa5e8d0eb49d4082da7\" pid:4074 exit_status:1 exited_at:{seconds:1752556404 nanos:728505487}" Jul 15 05:13:24.777986 containerd[1563]: time="2025-07-15T05:13:24.777828294Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-phzb2,Uid:5b96ba32-0771-4e06-b517-802d100be4c8,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2731980bc87bfcfdadab211d97a170631f4a3b6cb8ad3b5d489b248fac881fc8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:13:24.778226 kubelet[2713]: E0715 05:13:24.778134 2713 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2731980bc87bfcfdadab211d97a170631f4a3b6cb8ad3b5d489b248fac881fc8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:13:24.778673 kubelet[2713]: E0715 05:13:24.778296 2713 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2731980bc87bfcfdadab211d97a170631f4a3b6cb8ad3b5d489b248fac881fc8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-phzb2" Jul 15 05:13:24.778673 
kubelet[2713]: E0715 05:13:24.778330 2713 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2731980bc87bfcfdadab211d97a170631f4a3b6cb8ad3b5d489b248fac881fc8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-phzb2" Jul 15 05:13:24.778673 kubelet[2713]: E0715 05:13:24.778390 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-768f4c5c69-phzb2_calico-system(5b96ba32-0771-4e06-b517-802d100be4c8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-768f4c5c69-phzb2_calico-system(5b96ba32-0771-4e06-b517-802d100be4c8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2731980bc87bfcfdadab211d97a170631f4a3b6cb8ad3b5d489b248fac881fc8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-phzb2" podUID="5b96ba32-0771-4e06-b517-802d100be4c8" Jul 15 05:13:24.825483 containerd[1563]: time="2025-07-15T05:13:24.824640111Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2v6vx,Uid:b936ddb1-facc-4e6c-bca6-a08227d61e68,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"cc4d9ac9d785e10e86c7944ba58762c617525894c85fac239455d9f22458654d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:13:24.825951 kubelet[2713]: E0715 05:13:24.825881 2713 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = 
Unknown desc = failed to setup network for sandbox \"cc4d9ac9d785e10e86c7944ba58762c617525894c85fac239455d9f22458654d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:13:24.826092 kubelet[2713]: E0715 05:13:24.825968 2713 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cc4d9ac9d785e10e86c7944ba58762c617525894c85fac239455d9f22458654d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2v6vx" Jul 15 05:13:24.826092 kubelet[2713]: E0715 05:13:24.825992 2713 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cc4d9ac9d785e10e86c7944ba58762c617525894c85fac239455d9f22458654d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2v6vx" Jul 15 05:13:24.826092 kubelet[2713]: E0715 05:13:24.826039 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-2v6vx_calico-system(b936ddb1-facc-4e6c-bca6-a08227d61e68)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-2v6vx_calico-system(b936ddb1-facc-4e6c-bca6-a08227d61e68)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cc4d9ac9d785e10e86c7944ba58762c617525894c85fac239455d9f22458654d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-system/csi-node-driver-2v6vx" podUID="b936ddb1-facc-4e6c-bca6-a08227d61e68" Jul 15 05:13:25.052119 containerd[1563]: time="2025-07-15T05:13:25.051962917Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-8smjf,Uid:fec24b87-b14c-4e24-8756-b676ee924b20,Namespace:kube-system,Attempt:0,}" Jul 15 05:13:25.126296 kubelet[2713]: I0715 05:13:25.126195 2713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-6t5q8" podStartSLOduration=3.518348239 podStartE2EDuration="36.126159312s" podCreationTimestamp="2025-07-15 05:12:49 +0000 UTC" firstStartedPulling="2025-07-15 05:12:51.020013411 +0000 UTC m=+28.062477459" lastFinishedPulling="2025-07-15 05:13:23.627824484 +0000 UTC m=+60.670288532" observedRunningTime="2025-07-15 05:13:25.125875161 +0000 UTC m=+62.168339209" watchObservedRunningTime="2025-07-15 05:13:25.126159312 +0000 UTC m=+62.168623360" Jul 15 05:13:25.330102 containerd[1563]: time="2025-07-15T05:13:25.329905979Z" level=error msg="Failed to destroy network for sandbox \"1d5a7ee5984faca18ee30cfd77571a04487d86a3b0d3bccc742226768467f02f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:13:25.332865 systemd[1]: run-netns-cni\x2d2c91c467\x2de8db\x2d48aa\x2dc298\x2d03827423b80d.mount: Deactivated successfully. 
Jul 15 05:13:25.418386 containerd[1563]: time="2025-07-15T05:13:25.418222810Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-8smjf,Uid:fec24b87-b14c-4e24-8756-b676ee924b20,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1d5a7ee5984faca18ee30cfd77571a04487d86a3b0d3bccc742226768467f02f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:13:25.418675 kubelet[2713]: E0715 05:13:25.418640 2713 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1d5a7ee5984faca18ee30cfd77571a04487d86a3b0d3bccc742226768467f02f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:13:25.418750 kubelet[2713]: E0715 05:13:25.418716 2713 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1d5a7ee5984faca18ee30cfd77571a04487d86a3b0d3bccc742226768467f02f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-8smjf" Jul 15 05:13:25.418750 kubelet[2713]: E0715 05:13:25.418743 2713 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1d5a7ee5984faca18ee30cfd77571a04487d86a3b0d3bccc742226768467f02f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-668d6bf9bc-8smjf" Jul 15 05:13:25.418818 kubelet[2713]: E0715 05:13:25.418793 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-8smjf_kube-system(fec24b87-b14c-4e24-8756-b676ee924b20)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-8smjf_kube-system(fec24b87-b14c-4e24-8756-b676ee924b20)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1d5a7ee5984faca18ee30cfd77571a04487d86a3b0d3bccc742226768467f02f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-8smjf" podUID="fec24b87-b14c-4e24-8756-b676ee924b20" Jul 15 05:13:25.616638 sshd[4057]: Connection closed by 10.0.0.1 port 39248 Jul 15 05:13:25.617086 sshd-session[3971]: pam_unix(sshd:session): session closed for user core Jul 15 05:13:25.622622 systemd[1]: sshd@7-10.0.0.51:22-10.0.0.1:39248.service: Deactivated successfully. Jul 15 05:13:25.625968 systemd[1]: session-8.scope: Deactivated successfully. Jul 15 05:13:25.629475 systemd-logind[1508]: Session 8 logged out. Waiting for processes to exit. Jul 15 05:13:25.630934 systemd-logind[1508]: Removed session 8. 
Jul 15 05:13:25.675805 containerd[1563]: time="2025-07-15T05:13:25.675747171Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2c5b087de4259ae4e44ef01a63bd937e0664272fd4d9eb6c5253344a4110cb91\" id:\"6eb85c2d603934be785c3c702756ebaac2244f5df910dea97736830f726f9224\" pid:4136 exit_status:1 exited_at:{seconds:1752556405 nanos:675352616}" Jul 15 05:13:26.166277 kubelet[2713]: I0715 05:13:26.166218 2713 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/aa661948-8aca-4c83-8e3e-c2b4ecf37e4a-whisker-backend-key-pair\") pod \"aa661948-8aca-4c83-8e3e-c2b4ecf37e4a\" (UID: \"aa661948-8aca-4c83-8e3e-c2b4ecf37e4a\") " Jul 15 05:13:26.166277 kubelet[2713]: I0715 05:13:26.166273 2713 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bc7cg\" (UniqueName: \"kubernetes.io/projected/aa661948-8aca-4c83-8e3e-c2b4ecf37e4a-kube-api-access-bc7cg\") pod \"aa661948-8aca-4c83-8e3e-c2b4ecf37e4a\" (UID: \"aa661948-8aca-4c83-8e3e-c2b4ecf37e4a\") " Jul 15 05:13:26.166277 kubelet[2713]: I0715 05:13:26.166292 2713 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aa661948-8aca-4c83-8e3e-c2b4ecf37e4a-whisker-ca-bundle\") pod \"aa661948-8aca-4c83-8e3e-c2b4ecf37e4a\" (UID: \"aa661948-8aca-4c83-8e3e-c2b4ecf37e4a\") " Jul 15 05:13:26.185866 kubelet[2713]: I0715 05:13:26.185773 2713 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa661948-8aca-4c83-8e3e-c2b4ecf37e4a-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "aa661948-8aca-4c83-8e3e-c2b4ecf37e4a" (UID: "aa661948-8aca-4c83-8e3e-c2b4ecf37e4a"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 15 05:13:26.191765 kubelet[2713]: I0715 05:13:26.191682 2713 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa661948-8aca-4c83-8e3e-c2b4ecf37e4a-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "aa661948-8aca-4c83-8e3e-c2b4ecf37e4a" (UID: "aa661948-8aca-4c83-8e3e-c2b4ecf37e4a"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 15 05:13:26.191765 kubelet[2713]: I0715 05:13:26.191694 2713 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa661948-8aca-4c83-8e3e-c2b4ecf37e4a-kube-api-access-bc7cg" (OuterVolumeSpecName: "kube-api-access-bc7cg") pod "aa661948-8aca-4c83-8e3e-c2b4ecf37e4a" (UID: "aa661948-8aca-4c83-8e3e-c2b4ecf37e4a"). InnerVolumeSpecName "kube-api-access-bc7cg". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 15 05:13:26.192676 systemd[1]: var-lib-kubelet-pods-aa661948\x2d8aca\x2d4c83\x2d8e3e\x2dc2b4ecf37e4a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dbc7cg.mount: Deactivated successfully. Jul 15 05:13:26.192832 systemd[1]: var-lib-kubelet-pods-aa661948\x2d8aca\x2d4c83\x2d8e3e\x2dc2b4ecf37e4a-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Jul 15 05:13:26.267281 kubelet[2713]: I0715 05:13:26.267158 2713 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/aa661948-8aca-4c83-8e3e-c2b4ecf37e4a-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Jul 15 05:13:26.267281 kubelet[2713]: I0715 05:13:26.267222 2713 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-bc7cg\" (UniqueName: \"kubernetes.io/projected/aa661948-8aca-4c83-8e3e-c2b4ecf37e4a-kube-api-access-bc7cg\") on node \"localhost\" DevicePath \"\"" Jul 15 05:13:26.267281 kubelet[2713]: I0715 05:13:26.267237 2713 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aa661948-8aca-4c83-8e3e-c2b4ecf37e4a-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Jul 15 05:13:26.573352 systemd[1]: Removed slice kubepods-besteffort-podaa661948_8aca_4c83_8e3e_c2b4ecf37e4a.slice - libcontainer container kubepods-besteffort-podaa661948_8aca_4c83_8e3e_c2b4ecf37e4a.slice. Jul 15 05:13:28.647839 systemd[1]: Created slice kubepods-besteffort-poda9a007dc_45f5_4021_8119_2ad9adcfe26f.slice - libcontainer container kubepods-besteffort-poda9a007dc_45f5_4021_8119_2ad9adcfe26f.slice. 
Jul 15 05:13:28.682194 kubelet[2713]: I0715 05:13:28.682030 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6kt66\" (UniqueName: \"kubernetes.io/projected/a9a007dc-45f5-4021-8119-2ad9adcfe26f-kube-api-access-6kt66\") pod \"whisker-5b874f976b-2kbvl\" (UID: \"a9a007dc-45f5-4021-8119-2ad9adcfe26f\") " pod="calico-system/whisker-5b874f976b-2kbvl" Jul 15 05:13:28.682194 kubelet[2713]: I0715 05:13:28.682085 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a9a007dc-45f5-4021-8119-2ad9adcfe26f-whisker-backend-key-pair\") pod \"whisker-5b874f976b-2kbvl\" (UID: \"a9a007dc-45f5-4021-8119-2ad9adcfe26f\") " pod="calico-system/whisker-5b874f976b-2kbvl" Jul 15 05:13:28.682194 kubelet[2713]: I0715 05:13:28.682111 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a9a007dc-45f5-4021-8119-2ad9adcfe26f-whisker-ca-bundle\") pod \"whisker-5b874f976b-2kbvl\" (UID: \"a9a007dc-45f5-4021-8119-2ad9adcfe26f\") " pod="calico-system/whisker-5b874f976b-2kbvl" Jul 15 05:13:28.981923 containerd[1563]: time="2025-07-15T05:13:28.953840663Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5b874f976b-2kbvl,Uid:a9a007dc-45f5-4021-8119-2ad9adcfe26f,Namespace:calico-system,Attempt:0,}" Jul 15 05:13:29.054191 kubelet[2713]: I0715 05:13:29.054103 2713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aa661948-8aca-4c83-8e3e-c2b4ecf37e4a" path="/var/lib/kubelet/pods/aa661948-8aca-4c83-8e3e-c2b4ecf37e4a/volumes" Jul 15 05:13:29.424960 systemd-networkd[1492]: cali4f9cbfd7ae7: Link UP Jul 15 05:13:29.425229 systemd-networkd[1492]: cali4f9cbfd7ae7: Gained carrier Jul 15 05:13:29.481572 containerd[1563]: 2025-07-15 05:13:29.239 [INFO][4176] cni-plugin/utils.go 100: File 
/var/lib/calico/mtu does not exist Jul 15 05:13:29.481572 containerd[1563]: 2025-07-15 05:13:29.264 [INFO][4176] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--5b874f976b--2kbvl-eth0 whisker-5b874f976b- calico-system a9a007dc-45f5-4021-8119-2ad9adcfe26f 966 0 2025-07-15 05:13:28 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:5b874f976b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-5b874f976b-2kbvl eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali4f9cbfd7ae7 [] [] }} ContainerID="53f0a7baa4ad465204311645e1f1fb521bbf959b5651b15350913b394f49569e" Namespace="calico-system" Pod="whisker-5b874f976b-2kbvl" WorkloadEndpoint="localhost-k8s-whisker--5b874f976b--2kbvl-" Jul 15 05:13:29.481572 containerd[1563]: 2025-07-15 05:13:29.266 [INFO][4176] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="53f0a7baa4ad465204311645e1f1fb521bbf959b5651b15350913b394f49569e" Namespace="calico-system" Pod="whisker-5b874f976b-2kbvl" WorkloadEndpoint="localhost-k8s-whisker--5b874f976b--2kbvl-eth0" Jul 15 05:13:29.481572 containerd[1563]: 2025-07-15 05:13:29.345 [INFO][4191] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="53f0a7baa4ad465204311645e1f1fb521bbf959b5651b15350913b394f49569e" HandleID="k8s-pod-network.53f0a7baa4ad465204311645e1f1fb521bbf959b5651b15350913b394f49569e" Workload="localhost-k8s-whisker--5b874f976b--2kbvl-eth0" Jul 15 05:13:29.481824 containerd[1563]: 2025-07-15 05:13:29.346 [INFO][4191] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="53f0a7baa4ad465204311645e1f1fb521bbf959b5651b15350913b394f49569e" HandleID="k8s-pod-network.53f0a7baa4ad465204311645e1f1fb521bbf959b5651b15350913b394f49569e" Workload="localhost-k8s-whisker--5b874f976b--2kbvl-eth0" 
assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000c0200), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-5b874f976b-2kbvl", "timestamp":"2025-07-15 05:13:29.345132154 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 05:13:29.481824 containerd[1563]: 2025-07-15 05:13:29.346 [INFO][4191] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 05:13:29.481824 containerd[1563]: 2025-07-15 05:13:29.346 [INFO][4191] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 15 05:13:29.481824 containerd[1563]: 2025-07-15 05:13:29.347 [INFO][4191] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 15 05:13:29.481824 containerd[1563]: 2025-07-15 05:13:29.355 [INFO][4191] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.53f0a7baa4ad465204311645e1f1fb521bbf959b5651b15350913b394f49569e" host="localhost" Jul 15 05:13:29.481824 containerd[1563]: 2025-07-15 05:13:29.366 [INFO][4191] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 15 05:13:29.481824 containerd[1563]: 2025-07-15 05:13:29.372 [INFO][4191] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 15 05:13:29.481824 containerd[1563]: 2025-07-15 05:13:29.375 [INFO][4191] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 15 05:13:29.481824 containerd[1563]: 2025-07-15 05:13:29.377 [INFO][4191] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 15 05:13:29.481824 containerd[1563]: 2025-07-15 05:13:29.377 [INFO][4191] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 
handle="k8s-pod-network.53f0a7baa4ad465204311645e1f1fb521bbf959b5651b15350913b394f49569e" host="localhost" Jul 15 05:13:29.482119 containerd[1563]: 2025-07-15 05:13:29.378 [INFO][4191] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.53f0a7baa4ad465204311645e1f1fb521bbf959b5651b15350913b394f49569e Jul 15 05:13:29.482119 containerd[1563]: 2025-07-15 05:13:29.402 [INFO][4191] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.53f0a7baa4ad465204311645e1f1fb521bbf959b5651b15350913b394f49569e" host="localhost" Jul 15 05:13:29.482119 containerd[1563]: 2025-07-15 05:13:29.409 [INFO][4191] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.53f0a7baa4ad465204311645e1f1fb521bbf959b5651b15350913b394f49569e" host="localhost" Jul 15 05:13:29.482119 containerd[1563]: 2025-07-15 05:13:29.409 [INFO][4191] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.53f0a7baa4ad465204311645e1f1fb521bbf959b5651b15350913b394f49569e" host="localhost" Jul 15 05:13:29.482119 containerd[1563]: 2025-07-15 05:13:29.409 [INFO][4191] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 15 05:13:29.482119 containerd[1563]: 2025-07-15 05:13:29.409 [INFO][4191] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="53f0a7baa4ad465204311645e1f1fb521bbf959b5651b15350913b394f49569e" HandleID="k8s-pod-network.53f0a7baa4ad465204311645e1f1fb521bbf959b5651b15350913b394f49569e" Workload="localhost-k8s-whisker--5b874f976b--2kbvl-eth0" Jul 15 05:13:29.482371 containerd[1563]: 2025-07-15 05:13:29.415 [INFO][4176] cni-plugin/k8s.go 418: Populated endpoint ContainerID="53f0a7baa4ad465204311645e1f1fb521bbf959b5651b15350913b394f49569e" Namespace="calico-system" Pod="whisker-5b874f976b-2kbvl" WorkloadEndpoint="localhost-k8s-whisker--5b874f976b--2kbvl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--5b874f976b--2kbvl-eth0", GenerateName:"whisker-5b874f976b-", Namespace:"calico-system", SelfLink:"", UID:"a9a007dc-45f5-4021-8119-2ad9adcfe26f", ResourceVersion:"966", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 5, 13, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5b874f976b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-5b874f976b-2kbvl", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali4f9cbfd7ae7", MAC:"", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 05:13:29.482371 containerd[1563]: 2025-07-15 05:13:29.415 [INFO][4176] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="53f0a7baa4ad465204311645e1f1fb521bbf959b5651b15350913b394f49569e" Namespace="calico-system" Pod="whisker-5b874f976b-2kbvl" WorkloadEndpoint="localhost-k8s-whisker--5b874f976b--2kbvl-eth0" Jul 15 05:13:29.482515 containerd[1563]: 2025-07-15 05:13:29.415 [INFO][4176] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4f9cbfd7ae7 ContainerID="53f0a7baa4ad465204311645e1f1fb521bbf959b5651b15350913b394f49569e" Namespace="calico-system" Pod="whisker-5b874f976b-2kbvl" WorkloadEndpoint="localhost-k8s-whisker--5b874f976b--2kbvl-eth0" Jul 15 05:13:29.482515 containerd[1563]: 2025-07-15 05:13:29.425 [INFO][4176] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="53f0a7baa4ad465204311645e1f1fb521bbf959b5651b15350913b394f49569e" Namespace="calico-system" Pod="whisker-5b874f976b-2kbvl" WorkloadEndpoint="localhost-k8s-whisker--5b874f976b--2kbvl-eth0" Jul 15 05:13:29.482577 containerd[1563]: 2025-07-15 05:13:29.426 [INFO][4176] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="53f0a7baa4ad465204311645e1f1fb521bbf959b5651b15350913b394f49569e" Namespace="calico-system" Pod="whisker-5b874f976b-2kbvl" WorkloadEndpoint="localhost-k8s-whisker--5b874f976b--2kbvl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--5b874f976b--2kbvl-eth0", GenerateName:"whisker-5b874f976b-", Namespace:"calico-system", SelfLink:"", UID:"a9a007dc-45f5-4021-8119-2ad9adcfe26f", ResourceVersion:"966", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 5, 13, 28, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5b874f976b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"53f0a7baa4ad465204311645e1f1fb521bbf959b5651b15350913b394f49569e", Pod:"whisker-5b874f976b-2kbvl", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali4f9cbfd7ae7", MAC:"2a:eb:dc:4b:67:6c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 05:13:29.482666 containerd[1563]: 2025-07-15 05:13:29.474 [INFO][4176] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="53f0a7baa4ad465204311645e1f1fb521bbf959b5651b15350913b394f49569e" Namespace="calico-system" Pod="whisker-5b874f976b-2kbvl" WorkloadEndpoint="localhost-k8s-whisker--5b874f976b--2kbvl-eth0" Jul 15 05:13:29.540441 containerd[1563]: time="2025-07-15T05:13:29.540356593Z" level=info msg="connecting to shim 53f0a7baa4ad465204311645e1f1fb521bbf959b5651b15350913b394f49569e" address="unix:///run/containerd/s/05509c0ccf3d06c99c8f24e38c4496fd6c0281b27d241e0b43c7960505f2004a" namespace=k8s.io protocol=ttrpc version=3 Jul 15 05:13:29.572125 systemd[1]: Started cri-containerd-53f0a7baa4ad465204311645e1f1fb521bbf959b5651b15350913b394f49569e.scope - libcontainer container 53f0a7baa4ad465204311645e1f1fb521bbf959b5651b15350913b394f49569e. 
Jul 15 05:13:29.590319 systemd-resolved[1404]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 15 05:13:29.693200 containerd[1563]: time="2025-07-15T05:13:29.693001904Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5b874f976b-2kbvl,Uid:a9a007dc-45f5-4021-8119-2ad9adcfe26f,Namespace:calico-system,Attempt:0,} returns sandbox id \"53f0a7baa4ad465204311645e1f1fb521bbf959b5651b15350913b394f49569e\"" Jul 15 05:13:29.699344 containerd[1563]: time="2025-07-15T05:13:29.699303916Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Jul 15 05:13:30.476659 systemd-networkd[1492]: vxlan.calico: Link UP Jul 15 05:13:30.476670 systemd-networkd[1492]: vxlan.calico: Gained carrier Jul 15 05:13:30.632934 systemd[1]: Started sshd@8-10.0.0.51:22-10.0.0.1:53910.service - OpenSSH per-connection server daemon (10.0.0.1:53910). Jul 15 05:13:30.727756 sshd[4424]: Accepted publickey for core from 10.0.0.1 port 53910 ssh2: RSA SHA256:xQteBGu1K6SjT/ucc5Duk9MfMFesvWUUvdc6KRmollo Jul 15 05:13:30.733260 sshd-session[4424]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:13:30.752268 systemd-logind[1508]: New session 9 of user core. Jul 15 05:13:30.760562 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 15 05:13:30.944265 sshd[4438]: Connection closed by 10.0.0.1 port 53910 Jul 15 05:13:30.944657 sshd-session[4424]: pam_unix(sshd:session): session closed for user core Jul 15 05:13:30.950754 systemd[1]: sshd@8-10.0.0.51:22-10.0.0.1:53910.service: Deactivated successfully. Jul 15 05:13:30.954612 systemd[1]: session-9.scope: Deactivated successfully. Jul 15 05:13:30.955697 systemd-logind[1508]: Session 9 logged out. Waiting for processes to exit. Jul 15 05:13:30.956989 systemd-logind[1508]: Removed session 9. 
Jul 15 05:13:31.181718 systemd-networkd[1492]: cali4f9cbfd7ae7: Gained IPv6LL Jul 15 05:13:32.205604 systemd-networkd[1492]: vxlan.calico: Gained IPv6LL Jul 15 05:13:33.390923 containerd[1563]: time="2025-07-15T05:13:33.390836604Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:13:33.391645 containerd[1563]: time="2025-07-15T05:13:33.391583843Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4661207" Jul 15 05:13:33.393069 containerd[1563]: time="2025-07-15T05:13:33.393014177Z" level=info msg="ImageCreate event name:\"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:13:33.396327 containerd[1563]: time="2025-07-15T05:13:33.396230089Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:13:33.396662 containerd[1563]: time="2025-07-15T05:13:33.396603979Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"6153902\" in 3.697254806s" Jul 15 05:13:33.396703 containerd[1563]: time="2025-07-15T05:13:33.396659737Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\"" Jul 15 05:13:33.400078 containerd[1563]: time="2025-07-15T05:13:33.399708107Z" level=info msg="CreateContainer within sandbox \"53f0a7baa4ad465204311645e1f1fb521bbf959b5651b15350913b394f49569e\" 
for container &ContainerMetadata{Name:whisker,Attempt:0,}" Jul 15 05:13:33.409121 containerd[1563]: time="2025-07-15T05:13:33.409072398Z" level=info msg="Container a7390f301ec651ff909848d2c80a0cc33b87f161bd4028238dc8eeacae33b185: CDI devices from CRI Config.CDIDevices: []" Jul 15 05:13:33.420039 containerd[1563]: time="2025-07-15T05:13:33.419959612Z" level=info msg="CreateContainer within sandbox \"53f0a7baa4ad465204311645e1f1fb521bbf959b5651b15350913b394f49569e\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"a7390f301ec651ff909848d2c80a0cc33b87f161bd4028238dc8eeacae33b185\"" Jul 15 05:13:33.420576 containerd[1563]: time="2025-07-15T05:13:33.420538427Z" level=info msg="StartContainer for \"a7390f301ec651ff909848d2c80a0cc33b87f161bd4028238dc8eeacae33b185\"" Jul 15 05:13:33.421952 containerd[1563]: time="2025-07-15T05:13:33.421921761Z" level=info msg="connecting to shim a7390f301ec651ff909848d2c80a0cc33b87f161bd4028238dc8eeacae33b185" address="unix:///run/containerd/s/05509c0ccf3d06c99c8f24e38c4496fd6c0281b27d241e0b43c7960505f2004a" protocol=ttrpc version=3 Jul 15 05:13:33.472699 systemd[1]: Started cri-containerd-a7390f301ec651ff909848d2c80a0cc33b87f161bd4028238dc8eeacae33b185.scope - libcontainer container a7390f301ec651ff909848d2c80a0cc33b87f161bd4028238dc8eeacae33b185. 
Jul 15 05:13:33.668316 containerd[1563]: time="2025-07-15T05:13:33.668137339Z" level=info msg="StartContainer for \"a7390f301ec651ff909848d2c80a0cc33b87f161bd4028238dc8eeacae33b185\" returns successfully" Jul 15 05:13:33.674266 containerd[1563]: time="2025-07-15T05:13:33.673684701Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Jul 15 05:13:35.051953 containerd[1563]: time="2025-07-15T05:13:35.051797530Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-bc7876d79-7ql6g,Uid:1fcf869b-4236-4d45-bdd7-01e1ea38a06c,Namespace:calico-apiserver,Attempt:0,}" Jul 15 05:13:35.631906 systemd-networkd[1492]: cali0497d2c64cd: Link UP Jul 15 05:13:35.632324 systemd-networkd[1492]: cali0497d2c64cd: Gained carrier Jul 15 05:13:35.967612 systemd[1]: Started sshd@9-10.0.0.51:22-10.0.0.1:53926.service - OpenSSH per-connection server daemon (10.0.0.1:53926). Jul 15 05:13:36.219762 sshd[4550]: Accepted publickey for core from 10.0.0.1 port 53926 ssh2: RSA SHA256:xQteBGu1K6SjT/ucc5Duk9MfMFesvWUUvdc6KRmollo Jul 15 05:13:36.221325 containerd[1563]: 2025-07-15 05:13:35.302 [INFO][4516] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--bc7876d79--7ql6g-eth0 calico-apiserver-bc7876d79- calico-apiserver 1fcf869b-4236-4d45-bdd7-01e1ea38a06c 823 0 2025-07-15 05:12:41 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:bc7876d79 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-bc7876d79-7ql6g eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali0497d2c64cd [] [] }} ContainerID="4e00bf09d0281b97bd00f78a3d0f8fbc24b74283d5071b334b47cd70c2d313d7" Namespace="calico-apiserver" Pod="calico-apiserver-bc7876d79-7ql6g" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--bc7876d79--7ql6g-" Jul 15 05:13:36.221325 containerd[1563]: 2025-07-15 05:13:35.302 [INFO][4516] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4e00bf09d0281b97bd00f78a3d0f8fbc24b74283d5071b334b47cd70c2d313d7" Namespace="calico-apiserver" Pod="calico-apiserver-bc7876d79-7ql6g" WorkloadEndpoint="localhost-k8s-calico--apiserver--bc7876d79--7ql6g-eth0" Jul 15 05:13:36.221325 containerd[1563]: 2025-07-15 05:13:35.387 [INFO][4530] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4e00bf09d0281b97bd00f78a3d0f8fbc24b74283d5071b334b47cd70c2d313d7" HandleID="k8s-pod-network.4e00bf09d0281b97bd00f78a3d0f8fbc24b74283d5071b334b47cd70c2d313d7" Workload="localhost-k8s-calico--apiserver--bc7876d79--7ql6g-eth0" Jul 15 05:13:36.223304 containerd[1563]: 2025-07-15 05:13:35.387 [INFO][4530] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4e00bf09d0281b97bd00f78a3d0f8fbc24b74283d5071b334b47cd70c2d313d7" HandleID="k8s-pod-network.4e00bf09d0281b97bd00f78a3d0f8fbc24b74283d5071b334b47cd70c2d313d7" Workload="localhost-k8s-calico--apiserver--bc7876d79--7ql6g-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000427f70), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-bc7876d79-7ql6g", "timestamp":"2025-07-15 05:13:35.3871703 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 05:13:36.223304 containerd[1563]: 2025-07-15 05:13:35.387 [INFO][4530] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 05:13:36.223304 containerd[1563]: 2025-07-15 05:13:35.387 [INFO][4530] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 15 05:13:36.223304 containerd[1563]: 2025-07-15 05:13:35.387 [INFO][4530] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 15 05:13:36.223304 containerd[1563]: 2025-07-15 05:13:35.394 [INFO][4530] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4e00bf09d0281b97bd00f78a3d0f8fbc24b74283d5071b334b47cd70c2d313d7" host="localhost" Jul 15 05:13:36.223304 containerd[1563]: 2025-07-15 05:13:35.400 [INFO][4530] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 15 05:13:36.223304 containerd[1563]: 2025-07-15 05:13:35.405 [INFO][4530] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 15 05:13:36.223304 containerd[1563]: 2025-07-15 05:13:35.407 [INFO][4530] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 15 05:13:36.223304 containerd[1563]: 2025-07-15 05:13:35.410 [INFO][4530] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 15 05:13:36.223304 containerd[1563]: 2025-07-15 05:13:35.410 [INFO][4530] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4e00bf09d0281b97bd00f78a3d0f8fbc24b74283d5071b334b47cd70c2d313d7" host="localhost" Jul 15 05:13:36.222943 sshd-session[4550]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:13:36.224640 containerd[1563]: 2025-07-15 05:13:35.412 [INFO][4530] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.4e00bf09d0281b97bd00f78a3d0f8fbc24b74283d5071b334b47cd70c2d313d7 Jul 15 05:13:36.224640 containerd[1563]: 2025-07-15 05:13:35.421 [INFO][4530] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4e00bf09d0281b97bd00f78a3d0f8fbc24b74283d5071b334b47cd70c2d313d7" host="localhost" Jul 15 05:13:36.224640 containerd[1563]: 2025-07-15 05:13:35.626 [INFO][4530] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.4e00bf09d0281b97bd00f78a3d0f8fbc24b74283d5071b334b47cd70c2d313d7" host="localhost" Jul 15 05:13:36.224640 containerd[1563]: 2025-07-15 05:13:35.626 [INFO][4530] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.4e00bf09d0281b97bd00f78a3d0f8fbc24b74283d5071b334b47cd70c2d313d7" host="localhost" Jul 15 05:13:36.224640 containerd[1563]: 2025-07-15 05:13:35.626 [INFO][4530] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 15 05:13:36.224640 containerd[1563]: 2025-07-15 05:13:35.626 [INFO][4530] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="4e00bf09d0281b97bd00f78a3d0f8fbc24b74283d5071b334b47cd70c2d313d7" HandleID="k8s-pod-network.4e00bf09d0281b97bd00f78a3d0f8fbc24b74283d5071b334b47cd70c2d313d7" Workload="localhost-k8s-calico--apiserver--bc7876d79--7ql6g-eth0" Jul 15 05:13:36.224866 containerd[1563]: 2025-07-15 05:13:35.629 [INFO][4516] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4e00bf09d0281b97bd00f78a3d0f8fbc24b74283d5071b334b47cd70c2d313d7" Namespace="calico-apiserver" Pod="calico-apiserver-bc7876d79-7ql6g" WorkloadEndpoint="localhost-k8s-calico--apiserver--bc7876d79--7ql6g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--bc7876d79--7ql6g-eth0", GenerateName:"calico-apiserver-bc7876d79-", Namespace:"calico-apiserver", SelfLink:"", UID:"1fcf869b-4236-4d45-bdd7-01e1ea38a06c", ResourceVersion:"823", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 5, 12, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"bc7876d79", 
"projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-bc7876d79-7ql6g", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0497d2c64cd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 05:13:36.224932 containerd[1563]: 2025-07-15 05:13:35.629 [INFO][4516] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="4e00bf09d0281b97bd00f78a3d0f8fbc24b74283d5071b334b47cd70c2d313d7" Namespace="calico-apiserver" Pod="calico-apiserver-bc7876d79-7ql6g" WorkloadEndpoint="localhost-k8s-calico--apiserver--bc7876d79--7ql6g-eth0" Jul 15 05:13:36.224932 containerd[1563]: 2025-07-15 05:13:35.629 [INFO][4516] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0497d2c64cd ContainerID="4e00bf09d0281b97bd00f78a3d0f8fbc24b74283d5071b334b47cd70c2d313d7" Namespace="calico-apiserver" Pod="calico-apiserver-bc7876d79-7ql6g" WorkloadEndpoint="localhost-k8s-calico--apiserver--bc7876d79--7ql6g-eth0" Jul 15 05:13:36.224932 containerd[1563]: 2025-07-15 05:13:35.632 [INFO][4516] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4e00bf09d0281b97bd00f78a3d0f8fbc24b74283d5071b334b47cd70c2d313d7" Namespace="calico-apiserver" Pod="calico-apiserver-bc7876d79-7ql6g" WorkloadEndpoint="localhost-k8s-calico--apiserver--bc7876d79--7ql6g-eth0" Jul 15 05:13:36.225000 containerd[1563]: 2025-07-15 05:13:35.633 
[INFO][4516] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4e00bf09d0281b97bd00f78a3d0f8fbc24b74283d5071b334b47cd70c2d313d7" Namespace="calico-apiserver" Pod="calico-apiserver-bc7876d79-7ql6g" WorkloadEndpoint="localhost-k8s-calico--apiserver--bc7876d79--7ql6g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--bc7876d79--7ql6g-eth0", GenerateName:"calico-apiserver-bc7876d79-", Namespace:"calico-apiserver", SelfLink:"", UID:"1fcf869b-4236-4d45-bdd7-01e1ea38a06c", ResourceVersion:"823", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 5, 12, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"bc7876d79", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4e00bf09d0281b97bd00f78a3d0f8fbc24b74283d5071b334b47cd70c2d313d7", Pod:"calico-apiserver-bc7876d79-7ql6g", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0497d2c64cd", MAC:"52:5c:64:1a:4e:89", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 05:13:36.225051 containerd[1563]: 2025-07-15 05:13:36.215 [INFO][4516] cni-plugin/k8s.go 532: Wrote 
updated endpoint to datastore ContainerID="4e00bf09d0281b97bd00f78a3d0f8fbc24b74283d5071b334b47cd70c2d313d7" Namespace="calico-apiserver" Pod="calico-apiserver-bc7876d79-7ql6g" WorkloadEndpoint="localhost-k8s-calico--apiserver--bc7876d79--7ql6g-eth0" Jul 15 05:13:36.230315 systemd-logind[1508]: New session 10 of user core. Jul 15 05:13:36.237556 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 15 05:13:36.394535 sshd[4561]: Connection closed by 10.0.0.1 port 53926 Jul 15 05:13:36.394965 sshd-session[4550]: pam_unix(sshd:session): session closed for user core Jul 15 05:13:36.400099 systemd[1]: sshd@9-10.0.0.51:22-10.0.0.1:53926.service: Deactivated successfully. Jul 15 05:13:36.402751 systemd[1]: session-10.scope: Deactivated successfully. Jul 15 05:13:36.403746 systemd-logind[1508]: Session 10 logged out. Waiting for processes to exit. Jul 15 05:13:36.405113 systemd-logind[1508]: Removed session 10. Jul 15 05:13:37.051644 containerd[1563]: time="2025-07-15T05:13:37.051492481Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2v6vx,Uid:b936ddb1-facc-4e6c-bca6-a08227d61e68,Namespace:calico-system,Attempt:0,}" Jul 15 05:13:37.352461 containerd[1563]: time="2025-07-15T05:13:37.352213159Z" level=info msg="connecting to shim 4e00bf09d0281b97bd00f78a3d0f8fbc24b74283d5071b334b47cd70c2d313d7" address="unix:///run/containerd/s/a332aaa5d0342ca10ff5540f12eaa3bdd0423a92dbbdd665ae02da0aa6bd93ed" namespace=k8s.io protocol=ttrpc version=3 Jul 15 05:13:37.393868 systemd[1]: Started cri-containerd-4e00bf09d0281b97bd00f78a3d0f8fbc24b74283d5071b334b47cd70c2d313d7.scope - libcontainer container 4e00bf09d0281b97bd00f78a3d0f8fbc24b74283d5071b334b47cd70c2d313d7. 
Jul 15 05:13:37.415194 systemd-resolved[1404]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 15 05:13:37.458988 containerd[1563]: time="2025-07-15T05:13:37.458936626Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-bc7876d79-7ql6g,Uid:1fcf869b-4236-4d45-bdd7-01e1ea38a06c,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"4e00bf09d0281b97bd00f78a3d0f8fbc24b74283d5071b334b47cd70c2d313d7\"" Jul 15 05:13:37.512792 systemd-networkd[1492]: calid7af1226b5b: Link UP Jul 15 05:13:37.514658 systemd-networkd[1492]: calid7af1226b5b: Gained carrier Jul 15 05:13:37.517540 systemd-networkd[1492]: cali0497d2c64cd: Gained IPv6LL Jul 15 05:13:37.547440 containerd[1563]: 2025-07-15 05:13:37.397 [INFO][4575] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--2v6vx-eth0 csi-node-driver- calico-system b936ddb1-facc-4e6c-bca6-a08227d61e68 707 0 2025-07-15 05:12:50 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:8967bcb6f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-2v6vx eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calid7af1226b5b [] [] }} ContainerID="98b84bb5ea3721d1d8563ad9bfdb62874d23a2be903e7772e44b8d0642434a06" Namespace="calico-system" Pod="csi-node-driver-2v6vx" WorkloadEndpoint="localhost-k8s-csi--node--driver--2v6vx-" Jul 15 05:13:37.547440 containerd[1563]: 2025-07-15 05:13:37.397 [INFO][4575] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="98b84bb5ea3721d1d8563ad9bfdb62874d23a2be903e7772e44b8d0642434a06" Namespace="calico-system" Pod="csi-node-driver-2v6vx" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--2v6vx-eth0" Jul 15 05:13:37.547440 containerd[1563]: 2025-07-15 05:13:37.460 [INFO][4632] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="98b84bb5ea3721d1d8563ad9bfdb62874d23a2be903e7772e44b8d0642434a06" HandleID="k8s-pod-network.98b84bb5ea3721d1d8563ad9bfdb62874d23a2be903e7772e44b8d0642434a06" Workload="localhost-k8s-csi--node--driver--2v6vx-eth0" Jul 15 05:13:37.547795 containerd[1563]: 2025-07-15 05:13:37.460 [INFO][4632] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="98b84bb5ea3721d1d8563ad9bfdb62874d23a2be903e7772e44b8d0642434a06" HandleID="k8s-pod-network.98b84bb5ea3721d1d8563ad9bfdb62874d23a2be903e7772e44b8d0642434a06" Workload="localhost-k8s-csi--node--driver--2v6vx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00034ec80), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-2v6vx", "timestamp":"2025-07-15 05:13:37.460237224 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 05:13:37.547795 containerd[1563]: 2025-07-15 05:13:37.460 [INFO][4632] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 05:13:37.547795 containerd[1563]: 2025-07-15 05:13:37.460 [INFO][4632] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 15 05:13:37.547795 containerd[1563]: 2025-07-15 05:13:37.460 [INFO][4632] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 15 05:13:37.547795 containerd[1563]: 2025-07-15 05:13:37.471 [INFO][4632] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.98b84bb5ea3721d1d8563ad9bfdb62874d23a2be903e7772e44b8d0642434a06" host="localhost" Jul 15 05:13:37.547795 containerd[1563]: 2025-07-15 05:13:37.478 [INFO][4632] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 15 05:13:37.547795 containerd[1563]: 2025-07-15 05:13:37.484 [INFO][4632] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 15 05:13:37.547795 containerd[1563]: 2025-07-15 05:13:37.486 [INFO][4632] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 15 05:13:37.547795 containerd[1563]: 2025-07-15 05:13:37.489 [INFO][4632] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 15 05:13:37.547795 containerd[1563]: 2025-07-15 05:13:37.489 [INFO][4632] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.98b84bb5ea3721d1d8563ad9bfdb62874d23a2be903e7772e44b8d0642434a06" host="localhost" Jul 15 05:13:37.548108 containerd[1563]: 2025-07-15 05:13:37.491 [INFO][4632] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.98b84bb5ea3721d1d8563ad9bfdb62874d23a2be903e7772e44b8d0642434a06 Jul 15 05:13:37.548108 containerd[1563]: 2025-07-15 05:13:37.496 [INFO][4632] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.98b84bb5ea3721d1d8563ad9bfdb62874d23a2be903e7772e44b8d0642434a06" host="localhost" Jul 15 05:13:37.548108 containerd[1563]: 2025-07-15 05:13:37.503 [INFO][4632] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.98b84bb5ea3721d1d8563ad9bfdb62874d23a2be903e7772e44b8d0642434a06" host="localhost" Jul 15 05:13:37.548108 containerd[1563]: 2025-07-15 05:13:37.503 [INFO][4632] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.98b84bb5ea3721d1d8563ad9bfdb62874d23a2be903e7772e44b8d0642434a06" host="localhost" Jul 15 05:13:37.548108 containerd[1563]: 2025-07-15 05:13:37.503 [INFO][4632] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 15 05:13:37.548108 containerd[1563]: 2025-07-15 05:13:37.503 [INFO][4632] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="98b84bb5ea3721d1d8563ad9bfdb62874d23a2be903e7772e44b8d0642434a06" HandleID="k8s-pod-network.98b84bb5ea3721d1d8563ad9bfdb62874d23a2be903e7772e44b8d0642434a06" Workload="localhost-k8s-csi--node--driver--2v6vx-eth0" Jul 15 05:13:37.548276 containerd[1563]: 2025-07-15 05:13:37.508 [INFO][4575] cni-plugin/k8s.go 418: Populated endpoint ContainerID="98b84bb5ea3721d1d8563ad9bfdb62874d23a2be903e7772e44b8d0642434a06" Namespace="calico-system" Pod="csi-node-driver-2v6vx" WorkloadEndpoint="localhost-k8s-csi--node--driver--2v6vx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--2v6vx-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b936ddb1-facc-4e6c-bca6-a08227d61e68", ResourceVersion:"707", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 5, 12, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-2v6vx", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid7af1226b5b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 05:13:37.548353 containerd[1563]: 2025-07-15 05:13:37.508 [INFO][4575] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="98b84bb5ea3721d1d8563ad9bfdb62874d23a2be903e7772e44b8d0642434a06" Namespace="calico-system" Pod="csi-node-driver-2v6vx" WorkloadEndpoint="localhost-k8s-csi--node--driver--2v6vx-eth0" Jul 15 05:13:37.548353 containerd[1563]: 2025-07-15 05:13:37.508 [INFO][4575] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid7af1226b5b ContainerID="98b84bb5ea3721d1d8563ad9bfdb62874d23a2be903e7772e44b8d0642434a06" Namespace="calico-system" Pod="csi-node-driver-2v6vx" WorkloadEndpoint="localhost-k8s-csi--node--driver--2v6vx-eth0" Jul 15 05:13:37.548353 containerd[1563]: 2025-07-15 05:13:37.518 [INFO][4575] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="98b84bb5ea3721d1d8563ad9bfdb62874d23a2be903e7772e44b8d0642434a06" Namespace="calico-system" Pod="csi-node-driver-2v6vx" WorkloadEndpoint="localhost-k8s-csi--node--driver--2v6vx-eth0" Jul 15 05:13:37.548533 containerd[1563]: 2025-07-15 05:13:37.528 [INFO][4575] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="98b84bb5ea3721d1d8563ad9bfdb62874d23a2be903e7772e44b8d0642434a06" 
Namespace="calico-system" Pod="csi-node-driver-2v6vx" WorkloadEndpoint="localhost-k8s-csi--node--driver--2v6vx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--2v6vx-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b936ddb1-facc-4e6c-bca6-a08227d61e68", ResourceVersion:"707", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 5, 12, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"98b84bb5ea3721d1d8563ad9bfdb62874d23a2be903e7772e44b8d0642434a06", Pod:"csi-node-driver-2v6vx", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid7af1226b5b", MAC:"5a:e5:08:d9:7b:c7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 05:13:37.548618 containerd[1563]: 2025-07-15 05:13:37.542 [INFO][4575] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="98b84bb5ea3721d1d8563ad9bfdb62874d23a2be903e7772e44b8d0642434a06" Namespace="calico-system" Pod="csi-node-driver-2v6vx" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--2v6vx-eth0" Jul 15 05:13:37.597170 containerd[1563]: time="2025-07-15T05:13:37.597112555Z" level=info msg="connecting to shim 98b84bb5ea3721d1d8563ad9bfdb62874d23a2be903e7772e44b8d0642434a06" address="unix:///run/containerd/s/d1b959e59a46f211cbb58d9edc0622b6cd47d36557fb66444992350dec3ffbba" namespace=k8s.io protocol=ttrpc version=3 Jul 15 05:13:37.636996 systemd[1]: Started cri-containerd-98b84bb5ea3721d1d8563ad9bfdb62874d23a2be903e7772e44b8d0642434a06.scope - libcontainer container 98b84bb5ea3721d1d8563ad9bfdb62874d23a2be903e7772e44b8d0642434a06. Jul 15 05:13:37.655928 systemd-resolved[1404]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 15 05:13:37.702327 containerd[1563]: time="2025-07-15T05:13:37.702169802Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2v6vx,Uid:b936ddb1-facc-4e6c-bca6-a08227d61e68,Namespace:calico-system,Attempt:0,} returns sandbox id \"98b84bb5ea3721d1d8563ad9bfdb62874d23a2be903e7772e44b8d0642434a06\"" Jul 15 05:13:37.892854 containerd[1563]: time="2025-07-15T05:13:37.892663095Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:13:37.893856 containerd[1563]: time="2025-07-15T05:13:37.893795359Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=33083477" Jul 15 05:13:37.895462 containerd[1563]: time="2025-07-15T05:13:37.895379221Z" level=info msg="ImageCreate event name:\"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:13:37.898348 containerd[1563]: time="2025-07-15T05:13:37.898271165Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:13:37.899113 containerd[1563]: time="2025-07-15T05:13:37.899060230Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"33083307\" in 4.225259977s" Jul 15 05:13:37.899113 containerd[1563]: time="2025-07-15T05:13:37.899099496Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\"" Jul 15 05:13:37.902924 containerd[1563]: time="2025-07-15T05:13:37.902880376Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 15 05:13:37.908084 containerd[1563]: time="2025-07-15T05:13:37.908029044Z" level=info msg="CreateContainer within sandbox \"53f0a7baa4ad465204311645e1f1fb521bbf959b5651b15350913b394f49569e\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Jul 15 05:13:37.918114 containerd[1563]: time="2025-07-15T05:13:37.918030852Z" level=info msg="Container f1e4546d5a2e93f5ea8f3e1ef8ab10add43442d9d1487f98e2db632b8eb4cbae: CDI devices from CRI Config.CDIDevices: []" Jul 15 05:13:37.929370 containerd[1563]: time="2025-07-15T05:13:37.929302159Z" level=info msg="CreateContainer within sandbox \"53f0a7baa4ad465204311645e1f1fb521bbf959b5651b15350913b394f49569e\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"f1e4546d5a2e93f5ea8f3e1ef8ab10add43442d9d1487f98e2db632b8eb4cbae\"" Jul 15 05:13:37.929989 containerd[1563]: time="2025-07-15T05:13:37.929957838Z" level=info msg="StartContainer for \"f1e4546d5a2e93f5ea8f3e1ef8ab10add43442d9d1487f98e2db632b8eb4cbae\"" Jul 15 05:13:37.931230 containerd[1563]: 
time="2025-07-15T05:13:37.931200875Z" level=info msg="connecting to shim f1e4546d5a2e93f5ea8f3e1ef8ab10add43442d9d1487f98e2db632b8eb4cbae" address="unix:///run/containerd/s/05509c0ccf3d06c99c8f24e38c4496fd6c0281b27d241e0b43c7960505f2004a" protocol=ttrpc version=3 Jul 15 05:13:37.959746 systemd[1]: Started cri-containerd-f1e4546d5a2e93f5ea8f3e1ef8ab10add43442d9d1487f98e2db632b8eb4cbae.scope - libcontainer container f1e4546d5a2e93f5ea8f3e1ef8ab10add43442d9d1487f98e2db632b8eb4cbae. Jul 15 05:13:38.030274 containerd[1563]: time="2025-07-15T05:13:38.030134667Z" level=info msg="StartContainer for \"f1e4546d5a2e93f5ea8f3e1ef8ab10add43442d9d1487f98e2db632b8eb4cbae\" returns successfully" Jul 15 05:13:38.053296 containerd[1563]: time="2025-07-15T05:13:38.052904128Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-8smjf,Uid:fec24b87-b14c-4e24-8756-b676ee924b20,Namespace:kube-system,Attempt:0,}" Jul 15 05:13:38.053913 containerd[1563]: time="2025-07-15T05:13:38.053867036Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-9fbb49d58-4nnjz,Uid:a7a25b3b-0fb7-4bf7-8050-6cc5c3635878,Namespace:calico-system,Attempt:0,}" Jul 15 05:13:38.201607 systemd-networkd[1492]: caliece59fca6aa: Link UP Jul 15 05:13:38.207090 systemd-networkd[1492]: caliece59fca6aa: Gained carrier Jul 15 05:13:38.242287 containerd[1563]: 2025-07-15 05:13:38.105 [INFO][4746] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--9fbb49d58--4nnjz-eth0 calico-kube-controllers-9fbb49d58- calico-system a7a25b3b-0fb7-4bf7-8050-6cc5c3635878 828 0 2025-07-15 05:12:50 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:9fbb49d58 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost 
calico-kube-controllers-9fbb49d58-4nnjz eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] caliece59fca6aa [] [] }} ContainerID="e7a138a93fc8cff19d4f3f25961dc88949bfd809b3ffbff91597265cb12789a3" Namespace="calico-system" Pod="calico-kube-controllers-9fbb49d58-4nnjz" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--9fbb49d58--4nnjz-" Jul 15 05:13:38.242287 containerd[1563]: 2025-07-15 05:13:38.105 [INFO][4746] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e7a138a93fc8cff19d4f3f25961dc88949bfd809b3ffbff91597265cb12789a3" Namespace="calico-system" Pod="calico-kube-controllers-9fbb49d58-4nnjz" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--9fbb49d58--4nnjz-eth0" Jul 15 05:13:38.242287 containerd[1563]: 2025-07-15 05:13:38.140 [INFO][4763] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e7a138a93fc8cff19d4f3f25961dc88949bfd809b3ffbff91597265cb12789a3" HandleID="k8s-pod-network.e7a138a93fc8cff19d4f3f25961dc88949bfd809b3ffbff91597265cb12789a3" Workload="localhost-k8s-calico--kube--controllers--9fbb49d58--4nnjz-eth0" Jul 15 05:13:38.242539 containerd[1563]: 2025-07-15 05:13:38.140 [INFO][4763] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e7a138a93fc8cff19d4f3f25961dc88949bfd809b3ffbff91597265cb12789a3" HandleID="k8s-pod-network.e7a138a93fc8cff19d4f3f25961dc88949bfd809b3ffbff91597265cb12789a3" Workload="localhost-k8s-calico--kube--controllers--9fbb49d58--4nnjz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001357a0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-9fbb49d58-4nnjz", "timestamp":"2025-07-15 05:13:38.140348777 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} 
Jul 15 05:13:38.242539 containerd[1563]: 2025-07-15 05:13:38.140 [INFO][4763] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 05:13:38.242539 containerd[1563]: 2025-07-15 05:13:38.140 [INFO][4763] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 15 05:13:38.242539 containerd[1563]: 2025-07-15 05:13:38.140 [INFO][4763] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 15 05:13:38.242539 containerd[1563]: 2025-07-15 05:13:38.149 [INFO][4763] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e7a138a93fc8cff19d4f3f25961dc88949bfd809b3ffbff91597265cb12789a3" host="localhost" Jul 15 05:13:38.242539 containerd[1563]: 2025-07-15 05:13:38.155 [INFO][4763] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 15 05:13:38.242539 containerd[1563]: 2025-07-15 05:13:38.160 [INFO][4763] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 15 05:13:38.242539 containerd[1563]: 2025-07-15 05:13:38.162 [INFO][4763] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 15 05:13:38.242539 containerd[1563]: 2025-07-15 05:13:38.164 [INFO][4763] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 15 05:13:38.242539 containerd[1563]: 2025-07-15 05:13:38.164 [INFO][4763] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e7a138a93fc8cff19d4f3f25961dc88949bfd809b3ffbff91597265cb12789a3" host="localhost" Jul 15 05:13:38.242760 containerd[1563]: 2025-07-15 05:13:38.166 [INFO][4763] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.e7a138a93fc8cff19d4f3f25961dc88949bfd809b3ffbff91597265cb12789a3 Jul 15 05:13:38.242760 containerd[1563]: 2025-07-15 05:13:38.172 [INFO][4763] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 
handle="k8s-pod-network.e7a138a93fc8cff19d4f3f25961dc88949bfd809b3ffbff91597265cb12789a3" host="localhost" Jul 15 05:13:38.242760 containerd[1563]: 2025-07-15 05:13:38.185 [INFO][4763] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.e7a138a93fc8cff19d4f3f25961dc88949bfd809b3ffbff91597265cb12789a3" host="localhost" Jul 15 05:13:38.242760 containerd[1563]: 2025-07-15 05:13:38.185 [INFO][4763] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.e7a138a93fc8cff19d4f3f25961dc88949bfd809b3ffbff91597265cb12789a3" host="localhost" Jul 15 05:13:38.242760 containerd[1563]: 2025-07-15 05:13:38.185 [INFO][4763] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 15 05:13:38.242760 containerd[1563]: 2025-07-15 05:13:38.185 [INFO][4763] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="e7a138a93fc8cff19d4f3f25961dc88949bfd809b3ffbff91597265cb12789a3" HandleID="k8s-pod-network.e7a138a93fc8cff19d4f3f25961dc88949bfd809b3ffbff91597265cb12789a3" Workload="localhost-k8s-calico--kube--controllers--9fbb49d58--4nnjz-eth0" Jul 15 05:13:38.242897 containerd[1563]: 2025-07-15 05:13:38.191 [INFO][4746] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e7a138a93fc8cff19d4f3f25961dc88949bfd809b3ffbff91597265cb12789a3" Namespace="calico-system" Pod="calico-kube-controllers-9fbb49d58-4nnjz" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--9fbb49d58--4nnjz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--9fbb49d58--4nnjz-eth0", GenerateName:"calico-kube-controllers-9fbb49d58-", Namespace:"calico-system", SelfLink:"", UID:"a7a25b3b-0fb7-4bf7-8050-6cc5c3635878", ResourceVersion:"828", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 5, 12, 50, 0, time.Local), 
DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"9fbb49d58", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-9fbb49d58-4nnjz", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliece59fca6aa", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 05:13:38.242958 containerd[1563]: 2025-07-15 05:13:38.191 [INFO][4746] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="e7a138a93fc8cff19d4f3f25961dc88949bfd809b3ffbff91597265cb12789a3" Namespace="calico-system" Pod="calico-kube-controllers-9fbb49d58-4nnjz" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--9fbb49d58--4nnjz-eth0" Jul 15 05:13:38.242958 containerd[1563]: 2025-07-15 05:13:38.191 [INFO][4746] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliece59fca6aa ContainerID="e7a138a93fc8cff19d4f3f25961dc88949bfd809b3ffbff91597265cb12789a3" Namespace="calico-system" Pod="calico-kube-controllers-9fbb49d58-4nnjz" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--9fbb49d58--4nnjz-eth0" Jul 15 05:13:38.242958 containerd[1563]: 2025-07-15 05:13:38.209 [INFO][4746] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="e7a138a93fc8cff19d4f3f25961dc88949bfd809b3ffbff91597265cb12789a3" Namespace="calico-system" Pod="calico-kube-controllers-9fbb49d58-4nnjz" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--9fbb49d58--4nnjz-eth0" Jul 15 05:13:38.243128 containerd[1563]: 2025-07-15 05:13:38.218 [INFO][4746] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e7a138a93fc8cff19d4f3f25961dc88949bfd809b3ffbff91597265cb12789a3" Namespace="calico-system" Pod="calico-kube-controllers-9fbb49d58-4nnjz" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--9fbb49d58--4nnjz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--9fbb49d58--4nnjz-eth0", GenerateName:"calico-kube-controllers-9fbb49d58-", Namespace:"calico-system", SelfLink:"", UID:"a7a25b3b-0fb7-4bf7-8050-6cc5c3635878", ResourceVersion:"828", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 5, 12, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"9fbb49d58", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e7a138a93fc8cff19d4f3f25961dc88949bfd809b3ffbff91597265cb12789a3", Pod:"calico-kube-controllers-9fbb49d58-4nnjz", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliece59fca6aa", MAC:"ba:34:3e:02:ce:b7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 05:13:38.243192 containerd[1563]: 2025-07-15 05:13:38.237 [INFO][4746] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e7a138a93fc8cff19d4f3f25961dc88949bfd809b3ffbff91597265cb12789a3" Namespace="calico-system" Pod="calico-kube-controllers-9fbb49d58-4nnjz" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--9fbb49d58--4nnjz-eth0" Jul 15 05:13:38.272681 containerd[1563]: time="2025-07-15T05:13:38.272628496Z" level=info msg="connecting to shim e7a138a93fc8cff19d4f3f25961dc88949bfd809b3ffbff91597265cb12789a3" address="unix:///run/containerd/s/6aa7b75dc58a791ab18735d4224b5cb4e64389e2a9af980fdea546783106c989" namespace=k8s.io protocol=ttrpc version=3 Jul 15 05:13:38.305667 systemd[1]: Started cri-containerd-e7a138a93fc8cff19d4f3f25961dc88949bfd809b3ffbff91597265cb12789a3.scope - libcontainer container e7a138a93fc8cff19d4f3f25961dc88949bfd809b3ffbff91597265cb12789a3. Jul 15 05:13:38.321213 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2235167329.mount: Deactivated successfully. 
Jul 15 05:13:38.327631 systemd-resolved[1404]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 15 05:13:38.528776 systemd-networkd[1492]: cali4f7eb898a9a: Link UP Jul 15 05:13:38.529229 systemd-networkd[1492]: cali4f7eb898a9a: Gained carrier Jul 15 05:13:38.532485 containerd[1563]: time="2025-07-15T05:13:38.532439401Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-9fbb49d58-4nnjz,Uid:a7a25b3b-0fb7-4bf7-8050-6cc5c3635878,Namespace:calico-system,Attempt:0,} returns sandbox id \"e7a138a93fc8cff19d4f3f25961dc88949bfd809b3ffbff91597265cb12789a3\"" Jul 15 05:13:38.548329 containerd[1563]: 2025-07-15 05:13:38.106 [INFO][4733] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--8smjf-eth0 coredns-668d6bf9bc- kube-system fec24b87-b14c-4e24-8756-b676ee924b20 829 0 2025-07-15 05:12:30 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-8smjf eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali4f7eb898a9a [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="d29ff4fbe4473ce76222dfe3a6749993c2ea18b965532ac63b06a2710fd19a02" Namespace="kube-system" Pod="coredns-668d6bf9bc-8smjf" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--8smjf-" Jul 15 05:13:38.548329 containerd[1563]: 2025-07-15 05:13:38.107 [INFO][4733] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d29ff4fbe4473ce76222dfe3a6749993c2ea18b965532ac63b06a2710fd19a02" Namespace="kube-system" Pod="coredns-668d6bf9bc-8smjf" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--8smjf-eth0" Jul 15 05:13:38.548329 containerd[1563]: 2025-07-15 05:13:38.147 [INFO][4769] ipam/ipam_plugin.go 225: Calico CNI IPAM request 
count IPv4=1 IPv6=0 ContainerID="d29ff4fbe4473ce76222dfe3a6749993c2ea18b965532ac63b06a2710fd19a02" HandleID="k8s-pod-network.d29ff4fbe4473ce76222dfe3a6749993c2ea18b965532ac63b06a2710fd19a02" Workload="localhost-k8s-coredns--668d6bf9bc--8smjf-eth0" Jul 15 05:13:38.548573 containerd[1563]: 2025-07-15 05:13:38.147 [INFO][4769] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d29ff4fbe4473ce76222dfe3a6749993c2ea18b965532ac63b06a2710fd19a02" HandleID="k8s-pod-network.d29ff4fbe4473ce76222dfe3a6749993c2ea18b965532ac63b06a2710fd19a02" Workload="localhost-k8s-coredns--668d6bf9bc--8smjf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d6b60), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-8smjf", "timestamp":"2025-07-15 05:13:38.147069836 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 05:13:38.548573 containerd[1563]: 2025-07-15 05:13:38.147 [INFO][4769] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 05:13:38.548573 containerd[1563]: 2025-07-15 05:13:38.185 [INFO][4769] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 15 05:13:38.548573 containerd[1563]: 2025-07-15 05:13:38.185 [INFO][4769] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 15 05:13:38.548573 containerd[1563]: 2025-07-15 05:13:38.251 [INFO][4769] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d29ff4fbe4473ce76222dfe3a6749993c2ea18b965532ac63b06a2710fd19a02" host="localhost" Jul 15 05:13:38.548573 containerd[1563]: 2025-07-15 05:13:38.255 [INFO][4769] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 15 05:13:38.548573 containerd[1563]: 2025-07-15 05:13:38.261 [INFO][4769] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 15 05:13:38.548573 containerd[1563]: 2025-07-15 05:13:38.263 [INFO][4769] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 15 05:13:38.548573 containerd[1563]: 2025-07-15 05:13:38.265 [INFO][4769] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 15 05:13:38.548573 containerd[1563]: 2025-07-15 05:13:38.265 [INFO][4769] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d29ff4fbe4473ce76222dfe3a6749993c2ea18b965532ac63b06a2710fd19a02" host="localhost" Jul 15 05:13:38.548870 containerd[1563]: 2025-07-15 05:13:38.266 [INFO][4769] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.d29ff4fbe4473ce76222dfe3a6749993c2ea18b965532ac63b06a2710fd19a02 Jul 15 05:13:38.548870 containerd[1563]: 2025-07-15 05:13:38.464 [INFO][4769] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d29ff4fbe4473ce76222dfe3a6749993c2ea18b965532ac63b06a2710fd19a02" host="localhost" Jul 15 05:13:38.548870 containerd[1563]: 2025-07-15 05:13:38.520 [INFO][4769] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.d29ff4fbe4473ce76222dfe3a6749993c2ea18b965532ac63b06a2710fd19a02" host="localhost" Jul 15 05:13:38.548870 containerd[1563]: 2025-07-15 05:13:38.520 [INFO][4769] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.d29ff4fbe4473ce76222dfe3a6749993c2ea18b965532ac63b06a2710fd19a02" host="localhost" Jul 15 05:13:38.548870 containerd[1563]: 2025-07-15 05:13:38.520 [INFO][4769] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 15 05:13:38.548870 containerd[1563]: 2025-07-15 05:13:38.520 [INFO][4769] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="d29ff4fbe4473ce76222dfe3a6749993c2ea18b965532ac63b06a2710fd19a02" HandleID="k8s-pod-network.d29ff4fbe4473ce76222dfe3a6749993c2ea18b965532ac63b06a2710fd19a02" Workload="localhost-k8s-coredns--668d6bf9bc--8smjf-eth0" Jul 15 05:13:38.549023 containerd[1563]: 2025-07-15 05:13:38.525 [INFO][4733] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d29ff4fbe4473ce76222dfe3a6749993c2ea18b965532ac63b06a2710fd19a02" Namespace="kube-system" Pod="coredns-668d6bf9bc-8smjf" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--8smjf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--8smjf-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"fec24b87-b14c-4e24-8756-b676ee924b20", ResourceVersion:"829", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 5, 12, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-8smjf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4f7eb898a9a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 05:13:38.549133 containerd[1563]: 2025-07-15 05:13:38.525 [INFO][4733] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="d29ff4fbe4473ce76222dfe3a6749993c2ea18b965532ac63b06a2710fd19a02" Namespace="kube-system" Pod="coredns-668d6bf9bc-8smjf" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--8smjf-eth0" Jul 15 05:13:38.549133 containerd[1563]: 2025-07-15 05:13:38.525 [INFO][4733] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4f7eb898a9a ContainerID="d29ff4fbe4473ce76222dfe3a6749993c2ea18b965532ac63b06a2710fd19a02" Namespace="kube-system" Pod="coredns-668d6bf9bc-8smjf" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--8smjf-eth0" Jul 15 05:13:38.549133 containerd[1563]: 2025-07-15 05:13:38.529 [INFO][4733] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d29ff4fbe4473ce76222dfe3a6749993c2ea18b965532ac63b06a2710fd19a02" Namespace="kube-system" Pod="coredns-668d6bf9bc-8smjf" 
WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--8smjf-eth0" Jul 15 05:13:38.549235 containerd[1563]: 2025-07-15 05:13:38.533 [INFO][4733] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d29ff4fbe4473ce76222dfe3a6749993c2ea18b965532ac63b06a2710fd19a02" Namespace="kube-system" Pod="coredns-668d6bf9bc-8smjf" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--8smjf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--8smjf-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"fec24b87-b14c-4e24-8756-b676ee924b20", ResourceVersion:"829", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 5, 12, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d29ff4fbe4473ce76222dfe3a6749993c2ea18b965532ac63b06a2710fd19a02", Pod:"coredns-668d6bf9bc-8smjf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4f7eb898a9a", MAC:"4e:5e:e5:0d:5e:dd", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 05:13:38.549235 containerd[1563]: 2025-07-15 05:13:38.543 [INFO][4733] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d29ff4fbe4473ce76222dfe3a6749993c2ea18b965532ac63b06a2710fd19a02" Namespace="kube-system" Pod="coredns-668d6bf9bc-8smjf" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--8smjf-eth0" Jul 15 05:13:38.576515 containerd[1563]: time="2025-07-15T05:13:38.576459697Z" level=info msg="connecting to shim d29ff4fbe4473ce76222dfe3a6749993c2ea18b965532ac63b06a2710fd19a02" address="unix:///run/containerd/s/783363470dadd3f9d480dadf4ca372133784d75e7412422e583bd28047ccfab5" namespace=k8s.io protocol=ttrpc version=3 Jul 15 05:13:38.616671 systemd[1]: Started cri-containerd-d29ff4fbe4473ce76222dfe3a6749993c2ea18b965532ac63b06a2710fd19a02.scope - libcontainer container d29ff4fbe4473ce76222dfe3a6749993c2ea18b965532ac63b06a2710fd19a02. 
Jul 15 05:13:38.632658 systemd-resolved[1404]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 15 05:13:38.668182 containerd[1563]: time="2025-07-15T05:13:38.668117724Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-8smjf,Uid:fec24b87-b14c-4e24-8756-b676ee924b20,Namespace:kube-system,Attempt:0,} returns sandbox id \"d29ff4fbe4473ce76222dfe3a6749993c2ea18b965532ac63b06a2710fd19a02\"" Jul 15 05:13:38.671451 containerd[1563]: time="2025-07-15T05:13:38.670844648Z" level=info msg="CreateContainer within sandbox \"d29ff4fbe4473ce76222dfe3a6749993c2ea18b965532ac63b06a2710fd19a02\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 15 05:13:38.685468 containerd[1563]: time="2025-07-15T05:13:38.685423337Z" level=info msg="Container 845f7b7be7533cb98622394fcd901392a61ca47842c21d92b5e62ee1e18bbf87: CDI devices from CRI Config.CDIDevices: []" Jul 15 05:13:38.690234 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3488120251.mount: Deactivated successfully. 
Jul 15 05:13:38.717996 containerd[1563]: time="2025-07-15T05:13:38.717942691Z" level=info msg="CreateContainer within sandbox \"d29ff4fbe4473ce76222dfe3a6749993c2ea18b965532ac63b06a2710fd19a02\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"845f7b7be7533cb98622394fcd901392a61ca47842c21d92b5e62ee1e18bbf87\"" Jul 15 05:13:38.719273 containerd[1563]: time="2025-07-15T05:13:38.719249138Z" level=info msg="StartContainer for \"845f7b7be7533cb98622394fcd901392a61ca47842c21d92b5e62ee1e18bbf87\"" Jul 15 05:13:38.720534 containerd[1563]: time="2025-07-15T05:13:38.720509818Z" level=info msg="connecting to shim 845f7b7be7533cb98622394fcd901392a61ca47842c21d92b5e62ee1e18bbf87" address="unix:///run/containerd/s/783363470dadd3f9d480dadf4ca372133784d75e7412422e583bd28047ccfab5" protocol=ttrpc version=3 Jul 15 05:13:38.745614 systemd[1]: Started cri-containerd-845f7b7be7533cb98622394fcd901392a61ca47842c21d92b5e62ee1e18bbf87.scope - libcontainer container 845f7b7be7533cb98622394fcd901392a61ca47842c21d92b5e62ee1e18bbf87. 
Jul 15 05:13:38.781077 containerd[1563]: time="2025-07-15T05:13:38.780973864Z" level=info msg="StartContainer for \"845f7b7be7533cb98622394fcd901392a61ca47842c21d92b5e62ee1e18bbf87\" returns successfully" Jul 15 05:13:38.861635 systemd-networkd[1492]: calid7af1226b5b: Gained IPv6LL Jul 15 05:13:39.052541 containerd[1563]: time="2025-07-15T05:13:39.052322995Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-bc7876d79-h8t8t,Uid:ff83fb40-4d11-48a6-9cd9-48b99ded8970,Namespace:calico-apiserver,Attempt:0,}" Jul 15 05:13:39.226529 systemd-networkd[1492]: cali5d8d1a155c1: Link UP Jul 15 05:13:39.227574 systemd-networkd[1492]: cali5d8d1a155c1: Gained carrier Jul 15 05:13:39.240433 kubelet[2713]: I0715 05:13:39.240053 2713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-5b874f976b-2kbvl" podStartSLOduration=3.036470729 podStartE2EDuration="11.240030039s" podCreationTimestamp="2025-07-15 05:13:28 +0000 UTC" firstStartedPulling="2025-07-15 05:13:29.699056579 +0000 UTC m=+66.741520627" lastFinishedPulling="2025-07-15 05:13:37.902615889 +0000 UTC m=+74.945079937" observedRunningTime="2025-07-15 05:13:38.713225517 +0000 UTC m=+75.755689565" watchObservedRunningTime="2025-07-15 05:13:39.240030039 +0000 UTC m=+76.282494087" Jul 15 05:13:39.246076 containerd[1563]: 2025-07-15 05:13:39.153 [INFO][4935] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--bc7876d79--h8t8t-eth0 calico-apiserver-bc7876d79- calico-apiserver ff83fb40-4d11-48a6-9cd9-48b99ded8970 827 0 2025-07-15 05:12:41 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:bc7876d79 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-bc7876d79-h8t8t eth0 calico-apiserver [] [] 
[kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali5d8d1a155c1 [] [] }} ContainerID="6961d177bc214f1372b2522c898fd902af10fb94aa17c6bbfce558dc297616cc" Namespace="calico-apiserver" Pod="calico-apiserver-bc7876d79-h8t8t" WorkloadEndpoint="localhost-k8s-calico--apiserver--bc7876d79--h8t8t-" Jul 15 05:13:39.246076 containerd[1563]: 2025-07-15 05:13:39.153 [INFO][4935] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6961d177bc214f1372b2522c898fd902af10fb94aa17c6bbfce558dc297616cc" Namespace="calico-apiserver" Pod="calico-apiserver-bc7876d79-h8t8t" WorkloadEndpoint="localhost-k8s-calico--apiserver--bc7876d79--h8t8t-eth0" Jul 15 05:13:39.246076 containerd[1563]: 2025-07-15 05:13:39.185 [INFO][4944] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6961d177bc214f1372b2522c898fd902af10fb94aa17c6bbfce558dc297616cc" HandleID="k8s-pod-network.6961d177bc214f1372b2522c898fd902af10fb94aa17c6bbfce558dc297616cc" Workload="localhost-k8s-calico--apiserver--bc7876d79--h8t8t-eth0" Jul 15 05:13:39.246076 containerd[1563]: 2025-07-15 05:13:39.185 [INFO][4944] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6961d177bc214f1372b2522c898fd902af10fb94aa17c6bbfce558dc297616cc" HandleID="k8s-pod-network.6961d177bc214f1372b2522c898fd902af10fb94aa17c6bbfce558dc297616cc" Workload="localhost-k8s-calico--apiserver--bc7876d79--h8t8t-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00058c130), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-bc7876d79-h8t8t", "timestamp":"2025-07-15 05:13:39.18569641 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 05:13:39.246076 containerd[1563]: 2025-07-15 05:13:39.185 [INFO][4944] ipam/ipam_plugin.go 353: About to acquire 
host-wide IPAM lock. Jul 15 05:13:39.246076 containerd[1563]: 2025-07-15 05:13:39.186 [INFO][4944] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 15 05:13:39.246076 containerd[1563]: 2025-07-15 05:13:39.186 [INFO][4944] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 15 05:13:39.246076 containerd[1563]: 2025-07-15 05:13:39.193 [INFO][4944] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6961d177bc214f1372b2522c898fd902af10fb94aa17c6bbfce558dc297616cc" host="localhost" Jul 15 05:13:39.246076 containerd[1563]: 2025-07-15 05:13:39.198 [INFO][4944] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 15 05:13:39.246076 containerd[1563]: 2025-07-15 05:13:39.202 [INFO][4944] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 15 05:13:39.246076 containerd[1563]: 2025-07-15 05:13:39.204 [INFO][4944] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 15 05:13:39.246076 containerd[1563]: 2025-07-15 05:13:39.207 [INFO][4944] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 15 05:13:39.246076 containerd[1563]: 2025-07-15 05:13:39.207 [INFO][4944] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.6961d177bc214f1372b2522c898fd902af10fb94aa17c6bbfce558dc297616cc" host="localhost" Jul 15 05:13:39.246076 containerd[1563]: 2025-07-15 05:13:39.208 [INFO][4944] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.6961d177bc214f1372b2522c898fd902af10fb94aa17c6bbfce558dc297616cc Jul 15 05:13:39.246076 containerd[1563]: 2025-07-15 05:13:39.212 [INFO][4944] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.6961d177bc214f1372b2522c898fd902af10fb94aa17c6bbfce558dc297616cc" host="localhost" Jul 15 05:13:39.246076 containerd[1563]: 2025-07-15 05:13:39.219 
[INFO][4944] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.6961d177bc214f1372b2522c898fd902af10fb94aa17c6bbfce558dc297616cc" host="localhost" Jul 15 05:13:39.246076 containerd[1563]: 2025-07-15 05:13:39.220 [INFO][4944] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.6961d177bc214f1372b2522c898fd902af10fb94aa17c6bbfce558dc297616cc" host="localhost" Jul 15 05:13:39.246076 containerd[1563]: 2025-07-15 05:13:39.220 [INFO][4944] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 15 05:13:39.246076 containerd[1563]: 2025-07-15 05:13:39.220 [INFO][4944] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="6961d177bc214f1372b2522c898fd902af10fb94aa17c6bbfce558dc297616cc" HandleID="k8s-pod-network.6961d177bc214f1372b2522c898fd902af10fb94aa17c6bbfce558dc297616cc" Workload="localhost-k8s-calico--apiserver--bc7876d79--h8t8t-eth0" Jul 15 05:13:39.246896 containerd[1563]: 2025-07-15 05:13:39.223 [INFO][4935] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6961d177bc214f1372b2522c898fd902af10fb94aa17c6bbfce558dc297616cc" Namespace="calico-apiserver" Pod="calico-apiserver-bc7876d79-h8t8t" WorkloadEndpoint="localhost-k8s-calico--apiserver--bc7876d79--h8t8t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--bc7876d79--h8t8t-eth0", GenerateName:"calico-apiserver-bc7876d79-", Namespace:"calico-apiserver", SelfLink:"", UID:"ff83fb40-4d11-48a6-9cd9-48b99ded8970", ResourceVersion:"827", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 5, 12, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", 
"pod-template-hash":"bc7876d79", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-bc7876d79-h8t8t", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5d8d1a155c1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 05:13:39.246896 containerd[1563]: 2025-07-15 05:13:39.224 [INFO][4935] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="6961d177bc214f1372b2522c898fd902af10fb94aa17c6bbfce558dc297616cc" Namespace="calico-apiserver" Pod="calico-apiserver-bc7876d79-h8t8t" WorkloadEndpoint="localhost-k8s-calico--apiserver--bc7876d79--h8t8t-eth0" Jul 15 05:13:39.246896 containerd[1563]: 2025-07-15 05:13:39.224 [INFO][4935] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5d8d1a155c1 ContainerID="6961d177bc214f1372b2522c898fd902af10fb94aa17c6bbfce558dc297616cc" Namespace="calico-apiserver" Pod="calico-apiserver-bc7876d79-h8t8t" WorkloadEndpoint="localhost-k8s-calico--apiserver--bc7876d79--h8t8t-eth0" Jul 15 05:13:39.246896 containerd[1563]: 2025-07-15 05:13:39.226 [INFO][4935] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6961d177bc214f1372b2522c898fd902af10fb94aa17c6bbfce558dc297616cc" Namespace="calico-apiserver" Pod="calico-apiserver-bc7876d79-h8t8t" WorkloadEndpoint="localhost-k8s-calico--apiserver--bc7876d79--h8t8t-eth0" Jul 15 05:13:39.246896 
containerd[1563]: 2025-07-15 05:13:39.226 [INFO][4935] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6961d177bc214f1372b2522c898fd902af10fb94aa17c6bbfce558dc297616cc" Namespace="calico-apiserver" Pod="calico-apiserver-bc7876d79-h8t8t" WorkloadEndpoint="localhost-k8s-calico--apiserver--bc7876d79--h8t8t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--bc7876d79--h8t8t-eth0", GenerateName:"calico-apiserver-bc7876d79-", Namespace:"calico-apiserver", SelfLink:"", UID:"ff83fb40-4d11-48a6-9cd9-48b99ded8970", ResourceVersion:"827", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 5, 12, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"bc7876d79", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6961d177bc214f1372b2522c898fd902af10fb94aa17c6bbfce558dc297616cc", Pod:"calico-apiserver-bc7876d79-h8t8t", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5d8d1a155c1", MAC:"56:1a:a4:48:ed:66", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 05:13:39.246896 containerd[1563]: 2025-07-15 05:13:39.239 
[INFO][4935] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6961d177bc214f1372b2522c898fd902af10fb94aa17c6bbfce558dc297616cc" Namespace="calico-apiserver" Pod="calico-apiserver-bc7876d79-h8t8t" WorkloadEndpoint="localhost-k8s-calico--apiserver--bc7876d79--h8t8t-eth0" Jul 15 05:13:39.334463 containerd[1563]: time="2025-07-15T05:13:39.334279360Z" level=info msg="connecting to shim 6961d177bc214f1372b2522c898fd902af10fb94aa17c6bbfce558dc297616cc" address="unix:///run/containerd/s/6201d621fa1801c62fe5ccce1984d31d4fac059e116c47e090c91d15a5fe8d35" namespace=k8s.io protocol=ttrpc version=3 Jul 15 05:13:39.367677 systemd[1]: Started cri-containerd-6961d177bc214f1372b2522c898fd902af10fb94aa17c6bbfce558dc297616cc.scope - libcontainer container 6961d177bc214f1372b2522c898fd902af10fb94aa17c6bbfce558dc297616cc. Jul 15 05:13:39.387294 systemd-resolved[1404]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 15 05:13:39.438794 containerd[1563]: time="2025-07-15T05:13:39.438716371Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-bc7876d79-h8t8t,Uid:ff83fb40-4d11-48a6-9cd9-48b99ded8970,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"6961d177bc214f1372b2522c898fd902af10fb94aa17c6bbfce558dc297616cc\"" Jul 15 05:13:39.803195 kubelet[2713]: I0715 05:13:39.803044 2713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-8smjf" podStartSLOduration=69.80302168 podStartE2EDuration="1m9.80302168s" podCreationTimestamp="2025-07-15 05:12:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 05:13:39.778063878 +0000 UTC m=+76.820527936" watchObservedRunningTime="2025-07-15 05:13:39.80302168 +0000 UTC m=+76.845485718" Jul 15 05:13:39.950761 systemd-networkd[1492]: caliece59fca6aa: Gained IPv6LL Jul 15 05:13:40.052065 containerd[1563]: 
time="2025-07-15T05:13:40.052015752Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-phzb2,Uid:5b96ba32-0771-4e06-b517-802d100be4c8,Namespace:calico-system,Attempt:0,}" Jul 15 05:13:40.053015 containerd[1563]: time="2025-07-15T05:13:40.052434755Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-xfv2c,Uid:b92442d1-6355-4dea-8fde-9117f56f0339,Namespace:kube-system,Attempt:0,}" Jul 15 05:13:40.334788 systemd-networkd[1492]: cali5d8d1a155c1: Gained IPv6LL Jul 15 05:13:40.363892 containerd[1563]: time="2025-07-15T05:13:40.363805814Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:13:40.366713 containerd[1563]: time="2025-07-15T05:13:40.366665917Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=47317977" Jul 15 05:13:40.369110 containerd[1563]: time="2025-07-15T05:13:40.369042643Z" level=info msg="ImageCreate event name:\"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:13:40.373454 containerd[1563]: time="2025-07-15T05:13:40.373358006Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:13:40.375111 containerd[1563]: time="2025-07-15T05:13:40.375076480Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 2.472163752s" Jul 15 05:13:40.375185 containerd[1563]: 
time="2025-07-15T05:13:40.375116657Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Jul 15 05:13:40.379028 containerd[1563]: time="2025-07-15T05:13:40.378681852Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Jul 15 05:13:40.382138 containerd[1563]: time="2025-07-15T05:13:40.382063184Z" level=info msg="CreateContainer within sandbox \"4e00bf09d0281b97bd00f78a3d0f8fbc24b74283d5071b334b47cd70c2d313d7\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 15 05:13:40.403126 containerd[1563]: time="2025-07-15T05:13:40.401360957Z" level=info msg="Container 2f77361b230aef2cf769d1ccf46483b3cf16d3ff14b6cf0e5be21b8bfceba68f: CDI devices from CRI Config.CDIDevices: []" Jul 15 05:13:40.425767 containerd[1563]: time="2025-07-15T05:13:40.425605003Z" level=info msg="CreateContainer within sandbox \"4e00bf09d0281b97bd00f78a3d0f8fbc24b74283d5071b334b47cd70c2d313d7\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"2f77361b230aef2cf769d1ccf46483b3cf16d3ff14b6cf0e5be21b8bfceba68f\"" Jul 15 05:13:40.427383 containerd[1563]: time="2025-07-15T05:13:40.426861662Z" level=info msg="StartContainer for \"2f77361b230aef2cf769d1ccf46483b3cf16d3ff14b6cf0e5be21b8bfceba68f\"" Jul 15 05:13:40.429198 containerd[1563]: time="2025-07-15T05:13:40.429162503Z" level=info msg="connecting to shim 2f77361b230aef2cf769d1ccf46483b3cf16d3ff14b6cf0e5be21b8bfceba68f" address="unix:///run/containerd/s/a332aaa5d0342ca10ff5540f12eaa3bdd0423a92dbbdd665ae02da0aa6bd93ed" protocol=ttrpc version=3 Jul 15 05:13:40.462997 systemd-networkd[1492]: cali4f7eb898a9a: Gained IPv6LL Jul 15 05:13:40.469742 systemd[1]: Started cri-containerd-2f77361b230aef2cf769d1ccf46483b3cf16d3ff14b6cf0e5be21b8bfceba68f.scope - libcontainer container 2f77361b230aef2cf769d1ccf46483b3cf16d3ff14b6cf0e5be21b8bfceba68f. 
Jul 15 05:13:40.496981 systemd-networkd[1492]: califc46d8536bf: Link UP Jul 15 05:13:40.498854 systemd-networkd[1492]: califc46d8536bf: Gained carrier Jul 15 05:13:40.516401 containerd[1563]: 2025-07-15 05:13:40.365 [INFO][5016] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--xfv2c-eth0 coredns-668d6bf9bc- kube-system b92442d1-6355-4dea-8fde-9117f56f0339 816 0 2025-07-15 05:12:30 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-xfv2c eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] califc46d8536bf [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="66ee3dbae95f10616d7c3d206467067b5fb9989e054171011f16bc35f0538393" Namespace="kube-system" Pod="coredns-668d6bf9bc-xfv2c" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--xfv2c-" Jul 15 05:13:40.516401 containerd[1563]: 2025-07-15 05:13:40.365 [INFO][5016] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="66ee3dbae95f10616d7c3d206467067b5fb9989e054171011f16bc35f0538393" Namespace="kube-system" Pod="coredns-668d6bf9bc-xfv2c" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--xfv2c-eth0" Jul 15 05:13:40.516401 containerd[1563]: 2025-07-15 05:13:40.418 [INFO][5051] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="66ee3dbae95f10616d7c3d206467067b5fb9989e054171011f16bc35f0538393" HandleID="k8s-pod-network.66ee3dbae95f10616d7c3d206467067b5fb9989e054171011f16bc35f0538393" Workload="localhost-k8s-coredns--668d6bf9bc--xfv2c-eth0" Jul 15 05:13:40.516401 containerd[1563]: 2025-07-15 05:13:40.418 [INFO][5051] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="66ee3dbae95f10616d7c3d206467067b5fb9989e054171011f16bc35f0538393" 
HandleID="k8s-pod-network.66ee3dbae95f10616d7c3d206467067b5fb9989e054171011f16bc35f0538393" Workload="localhost-k8s-coredns--668d6bf9bc--xfv2c-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f6f0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-xfv2c", "timestamp":"2025-07-15 05:13:40.418267497 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 05:13:40.516401 containerd[1563]: 2025-07-15 05:13:40.418 [INFO][5051] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 05:13:40.516401 containerd[1563]: 2025-07-15 05:13:40.419 [INFO][5051] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 15 05:13:40.516401 containerd[1563]: 2025-07-15 05:13:40.419 [INFO][5051] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 15 05:13:40.516401 containerd[1563]: 2025-07-15 05:13:40.436 [INFO][5051] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.66ee3dbae95f10616d7c3d206467067b5fb9989e054171011f16bc35f0538393" host="localhost" Jul 15 05:13:40.516401 containerd[1563]: 2025-07-15 05:13:40.451 [INFO][5051] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 15 05:13:40.516401 containerd[1563]: 2025-07-15 05:13:40.458 [INFO][5051] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 15 05:13:40.516401 containerd[1563]: 2025-07-15 05:13:40.463 [INFO][5051] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 15 05:13:40.516401 containerd[1563]: 2025-07-15 05:13:40.467 [INFO][5051] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 15 05:13:40.516401 containerd[1563]: 2025-07-15 05:13:40.467 
[INFO][5051] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.66ee3dbae95f10616d7c3d206467067b5fb9989e054171011f16bc35f0538393" host="localhost" Jul 15 05:13:40.516401 containerd[1563]: 2025-07-15 05:13:40.471 [INFO][5051] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.66ee3dbae95f10616d7c3d206467067b5fb9989e054171011f16bc35f0538393 Jul 15 05:13:40.516401 containerd[1563]: 2025-07-15 05:13:40.476 [INFO][5051] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.66ee3dbae95f10616d7c3d206467067b5fb9989e054171011f16bc35f0538393" host="localhost" Jul 15 05:13:40.516401 containerd[1563]: 2025-07-15 05:13:40.487 [INFO][5051] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.66ee3dbae95f10616d7c3d206467067b5fb9989e054171011f16bc35f0538393" host="localhost" Jul 15 05:13:40.516401 containerd[1563]: 2025-07-15 05:13:40.487 [INFO][5051] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.66ee3dbae95f10616d7c3d206467067b5fb9989e054171011f16bc35f0538393" host="localhost" Jul 15 05:13:40.516401 containerd[1563]: 2025-07-15 05:13:40.487 [INFO][5051] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 15 05:13:40.516401 containerd[1563]: 2025-07-15 05:13:40.487 [INFO][5051] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="66ee3dbae95f10616d7c3d206467067b5fb9989e054171011f16bc35f0538393" HandleID="k8s-pod-network.66ee3dbae95f10616d7c3d206467067b5fb9989e054171011f16bc35f0538393" Workload="localhost-k8s-coredns--668d6bf9bc--xfv2c-eth0" Jul 15 05:13:40.517189 containerd[1563]: 2025-07-15 05:13:40.492 [INFO][5016] cni-plugin/k8s.go 418: Populated endpoint ContainerID="66ee3dbae95f10616d7c3d206467067b5fb9989e054171011f16bc35f0538393" Namespace="kube-system" Pod="coredns-668d6bf9bc-xfv2c" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--xfv2c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--xfv2c-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"b92442d1-6355-4dea-8fde-9117f56f0339", ResourceVersion:"816", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 5, 12, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-xfv2c", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califc46d8536bf", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 05:13:40.517189 containerd[1563]: 2025-07-15 05:13:40.492 [INFO][5016] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="66ee3dbae95f10616d7c3d206467067b5fb9989e054171011f16bc35f0538393" Namespace="kube-system" Pod="coredns-668d6bf9bc-xfv2c" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--xfv2c-eth0" Jul 15 05:13:40.517189 containerd[1563]: 2025-07-15 05:13:40.492 [INFO][5016] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califc46d8536bf ContainerID="66ee3dbae95f10616d7c3d206467067b5fb9989e054171011f16bc35f0538393" Namespace="kube-system" Pod="coredns-668d6bf9bc-xfv2c" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--xfv2c-eth0" Jul 15 05:13:40.517189 containerd[1563]: 2025-07-15 05:13:40.500 [INFO][5016] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="66ee3dbae95f10616d7c3d206467067b5fb9989e054171011f16bc35f0538393" Namespace="kube-system" Pod="coredns-668d6bf9bc-xfv2c" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--xfv2c-eth0" Jul 15 05:13:40.517189 containerd[1563]: 2025-07-15 05:13:40.501 [INFO][5016] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="66ee3dbae95f10616d7c3d206467067b5fb9989e054171011f16bc35f0538393" Namespace="kube-system" Pod="coredns-668d6bf9bc-xfv2c" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--xfv2c-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--xfv2c-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"b92442d1-6355-4dea-8fde-9117f56f0339", ResourceVersion:"816", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 5, 12, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"66ee3dbae95f10616d7c3d206467067b5fb9989e054171011f16bc35f0538393", Pod:"coredns-668d6bf9bc-xfv2c", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califc46d8536bf", MAC:"e6:cb:43:02:67:e5", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 05:13:40.517189 containerd[1563]: 2025-07-15 05:13:40.513 [INFO][5016] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="66ee3dbae95f10616d7c3d206467067b5fb9989e054171011f16bc35f0538393" Namespace="kube-system" Pod="coredns-668d6bf9bc-xfv2c" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--xfv2c-eth0" Jul 15 05:13:40.561725 containerd[1563]: time="2025-07-15T05:13:40.561660081Z" level=info msg="StartContainer for \"2f77361b230aef2cf769d1ccf46483b3cf16d3ff14b6cf0e5be21b8bfceba68f\" returns successfully" Jul 15 05:13:40.629570 containerd[1563]: time="2025-07-15T05:13:40.629518596Z" level=info msg="connecting to shim 66ee3dbae95f10616d7c3d206467067b5fb9989e054171011f16bc35f0538393" address="unix:///run/containerd/s/7eff5b0e130e838907d41982fc25249d391fd9085cf860f979d43ebee26bc87b" namespace=k8s.io protocol=ttrpc version=3 Jul 15 05:13:40.667604 systemd[1]: Started cri-containerd-66ee3dbae95f10616d7c3d206467067b5fb9989e054171011f16bc35f0538393.scope - libcontainer container 66ee3dbae95f10616d7c3d206467067b5fb9989e054171011f16bc35f0538393. Jul 15 05:13:40.684971 systemd-resolved[1404]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 15 05:13:40.781693 containerd[1563]: time="2025-07-15T05:13:40.781550352Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-xfv2c,Uid:b92442d1-6355-4dea-8fde-9117f56f0339,Namespace:kube-system,Attempt:0,} returns sandbox id \"66ee3dbae95f10616d7c3d206467067b5fb9989e054171011f16bc35f0538393\"" Jul 15 05:13:40.786858 containerd[1563]: time="2025-07-15T05:13:40.786738086Z" level=info msg="CreateContainer within sandbox \"66ee3dbae95f10616d7c3d206467067b5fb9989e054171011f16bc35f0538393\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 15 05:13:40.800767 kubelet[2713]: I0715 05:13:40.800665 2713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-bc7876d79-7ql6g" podStartSLOduration=56.883944552 podStartE2EDuration="59.800640848s" podCreationTimestamp="2025-07-15 05:12:41 +0000 UTC" 
firstStartedPulling="2025-07-15 05:13:37.460207908 +0000 UTC m=+74.502671956" lastFinishedPulling="2025-07-15 05:13:40.376904184 +0000 UTC m=+77.419368252" observedRunningTime="2025-07-15 05:13:40.800400628 +0000 UTC m=+77.842864686" watchObservedRunningTime="2025-07-15 05:13:40.800640848 +0000 UTC m=+77.843104896" Jul 15 05:13:40.801986 containerd[1563]: time="2025-07-15T05:13:40.801922204Z" level=info msg="Container 625feb50b652327186eaedab388b7e3987f0ba7054bd4db7fe641899c7bde7c4: CDI devices from CRI Config.CDIDevices: []" Jul 15 05:13:40.812669 systemd-networkd[1492]: cali6e22d5e1641: Link UP Jul 15 05:13:40.813380 systemd-networkd[1492]: cali6e22d5e1641: Gained carrier Jul 15 05:13:40.817950 containerd[1563]: time="2025-07-15T05:13:40.817885887Z" level=info msg="CreateContainer within sandbox \"66ee3dbae95f10616d7c3d206467067b5fb9989e054171011f16bc35f0538393\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"625feb50b652327186eaedab388b7e3987f0ba7054bd4db7fe641899c7bde7c4\"" Jul 15 05:13:40.822665 containerd[1563]: time="2025-07-15T05:13:40.822527525Z" level=info msg="StartContainer for \"625feb50b652327186eaedab388b7e3987f0ba7054bd4db7fe641899c7bde7c4\"" Jul 15 05:13:40.826340 containerd[1563]: time="2025-07-15T05:13:40.826262264Z" level=info msg="connecting to shim 625feb50b652327186eaedab388b7e3987f0ba7054bd4db7fe641899c7bde7c4" address="unix:///run/containerd/s/7eff5b0e130e838907d41982fc25249d391fd9085cf860f979d43ebee26bc87b" protocol=ttrpc version=3 Jul 15 05:13:40.841943 containerd[1563]: 2025-07-15 05:13:40.378 [INFO][5028] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--768f4c5c69--phzb2-eth0 goldmane-768f4c5c69- calico-system 5b96ba32-0771-4e06-b517-802d100be4c8 820 0 2025-07-15 05:12:46 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:768f4c5c69 projectcalico.org/namespace:calico-system 
projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-768f4c5c69-phzb2 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali6e22d5e1641 [] [] }} ContainerID="24ed947cf7839844764a3509724df44aa2f0dbe1bd4437374f6ca7d3f28117d4" Namespace="calico-system" Pod="goldmane-768f4c5c69-phzb2" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--phzb2-" Jul 15 05:13:40.841943 containerd[1563]: 2025-07-15 05:13:40.379 [INFO][5028] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="24ed947cf7839844764a3509724df44aa2f0dbe1bd4437374f6ca7d3f28117d4" Namespace="calico-system" Pod="goldmane-768f4c5c69-phzb2" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--phzb2-eth0" Jul 15 05:13:40.841943 containerd[1563]: 2025-07-15 05:13:40.435 [INFO][5057] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="24ed947cf7839844764a3509724df44aa2f0dbe1bd4437374f6ca7d3f28117d4" HandleID="k8s-pod-network.24ed947cf7839844764a3509724df44aa2f0dbe1bd4437374f6ca7d3f28117d4" Workload="localhost-k8s-goldmane--768f4c5c69--phzb2-eth0" Jul 15 05:13:40.841943 containerd[1563]: 2025-07-15 05:13:40.435 [INFO][5057] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="24ed947cf7839844764a3509724df44aa2f0dbe1bd4437374f6ca7d3f28117d4" HandleID="k8s-pod-network.24ed947cf7839844764a3509724df44aa2f0dbe1bd4437374f6ca7d3f28117d4" Workload="localhost-k8s-goldmane--768f4c5c69--phzb2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000c1f30), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-768f4c5c69-phzb2", "timestamp":"2025-07-15 05:13:40.435069566 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 05:13:40.841943 containerd[1563]: 
2025-07-15 05:13:40.435 [INFO][5057] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 05:13:40.841943 containerd[1563]: 2025-07-15 05:13:40.487 [INFO][5057] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 15 05:13:40.841943 containerd[1563]: 2025-07-15 05:13:40.487 [INFO][5057] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 15 05:13:40.841943 containerd[1563]: 2025-07-15 05:13:40.540 [INFO][5057] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.24ed947cf7839844764a3509724df44aa2f0dbe1bd4437374f6ca7d3f28117d4" host="localhost" Jul 15 05:13:40.841943 containerd[1563]: 2025-07-15 05:13:40.552 [INFO][5057] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 15 05:13:40.841943 containerd[1563]: 2025-07-15 05:13:40.560 [INFO][5057] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 15 05:13:40.841943 containerd[1563]: 2025-07-15 05:13:40.564 [INFO][5057] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 15 05:13:40.841943 containerd[1563]: 2025-07-15 05:13:40.568 [INFO][5057] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 15 05:13:40.841943 containerd[1563]: 2025-07-15 05:13:40.568 [INFO][5057] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.24ed947cf7839844764a3509724df44aa2f0dbe1bd4437374f6ca7d3f28117d4" host="localhost" Jul 15 05:13:40.841943 containerd[1563]: 2025-07-15 05:13:40.570 [INFO][5057] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.24ed947cf7839844764a3509724df44aa2f0dbe1bd4437374f6ca7d3f28117d4 Jul 15 05:13:40.841943 containerd[1563]: 2025-07-15 05:13:40.784 [INFO][5057] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.24ed947cf7839844764a3509724df44aa2f0dbe1bd4437374f6ca7d3f28117d4" 
host="localhost" Jul 15 05:13:40.841943 containerd[1563]: 2025-07-15 05:13:40.795 [INFO][5057] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.24ed947cf7839844764a3509724df44aa2f0dbe1bd4437374f6ca7d3f28117d4" host="localhost" Jul 15 05:13:40.841943 containerd[1563]: 2025-07-15 05:13:40.795 [INFO][5057] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.24ed947cf7839844764a3509724df44aa2f0dbe1bd4437374f6ca7d3f28117d4" host="localhost" Jul 15 05:13:40.841943 containerd[1563]: 2025-07-15 05:13:40.796 [INFO][5057] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 15 05:13:40.841943 containerd[1563]: 2025-07-15 05:13:40.796 [INFO][5057] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="24ed947cf7839844764a3509724df44aa2f0dbe1bd4437374f6ca7d3f28117d4" HandleID="k8s-pod-network.24ed947cf7839844764a3509724df44aa2f0dbe1bd4437374f6ca7d3f28117d4" Workload="localhost-k8s-goldmane--768f4c5c69--phzb2-eth0" Jul 15 05:13:40.842578 containerd[1563]: 2025-07-15 05:13:40.805 [INFO][5028] cni-plugin/k8s.go 418: Populated endpoint ContainerID="24ed947cf7839844764a3509724df44aa2f0dbe1bd4437374f6ca7d3f28117d4" Namespace="calico-system" Pod="goldmane-768f4c5c69-phzb2" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--phzb2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--768f4c5c69--phzb2-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"5b96ba32-0771-4e06-b517-802d100be4c8", ResourceVersion:"820", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 5, 12, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", 
"pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-768f4c5c69-phzb2", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali6e22d5e1641", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 05:13:40.842578 containerd[1563]: 2025-07-15 05:13:40.805 [INFO][5028] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="24ed947cf7839844764a3509724df44aa2f0dbe1bd4437374f6ca7d3f28117d4" Namespace="calico-system" Pod="goldmane-768f4c5c69-phzb2" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--phzb2-eth0" Jul 15 05:13:40.842578 containerd[1563]: 2025-07-15 05:13:40.805 [INFO][5028] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6e22d5e1641 ContainerID="24ed947cf7839844764a3509724df44aa2f0dbe1bd4437374f6ca7d3f28117d4" Namespace="calico-system" Pod="goldmane-768f4c5c69-phzb2" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--phzb2-eth0" Jul 15 05:13:40.842578 containerd[1563]: 2025-07-15 05:13:40.815 [INFO][5028] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="24ed947cf7839844764a3509724df44aa2f0dbe1bd4437374f6ca7d3f28117d4" Namespace="calico-system" Pod="goldmane-768f4c5c69-phzb2" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--phzb2-eth0" Jul 15 05:13:40.842578 containerd[1563]: 2025-07-15 05:13:40.816 [INFO][5028] cni-plugin/k8s.go 446: Added Mac, interface 
name, and active container ID to endpoint ContainerID="24ed947cf7839844764a3509724df44aa2f0dbe1bd4437374f6ca7d3f28117d4" Namespace="calico-system" Pod="goldmane-768f4c5c69-phzb2" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--phzb2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--768f4c5c69--phzb2-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"5b96ba32-0771-4e06-b517-802d100be4c8", ResourceVersion:"820", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 5, 12, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"24ed947cf7839844764a3509724df44aa2f0dbe1bd4437374f6ca7d3f28117d4", Pod:"goldmane-768f4c5c69-phzb2", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali6e22d5e1641", MAC:"8a:ab:4c:1e:26:bf", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 05:13:40.842578 containerd[1563]: 2025-07-15 05:13:40.834 [INFO][5028] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="24ed947cf7839844764a3509724df44aa2f0dbe1bd4437374f6ca7d3f28117d4" Namespace="calico-system" Pod="goldmane-768f4c5c69-phzb2" 
WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--phzb2-eth0" Jul 15 05:13:40.871020 systemd[1]: Started cri-containerd-625feb50b652327186eaedab388b7e3987f0ba7054bd4db7fe641899c7bde7c4.scope - libcontainer container 625feb50b652327186eaedab388b7e3987f0ba7054bd4db7fe641899c7bde7c4. Jul 15 05:13:40.886455 containerd[1563]: time="2025-07-15T05:13:40.885488151Z" level=info msg="connecting to shim 24ed947cf7839844764a3509724df44aa2f0dbe1bd4437374f6ca7d3f28117d4" address="unix:///run/containerd/s/3dd2f47a971e2698297433553946e0043ee8def46434033a22ba7d891917bd2a" namespace=k8s.io protocol=ttrpc version=3 Jul 15 05:13:40.925804 systemd[1]: Started cri-containerd-24ed947cf7839844764a3509724df44aa2f0dbe1bd4437374f6ca7d3f28117d4.scope - libcontainer container 24ed947cf7839844764a3509724df44aa2f0dbe1bd4437374f6ca7d3f28117d4. Jul 15 05:13:40.934016 containerd[1563]: time="2025-07-15T05:13:40.933952226Z" level=info msg="StartContainer for \"625feb50b652327186eaedab388b7e3987f0ba7054bd4db7fe641899c7bde7c4\" returns successfully" Jul 15 05:13:40.956754 systemd-resolved[1404]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 15 05:13:40.995773 containerd[1563]: time="2025-07-15T05:13:40.995727690Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-phzb2,Uid:5b96ba32-0771-4e06-b517-802d100be4c8,Namespace:calico-system,Attempt:0,} returns sandbox id \"24ed947cf7839844764a3509724df44aa2f0dbe1bd4437374f6ca7d3f28117d4\"" Jul 15 05:13:41.409498 systemd[1]: Started sshd@10-10.0.0.51:22-10.0.0.1:52354.service - OpenSSH per-connection server daemon (10.0.0.1:52354). Jul 15 05:13:41.479971 sshd[5260]: Accepted publickey for core from 10.0.0.1 port 52354 ssh2: RSA SHA256:xQteBGu1K6SjT/ucc5Duk9MfMFesvWUUvdc6KRmollo Jul 15 05:13:41.481951 sshd-session[5260]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:13:41.487352 systemd-logind[1508]: New session 11 of user core. 
Jul 15 05:13:41.494610 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 15 05:13:41.869596 systemd-networkd[1492]: califc46d8536bf: Gained IPv6LL Jul 15 05:13:42.210796 kubelet[2713]: I0715 05:13:42.210372 2713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-xfv2c" podStartSLOduration=72.210349008 podStartE2EDuration="1m12.210349008s" podCreationTimestamp="2025-07-15 05:12:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 05:13:42.209746284 +0000 UTC m=+79.252210352" watchObservedRunningTime="2025-07-15 05:13:42.210349008 +0000 UTC m=+79.252813056" Jul 15 05:13:42.221801 sshd[5263]: Connection closed by 10.0.0.1 port 52354 Jul 15 05:13:42.222184 sshd-session[5260]: pam_unix(sshd:session): session closed for user core Jul 15 05:13:42.226239 systemd[1]: sshd@10-10.0.0.51:22-10.0.0.1:52354.service: Deactivated successfully. Jul 15 05:13:42.228707 systemd[1]: session-11.scope: Deactivated successfully. Jul 15 05:13:42.230471 systemd-logind[1508]: Session 11 logged out. Waiting for processes to exit. Jul 15 05:13:42.232182 systemd-logind[1508]: Removed session 11. 
Jul 15 05:13:42.254047 systemd-networkd[1492]: cali6e22d5e1641: Gained IPv6LL Jul 15 05:13:43.525723 containerd[1563]: time="2025-07-15T05:13:43.525659319Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:13:43.527496 containerd[1563]: time="2025-07-15T05:13:43.527371906Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8759190" Jul 15 05:13:43.530499 containerd[1563]: time="2025-07-15T05:13:43.530313777Z" level=info msg="ImageCreate event name:\"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:13:43.533347 containerd[1563]: time="2025-07-15T05:13:43.533287408Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:13:43.534867 containerd[1563]: time="2025-07-15T05:13:43.534824359Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"10251893\" in 3.15609143s" Jul 15 05:13:43.534867 containerd[1563]: time="2025-07-15T05:13:43.534863805Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\"" Jul 15 05:13:43.548591 containerd[1563]: time="2025-07-15T05:13:43.548528748Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Jul 15 05:13:43.550920 containerd[1563]: time="2025-07-15T05:13:43.550703460Z" level=info msg="CreateContainer within sandbox 
\"98b84bb5ea3721d1d8563ad9bfdb62874d23a2be903e7772e44b8d0642434a06\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 15 05:13:43.575344 containerd[1563]: time="2025-07-15T05:13:43.574756965Z" level=info msg="Container 731e779d20a2ead1f4170b33d2dfcd7760ed3b9feb47f1ae3622ecb5bc2b74d8: CDI devices from CRI Config.CDIDevices: []" Jul 15 05:13:43.588199 containerd[1563]: time="2025-07-15T05:13:43.588144226Z" level=info msg="CreateContainer within sandbox \"98b84bb5ea3721d1d8563ad9bfdb62874d23a2be903e7772e44b8d0642434a06\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"731e779d20a2ead1f4170b33d2dfcd7760ed3b9feb47f1ae3622ecb5bc2b74d8\"" Jul 15 05:13:43.589615 containerd[1563]: time="2025-07-15T05:13:43.588903269Z" level=info msg="StartContainer for \"731e779d20a2ead1f4170b33d2dfcd7760ed3b9feb47f1ae3622ecb5bc2b74d8\"" Jul 15 05:13:43.592059 containerd[1563]: time="2025-07-15T05:13:43.591998854Z" level=info msg="connecting to shim 731e779d20a2ead1f4170b33d2dfcd7760ed3b9feb47f1ae3622ecb5bc2b74d8" address="unix:///run/containerd/s/d1b959e59a46f211cbb58d9edc0622b6cd47d36557fb66444992350dec3ffbba" protocol=ttrpc version=3 Jul 15 05:13:43.621700 systemd[1]: Started cri-containerd-731e779d20a2ead1f4170b33d2dfcd7760ed3b9feb47f1ae3622ecb5bc2b74d8.scope - libcontainer container 731e779d20a2ead1f4170b33d2dfcd7760ed3b9feb47f1ae3622ecb5bc2b74d8. Jul 15 05:13:43.679492 containerd[1563]: time="2025-07-15T05:13:43.679393536Z" level=info msg="StartContainer for \"731e779d20a2ead1f4170b33d2dfcd7760ed3b9feb47f1ae3622ecb5bc2b74d8\" returns successfully" Jul 15 05:13:47.246385 systemd[1]: Started sshd@11-10.0.0.51:22-10.0.0.1:52358.service - OpenSSH per-connection server daemon (10.0.0.1:52358). 
Jul 15 05:13:47.312361 sshd[5324]: Accepted publickey for core from 10.0.0.1 port 52358 ssh2: RSA SHA256:xQteBGu1K6SjT/ucc5Duk9MfMFesvWUUvdc6KRmollo Jul 15 05:13:47.315244 sshd-session[5324]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:13:47.321519 systemd-logind[1508]: New session 12 of user core. Jul 15 05:13:47.332629 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 15 05:13:47.494441 sshd[5331]: Connection closed by 10.0.0.1 port 52358 Jul 15 05:13:47.495166 sshd-session[5324]: pam_unix(sshd:session): session closed for user core Jul 15 05:13:47.505924 systemd[1]: sshd@11-10.0.0.51:22-10.0.0.1:52358.service: Deactivated successfully. Jul 15 05:13:47.509296 systemd[1]: session-12.scope: Deactivated successfully. Jul 15 05:13:47.511602 systemd-logind[1508]: Session 12 logged out. Waiting for processes to exit. Jul 15 05:13:47.514464 systemd-logind[1508]: Removed session 12. Jul 15 05:13:47.518701 systemd[1]: Started sshd@12-10.0.0.51:22-10.0.0.1:52370.service - OpenSSH per-connection server daemon (10.0.0.1:52370). Jul 15 05:13:47.594624 sshd[5346]: Accepted publickey for core from 10.0.0.1 port 52370 ssh2: RSA SHA256:xQteBGu1K6SjT/ucc5Duk9MfMFesvWUUvdc6KRmollo Jul 15 05:13:47.596953 sshd-session[5346]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:13:47.603306 systemd-logind[1508]: New session 13 of user core. Jul 15 05:13:47.613552 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 15 05:13:47.822242 sshd[5349]: Connection closed by 10.0.0.1 port 52370 Jul 15 05:13:47.823051 sshd-session[5346]: pam_unix(sshd:session): session closed for user core Jul 15 05:13:47.836072 systemd[1]: sshd@12-10.0.0.51:22-10.0.0.1:52370.service: Deactivated successfully. Jul 15 05:13:47.838605 systemd[1]: session-13.scope: Deactivated successfully. Jul 15 05:13:47.840948 systemd-logind[1508]: Session 13 logged out. Waiting for processes to exit. 
Jul 15 05:13:47.849768 systemd[1]: Started sshd@13-10.0.0.51:22-10.0.0.1:52386.service - OpenSSH per-connection server daemon (10.0.0.1:52386). Jul 15 05:13:47.853019 systemd-logind[1508]: Removed session 13. Jul 15 05:13:47.918502 sshd[5361]: Accepted publickey for core from 10.0.0.1 port 52386 ssh2: RSA SHA256:xQteBGu1K6SjT/ucc5Duk9MfMFesvWUUvdc6KRmollo Jul 15 05:13:47.920807 sshd-session[5361]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:13:47.928188 systemd-logind[1508]: New session 14 of user core. Jul 15 05:13:47.936632 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 15 05:13:48.090336 sshd[5364]: Connection closed by 10.0.0.1 port 52386 Jul 15 05:13:48.091614 sshd-session[5361]: pam_unix(sshd:session): session closed for user core Jul 15 05:13:48.097314 systemd[1]: sshd@13-10.0.0.51:22-10.0.0.1:52386.service: Deactivated successfully. Jul 15 05:13:48.099876 systemd[1]: session-14.scope: Deactivated successfully. Jul 15 05:13:48.100939 systemd-logind[1508]: Session 14 logged out. Waiting for processes to exit. Jul 15 05:13:48.102174 systemd-logind[1508]: Removed session 14. 
Jul 15 05:13:49.697971 containerd[1563]: time="2025-07-15T05:13:49.697858540Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:13:49.762497 containerd[1563]: time="2025-07-15T05:13:49.750497267Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=51276688" Jul 15 05:13:49.783603 containerd[1563]: time="2025-07-15T05:13:49.783565064Z" level=info msg="ImageCreate event name:\"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:13:49.842248 containerd[1563]: time="2025-07-15T05:13:49.842190768Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:13:49.843062 containerd[1563]: time="2025-07-15T05:13:49.843029678Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"52769359\" in 6.294451156s" Jul 15 05:13:49.843115 containerd[1563]: time="2025-07-15T05:13:49.843079303Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\"" Jul 15 05:13:49.860362 containerd[1563]: time="2025-07-15T05:13:49.860305669Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 15 05:13:49.871496 containerd[1563]: time="2025-07-15T05:13:49.871016941Z" level=info msg="CreateContainer within sandbox 
\"e7a138a93fc8cff19d4f3f25961dc88949bfd809b3ffbff91597265cb12789a3\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jul 15 05:13:50.229213 containerd[1563]: time="2025-07-15T05:13:50.228485431Z" level=info msg="Container 28cef55ad4001f1114d58456dd5ef87b516a3b13456a237ba4e9c00d89e313a0: CDI devices from CRI Config.CDIDevices: []" Jul 15 05:13:50.259244 containerd[1563]: time="2025-07-15T05:13:50.259192529Z" level=info msg="CreateContainer within sandbox \"e7a138a93fc8cff19d4f3f25961dc88949bfd809b3ffbff91597265cb12789a3\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"28cef55ad4001f1114d58456dd5ef87b516a3b13456a237ba4e9c00d89e313a0\"" Jul 15 05:13:50.260905 containerd[1563]: time="2025-07-15T05:13:50.260814914Z" level=info msg="StartContainer for \"28cef55ad4001f1114d58456dd5ef87b516a3b13456a237ba4e9c00d89e313a0\"" Jul 15 05:13:50.263235 containerd[1563]: time="2025-07-15T05:13:50.263191067Z" level=info msg="connecting to shim 28cef55ad4001f1114d58456dd5ef87b516a3b13456a237ba4e9c00d89e313a0" address="unix:///run/containerd/s/6aa7b75dc58a791ab18735d4224b5cb4e64389e2a9af980fdea546783106c989" protocol=ttrpc version=3 Jul 15 05:13:50.295683 systemd[1]: Started cri-containerd-28cef55ad4001f1114d58456dd5ef87b516a3b13456a237ba4e9c00d89e313a0.scope - libcontainer container 28cef55ad4001f1114d58456dd5ef87b516a3b13456a237ba4e9c00d89e313a0. 
Jul 15 05:13:50.366229 containerd[1563]: time="2025-07-15T05:13:50.366140325Z" level=info msg="StartContainer for \"28cef55ad4001f1114d58456dd5ef87b516a3b13456a237ba4e9c00d89e313a0\" returns successfully" Jul 15 05:13:50.408443 containerd[1563]: time="2025-07-15T05:13:50.408359393Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:13:50.448502 containerd[1563]: time="2025-07-15T05:13:50.448241453Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77" Jul 15 05:13:50.458480 containerd[1563]: time="2025-07-15T05:13:50.458354315Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 597.997338ms" Jul 15 05:13:50.458480 containerd[1563]: time="2025-07-15T05:13:50.458484222Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Jul 15 05:13:50.460077 containerd[1563]: time="2025-07-15T05:13:50.460041533Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Jul 15 05:13:50.462673 containerd[1563]: time="2025-07-15T05:13:50.462630913Z" level=info msg="CreateContainer within sandbox \"6961d177bc214f1372b2522c898fd902af10fb94aa17c6bbfce558dc297616cc\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 15 05:13:50.607485 containerd[1563]: time="2025-07-15T05:13:50.607337578Z" level=info msg="Container 86c717b2abfa1bdde72382eb399aaf19d195d10144d417b5b5aaba8ab896e0f3: CDI devices from CRI Config.CDIDevices: []" Jul 15 05:13:50.622050 containerd[1563]: 
time="2025-07-15T05:13:50.621972166Z" level=info msg="CreateContainer within sandbox \"6961d177bc214f1372b2522c898fd902af10fb94aa17c6bbfce558dc297616cc\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"86c717b2abfa1bdde72382eb399aaf19d195d10144d417b5b5aaba8ab896e0f3\"" Jul 15 05:13:50.622909 containerd[1563]: time="2025-07-15T05:13:50.622854799Z" level=info msg="StartContainer for \"86c717b2abfa1bdde72382eb399aaf19d195d10144d417b5b5aaba8ab896e0f3\"" Jul 15 05:13:50.625089 containerd[1563]: time="2025-07-15T05:13:50.625036221Z" level=info msg="connecting to shim 86c717b2abfa1bdde72382eb399aaf19d195d10144d417b5b5aaba8ab896e0f3" address="unix:///run/containerd/s/6201d621fa1801c62fe5ccce1984d31d4fac059e116c47e090c91d15a5fe8d35" protocol=ttrpc version=3 Jul 15 05:13:50.657692 systemd[1]: Started cri-containerd-86c717b2abfa1bdde72382eb399aaf19d195d10144d417b5b5aaba8ab896e0f3.scope - libcontainer container 86c717b2abfa1bdde72382eb399aaf19d195d10144d417b5b5aaba8ab896e0f3. 
Jul 15 05:13:50.725943 containerd[1563]: time="2025-07-15T05:13:50.725828664Z" level=info msg="StartContainer for \"86c717b2abfa1bdde72382eb399aaf19d195d10144d417b5b5aaba8ab896e0f3\" returns successfully" Jul 15 05:13:51.454540 kubelet[2713]: I0715 05:13:51.453860 2713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-bc7876d79-h8t8t" podStartSLOduration=59.434850295 podStartE2EDuration="1m10.453846203s" podCreationTimestamp="2025-07-15 05:12:41 +0000 UTC" firstStartedPulling="2025-07-15 05:13:39.44061293 +0000 UTC m=+76.483076978" lastFinishedPulling="2025-07-15 05:13:50.459608818 +0000 UTC m=+87.502072886" observedRunningTime="2025-07-15 05:13:51.453455908 +0000 UTC m=+88.495919966" watchObservedRunningTime="2025-07-15 05:13:51.453846203 +0000 UTC m=+88.496310251" Jul 15 05:13:51.580211 kubelet[2713]: I0715 05:13:51.580131 2713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-9fbb49d58-4nnjz" podStartSLOduration=50.254893131 podStartE2EDuration="1m1.58011179s" podCreationTimestamp="2025-07-15 05:12:50 +0000 UTC" firstStartedPulling="2025-07-15 05:13:38.534734767 +0000 UTC m=+75.577198815" lastFinishedPulling="2025-07-15 05:13:49.859953426 +0000 UTC m=+86.902417474" observedRunningTime="2025-07-15 05:13:51.578745284 +0000 UTC m=+88.621209332" watchObservedRunningTime="2025-07-15 05:13:51.58011179 +0000 UTC m=+88.622575838" Jul 15 05:13:52.309646 kubelet[2713]: I0715 05:13:52.309286 2713 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 15 05:13:52.360589 containerd[1563]: time="2025-07-15T05:13:52.360535048Z" level=info msg="TaskExit event in podsandbox handler container_id:\"28cef55ad4001f1114d58456dd5ef87b516a3b13456a237ba4e9c00d89e313a0\" id:\"ae15364cd070f9b78bb2fc2db5484478a3f931142de361f3a162805612d91a8f\" pid:5483 exited_at:{seconds:1752556432 nanos:360229917}" Jul 15 05:13:53.105909 systemd[1]: Started 
sshd@14-10.0.0.51:22-10.0.0.1:52306.service - OpenSSH per-connection server daemon (10.0.0.1:52306). Jul 15 05:13:53.406975 sshd[5493]: Accepted publickey for core from 10.0.0.1 port 52306 ssh2: RSA SHA256:xQteBGu1K6SjT/ucc5Duk9MfMFesvWUUvdc6KRmollo Jul 15 05:13:53.408871 sshd-session[5493]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:13:53.414225 systemd-logind[1508]: New session 15 of user core. Jul 15 05:13:53.425568 systemd[1]: Started session-15.scope - Session 15 of User core. Jul 15 05:13:53.657176 sshd[5496]: Connection closed by 10.0.0.1 port 52306 Jul 15 05:13:53.658004 sshd-session[5493]: pam_unix(sshd:session): session closed for user core Jul 15 05:13:53.665149 systemd[1]: sshd@14-10.0.0.51:22-10.0.0.1:52306.service: Deactivated successfully. Jul 15 05:13:53.668635 systemd[1]: session-15.scope: Deactivated successfully. Jul 15 05:13:53.671404 systemd-logind[1508]: Session 15 logged out. Waiting for processes to exit. Jul 15 05:13:53.673400 systemd-logind[1508]: Removed session 15. Jul 15 05:13:54.053974 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1066878133.mount: Deactivated successfully. 
Jul 15 05:13:55.046907 containerd[1563]: time="2025-07-15T05:13:55.046843050Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=66352308" Jul 15 05:13:55.063158 containerd[1563]: time="2025-07-15T05:13:55.063093289Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:13:55.065620 containerd[1563]: time="2025-07-15T05:13:55.065573351Z" level=info msg="ImageCreate event name:\"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:13:55.066482 containerd[1563]: time="2025-07-15T05:13:55.066307218Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:13:55.067131 containerd[1563]: time="2025-07-15T05:13:55.067100930Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"66352154\" in 4.607020983s" Jul 15 05:13:55.067131 containerd[1563]: time="2025-07-15T05:13:55.067130706Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\"" Jul 15 05:13:55.068157 containerd[1563]: time="2025-07-15T05:13:55.068119950Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Jul 15 05:13:55.069381 containerd[1563]: time="2025-07-15T05:13:55.069267605Z" level=info msg="CreateContainer within sandbox 
\"24ed947cf7839844764a3509724df44aa2f0dbe1bd4437374f6ca7d3f28117d4\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Jul 15 05:13:55.082702 containerd[1563]: time="2025-07-15T05:13:55.082573228Z" level=info msg="Container 2a6ded77f52bd9136a8b346145b0abb3903b477d45eeacaa6921855a33fccbb0: CDI devices from CRI Config.CDIDevices: []" Jul 15 05:13:55.093131 containerd[1563]: time="2025-07-15T05:13:55.093077658Z" level=info msg="CreateContainer within sandbox \"24ed947cf7839844764a3509724df44aa2f0dbe1bd4437374f6ca7d3f28117d4\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"2a6ded77f52bd9136a8b346145b0abb3903b477d45eeacaa6921855a33fccbb0\"" Jul 15 05:13:55.093726 containerd[1563]: time="2025-07-15T05:13:55.093591786Z" level=info msg="StartContainer for \"2a6ded77f52bd9136a8b346145b0abb3903b477d45eeacaa6921855a33fccbb0\"" Jul 15 05:13:55.095656 containerd[1563]: time="2025-07-15T05:13:55.095610810Z" level=info msg="connecting to shim 2a6ded77f52bd9136a8b346145b0abb3903b477d45eeacaa6921855a33fccbb0" address="unix:///run/containerd/s/3dd2f47a971e2698297433553946e0043ee8def46434033a22ba7d891917bd2a" protocol=ttrpc version=3 Jul 15 05:13:55.128647 systemd[1]: Started cri-containerd-2a6ded77f52bd9136a8b346145b0abb3903b477d45eeacaa6921855a33fccbb0.scope - libcontainer container 2a6ded77f52bd9136a8b346145b0abb3903b477d45eeacaa6921855a33fccbb0. 
Jul 15 05:13:55.190754 containerd[1563]: time="2025-07-15T05:13:55.190694855Z" level=info msg="StartContainer for \"2a6ded77f52bd9136a8b346145b0abb3903b477d45eeacaa6921855a33fccbb0\" returns successfully" Jul 15 05:13:55.332935 kubelet[2713]: I0715 05:13:55.332750 2713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-768f4c5c69-phzb2" podStartSLOduration=55.26192388 podStartE2EDuration="1m9.332731414s" podCreationTimestamp="2025-07-15 05:12:46 +0000 UTC" firstStartedPulling="2025-07-15 05:13:40.997190203 +0000 UTC m=+78.039654251" lastFinishedPulling="2025-07-15 05:13:55.067997727 +0000 UTC m=+92.110461785" observedRunningTime="2025-07-15 05:13:55.332006253 +0000 UTC m=+92.374470331" watchObservedRunningTime="2025-07-15 05:13:55.332731414 +0000 UTC m=+92.375195462" Jul 15 05:13:55.403648 containerd[1563]: time="2025-07-15T05:13:55.403603019Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2a6ded77f52bd9136a8b346145b0abb3903b477d45eeacaa6921855a33fccbb0\" id:\"20c25d7cb51122a9f0cfc3ef00b5bbb7dd50ff36a230f86242c18e6bd99282a2\" pid:5591 exit_status:1 exited_at:{seconds:1752556435 nanos:402553370}" Jul 15 05:13:55.682762 containerd[1563]: time="2025-07-15T05:13:55.682621012Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2c5b087de4259ae4e44ef01a63bd937e0664272fd4d9eb6c5253344a4110cb91\" id:\"84f4d9db8bfdc05f065f8a0e7abfcf6189c9d7a32d36ecc8d99809d89e078bd7\" pid:5618 exit_status:1 exited_at:{seconds:1752556435 nanos:682297536}" Jul 15 05:13:56.407525 containerd[1563]: time="2025-07-15T05:13:56.407472663Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2a6ded77f52bd9136a8b346145b0abb3903b477d45eeacaa6921855a33fccbb0\" id:\"03a0ea34b8af56ef4499b216fa3608aa8b022376ba239786e8a79a2970af7ea1\" pid:5643 exit_status:1 exited_at:{seconds:1752556436 nanos:406994102}" Jul 15 05:13:57.069760 containerd[1563]: time="2025-07-15T05:13:57.069651833Z" level=info 
msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:13:57.070990 containerd[1563]: time="2025-07-15T05:13:57.070908575Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=14703784" Jul 15 05:13:57.072703 containerd[1563]: time="2025-07-15T05:13:57.072626163Z" level=info msg="ImageCreate event name:\"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:13:57.075271 containerd[1563]: time="2025-07-15T05:13:57.075227793Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:13:57.076036 containerd[1563]: time="2025-07-15T05:13:57.075982328Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"16196439\" in 2.007829485s" Jul 15 05:13:57.076107 containerd[1563]: time="2025-07-15T05:13:57.076038214Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\"" Jul 15 05:13:57.078366 containerd[1563]: time="2025-07-15T05:13:57.078332239Z" level=info msg="CreateContainer within sandbox \"98b84bb5ea3721d1d8563ad9bfdb62874d23a2be903e7772e44b8d0642434a06\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jul 15 05:13:57.107099 containerd[1563]: 
time="2025-07-15T05:13:57.107024269Z" level=info msg="Container 1c84260aba2ff2f7ce1b6b147ba3849e1356f721efd2698399d158106ed7f4bc: CDI devices from CRI Config.CDIDevices: []" Jul 15 05:13:57.121050 containerd[1563]: time="2025-07-15T05:13:57.120999499Z" level=info msg="CreateContainer within sandbox \"98b84bb5ea3721d1d8563ad9bfdb62874d23a2be903e7772e44b8d0642434a06\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"1c84260aba2ff2f7ce1b6b147ba3849e1356f721efd2698399d158106ed7f4bc\"" Jul 15 05:13:57.121698 containerd[1563]: time="2025-07-15T05:13:57.121657450Z" level=info msg="StartContainer for \"1c84260aba2ff2f7ce1b6b147ba3849e1356f721efd2698399d158106ed7f4bc\"" Jul 15 05:13:57.123417 containerd[1563]: time="2025-07-15T05:13:57.123377773Z" level=info msg="connecting to shim 1c84260aba2ff2f7ce1b6b147ba3849e1356f721efd2698399d158106ed7f4bc" address="unix:///run/containerd/s/d1b959e59a46f211cbb58d9edc0622b6cd47d36557fb66444992350dec3ffbba" protocol=ttrpc version=3 Jul 15 05:13:57.153797 systemd[1]: Started cri-containerd-1c84260aba2ff2f7ce1b6b147ba3849e1356f721efd2698399d158106ed7f4bc.scope - libcontainer container 1c84260aba2ff2f7ce1b6b147ba3849e1356f721efd2698399d158106ed7f4bc. 
Jul 15 05:13:57.314739 containerd[1563]: time="2025-07-15T05:13:57.314684127Z" level=info msg="StartContainer for \"1c84260aba2ff2f7ce1b6b147ba3849e1356f721efd2698399d158106ed7f4bc\" returns successfully" Jul 15 05:13:57.445863 kubelet[2713]: I0715 05:13:57.445767 2713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-2v6vx" podStartSLOduration=48.073311678 podStartE2EDuration="1m7.44574727s" podCreationTimestamp="2025-07-15 05:12:50 +0000 UTC" firstStartedPulling="2025-07-15 05:13:37.704614979 +0000 UTC m=+74.747079027" lastFinishedPulling="2025-07-15 05:13:57.077050561 +0000 UTC m=+94.119514619" observedRunningTime="2025-07-15 05:13:57.44557354 +0000 UTC m=+94.488037588" watchObservedRunningTime="2025-07-15 05:13:57.44574727 +0000 UTC m=+94.488211318" Jul 15 05:13:58.301365 kubelet[2713]: I0715 05:13:58.301308 2713 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jul 15 05:13:58.301365 kubelet[2713]: I0715 05:13:58.301357 2713 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jul 15 05:13:58.613174 kubelet[2713]: I0715 05:13:58.613100 2713 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 15 05:13:58.672580 systemd[1]: Started sshd@15-10.0.0.51:22-10.0.0.1:39234.service - OpenSSH per-connection server daemon (10.0.0.1:39234). Jul 15 05:13:58.758840 sshd[5692]: Accepted publickey for core from 10.0.0.1 port 39234 ssh2: RSA SHA256:xQteBGu1K6SjT/ucc5Duk9MfMFesvWUUvdc6KRmollo Jul 15 05:13:58.761359 sshd-session[5692]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:13:58.767146 systemd-logind[1508]: New session 16 of user core. Jul 15 05:13:58.773674 systemd[1]: Started session-16.scope - Session 16 of User core. 
Jul 15 05:13:59.003339 sshd[5695]: Connection closed by 10.0.0.1 port 39234
Jul 15 05:13:59.003647 sshd-session[5692]: pam_unix(sshd:session): session closed for user core
Jul 15 05:13:59.008719 systemd[1]: sshd@15-10.0.0.51:22-10.0.0.1:39234.service: Deactivated successfully.
Jul 15 05:13:59.010829 systemd[1]: session-16.scope: Deactivated successfully.
Jul 15 05:13:59.011858 systemd-logind[1508]: Session 16 logged out. Waiting for processes to exit.
Jul 15 05:13:59.013157 systemd-logind[1508]: Removed session 16.
Jul 15 05:14:04.024341 systemd[1]: Started sshd@16-10.0.0.51:22-10.0.0.1:39236.service - OpenSSH per-connection server daemon (10.0.0.1:39236).
Jul 15 05:14:04.081880 sshd[5713]: Accepted publickey for core from 10.0.0.1 port 39236 ssh2: RSA SHA256:xQteBGu1K6SjT/ucc5Duk9MfMFesvWUUvdc6KRmollo
Jul 15 05:14:04.084301 sshd-session[5713]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 05:14:04.089863 systemd-logind[1508]: New session 17 of user core.
Jul 15 05:14:04.104733 systemd[1]: Started session-17.scope - Session 17 of User core.
Jul 15 05:14:04.265382 sshd[5716]: Connection closed by 10.0.0.1 port 39236
Jul 15 05:14:04.266055 sshd-session[5713]: pam_unix(sshd:session): session closed for user core
Jul 15 05:14:04.272185 systemd[1]: sshd@16-10.0.0.51:22-10.0.0.1:39236.service: Deactivated successfully.
Jul 15 05:14:04.274751 systemd[1]: session-17.scope: Deactivated successfully.
Jul 15 05:14:04.275808 systemd-logind[1508]: Session 17 logged out. Waiting for processes to exit.
Jul 15 05:14:04.277289 systemd-logind[1508]: Removed session 17.
Jul 15 05:14:07.887702 containerd[1563]: time="2025-07-15T05:14:07.887643187Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2a6ded77f52bd9136a8b346145b0abb3903b477d45eeacaa6921855a33fccbb0\" id:\"c053d77e60ee4d7ac3366b134b9ea9ba0dfeeea4fcffa1b7e8c7c0d3f0888281\" pid:5741 exited_at:{seconds:1752556447 nanos:887202020}"
Jul 15 05:14:09.281639 systemd[1]: Started sshd@17-10.0.0.51:22-10.0.0.1:43192.service - OpenSSH per-connection server daemon (10.0.0.1:43192).
Jul 15 05:14:09.359874 sshd[5755]: Accepted publickey for core from 10.0.0.1 port 43192 ssh2: RSA SHA256:xQteBGu1K6SjT/ucc5Duk9MfMFesvWUUvdc6KRmollo
Jul 15 05:14:09.362052 sshd-session[5755]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 05:14:09.368003 systemd-logind[1508]: New session 18 of user core.
Jul 15 05:14:09.376705 systemd[1]: Started session-18.scope - Session 18 of User core.
Jul 15 05:14:09.528057 sshd[5758]: Connection closed by 10.0.0.1 port 43192
Jul 15 05:14:09.528489 sshd-session[5755]: pam_unix(sshd:session): session closed for user core
Jul 15 05:14:09.533473 systemd[1]: sshd@17-10.0.0.51:22-10.0.0.1:43192.service: Deactivated successfully.
Jul 15 05:14:09.535789 systemd[1]: session-18.scope: Deactivated successfully.
Jul 15 05:14:09.536766 systemd-logind[1508]: Session 18 logged out. Waiting for processes to exit.
Jul 15 05:14:09.538536 systemd-logind[1508]: Removed session 18.
Jul 15 05:14:14.544693 systemd[1]: Started sshd@18-10.0.0.51:22-10.0.0.1:43196.service - OpenSSH per-connection server daemon (10.0.0.1:43196).
Jul 15 05:14:14.610814 sshd[5777]: Accepted publickey for core from 10.0.0.1 port 43196 ssh2: RSA SHA256:xQteBGu1K6SjT/ucc5Duk9MfMFesvWUUvdc6KRmollo
Jul 15 05:14:14.612752 sshd-session[5777]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 05:14:14.617640 systemd-logind[1508]: New session 19 of user core.
Jul 15 05:14:14.627574 systemd[1]: Started session-19.scope - Session 19 of User core.
Jul 15 05:14:14.758013 sshd[5780]: Connection closed by 10.0.0.1 port 43196
Jul 15 05:14:14.760170 sshd-session[5777]: pam_unix(sshd:session): session closed for user core
Jul 15 05:14:14.772171 systemd[1]: sshd@18-10.0.0.51:22-10.0.0.1:43196.service: Deactivated successfully.
Jul 15 05:14:14.775227 systemd[1]: session-19.scope: Deactivated successfully.
Jul 15 05:14:14.776212 systemd-logind[1508]: Session 19 logged out. Waiting for processes to exit.
Jul 15 05:14:14.781192 systemd[1]: Started sshd@19-10.0.0.51:22-10.0.0.1:43198.service - OpenSSH per-connection server daemon (10.0.0.1:43198).
Jul 15 05:14:14.782375 systemd-logind[1508]: Removed session 19.
Jul 15 05:14:14.833082 sshd[5793]: Accepted publickey for core from 10.0.0.1 port 43198 ssh2: RSA SHA256:xQteBGu1K6SjT/ucc5Duk9MfMFesvWUUvdc6KRmollo
Jul 15 05:14:14.835344 sshd-session[5793]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 05:14:14.840705 systemd-logind[1508]: New session 20 of user core.
Jul 15 05:14:14.852721 systemd[1]: Started session-20.scope - Session 20 of User core.
Jul 15 05:14:15.199601 sshd[5796]: Connection closed by 10.0.0.1 port 43198
Jul 15 05:14:15.200085 sshd-session[5793]: pam_unix(sshd:session): session closed for user core
Jul 15 05:14:15.211989 systemd[1]: sshd@19-10.0.0.51:22-10.0.0.1:43198.service: Deactivated successfully.
Jul 15 05:14:15.214798 systemd[1]: session-20.scope: Deactivated successfully.
Jul 15 05:14:15.215699 systemd-logind[1508]: Session 20 logged out. Waiting for processes to exit.
Jul 15 05:14:15.219240 systemd[1]: Started sshd@20-10.0.0.51:22-10.0.0.1:43200.service - OpenSSH per-connection server daemon (10.0.0.1:43200).
Jul 15 05:14:15.220675 systemd-logind[1508]: Removed session 20.
Jul 15 05:14:15.295601 sshd[5808]: Accepted publickey for core from 10.0.0.1 port 43200 ssh2: RSA SHA256:xQteBGu1K6SjT/ucc5Duk9MfMFesvWUUvdc6KRmollo
Jul 15 05:14:15.297858 sshd-session[5808]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 05:14:15.303296 systemd-logind[1508]: New session 21 of user core.
Jul 15 05:14:15.312662 systemd[1]: Started session-21.scope - Session 21 of User core.
Jul 15 05:14:15.865600 sshd[5811]: Connection closed by 10.0.0.1 port 43200
Jul 15 05:14:15.866206 sshd-session[5808]: pam_unix(sshd:session): session closed for user core
Jul 15 05:14:15.876378 systemd[1]: sshd@20-10.0.0.51:22-10.0.0.1:43200.service: Deactivated successfully.
Jul 15 05:14:15.878639 systemd[1]: session-21.scope: Deactivated successfully.
Jul 15 05:14:15.879618 systemd-logind[1508]: Session 21 logged out. Waiting for processes to exit.
Jul 15 05:14:15.884595 systemd[1]: Started sshd@21-10.0.0.51:22-10.0.0.1:43206.service - OpenSSH per-connection server daemon (10.0.0.1:43206).
Jul 15 05:14:15.886546 systemd-logind[1508]: Removed session 21.
Jul 15 05:14:15.964275 sshd[5830]: Accepted publickey for core from 10.0.0.1 port 43206 ssh2: RSA SHA256:xQteBGu1K6SjT/ucc5Duk9MfMFesvWUUvdc6KRmollo
Jul 15 05:14:15.966792 sshd-session[5830]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 05:14:15.973247 systemd-logind[1508]: New session 22 of user core.
Jul 15 05:14:15.985742 systemd[1]: Started session-22.scope - Session 22 of User core.
Jul 15 05:14:16.475790 sshd[5835]: Connection closed by 10.0.0.1 port 43206
Jul 15 05:14:16.476331 sshd-session[5830]: pam_unix(sshd:session): session closed for user core
Jul 15 05:14:16.488854 systemd[1]: sshd@21-10.0.0.51:22-10.0.0.1:43206.service: Deactivated successfully.
Jul 15 05:14:16.491622 systemd[1]: session-22.scope: Deactivated successfully.
Jul 15 05:14:16.492913 systemd-logind[1508]: Session 22 logged out. Waiting for processes to exit.
Jul 15 05:14:16.499111 systemd[1]: Started sshd@22-10.0.0.51:22-10.0.0.1:43214.service - OpenSSH per-connection server daemon (10.0.0.1:43214).
Jul 15 05:14:16.502229 systemd-logind[1508]: Removed session 22.
Jul 15 05:14:16.568711 sshd[5847]: Accepted publickey for core from 10.0.0.1 port 43214 ssh2: RSA SHA256:xQteBGu1K6SjT/ucc5Duk9MfMFesvWUUvdc6KRmollo
Jul 15 05:14:16.571129 sshd-session[5847]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 05:14:16.577596 systemd-logind[1508]: New session 23 of user core.
Jul 15 05:14:16.587716 systemd[1]: Started session-23.scope - Session 23 of User core.
Jul 15 05:14:16.713931 sshd[5850]: Connection closed by 10.0.0.1 port 43214
Jul 15 05:14:16.714279 sshd-session[5847]: pam_unix(sshd:session): session closed for user core
Jul 15 05:14:16.719740 systemd[1]: sshd@22-10.0.0.51:22-10.0.0.1:43214.service: Deactivated successfully.
Jul 15 05:14:16.722151 systemd[1]: session-23.scope: Deactivated successfully.
Jul 15 05:14:16.723132 systemd-logind[1508]: Session 23 logged out. Waiting for processes to exit.
Jul 15 05:14:16.724351 systemd-logind[1508]: Removed session 23.
Jul 15 05:14:21.732235 systemd[1]: Started sshd@23-10.0.0.51:22-10.0.0.1:40406.service - OpenSSH per-connection server daemon (10.0.0.1:40406).
Jul 15 05:14:21.784433 sshd[5866]: Accepted publickey for core from 10.0.0.1 port 40406 ssh2: RSA SHA256:xQteBGu1K6SjT/ucc5Duk9MfMFesvWUUvdc6KRmollo
Jul 15 05:14:21.786696 sshd-session[5866]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 05:14:21.792169 systemd-logind[1508]: New session 24 of user core.
Jul 15 05:14:21.805713 systemd[1]: Started session-24.scope - Session 24 of User core.
Jul 15 05:14:21.931878 sshd[5869]: Connection closed by 10.0.0.1 port 40406
Jul 15 05:14:21.932264 sshd-session[5866]: pam_unix(sshd:session): session closed for user core
Jul 15 05:14:21.936031 systemd[1]: sshd@23-10.0.0.51:22-10.0.0.1:40406.service: Deactivated successfully.
Jul 15 05:14:21.938302 systemd[1]: session-24.scope: Deactivated successfully.
Jul 15 05:14:21.939938 systemd-logind[1508]: Session 24 logged out. Waiting for processes to exit.
Jul 15 05:14:21.941444 systemd-logind[1508]: Removed session 24.
Jul 15 05:14:22.366662 containerd[1563]: time="2025-07-15T05:14:22.366522819Z" level=info msg="TaskExit event in podsandbox handler container_id:\"28cef55ad4001f1114d58456dd5ef87b516a3b13456a237ba4e9c00d89e313a0\" id:\"1bf0acfb8c80ebff643d5f4a9b73ccc8fc4b90a1288018549abd23da7820a4c6\" pid:5893 exited_at:{seconds:1752556462 nanos:366240916}"
Jul 15 05:14:25.661556 containerd[1563]: time="2025-07-15T05:14:25.661450728Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2c5b087de4259ae4e44ef01a63bd937e0664272fd4d9eb6c5253344a4110cb91\" id:\"5de62124594440f1bfb4ef108b67d3dafc3fc8ae1674787cdec05e5d126a3570\" pid:5919 exited_at:{seconds:1752556465 nanos:661068175}"
Jul 15 05:14:26.405342 containerd[1563]: time="2025-07-15T05:14:26.405260177Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2a6ded77f52bd9136a8b346145b0abb3903b477d45eeacaa6921855a33fccbb0\" id:\"71f9338a25d1cd81edf16db09a3eb407f04e418eb4a95da62b4ba3001c8b2bb4\" pid:5943 exited_at:{seconds:1752556466 nanos:404951373}"
Jul 15 05:14:26.950998 systemd[1]: Started sshd@24-10.0.0.51:22-10.0.0.1:40410.service - OpenSSH per-connection server daemon (10.0.0.1:40410).
Jul 15 05:14:27.015678 sshd[5958]: Accepted publickey for core from 10.0.0.1 port 40410 ssh2: RSA SHA256:xQteBGu1K6SjT/ucc5Duk9MfMFesvWUUvdc6KRmollo
Jul 15 05:14:27.017843 sshd-session[5958]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 05:14:27.024802 systemd-logind[1508]: New session 25 of user core.
Jul 15 05:14:27.035871 systemd[1]: Started session-25.scope - Session 25 of User core.
Jul 15 05:14:27.167907 sshd[5961]: Connection closed by 10.0.0.1 port 40410
Jul 15 05:14:27.168304 sshd-session[5958]: pam_unix(sshd:session): session closed for user core
Jul 15 05:14:27.173976 systemd[1]: sshd@24-10.0.0.51:22-10.0.0.1:40410.service: Deactivated successfully.
Jul 15 05:14:27.176477 systemd[1]: session-25.scope: Deactivated successfully.
Jul 15 05:14:27.177501 systemd-logind[1508]: Session 25 logged out. Waiting for processes to exit.
Jul 15 05:14:27.179363 systemd-logind[1508]: Removed session 25.
Jul 15 05:14:32.180669 systemd[1]: Started sshd@25-10.0.0.51:22-10.0.0.1:42380.service - OpenSSH per-connection server daemon (10.0.0.1:42380).
Jul 15 05:14:32.276476 sshd[5980]: Accepted publickey for core from 10.0.0.1 port 42380 ssh2: RSA SHA256:xQteBGu1K6SjT/ucc5Duk9MfMFesvWUUvdc6KRmollo
Jul 15 05:14:32.278759 sshd-session[5980]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 05:14:32.286336 systemd-logind[1508]: New session 26 of user core.
Jul 15 05:14:32.291565 systemd[1]: Started session-26.scope - Session 26 of User core.
Jul 15 05:14:32.514984 sshd[5983]: Connection closed by 10.0.0.1 port 42380
Jul 15 05:14:32.515334 sshd-session[5980]: pam_unix(sshd:session): session closed for user core
Jul 15 05:14:32.521073 systemd[1]: sshd@25-10.0.0.51:22-10.0.0.1:42380.service: Deactivated successfully.
Jul 15 05:14:32.524210 systemd[1]: session-26.scope: Deactivated successfully.
Jul 15 05:14:32.525242 systemd-logind[1508]: Session 26 logged out. Waiting for processes to exit.
Jul 15 05:14:32.528073 systemd-logind[1508]: Removed session 26.
Jul 15 05:14:37.532234 systemd[1]: Started sshd@26-10.0.0.51:22-10.0.0.1:42386.service - OpenSSH per-connection server daemon (10.0.0.1:42386).
Jul 15 05:14:37.590215 sshd[5996]: Accepted publickey for core from 10.0.0.1 port 42386 ssh2: RSA SHA256:xQteBGu1K6SjT/ucc5Duk9MfMFesvWUUvdc6KRmollo
Jul 15 05:14:37.592345 sshd-session[5996]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 05:14:37.598475 systemd-logind[1508]: New session 27 of user core.
Jul 15 05:14:37.606178 systemd[1]: Started session-27.scope - Session 27 of User core.
Jul 15 05:14:37.848828 sshd[5999]: Connection closed by 10.0.0.1 port 42386
Jul 15 05:14:37.849088 sshd-session[5996]: pam_unix(sshd:session): session closed for user core
Jul 15 05:14:37.853943 systemd[1]: sshd@26-10.0.0.51:22-10.0.0.1:42386.service: Deactivated successfully.
Jul 15 05:14:37.856114 systemd[1]: session-27.scope: Deactivated successfully.
Jul 15 05:14:37.857094 systemd-logind[1508]: Session 27 logged out. Waiting for processes to exit.
Jul 15 05:14:37.858345 systemd-logind[1508]: Removed session 27.
Jul 15 05:14:38.051515 kubelet[2713]: E0715 05:14:38.051390 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"